65687957
https://en.wikipedia.org/wiki/Carlos%20Dotson
Carlos Dotson
Carlos Dotson (born September 20, 1996) is an American professional basketball player for U.D. Oliveirense of the Liga Portuguesa de Basquetebol. He played college basketball for the Anderson Trojans, the College of Central Florida Patriots, and the Western Carolina Catamounts. High school career Dotson grew up in Riverdale Park, Maryland, but moved to South Carolina to attend Paul M. Dorman High School. He was cut from the basketball team as a freshman, but made the team as a sophomore. As a senior, Dotson averaged 13 points and 10 rebounds per game, leading the team to a 20–5 record. He was selected to the North-South All-Star game, where he scored 14 points. Dotson also played defensive end on the football team before focusing on basketball. He committed to play college basketball at Anderson, choosing the Trojans over Lincoln Memorial, among other Division II offers. College career Dotson played sparingly as a freshman at Anderson as he was hampered by ankle injuries and took a medical redshirt. As a redshirt freshman, he averaged 10.9 points and 8.1 rebounds per game, shooting 56.5 percent from the floor. Dotson earned South Atlantic Conference All-Freshman Team honors. For his sophomore season, he transferred to the College of Central Florida. Dotson averaged 13.3 points and 7.8 rebounds per game for the Patriots while shooting 60.3 percent. He transferred to Western Carolina prior to his junior season. Dotson averaged 13.9 points and 9.5 rebounds per game as a junior, earning Third Team All-SoCon honors. On February 12, 2020, he scored a career-high 32 points and had 12 rebounds in an 82–62 loss to UNC Greensboro. As a senior, Dotson averaged 15.5 points and 9.7 rebounds per game and had 18 double-doubles. He was named to the First Team All-SoCon, the SoCon All-Tournament Team, and the Lou Henson All-America Team. Professional career On October 7, 2020, Dotson signed his first professional contract with JSA Bordeaux Basket of the Nationale Masculine 1. In four games, he averaged 7.3 points and 3.0 rebounds per game. On March 10, 2021, Dotson signed with Club Trouville of the Liga Uruguaya de Básquetbol. In the summer of 2021, he joined the Charlotte Tribe of the East Coast Basketball League. On November 26, Dotson signed with U.D. Oliveirense of the Liga Portuguesa de Basquetebol. Coaching career Dotson joined Clemson as a graduate assistant for the 2021–22 season. He left the team in late November 2021 to continue his professional career.
Career statistics College

NCAA Division I
Year | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
2018–19 | Western Carolina | 31 | 28 | 27.7 | .596 | .000 | .526 | 9.5 | 1.5 | .7 | .5 | 13.9
2019–20 | Western Carolina | 30 | 29 | 27.2 | .610 | – | .573 | 9.7 | 1.9 | .6 | .2 | 15.5
Career | | 61 | 57 | 27.5 | .603 | .000 | .551 | 9.6 | 1.7 | .7 | .3 | 14.7

NCAA Division II
Year | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
2015–16 | Anderson | 4 | 0 | 7.0 | .600 | – | .286 | 1.3 | .3 | .0 | .3 | 2.0
2016–17 | Anderson | 29 | 29 | 26.1 | .565 | – | .486 | 8.1 | .3 | .8 | .6 | 10.9
Career | | 33 | 29 | 23.8 | .565 | – | .475 | 7.3 | .3 | .7 | .6 | 9.8

JUCO
Year | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
2017–18 | College of Central Florida | 32 | 32 | 24.5 | .603 | .000 | .530 | 7.8 | .8 | 1.0 | .4 | 13.3

References External links Western Carolina Catamounts bio College of Central Florida Patriots bio Anderson Trojans bio 1996 births Living people American men's basketball players American expatriate basketball people in France American expatriate basketball people in Uruguay Anderson Trojans men's basketball players College of Central Florida Patriots men's basketball players Western Carolina Catamounts men's basketball players Basketball players from Maryland People from Riverdale Park, Maryland Power forwards (basketball)
47058330
https://en.wikipedia.org/wiki/Ratter%20%28film%29
Ratter (film)
Ratter is a 2015 American found-footage horror thriller film written and directed by Branden Kramer in his feature debut. The film is based on Webcam, a short film also written and co-directed by Kramer. It stars Ashley Benson and Matt McGorry. The film had its world premiere at the 2015 Slamdance Film Festival on January 24, 2015, and was released on February 12, 2016, in a limited release by Destination Films and Vertical Entertainment. Plot Emma Taylor, a graduate student, moves to New York City for a fresh start after her recent break-up with her boyfriend Alex. After she settles into her new Brooklyn apartment, someone begins anonymously hacking all her electronic devices and watching her through their cameras. One day while attending college, Emma meets Michael, who asks her out. The two begin dating. One night Emma pleasures herself; afterwards she becomes suspicious when the hacker begins stealing private photos of her with Michael and sending her messages and videos while pretending to be Michael. She confronts Michael, who denies sending the messages, but Emma does not believe him and starts avoiding him. To relieve some stress, Emma and her best friend, Nicole, go clubbing. While they are out, the hacker breaks into her apartment. She returns home drunk and passes out on her couch, unaware that the hacker is out on her balcony. When she wakes up he is gone, and she goes into the bathroom without noticing anything amiss. Michael comes over later that day to see how she is doing and to give her a cat so she is not so lonely. The two then reconcile and have sex. Sometime later, Michael calls her and tells her someone emailed him telling him to leave Emma alone. Emma panics and calls the police, who do nothing. She begins feeling isolated and depressed because there is nothing anyone can do about the situation. One day Emma comes home to find her apartment door unlocked. She walks in and finds her cat dead. She tries to call Michael to tell him the news, but he never answers or calls back. Feeling more vulnerable than ever, Emma spends the day wandering the city so she does not have to be home alone. She makes plans with Nicole to hang out at her apartment later, so she returns home. While waiting for Nicole, Emma and her mother video chat. In the middle of the chat, Emma's power cuts out and she begins screaming. The hacker appears and attacks Emma while the chat with her mother is still running. Emma's screams abruptly stop and her mother calls the police. The hacker then turns the laptop towards himself and Emma's mother pleads with him to leave her daughter alone. He shuts the laptop. After the credits roll, the police arrive and begin investigating Emma's whereabouts. Cast
Ashley Benson as Emma Taylor
Matt McGorry as Michael
Rebecca Naomi Jones as Nicole
Kalli Vernoff as Emma's Mom
Michael William Freeman as Alex
Alex Cranmer as Professor
John Anderson as Kent
Karl Glusman as Brent
Jeremy Fiorentino as Officer Rodriguez
Release The film had its world premiere at the 2015 Slamdance Film Festival on January 24, 2015. In October 2015, Sony Pictures Worldwide Acquisitions acquired all global distribution rights to the film. The film was released on February 12, 2016, in a limited release. Home media The film was released on March 1, 2016, through video on demand and home media formats. It was released direct-to-video in Germany on March 17, 2016, and in Spain on May 11, 2016.
Reception Ratter has grossed $110,834 from sales of its DVD/Blu-ray releases. References External links 2015 films 2015 directorial debut films 2015 horror thriller films 2015 independent films 2015 thriller drama films 2010s horror drama films American films American horror drama films American horror thriller films American independent films American thriller drama films Features based on short films Films about mobile phones Films about social media Films about stalking Films set in apartment buildings Films set in New York City Films shot in New York City Found footage films Vertical Entertainment films
21967962
https://en.wikipedia.org/wiki/Critical%20Software
Critical Software
Critical Software is a Portuguese international information systems and software company, headquartered in Coimbra. The company was established in 1998, from the University of Coimbra's business incubator and technology transfer centre, Instituto Pedro Nunes (IPN). The company has other offices in Porto and Lisbon (Portugal), Southampton (United Kingdom), Munich (Germany) and California (United States). Critical Software develops systems and software services for safety, mission and business-critical applications in several markets, including aerospace, defense, automotive, railway, telecoms, finance, and energy & utilities. Core competencies include system planning and analysis, system design and development, embedded and real-time systems, command & control systems, security and infrastructure, systems integration, business intelligence, independent software verification & validation, UxD, AI, digital transformation and smart meter testing. Critical Software's delivery unit was the first in Portugal to be rated at CMMI Maturity Level 5. The company is one of a few dozen organizations in the world which have both waterfall and agile software development units rated at Maturity Level 5. Critical Group The Critical Group comprises a series of technology companies, many of which were formed from ideas and solutions originally developed within Critical Software. Critical Materials develops products that provide diagnostics and prognostics for critical structural systems. The company was acquired by the original co-founders of the company and rebranded Stratosphere S.A. onCaring is a technology company addressing the needs of 20 million seniors living in long-term care across Europe, the United States, and Brazil. Critical Links provides purpose-built solutions to simplify the delivery of digitized content to schools. Coimbra Genomics is a company that develops solutions that aim to bridge the gap between genomic knowledge and medical practice. Retmarker provides software solutions for the screening and progression monitoring of retinal diseases. Critical Manufacturing provides automation and manufacturing software for high-tech industries. The company was acquired by ASMPT in 2018. Watchful Software provided software solutions that kept sensitive information safe from disclosure or security breaches. Symantec acquired it in 2017. Critical Ventures aims to partner with and invest in entrepreneurs developing innovative technologies with the potential to disrupt global markets. External links Official site Critical Software S.A., BusinessWeek Achievements Small & Medium Sized Enterprises, European Space Agency Critical Software Shows Portugal Can Grow, Forbes Critical Software – Pioneers, Up Magazine Notes and references Software companies of Portugal Portuguese brands
34640785
https://en.wikipedia.org/wiki/LinHES
LinHES
LinHES (Linux Home Entertainment Server) is a Linux distribution designed for use on Home Theater PCs (HTPCs). Before version 6, it was called KnoppMyth. The most recent release (R8), for 64-bit machines only, is based on Arch Linux, though previous versions were based on Knoppix and Debian. LinHES includes custom scripts that install and configure the MythTV PVR software as well as a number of add-ons. Most standard HTPC hardware is supported, and much of it is even configured automatically, making the often complex installation and configuration process relatively easy and pain-free. Cecil Watson developed and maintains the LinHES operating system. Details Practical explanation LinHES is a Linux distribution roughly equivalent to Windows Media Center. LinHES comes as a CD-ROM software distribution which automates the setup of the popular MythTV package as well as several HTPC-related add-ons. Ultimately, LinHES is used to create a home theater PC. These HTPCs are commonly plugged into a standard-definition television (SDTV) or high-definition television (HDTV) rather than a monitor for a complete home theater experience. HTPCs bring the power of PCs to the living room in an "all in one" device. Ease of installation and features A common complaint about MythTV is that it is difficult and time-consuming to install and configure. The goal of LinHES is to make creating and maintaining a home theater PC as simple as possible. A blank system can be transformed into a fully functional HTPC in around 20 minutes, capable of watching and recording television, pausing live TV, playing back DVDs and most popular video formats, retrieving fan art and information about videos, serving as a jukebox for multiple audio formats, viewing images, fetching the latest news and weather, and playing games. Applications Complete Installation (Front-end and Back-end) LinHES can be used to install a full MythTV client and server system. This means that the front-end is stored on the same device as the back-end. The front-end is the software that provides the visual elements (the GUI) which the user interacts with to find, play and manage media files; the back-end is the server where the media files are actually stored. A combined front-end and back-end system has the advantage of portability: it is a standalone device that does not depend on a separate server (much like a video game console). Front-end only installations Alternatively, LinHES can be used to install a front-end-only MythTV client system. For example, users may have a central storage server in their house; that server can then be accessed from numerous other devices throughout the house, each of which needs only a front-end installation and minimal hardware. LinHES can also run directly from a CD-ROM (i.e. without installation) provided that there is a network connection to a PC with a 'complete installation' (a MythTV back-end server). Using a 'server' separate from one or more front-end units has the obvious advantage of allowing multiple simultaneous accesses to shared media files. The server would generally have relatively high-specification hardware and would be kept outside the main living room. An advantage of keeping the server PC outside the living room is that the cooling fan required by a 'fast' processor can be quite noisy (as can certain hard drives), and fanless or passively cooled equipment that avoids such noise can be expensive.
LinHES can also be used to upgrade existing LinHES and KnoppMyth installations. LinHES community LinHES users generally discuss ideas and help others at the official forum website. Version history LinHES R7.4 was the last 32-bit release. KnoppMyth releases LinHES releases See also Arch Linux List of free television software Mythbuntu MythTV References External links Arch-based Linux distributions Free television software Linux distributions
14722
https://en.wikipedia.org/wiki/Irssi
Irssi
Irssi is an IRC client program for Linux, FreeBSD, macOS and Microsoft Windows. It was originally written by Timo Sirainen, and released under the terms of the GNU GPL-2.0-or-later in January 1999. Features Irssi is written in the C programming language and in normal operation uses a text-mode user interface. According to the developers, Irssi was written from scratch, not based on ircII (like BitchX and epic). This freed the developers from having to deal with the constraints of an existing codebase, allowing them to maintain tighter control over issues such as security and customization. Numerous Perl scripts have been made available for Irssi to customise how it looks and operates. Plugins are available which add encryption and protocols such as ICQ and XMPP. Irssi may be configured by using its user interface or by manually editing its configuration files, which use a syntax resembling Perl data structures. Distributions Irssi was written primarily to run on Unix-like operating systems, and binaries and packages are available for Gentoo Linux, Debian, Slackware, SUSE (openSUSE), Frugalware, Fedora, FreeBSD, OpenBSD, NetBSD, DragonFly BSD, Solaris, Arch Linux, Ubuntu, NixOS, and others. Irssi builds and runs on Microsoft Windows under Cygwin, and in 2006, an official Windows standalone build became available. For the Unix-based macOS, text mode ports are available from the Homebrew, MacPorts, and Fink package managers, and two graphical clients based on Irssi have been written: IrssiX and MacIrssi. The Cocoa client Colloquy was previously based on Irssi, but it now uses its own IRC core implementation. See also Comparison of Internet Relay Chat clients Shell account WeeChat References External links irssi on GitHub on Libera.chat Internet Relay Chat clients Free Internet Relay Chat clients MacOS Internet Relay Chat clients Unix Internet Relay Chat clients Windows Internet Relay Chat clients Free software programmed in C Cross-platform software 1999 software Software that uses ncurses Console applications Software developed in Finland
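As an illustration of the Perl-like configuration syntax mentioned above, a server entry in Irssi's configuration file (conventionally ~/.irssi/config) looks roughly like the following sketch. Exact key names vary between releases (for example, use_tls replaced use_ssl in newer versions), so treat this as indicative rather than canonical:

servers = (
  {
    address = "irc.libera.chat";
    chatnet = "liberachat";
    port = "6697";
    use_tls = "yes";
    autoconnect = "yes";
  }
);

The same nested key = "value" structure is used for chatnets, channels and other settings, which is why the format is often described as resembling Perl data structures.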
71406
https://en.wikipedia.org/wiki/PagePlus
PagePlus
PagePlus was a desktop publishing (page layout) program developed by Serif for Microsoft Windows. The first version was released in 1991 as the first commercial sub-£100 DTP package for Microsoft Windows. The final release was PagePlus X9, which was released in November 2016. In June 2019 it was officially replaced by Serif with Affinity Publisher. History PagePlus was first launched in 1990 and was the first sub-£100 desktop publishing program for Windows 3.0. Three years later, in spring 1993, PagePlus 2 was released and provided full colour printing support. Following this release, a new version of the product was released on a roughly annual basis. Serif did a complete rewrite of the original program source code for the release of PagePlus version 8. Despite the rewrite, at that time the program name was retained and the version number was simply incremented. Version 8 was able to read data files created in previous versions 1 to 7. Replacement by Affinity Publisher Serif announced that PagePlus X9 was to be the final PagePlus release. The last build issued to date is v19.0.2.22 from 28 April 2017. Serif ceased further development of all "Plus" products to focus efforts on its 'Affinity' product line. Serif began rewriting its DTP software to allow a multi-platform implementation and new methods of internal program operation better suited to more modern operating systems and the typical current (2018/9) configuration of PCs. A public beta of Serif's Affinity Publisher (the closest of the Affinity applications to PagePlus functionality) was launched in August 2018, followed by the first full version of the application in June 2019. Overview While PagePlus was generally targeted at the "entry level" DTP user, some of the functionality of the market-leading applications (Quark's XPress and Adobe's InDesign) is present in PagePlus, such as working in the CMYK colour space, OpenType Feature support, and Optical margin alignment (Optical Justification). PagePlus also has the ability to view, create, edit and publish PDF files, and publish e-books in *.epub or *.mobi formats suitable for the Kindle store. It also includes support for EPUB3 fixed-layout eBooks for textbooks, children's books, etc. PagePlus is primarily written in C++ using Visual Studio 2008, with a heavy dependence on the MFC framework. The Windows GDI library was discarded early in development in favour of an in-house composition engine supporting advanced bitmap and typeface operations. The text engine supports Unicode text entry. Supported platforms PagePlus was first developed for 16-bit Microsoft Windows v3.0 running on PC/MS DOS, but the final releases support Windows XP, Windows Vista (32/64-bit), Windows 7 (32/64-bit), Windows 8 (32/64-bit) and Windows 10 (32/64-bit). PagePlus data file compatibility The format of the .ppp data file has also evolved over time. Until the switch to an XML-based format with version X3, each release of PagePlus could read current and previous version data file formats. Before X3 there was no facility to save back into an earlier format, so a modified file could not be read by any version previous to the version that was last used to save it. However, once the change was made to XML format at X3, later data files from release X4 to X9 inclusive could be read by earlier versions (back to X3), though with the loss of any unsupported features. The backwards compatibility of being able to read older non-XML .ppp data file versions was dropped from later 64-bit PagePlus releases.
As a result, PPX6 (2011) is the last release that can read PP5 (1997) and PP3 (1994) format data files after a standard install on a Windows 64-bit system. To read older files with PagePlus versions X7, X8 and X9 on a 64-bit Windows system, a special manual 32-bit PagePlus installation must be done from the program disc or downloaded file. Also, when Serif ended development of PagePlus with version X9 and began concentrating on its Affinity line, it did not include in Affinity Publisher the ability to import .ppp format files from the X6–X9 versions into Affinity; neither did it provide a batch conversion program into Affinity format. This upset many long-time PagePlus users, who felt they had supported the company for many years, and often had hundreds of documents in the .ppp format. Serif's suggestion was to redo the document in Affinity Publisher, or export the file in .pdf format and then import it into Affinity (which turns it into a picture and loses all page formatting information). Many users did not feel this was adequate. Version history
PagePlus: 1990
PagePlus 2: 1993
PagePlus 3: 1994
PagePlus 4: 1996
PagePlus 5: 1997 (revised for XP compatibility and reissued in 2002)
PagePlus 6: 1999
PagePlus 7: 1 October 2000
PagePlus 8: 2001
PagePlus 8 PDF Edition: 9 September 2002
PagePlus 9: September 2003
PagePlus 10: 11 October 2004
PagePlus 11: 3 October 2005
PagePlus X2: 19 February 2007
PagePlus X3: 21 April 2008
PagePlus X4: 11 September 2009
PagePlus X5: 18 October 2010
PagePlus X6: 5 December 2011
PagePlus X7: 3 June 2013
PagePlus X8: 4 August 2014
PagePlus X9: 16 November 2015
See also Comparison of desktop publishing software Desktop publishing DrawPlus (vector graphics editor) List of desktop publishing software References Bibliography PagePlus official user guide External links Community Plus Official Forums Serif Forums Old Forums Reviews #1 DTP product on TopTenReviews.com Affordable Desktop Publishing From Serif – PC Mag Comparative Review of Recent Versions Review of PagePlus X5 – PC PRO 1990 software C++ software Desktop publishing software Windows-only software
40369937
https://en.wikipedia.org/wiki/POWER9
POWER9
POWER9 is a family of superscalar, multithreading, multi-core microprocessors produced by IBM, based on the Power ISA. It was announced in August 2016. The POWER9-based processors are being manufactured using a 14 nm FinFET process, in 12- and 24-core versions, for scale-out and scale-up applications, and possibly other variations, since the POWER9 architecture is open for licensing and modification by the OpenPOWER Foundation members. Summit, the second fastest supercomputer in the world, is based on POWER9, while also using Nvidia Tesla GPUs as accelerators. Design Core The POWER9 core comes in two variants, a four-way multithreaded one called SMT4 and an eight-way one called SMT8. The SMT4 and SMT8 cores are similar, in that they consist of a number of so-called slices fed by common schedulers. A slice is a rudimentary 64-bit single-threaded processing core with a load store unit (LSU), an integer unit (ALU) and a vector scalar unit (VSU, doing SIMD and floating point). A super-slice is the combination of two slices. An SMT4 core consists of a 32 KiB L1 instruction cache (1 KiB = 1024 bytes), a 32 KiB L1 data cache, an instruction fetch unit (IFU) and an instruction sequencing unit (ISU) which feeds two super-slices. An SMT8 core has two sets of L1 caches, IFUs and ISUs to feed four super-slices. The result is that the 12-core and 24-core versions of POWER9 each consist of the same number of slices (96 each) and the same amount of L1 cache. A POWER9 core, whether SMT4 or SMT8, has a 12-stage pipeline (five stages shorter than its predecessor, the POWER8), but aims to retain the clock frequency of around 4 GHz. It is the first to incorporate elements of the Power ISA v.3.0 that was released in December 2015, including the VSX-3 instructions. The POWER9 design is modular, so that it can be used in more processor variants and licensed for manufacture on a different fabrication process than IBM's. On chip are co-processors for compression and cryptography, as well as a large low-latency eDRAM L3 cache. The POWER9 comes with a new interrupt controller architecture called "eXternal Interrupt Virtualization Engine" (XIVE) which replaces a much simpler architecture that was used in POWER4 through POWER8. XIVE will also be used in Power10. Scale out / scale up IBM POWER9 SO – scale-out variant, optimized for dual-socket computers with up to 120 GB/s bandwidth (1 GB = 1 billion bytes) to directly attached DDR4 memory (targeted for release in 2017) IBM POWER9 SU – scale-up variant, optimized for four sockets or more, for large NUMA machines with up to 230 GB/s bandwidth to buffered memory (uses "25.6 GHz" signaling with the PowerAXON 25 GT/sec Link interface) Both POWER9 variants can ship in versions with some cores disabled due to yield reasons; as such, Raptor Computing Systems first sold 4-core chips, and even IBM initially sold its AC922 systems with no more than 22-core chips, even though both types of chips have 24 cores on their dies. I/O Several facilities are integrated on-chip to help with massive off-chip I/O performance: The SO variant has integrated DDR4 controllers for directly attached RAM, while the SU variant will use the off-chip Centaur architecture introduced with POWER8 to include high performance eDRAM L4 cache and memory controllers for DDR4 RAM. The Bluelink interconnects for close attachment of graphics co-processors from Nvidia (over NVLink v.2) and OpenCAPI accelerators.
General purpose PCIe v.4 connections for attaching regular ASICs, FPGAs and other peripherals as well as CAPI 2.0 and CAPI 1.0 devices designed for POWER8. Multiprocessor (symmetric multiprocessor system) links to connect other POWER9 processors on the same motherboard, or in other closely attached enclosures. Chip types POWER9 chips can be made with two types of cores, and in a Scale Out or Scale Up configuration. POWER9 cores are either SMT4 or SMT8, with SMT8 cores intended for PowerVM systems, while the SMT4 cores are intended for PowerNV systems, which do not use PowerVM, and predominantly run Linux. With POWER9, chips made for Scale Out can support directly-attached memory, while Scale Up chips are intended for use with machines with more than two CPU sockets, and use buffered memory. Modules The IBM Portal for OpenPOWER lists the three available modules for the Nimbus chip, although the Scale-Out SMT8 variant for PowerVM also uses the LaGrange module/socket: Sforza – 50 mm × 50 mm, 4 DDR4, 48 PCIe lanes, 1 XBus 4B Monza – 68.5 mm × 68.5 mm, 8 DDR4, 34 PCIe lanes, 1 XBus 4B, 48 OpenCAPI lanes LaGrange – 68.5 mm × 68.5 mm, 8 DDR4, 42 PCIe lanes, 2 XBus 4B, 16 OpenCAPI lanes Sforza modules use a land grid array (LGA) 2601-pin socket. Systems Raptor Computing Systems / Raptor Engineering Talos II – two-socket workstation/server platform using POWER9 SMT4 Sforza processors; available as 2U server, 4U server, tower, or EATX mainboard. Marketed as secure and owner-controllable with free and open-source software and firmware. Initially shipping with 4-core, 8-core, 18-core, and 22-core chip options until chips with more cores are available. Talos II Lite – single-socket version of the Talos II mainboard, made using the same PCB. Blackbird – single-socket microATX platform using SMT4 Sforza processors (up to 8-core 160 W variant), 4–8 cores, 2 RAM slots (supporting up to 256 GiB total) Google–Rackspace partnership Barreleye G2 / Zaius – two-socket server platform using LaGrange processors; both the Barreleye G2 and Zaius chassis use the Zaius POWER9 motherboard IBM Power Systems AC922 – 2U, 2× POWER9 SMT4 Monza, with up to 6× Nvidia Volta GPUs, 2× CAPI 2.0 attached accelerators and 1 TiB DDR4 RAM. AC here is an abbreviation for Accelerated Computing; this system is also known as "Witherspoon" or "Newell". Power Systems L922 – 2U, 1–2× POWER9 SMT8, 8–12 cores per processor, up to 4 TiB DDR4 RAM (1 TiB = 1024 GiB), PowerVM running Linux. Power Systems S914 – 4U, 1× POWER9 SMT8, 4–8 cores, up to 1 TiB DDR4 RAM, PowerVM running AIX/IBM i/Linux. Power Systems S922 – 2U, 1–2× POWER9 SMT8, 4–11 cores per processor, up to 4 TiB DDR4 RAM, PowerVM running AIX/IBM i/Linux. Power Systems S924 – 4U, 2× POWER9 SMT8, 8–12 cores per processor, up to 4 TiB DDR4 RAM, PowerVM running AIX/IBM i/Linux. Power Systems H922 – 2U, 1–2× POWER9 SMT8, 4–10 cores per processor, up to 4 TiB DDR4 RAM, PowerVM running SAP HANA (on Linux) with AIX/IBM i on up to 25% of the system. Power Systems H924 – 4U, 2× POWER9 SMT8, 8–12 cores per processor, up to 4 TiB DDR4 RAM, PowerVM running SAP HANA (on Linux) with AIX/IBM i on up to 25% of the system. Power Systems E950 – 4U, 2–4× POWER9 SMT8, 8–12 cores per processor, up to 16 TiB buffered DDR4 RAM Power Systems E980 – 1–4× 4U, 4–16× POWER9 SMT8, 8–12 cores per processor, up to 64 TiB buffered DDR4 RAM Hardware Management Console 7063-CR2 – 1U, 1× POWER9 SMT8, 6 cores, 64-128 GB DDR4 RAM. 
Penguin Computing Magna PE2112GTX – 2U, two-socket server for high performance computing using LaGrange processors. Manufactured by Wistron. IBM supercomputers Summit and Sierra The United States Department of Energy, together with Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, contracted IBM and Nvidia to build two supercomputers, Summit and Sierra, which are based on POWER9 processors coupled with Nvidia's Volta GPUs. The systems were slated to go online in 2017. Sierra is based on IBM's Power Systems AC922 compute node. The first racks of Summit were delivered to Oak Ridge National Laboratory on 31 July 2017. MareNostrum 4 – One of the three clusters in the emerging technologies block of the fourth MareNostrum supercomputer is a POWER9 cluster with Nvidia Volta GPUs. This cluster is expected to provide more than 1.5 petaflops of computing capacity when installed. The emerging technologies block of the MareNostrum 4 exists to test whether new developments might be "suitable for future versions of MareNostrum". Operating system support As with its predecessor, POWER9 is supported by FreeBSD, IBM AIX, IBM i, Linux (both running with and without PowerVM), and OpenBSD. Implementation of POWER9 support in the Linux kernel began with version 4.6 in March 2016. Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise (SLES), Debian Linux, and CentOS are supported. See also IBM Power microprocessors OpenBMC References External links IBM Power9 IBM Portal for OpenPOWER Computer-related introductions in 2017 IBM microprocessors OpenPower IP cores Parallel computing Power microprocessors Transactional memory 64-bit microprocessors
52364517
https://en.wikipedia.org/wiki/Lumina%20%28desktop%20environment%29
Lumina (desktop environment)
Lumina Desktop Environment, or simply Lumina, is a plugin-based desktop environment for Unix and Unix-like operating systems. It is designed specifically as a system interface for TrueOS, and systems derived from Berkeley Software Distribution (BSD) in general, but has been ported to various Linux distributions. History Created in 2012 by Ken Moore, Lumina was initially a set of extensions to Fluxbox, a stacking window manager for the X Window System. By late 2013, Moore had developed a graphical overlay for Fluxbox based on Qt4, and had created a utility for "launching applications and opening files". The codebase was integrated into the PC-BSD source repository by early 2014, and a port was added to the FreeBSD Ports collection in April 2014. The source code has since been moved to a separate GitHub repository "under the PC-BSD umbrella" and converted to use Qt5. Development also focused on replacing the Fluxbox core with a Qt-based window manager integrated with the Lumina desktop. The project avoids use of Linux-based tools or frameworks, such as D-Bus, Polkit, and systemd. Features The desktop and application menus are dynamically configured upon first being launched, as the desktop environment finds installed applications automatically to add to the menu and as a desktop icon. The default panel includes a Start menu, task manager, and system tray, and its location can be customized. Menus may be accessed via the Start menu or by right-clicking the mouse on the desktop background. Some features are specific to TrueOS, including hardware control of screen brightness (monitor backlight), preventing shutdown of an updating system, and integration with various TrueOS utilities. Utilities include: Insight, a file manager; File information, which reports a file's format and other details; and Lumina Open, a graphical utility to launch applications based on the selected file or folder. Version 1.4 included several new utilities. The PDF reader lumina-pdf is based on the poppler library. The Lumina Theme Engine replaced an earlier theme system; it enables a user to configure the desktop appearance and functionality, and ensures all Qt5 applications "present a unified appearance". Ports Lumina has been ported to various BSD operating systems and Linux distributions. These include: Berkeley Software Distribution TrueOS DragonFly BSD FreeBSD NetBSD OpenBSD kFreeBSD Linux distributions antiX Linux Arch Linux Debian Fedora 24 Gentoo Linux Manjaro Linux NixOS PCLinuxOS Void Linux Notes References External links Lumina Desktop Environment Free desktop environments
57285439
https://en.wikipedia.org/wiki/Marvell%20Tell
Marvell Tell
Marvell Tell III (born August 2, 1996) is an American football cornerback for the Indianapolis Colts of the National Football League (NFL). He played college football for the Southern California Trojans, and played high school football at Crespi Carmelite High School. Early years At Crespi Carmelite High School in Encino, California, Tell played free safety and had ten scholarship offers after a sophomore year in which he intercepted a pass and made over 50 tackles. He had to sit out part of his junior season due to a broken collarbone. His commitment to USC came at the Army All-American Bowl after his senior year at Crespi. Tell chose USC over Oregon, Texas A&M and Vanderbilt, all of which he visited. College career After playing sporadically during his freshman season at USC, Tell claimed a starting position in the defensive backfield for his sophomore season. Tell was named first-team All-Pac-12 in 2017, after his junior season. Towards the end of his senior season, Tell sustained an ankle sprain. At the end of the season, he was named an honorable mention on the 2018 All-Pac-12 team. Professional career At the 2019 NFL Combine, Tell drew attention with a 42-inch vertical jump. Scouts noted his speed and fluidity but had concerns about his intangibles. Tell was drafted by the Indianapolis Colts in the fifth round (144th overall) of the 2019 NFL Draft. In Week 9 against the Pittsburgh Steelers, Tell forced a fumble by running back Jaylen Samuels that was recovered by teammate Justin Houston in the 26–24 loss. On August 5, 2020, Tell announced he would opt out of the 2020 season due to the COVID-19 pandemic. On September 1, 2021, Tell was waived by the Colts and re-signed to the practice squad. On February 21, 2022, Tell re-signed with the Colts. References External links USC Trojans bio 1996 births Living people Players of American football from Pasadena, California American football safeties USC Trojans football players Indianapolis Colts players
60845831
https://en.wikipedia.org/wiki/OpenJS%20Foundation
OpenJS Foundation
The OpenJS Foundation is an organization that was founded in 2019 from a merger of the JS Foundation and the Node.js Foundation. It promotes the JavaScript and web ecosystem by hosting projects and funding activities that benefit the ecosystem. The OpenJS Foundation is made up of 38 open source JavaScript projects including Appium, Dojo, jQuery, Node.js, and webpack. Founding members included Google, Microsoft, IBM, PayPal, GoDaddy, and Joyent. History jQuery Foundation The jQuery Foundation was founded in 2012 as a 501(c)(6) non-profit organization to support the development of the jQuery and jQuery UI projects. jQuery was the most widely adopted JavaScript library according to web analysis as of 2012. Prior to the formation of the jQuery Foundation, the jQuery project had been a member of the Software Freedom Conservancy since 2009. The jQuery Foundation also advocated on behalf of web developers to improve web standards through its memberships in the W3C and Ecma TC39 (JavaScript). It created a standards collaboration team in 2011 and joined the W3C in 2013. In 2016, the Dojo Foundation merged with the jQuery Foundation; the combined organization subsequently rebranded itself as the JS Foundation and became a Linux Foundation project. The JS Foundation (legally JSFoundation, Inc) aimed to help the development and adoption of important JavaScript technology. The foundation worked to facilitate collaboration within the JavaScript development community to "foster JavaScript applications and server-side projects by providing best practices and policies." Node.js Foundation The Node.js Foundation was created in 2015 as a Linux Foundation project to accelerate the development of the Node.js platform. The Node.js Foundation operated under an open-governance model to heighten participation amongst vendors, developers, and the general Node.js community. Its structure gave enterprise users the assurance of "innovation and continuity without risk." Its growth led to new initiatives such as the Node Security Platform, a tool allowing continuous security monitoring for Node.js apps, and Node Interactive, "a series of professional conferences aimed at today's average Node.js user." Node.js reported "3.5 million users and an annual growth rate of 100 percent", and the Node.js Foundation was reported as being among The Linux Foundation's fastest growing projects. OpenJS Foundation In 2019, the Node.js Foundation merged with the JS Foundation to form the new OpenJS Foundation, with a stated mission to foster healthy growth of the JavaScript and web ecosystem as a whole. Projects The Dojo Foundation (prior to 2016) was most notably home to the Dojo Toolkit. It was also host to Lodash, RequireJS, and other projects created by the Dojo community. The jQuery Foundation (2012–2016) was host to the original jQuery projects such as jQuery, jQuery UI, Sizzle and QUnit. In 2015 the Grunt project joined and Globalize was launched. In 2016, the ESLint project joined. The JS Foundation (2016–2019) attracted additional projects; in 2016, Appium joined and Node-RED was contributed by IBM. References External links Free software project foundations in the United States Linux Foundation projects Organizations established in 2019 Non-profit organizations based in San Francisco JavaScript
2281413
https://en.wikipedia.org/wiki/IBM%20Planning%20Analytics
IBM Planning Analytics
IBM Planning Analytics powered by TM1 (formerly IBM Cognos TM1, formerly Applix TM1, formerly Sinper TM/1) is a business performance management software suite designed to implement collaborative planning, budgeting and forecasting solutions, interactive "what-if" analyses, as well as analytical and reporting applications. The database server component of the software platform retains its historical name TM1. Data is stored in in-memory multidimensional OLAP cubes, generally at the "leaf" level, and consolidated on demand. In addition to data, cubes can include encoded rules which define any on-demand calculations. By design, computations (typically aggregation along dimensional hierarchies using weighted summation) on the data are performed in near real-time, without the need to precalculate, due to a highly performant database design and calculation engine. These properties also allow the data to be updated frequently and by multiple users. TM1 is an example of a class of software products which implement the principles of the functional database model. The IBM Planning Analytics platform, in addition to the TM1 database server, includes an ETL tool, server management and monitoring tools and a number of user front ends which provide capabilities designed for common business planning and budgeting requirements, including workflow, adjustments, commentary, etc. The vendor currently offers the software both as a standalone on-premises product and in the SaaS model on the cloud. History While working at Exxon, Lilly Whaley suggested developing a planning system using the IBM mainframe time sharing option (TSO) to replace the previous IMS-based planning system and thereby significantly reduce running costs. Manuel "Manny" Perez, who had been in IT for most of his career, took it upon himself to develop a prototype. He realized right away that, in order to provide the necessary multidimensionality and interactivity, the data structures would have to be kept in computer memory rather than on disk. The business potential of the planning system Perez had developed became apparent to him, and he began to explore the possibilities of commercializing it. In early 1981, the IBM personal computer had not yet been announced and the Apple II was not in significant use at corporations, so Perez initially looked to implement the system on a public mainframe timesharing service. The IBM personal computer was then announced, providing a low-cost development environment which Perez was quick to take advantage of. When VisiCalc was released, Perez became convinced that it was the ideal user interface for his envisioned product: the functional database. With his friend Jose Sinai, he formed the Sinper Corporation in early 1983 and released his initial product, TM/1 (the "TM" in TM1 stands for "Table Manager"). Sinper was purchased by Applix in 1996; Applix was purchased by Cognos in late 2007, which was itself acquired only months later by IBM. With its flagship TM1 product line, Applix was the purest OLAP vendor among publicly traded independent BI vendors prior to OLAP industry consolidation in 2007, and had the greatest growth rate. On December 16, 2016, IBM released a rebranded and expanded version of the software (IBM Planning Analytics Local 2.0 'powered by' IBM TM1) with a 'restarted' version numbering. The data server component is still referred to as TM1 and retains numbering continued from prior versions, so Planning Analytics version 2.x includes TM1 version 11.x.
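The on-demand consolidation described in the introduction above can be illustrated with a small sketch. The following C program is an illustration only, not TM1 code, and all of its names are hypothetical; it stores values at leaf elements and computes a consolidated element's value on request by recursively summing its children multiplied by their weights:

#include <stdio.h>

/* A hypothetical dimension element: either a leaf holding a stored value,
   or a consolidation whose value is derived from weighted children. */
typedef struct Element {
    const char *name;
    double leaf_value;               /* used only when n_children == 0 */
    const struct Element **children;
    const double *weights;           /* weight per child, e.g. -1.0 for costs */
    int n_children;
} Element;

/* Consolidated values are never stored; they are computed when requested. */
static double value_of(const Element *e) {
    if (e->n_children == 0)
        return e->leaf_value;
    double total = 0.0;
    for (int i = 0; i < e->n_children; i++)
        total += e->weights[i] * value_of(e->children[i]);
    return total;
}

int main(void) {
    Element sales = { "Sales", 1000.0, NULL, NULL, 0 };
    Element costs = { "Costs", 600.0, NULL, NULL, 0 };
    const Element *children[] = { &sales, &costs };
    const double weights[] = { 1.0, -1.0 };       /* Profit = Sales - Costs */
    Element profit = { "Profit", 0.0, children, weights, 2 };

    printf("%s = %.1f\n", profit.name, value_of(&profit));  /* prints "Profit = 400.0" */
    return 0;
}

TM1 applies the same principle across full multidimensional cubes and rule-defined calculations, which is why consolidated values never need to be precalculated or stored and why updates by multiple users can be reflected immediately.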
Current Components
TM1 Server
Planning Analytics Workspace (a.k.a. PAW) - main web front end and development environment
Planning Analytics for Microsoft Excel (a.k.a. PAfE, formerly PAx) - main Excel front end
TM1 Web - legacy web front end
TM1 Applications - legacy web front end
TM1 Perspectives - legacy Excel front end
TM1 Architect - legacy standalone Windows front end and development environment
TM1 Performance Modeler - legacy development environment
See also Business intelligence Comparison of OLAP servers Functional Database Model References External links Product website The Official History of TM1 Patent for the database design Monitor for TM1 Server Beyond the Spreadsheet: The Story of TM1 — Documentary Film, 2020 IBM software Online analytical processing
27684052
https://en.wikipedia.org/wiki/LLDB%20%28debugger%29
LLDB (debugger)
The LLDB Debugger (LLDB) is the debugger component of the LLVM project. It is built as a set of reusable components which extensively use existing libraries from LLVM, such as the Clang expression parser and LLVM disassembler. LLDB is free and open-source software under the University of Illinois/NCSA Open Source License, a BSD-style permissive software license. Since v9.0.0, it has been relicensed to the Apache License 2.0 with LLVM Exceptions. Current state LLDB supports debugging of programs written in C, Objective-C, and C++. The Swift community maintains a version which adds support for the language. It is known to work on macOS, Linux, FreeBSD, NetBSD and Windows, and supports i386, x86-64, and ARM instruction sets. LLDB is the default debugger for Xcode 5 and later. Android Studio also uses LLDB for debugging. LLDB can be used from other IDEs, including Visual Studio Code, Eclipse, and CLion. Examples of commands An example session Consider the following incorrect program written in C:

#include <stdio.h>

int main(void) {
    char msg = "Hello, world!\n";
    printf("%s", msg);
    return 0;
}

Using the clang compiler on macOS, the code above can be compiled using the -g flag to include appropriate debug information in the generated binary—including the source code—making it easier to inspect it using LLDB. Assuming that the file containing the code above is named test.c, the command for the compilation could be:

$ clang -g test.c -o test

And the binary can now be run:

$ ./test
Segmentation fault

Since the example code, when executed, generates a segmentation fault, lldb can be used to inspect the problem:

$ lldb test
(lldb) target create "test"
Current executable set to 'test' (x86_64).
(lldb) run
Process 70716 launched: '/Users/wikipedia/test' (x86_64)
Process 70716 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0xffffff90)
    frame #0: 0x00007fff6c7c46f2 libsystem_platform.dylib`_platform_strlen + 18
libsystem_platform.dylib`_platform_strlen:
->  0x7fff6c7c46f2 <+18>: pcmpeqb xmm0, xmmword ptr [rdi]
    0x7fff6c7c46f6 <+22>: pmovmskb esi, xmm0
    0x7fff6c7c46fa <+26>: and    rcx, 0xf
    0x7fff6c7c46fe <+30>: or     rax, -0x1
Target 0: (test) stopped.

The problem occurs when calling the function strlen, but we can run a backtrace to identify the exact line of code that is causing the problem:

(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0xffffff90)
  * frame #0: 0x00007fff6c7c46f2 libsystem_platform.dylib`_platform_strlen + 18
    frame #1: 0x00007fff6c66b16a libsystem_c.dylib`__vfprintf + 8812
    frame #2: 0x00007fff6c6911c3 libsystem_c.dylib`__v2printf + 475
    frame #3: 0x00007fff6c668e22 libsystem_c.dylib`vfprintf_l + 54
    frame #4: 0x00007fff6c666f72 libsystem_c.dylib`printf + 174
    frame #5: 0x0000000100000f6d test`main at test.c:5:2
    frame #6: 0x00007fff6c5dc3d5 libdyld.dylib`start + 1
(lldb) source list
   3    int main(void) {
   4        char msg = "Hello, world!\n";
   5        printf("%s", msg);
   6        return 0;
   7    }

From the line beginning with frame #5, LLDB indicates that the error is at line 5 of test.c. Running source list, we see that this refers to the call to printf. According to the exception code EXC_BAD_ACCESS from the backtrace, strlen is trying to read from a region of memory it does not have access to by dereferencing an invalid pointer. Returning to the source code, we see that the variable msg is of type char but contains a string instead of a character.
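The suspect variable can also be inspected directly from the stopped process using standard LLDB commands such as frame select and frame variable (output omitted here):

(lldb) frame select 5
(lldb) frame variable msg

frame select 5 switches to the frame for main, and frame variable msg prints msg together with its declared type, confirming that it holds a single char value rather than a pointer to the string.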
To fix the problem, we modify the code to indicate that msg is a pointer to a string of chars by adding the * operator:

#include <stdio.h>

int main(void) {
    char* msg = "Hello, world!\n";
    printf("%s", msg);
    return 0;
}

After recompiling and running the executable again, LLDB now gives the correct result:

(lldb) target create "test"
Current executable set to 'test' (x86_64).
(lldb) run
Process 93319 launched: '/Users/wikipedia/test' (x86_64)
Hello, world!
Process 93319 exited with status = 0 (0x00000000)
(lldb)

LLDB runs the program, which prints the output of printf to the screen. After the program exits normally, LLDB indicates that the process running the program has completed, and prints its exit status. See also GNU Debugger Microsoft Visual Studio Debugger References External links Supported LLDB Versions in Qt Creator Debuggers Free software programmed in C++ Lua (programming language)-scriptable software Software using the NCSA license Software using the Apache license Video game development software for Linux
11668692
https://en.wikipedia.org/wiki/Tip%20%28Unix%20utility%29
Tip (Unix utility)
tip is a Unix utility for establishing a terminal connection to a remote system via a modem. It is commonly associated with BSD Unix, as well as other UNIX operating systems such as Sun's Solaris. It was originally included with 4.2BSD. tip is referred to in the Solaris documentation as the preferred terminal emulator to connect to a Sun workstation's serial port for maintenance purposes, for example, to configure the OpenPROM firmware. Basics tip is one of the commands referenced in the Expect reference book by Don Libes. The tip command line options are as follows:

tip [-v] [-speed-entry] (<hostname> | <phone-number> | <device>)

Use ~. to exit. Use ~# to break (Stop-A on a Sun keyboard). Use ~? to list all commands. Examples This Expect script is a simple example that establishes a terminal session:

spawn tip modem
expect "connected"
send "ATD$argc\r"
set timeout 30
expect "CONNECT"

As tip does not have the built-in logging capabilities that Minicom has, some other means is needed to record the session. One way is to use script:

$ script -a install.log
Script started, file is install.log
$ tip hardwire
[tip session takes place.]
$ exit
Script done, file is install.log
$

In the above example, run on a Sun SPARC 20 workstation running Solaris 9, we first create a log file called install.log in the current directory using script and then tell tip to use serial port B (via the hardwire entry). See also cu (Unix utility), a similar command References External links NetBSD source to tip Unix software Communication software Terminal emulators
56380195
https://en.wikipedia.org/wiki/Browser%20isolation
Browser isolation
Browser isolation is a cybersecurity model which aims to physically isolate an internet user's browsing activity (and the associated cyber risks) away from their local networks and infrastructure. Browser isolation technologies approach this model in different ways, but they all seek to achieve the same goal: effective isolation of the web browser and a user's browsing activity as a method of securing web browsers from browser-based security exploits, as well as web-borne threats such as ransomware and other malware. When a browser isolation technology is delivered to its customers as a cloud-hosted service, this is known as remote browser isolation (RBI), a model which enables organizations to deploy a browser isolation solution to their users without managing the associated server infrastructure. There are also client-side approaches to browser isolation, based on client-side hypervisors, which do not depend on servers to isolate their users' browsing activity and the associated risks; instead, the activity is virtually isolated on the local host machine. Client-side solutions break the 'security through physical isolation' model, but they do allow the user to avoid the server overhead costs associated with remote browser isolation solutions. Mechanism Browser isolation typically leverages virtualization or containerization technology to isolate the user's web browsing activity away from the endpoint device, significantly reducing the attack surface for rogue links and files. It is a way to isolate web browsing hosts and other high-risk behaviors away from mission-critical data and infrastructure, and a process to physically isolate a user's browsing activity away from local networks and infrastructure, containing malware and browser-based cyber-attacks in the process while still granting full access to web content. Market In 2017, the American research and advisory firm Gartner identified remote browser (browser isolation) as one of the top technologies for security. The same Gartner report also forecast that more than 50% of enterprises would actively begin to isolate their internet browsing to reduce the impact of cyber attacks over the coming three years. According to Market Research Media, the remote browser isolation (RBI) market is forecast to reach $10 billion by 2024, growing at a CAGR of 30% in the period 2019–2024. Comparison to other techniques Unlike traditional web security approaches such as antivirus software and secure web gateways, browser isolation is a zero-trust approach which does not rely on filtering content based on known threat patterns or signatures. Traditional approaches cannot handle zero-day attacks because the threat patterns are unknown. Instead, the browser isolation approach treats all websites and other web content that have not been explicitly whitelisted as untrusted, and isolates them from the local device in a virtual environment such as a container or virtual machine. Web-based files can be rendered remotely so that end users can access them within the browser, without downloading them. Alternatively, files can be sanitized within the virtual environment, using file cleansing technologies such as Content Disarm & Reconstruction (CDR), allowing for secure file downloads to the user device. Effectiveness Typically, browser isolation solutions provide their users with 'disposable' (non-persistent) browser environments; once the browsing session is closed or times out, the entire browser environment is reset to a known good state or simply discarded.
Any malicious code encountered during that session is thus prevented from reaching the endpoint or persisting within the network, regardless of whether any threat is detected. In this way, browser isolation proactively combats known, unknown and zero-day threats, effectively complementing other security measures and contributing to a defense-in-depth, layered approach to web security. History Browser isolation began as an evolution of the 'security through physical isolation' cybersecurity model and is also known as the air-gap model by security professionals, who have been physically isolating critical networks, users and infrastructures for cybersecurity purposes for decades. Although techniques to breach 'air-gapped' IT systems exist, they typically require physical access or close proximity to the air-gapped system in order to be effective. The use of an air gap makes infiltration into systems from the public internet extremely difficult, if not impossible, without physical access to the system. The first commercial browser isolation platforms were leveraged by the National Nuclear Security Administration at Lawrence Livermore National Laboratory, Los Alamos National Laboratory and Sandia National Laboratories in 2009, when browser isolation platforms based on virtualization were used to deliver non-persistent virtual desktops to thousands of federal government users. In June 2018, the Defense Information Systems Agency (DISA) announced a request for information for a "cloud-based internet isolation" solution as part of its endpoint security portfolio. As the RFI puts it, "the service would redirect the act of internet browsing from the end user's desktop into a remote server, external to the Department of Defense Information Network." At the time, the RFI was the largest known project for browser isolation, seeking "a cloud based service leveraging concurrent (simultaneous) use licenses at ~60% of the total user base (3.1 Million users)." See also Malware Internet safety Browser security Antivirus software References Web browsers Web security exploits Internet security
439982
https://en.wikipedia.org/wiki/Mumbai%20Police
Mumbai Police
The Mumbai Police (Marathi: मुंबई पोलीस, IAST: Mumbaī Pulīs, formerly Bombay Police) is the police department of the city of Mumbai, Maharashtra. It is a part of Maharashtra Police and has the primary responsibilities of law enforcement and investigation within the limits of Mumbai. The force's motto translates into English as "To protect Good and to destroy Evil". It is headed by the Commissioner of Mumbai Police, an IPS officer in the rank of Additional Director General, and each district is headed by a Deputy Commissioner of Police in the rank of Superintendent of Police (excluding jails, which are headed by Inspectors General). Each police station is headed by a Senior Inspector called the Station House Officer (SHO). History Origins During the 17th century (until 1655), the area of present-day Mumbai was under Portuguese control. The Portuguese formed a basic law enforcement structure in this area with the establishment of a police out-post in 1661. The origins of the present-day Mumbai police can be traced back to a militia organised by Gerald Aungier, the then Governor of Bombay, in 1669. This Bhandari Militia was composed of around 500 men and was headquartered at Mahim, Sevree and Sion. In 1672, judicial oversight of police decisions by the courts was introduced, although none of the judges had any actual legal training. The situation remained unchanged through the Maratha wars, and by 1682 policing remained stagnant: there was only one ensign for the whole Bhandari militia, and only three sergeants and two corporals. Creation and early days On 29 March 1780, the office of the Lieutenant of Police was dissolved and the office of Deputy of Police was created. James Tod, the then Lieutenant of Police, was appointed as the first Deputy of Police on 5 April 1780. He was tried and dismissed for corruption in 1790. Subsequently, the designation was changed to "Deputy of Police and High Constable". In 1793, Act XXXIII, Geo. III was promulgated. The post of Deputy of Police was abolished and a post of Superintendent of Police was created in its place, with a Deputy Superintendent of Police assisting him. Simon Halliday was the first Superintendent of Police, and held the post until 1808. During this time, a thorough revision and re-arrangement of policing in the area outside the Fort was carried out. The troublesome area known as "Dungree and the Woods" was split up into 14 police divisions, each division being staffed by two English constables and a varying number of peons (not exceeding 130 for the whole area), who were to be stationary in their respective charges and responsible for dealing with all illegal acts committed within their limits. Post-1857 After the cementing of British rule in India following the 1857 war of Indian independence, the three Presidency towns of Bombay, Calcutta and Madras were given Commissioners of Police in 1864. On 14 December 1864, Sir Frank Souter was appointed the first Police Commissioner of Bombay. He remained in office for 24 years, till 3 July 1888. In the same year (1864), Khan Bahadur Sheikh Ibrahim Sheikh Imam became the first Indian appointed to a police officer's post. In 1896 the Commissioner's office moved to an Anglo-Gothic revival building, which it still occupies to this day. The Police Headquarters building is a protected heritage site. After 1947 After independence, many changes to the Bombay Police were instituted. On 15 August 1947, J.S.
Bharucha became the first Indian head of the Bombay Police, taking over from the last British Commissioner, Mr. A.E. Caffin. A dog squad was set up in 1965. Computers were first used by the Bombay police in 1976. A Narcotics Cell and an anti-terrorist special operations squad were created in 1989. The service was renamed to Mumbai Police in 1995, following the renaming of Bombay to Mumbai. In 1995, the control room was computerised, and finally, in 1997, the Mumbai Police went online. Modernisation and present day A massive modernization of the Mumbai Police took place in 2005. New vehicles, guns and electronic equipment were procured for police use. The Tourist Squad was also created to patrol the beaches of Mumbai. On 30 May 2009 the Maharashtra government in Mumbai set up a police station dedicated to tackling cyber crime. It is the third such facility in India after Bangalore and Hyderabad. The dedicated police station will now register first information reports (FIRs) on its own and investigate offences pertaining to cyberspace. It is not clear how people abroad may report to Mumbai Cybercell. The police station will take care of all cyber cases in the city including that of terror e-mails. The existing Cyber Crime Investigation Cell of the city police probes cyber offences, but the FIRs are registered in local police stations depending on the site of the offence. A specially trained team of over 25 policemen, headed by an Assistant Commissioner of Police (ACP), were selected for the new job. The facility will function under the supervision of Deputy Commissioner of Police (Preventive) and Joint Commissioner of Police (Crime). Headquarters The Mumbai Police Headquarters are in a Grade II-A listed heritage building that was built in 1894 and designed by John Adams, who also designed the Royal Bombay Yacht Club. It is located opposite Crawford Market in South Mumbai, a mile away from the Victoria Terminus. The construction work started on 17 November 1894 and finished two years later on 24 December 1896. The building was formally opened on 1 January 1897. The architectural style of the building is Gothic Revival. In contrast to the Maharashtra Police Headquarters in Fort, which uses blue basalt and was built some two decades earlier, this building uses yellow basalt. The building underwent a major restoration in 2017 for the first time in its 120-year history. In 2018, it was announced that a police museum funded by Tata Trusts would open in the building. Since then, there have been no further developments. Organisation The Mumbai Police Department is headed by a Police Commissioner, who is an IPS officer. The Mumbai Police comes under the state home department through Maharashtra Police. The city is divided into Twelve police zones and Twenty Five traffic police zones, each headed by a Deputy Commissioner of Police. The Traffic Police is a semi-autonomous body under the Mumbai Police. The department holds several programs for the welfare of its officials including Retirement Planning Workshop. Geographical division Mumbai police is broadly divided into five regions namely Central, North, South, East and West. For administrative purposes, each region is subdivided into 3 to 4 zones. Each zone contains 3 to 4 police stations. Each zone is commanded by a Deputy Commissioner of Police (DCP). Apart from the 12 zones, there is also an additional Port zone. Police stations under the Port zone keep vigil on the Mumbai Port and container terminals in Mumbai. 
There are a total of 91 police stations in the jurisdiction of Mumbai Police. Every police station has a Police Inspector who is the in-charge officer of the station. Subunits Mumbai Police is divided into the following units: Local Police Special Unit Service Crime Branch or Cyber Cell is a wing of Mumbai Police, India, to deal with computer crimes, and to enforce provisions of India's Information Technology Law, namely, The Information Technology Act, 2000, and various cyber crime related provisions of criminal laws, including the Indian Penal Code, and the Companies Act of India subsection on IT-Sector responsibilities of corporate measures to protect cybersecurity. Cyber Crime Investigation Cell is a part of Crime Branch, Criminal Investigation Department of the Mumbai Police. Commando Force Detection Unit (Mumbai Encounter Squad) Anti Terrorist Squad Traffic Police Administration Social Service Cell Narcotics Cell Wireless Cell Local Armed Police Anti-Extortion Cell Modus Operandi Bureau Missing Persons Bureau Special Branch Intelligence Unit Protection & Security Riot Control Police Economic Offenses Wing Juvenile AID Protection Unit Quick Response Team Force One Each of these units has a Chief of the rank of Joint Commissioner of Police. Hierarchy Recruitment Those who join the police department through the subordinate services examination of the Maharashtra Public Service Commission enter at the lowest ranks of the force. Their starting rank is that of a Police Constable. Those who join the police force through the combined competitive examination of the Maharashtra Public Service Commission hold a starting rank of Sub Inspector or Deputy Superintendent of Police of the Maharashtra Police Service. Civil servants who join the police force through the civil service examination conducted by the UPSC hold a starting rank of Assistant Superintendent of Police of the Indian Police Service cadre. Generally, IPS officers rise to the highest rank of Director General. The Commissioner of Police of Mumbai, an IPS officer, holds the rank of Additional Director General of Police. High-profile cases 26 November 2008 Mumbai attacks Anti-Terrorism Squad Chief Hemant Karkare, Additional Commissioner of Police Ashok Kamte and Encounter specialist Vijay Salaskar were among the policemen who fell to the bullets of the terrorists. The then Joint Commissioner of Mumbai Crime Branch, Rakesh Maria, led the response to the attack under the leadership of Police Commissioner Hasan Gafoor. Ramesh Mahale, then an officer with the crime branch, investigated the case and brought the lone arrested militant, Ajmal Kasab, to justice. Police Commissioner Hasan Gafoor was later removed from his post, and Mahale subsequently resigned over a murder case investigation he was leading. In the following year, as a response to these attacks, a specialised counter-terrorism unit, Force One, was formed and commissioned on 24 November 2009, two days before the anniversary of the 26/11 terror attacks. A committee was appointed to look into the failures of the police pertaining to the terror attack. The Ram Pradhan Committee, as it came to be known, furnished a report recommending a series of improvements and reforms. The State Government of Maharashtra, however, never had the report tabled in the legislature, fearing a fallout over strictures passed in the report.
A Public Interest Litigation was filed by social activist Ketan Tirodkar to demand equal justice for all the police who were killed in the terror attack, especially the members of the Bomb Disposal Squad of Mumbai Police. During the hearing of the petition, the Government informed the High Court that the Federal Government of India had rejected the proposal to award the Bomb Disposal Squad of the city police for their contribution in defusing grenades in the terror attack. Sheena Bora murder case Sheena Bora, an executive working for Metro One based in Mumbai, went missing on 24 April 2012. In August 2015, the Mumbai Police received a tip-off from an unknown man claiming that Sheena Bora had been murdered. After they got in touch with their counterparts in Pune, they arrested her mother, Indrani Mukerjea, her stepfather Sanjeev Khanna, and her mother's chauffeur, Shyamvar Pinturam Rai, for allegedly abducting and killing her and subsequently burning her corpse. They also arrested Indrani's husband, Peter Mukerjea, in connection with the case. Rai was later allowed to turn approver in the case after being pardoned by the Bandra Magistrate Court in Mumbai. As of May 2017, Indrani was lodged in Byculla Women's Prison, while Peter and Sanjeev were held in Arthur Road Jail in Mumbai. Equipment Much of the equipment for the Mumbai Police is manufactured indigenously by the Indian Ordnance Factories controlled by the Ordnance Factories Board, Ministry of Defence, Government of India. Weapons such as Glock pistols are imported from Austria. These pistols were first imported for the Anti-terrorist Squad in Mumbai when it was formed in 2004. Weapons Rifles SMLE Mk III*, Ishapore 2A1, 9 mm sub-machine gun carbine 1A1, 7.62 mm 1A1, 7.62 mm assault rifle, 38 mm multi-shot riot gun, INSAS 5.56 mm, AK-47 (247 in total), FN-FAL (250), M4, M107 anti-materiel rifle and SWAT equipment; MP5 German automatic sub-machine guns have also been ordered. Pistols Glock pistol, Pistol Auto 9mm 1A, Smith & Wesson M&P. Detailed list of Mumbai police's vehicles 72 speed boats have also been ordered. Uniform Peaked caps are worn with an orange band and a crown that is less stiff, such that it drops downwards. Khaki short sleeve shirts and long pants are worn by most members. Some women may wear sarees if they prefer. The patch of the police force is visible too. Mumbai police in popular culture Because Bollywood, India's Hindi language film industry, is primarily based in Mumbai, the Mumbai police has been frequently portrayed in films. Some of the prominent ones are listed below: Company (2002) Dum (2003) Aan: Men at Work (2004) Ab Tak Chhappan (2004) Black Friday (2004) Khakee (2004) Shootout at Lokhandwala (2007) A Wednesday (2008) Mumbai Meri Jaan (2008) Department (2012) Talaash (2012) Shootout at Wadala (2013) Singham Returns (2014) Ab Tak Chhappan 2 (2015) Simmba (2018) Slumdog Millionaire (2008) Darbar (2020) Mumbai Saga (2021) Sooryavanshi (2021) Most of these films are based on the operational groups most commonly known as Encounter Squads. Officers such as Pradeep Sharma, Vijay Salaskar, Praful Bhosale and Ravindra Angre have previously headed these squads. Junior officers such as Hemant Desai, Ashok Khot, Sachin Waze, Daya Nayak and Uttam Bhosale assisted them.
Honours The Ashok Chakra, India's highest peacetime gallantry award, was conferred posthumously upon two Mumbai Police officers – Hemant Karkare and Ashok Kamte – who laid down their lives in the service of the nation during the 2008 Mumbai attacks. Junior officer Vijay Salaskar was also posthumously awarded the Ashok Chakra. See also Mumbai History of Mumbai Mumbai Fire Brigade Maharashtra Police Literature Kadam, B. S. Sri; Socio-Historical Study Of Police Administration in Bombay Presidency (1861 to 1947); Kolhapur 1993 (dissertation, Shivaji University) Kennedy, M. Notes On Criminal Classes in the Bombay Presidency Appendices regarding some Foreign Criminals who occasionally visit the Presidency: Including Hints on the Detection of Counterfeit Coin; Bombay 1908 Edwardes, Stephen M. (Commissioner of Police); The Bombay City Police: A Historical Sketch, 1672–1916; Bombay and elsewhere, 1923 Edwardes, Stephen M.; Crime in India: Brief Review of the more Important Offences included in the Annual Criminal Returns with Chapters on Prostitution & Miscellaneous Matters; Oxford and elsewhere, 1924 Statistics: printed in the Annual Report of Police for the Town and Island of Bombay; current monthly statistics from the Mumbai Police References External links Official website of the Mumbai Police Mumbai Traffic Police, Official website Maharashtra Police Organisations based in Mumbai 1864 establishments in British India
42714503
https://en.wikipedia.org/wiki/ShareX
ShareX
ShareX is a free and open-source screenshot and screencast program for Microsoft Windows. It is published under the GNU General Public License. The project's source code is hosted on GitHub. It is also available on the Microsoft Store and Steam. Features Screenshots ShareX can be used to capture full-screen or partial screenshots, such as rectangle capture and window capture, which can be exported into various image formats. It can also record animated GIF files and video using FFmpeg. An included image editor lets you annotate captured screenshots or modify them with borders, image effects, watermarks, etc. It is also possible to use the editor to draw on top of your windows or desktop before taking the screenshot. Sharing After capture, a screenshot can be automatically saved as an image file, attached to an email, sent to a printer, copied to the clipboard, or uploaded to a remote host such as one of many popular image hosting services or via FTP. If the image is uploaded to a remote host, the URL it generates can be copied to the clipboard. Dragging other file types into the program will upload them to a destination based on type, such as a text file being saved to Pastebin and a ZIP file saved to Dropbox. Other tools There are a variety of desktop image capabilities, including a screen color picker and selector, a checksum tool (hash check), an onscreen ruler, an image combiner, thumbnails for images and video, and many more. The program also includes some basic automation. For example, you can capture a screenshot, add a border and watermark, and then save it to a specific folder. Development Work on a project called ZScreen began in 2007, hosted on SourceForge, and moved to Google Code in 2008. In 2010, a parallel project called ZUploader was started to rewrite ZScreen's core from scratch. By 2012, all of ZScreen's features had been ported to ZUploader, which was subsequently repackaged and released as ShareX. In 2013, the project was moved to GitHub due to Google Code dropping support for hosting downloads. Reviews TechRadar gave the program 4.5 out of 5 stars and listed it among their 2021 Best Screen Recorders. The Guardian's 2018 article on the "best replacement for the Windows 10 Snipping Tool" lists ShareX first, with the caveat that it is powerful and probably "overkill for most users". See also Comparison of screencasting software References External links ShareX Review Screenshot software Screencasting software Free utility software Free software programmed in C Sharp Windows-only free software Free raster graphics editors
999990
https://en.wikipedia.org/wiki/Natural%20Sciences%20%28Cambridge%29
Natural Sciences (Cambridge)
The Natural Sciences Tripos (NST) is the framework within which most of the science at the University of Cambridge is taught. The tripos includes a wide range of Natural Sciences from physics, astronomy, and geoscience, to chemistry and biology, which are taught alongside the history and philosophy of science. The tripos comprises several courses within the University of Cambridge's Tripos system. It is known for its broad range of study in the first year, in which students cannot study just one discipline, but instead must choose three courses in different areas of the natural sciences and one in mathematics. As is traditional at Cambridge, the degree awarded after Part II is a Bachelor of Arts (BA). A Master of Natural Sciences degree (MSci) is available to those who take the optional Part III. The tripos was established in the 19th century. Teaching Teaching is carried out by 16 different departments. Subjects offered in Part IA in 2019 are Biology of Cells, Chemistry, Computer Science, Evolution and Behaviour, Earth Sciences, Materials Science, Mathematics, Physics, Physiology of Organisms and Mathematical Biology; students must take three experimental subjects and one mathematics course. There are three options for the compulsory mathematics element in IA: "Mathematics A", "Mathematics B" and "Mathematical Biology". From 2020, Computer Science will no longer be an option in the natural sciences course. Students specialize further in the second year (Part IB) of their Tripos, taking three subjects from a choice of twenty, and completely in their third year (Part II) in, for example, genetics or astrophysics, although general third year courses do exist – Biomedical and Biological Sciences for biologists and Physical Sciences for chemists, physicists, etc. Fourth year options (Part III) are available in a number of subjects, usually have an entry requirement of obtaining a 2:1 or a First in second year Tripos Examinations, and are applied for before the commencement of the third year. As of 2008, subjects with an available Part III option are: Astrophysics; Biochemistry; Chemistry; Earth Sciences; Materials Science and Metallurgy; and Experimental and Theoretical Physics. The tripos is delivered by sixteen different departments, including: The Department of Chemistry The Department of Chemical Engineering and Biotechnology The Department of Genetics The Department of Physics The Department of Astronomy The Department of Biochemistry The Department of Pharmacology The Department of Plant Sciences The Department of Physiology, Development and Neuroscience The Department of Zoology The Department of Psychology The Department of Computer Science and Technology The Department of Earth Sciences The Department of Materials Science and Metallurgy The Department of History and Philosophy of Science Motivation The University of Cambridge believes that the course's generalisation, rather than specialisation, gives its students an advantage. First, it allows students to experience subjects at university level before specialising. Second, many modern sciences exist at the boundaries of traditional disciplines, for example, applying methods from a different discipline.
Third, this structure allows other scientific subjects, such as Mathematics (traditionally a very strong subject at Cambridge), Medicine and the History and Philosophy of Science (and previously Computer Science, before it was removed for 2020 entry), to link with the Natural Sciences Tripos so that once, say, the two-year Part I of the Medical Sciences tripos has been completed, one can specialise in another biological science in Part II during one's third year, and still come out with a science degree specialised enough to move into postgraduate studies, such as a PhD. Student enrolment As a result of this structure, the Natural Sciences Tripos has by far the greatest number of students of any Tripos. Undergraduates who are reading for the NST in order to gain their degrees are colloquially known in University slang as 'NatScis' (pronounced "Nat-Skis"), being broadly nicknamed physical science ('phys') or biological science ('bio') NatScis, according to their course choices. (Of course, many students choose both physical and biological options in first year.) The split tends to be about 50:50 between the physical and biological sciences. In 2018, 2594 students applied and 577 were admitted to the Natural Sciences Tripos. References Academic courses at the University of Cambridge
43389
https://en.wikipedia.org/wiki/Routing%20Information%20Protocol
Routing Information Protocol
The Routing Information Protocol (RIP) is one of the oldest distance-vector routing protocols which employs the hop count as a routing metric. RIP prevents routing loops by implementing a limit on the number of hops allowed in a path from source to destination. The largest number of hops allowed for RIP is 15, which limits the size of networks that RIP can support. RIP implements the split horizon, route poisoning, and holddown mechanisms to prevent incorrect routing information from being propagated. In RIPv1, routers broadcast updates with their routing table every 30 seconds. In the early deployments, routing tables were small enough that the traffic was not significant. As networks grew in size, however, it became evident there could be a massive traffic burst every 30 seconds, even if the routers had been initialized at random times. In most networking environments, RIP is not the preferred choice of routing protocol, as its time to converge and scalability are poor compared to EIGRP, OSPF, or IS-IS. However, it is easy to configure, because RIP does not require any parameters, unlike other protocols. RIP uses the User Datagram Protocol (UDP) as its transport protocol, and is assigned the reserved port number 520. Development of distance-vector routing Based on the Bellman–Ford algorithm and the Ford–Fulkerson algorithm, distance-vector routing protocols started to be implemented from 1969 onwards in data networks such as the ARPANET and CYCLADES. The predecessor of RIP was the Gateway Information Protocol (GWINFO), which was developed by Xerox in the mid-1970s to route its experimental network. As part of the Xerox Network Systems (XNS) protocol suite, GWINFO transformed into the XNS Routing Information Protocol. This XNS RIP in turn became the basis for early routing protocols, such as Novell's IPX RIP, AppleTalk's Routing Table Maintenance Protocol (RTMP), and the IP RIP. The 1982 Berkeley Software Distribution of the UNIX operating system implemented RIP in the routed daemon. The 4.2BSD release proved popular and became the basis for subsequent UNIX versions, which implemented RIP in the routed or gated daemon. Ultimately, RIP had been extensively deployed before the standard, written by Charles Hedrick, was passed as RIPv1 in 1988. The RIP hop count The routing metric used by RIP counts the number of routers that need to be passed to reach a destination IP network. A hop count of 0 denotes a network that is directly connected to the router, while 16 hops denote a network that is unreachable, according to the RIP hop limit. Versions There are three standardized versions of the Routing Information Protocol: RIPv1 and RIPv2 for IPv4, and RIPng for IPv6. RIP version 1 The original specification of RIP was published in 1988. When starting up, and every 30 seconds thereafter, a router with a RIPv1 implementation broadcasts a request message through every RIPv1 enabled interface. Neighbouring routers receiving the request message respond with a RIPv1 segment containing their routing table. The requesting router updates its own routing table with the reachable IP network address, hop count and next hop, that is, the router interface IP address from which the RIPv1 response was sent.
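As an illustration of the request broadcast just described, the following minimal sketch (a hypothetical Python script, not taken from any RIP implementation named in this article) builds an RFC 1058-style request for the full routing table and sends it as a broadcast on UDP port 520. The byte layout follows the RIPv1 packet format: a 4-byte header (command, version, must-be-zero) followed by one 20-byte route entry with address family 0 and a metric of 16.

    import socket
    import struct

    RIP_PORT = 520        # reserved UDP port for RIP
    CMD_REQUEST = 1       # command 1 = request, 2 = response
    RIP_V1 = 1
    INFINITY = 16         # a metric of 16 means "unreachable" in RIP

    def build_full_table_request() -> bytes:
        """RIPv1 request for the entire routing table: a single route entry
        with address family identifier 0 and a metric of 16 (RFC 1058)."""
        header = struct.pack("!BBH", CMD_REQUEST, RIP_V1, 0)        # command, version, must be zero
        entry = struct.pack("!HHIIII", 0, 0, 0, 0, 0, INFINITY)     # AFI=0, zero fields, metric=16
        return header + entry

    if __name__ == "__main__":
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        # Real RIPv1 routers also source their messages from port 520, which
        # requires elevated privileges; an unprivileged source port is enough
        # to illustrate the packet layout.
        sock.sendto(build_full_table_request(), ("255.255.255.255", RIP_PORT))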
As the requesting router receives updates from different neighbouring routers, it will only update the reachable networks in its routing table if it receives information about a reachable network it does not yet have in its routing table, or information that a network already in its routing table is reachable with a lower hop count. Therefore, a RIPv1 router will in most cases only have one entry for a reachable network, the one with the lowest hop count. If a router receives information from two different neighbouring routers that the same network is reachable with the same hop count but via two different routes, the network will be entered into the routing table twice with different next hop routers. The RIPv1 enabled router will then perform what is known as equal-cost load balancing for IP packets. RIPv1 enabled routers not only request the routing tables of other routers every 30 seconds, they also listen to incoming requests from neighbouring routers and send their own routing table in turn. RIPv1 routing tables are therefore updated every 25 to 35 seconds. The RIPv1 protocol adds a small random time variable to the update time, to avoid routing tables synchronizing across a LAN. It was thought that, as a result of random initialization, the routing updates would spread out in time, but this was not true in practice. Sally Floyd and Van Jacobson showed in 1994 that, without slight randomization of the update timer, the timers synchronized over time. RIPv1 can be configured into silent mode, so that a router requests and processes neighbouring routing tables, and keeps its routing table and hop count for reachable networks up to date, but does not needlessly send its own routing table into the network. Silent mode is commonly implemented on hosts. RIPv1 uses classful routing. The periodic routing updates do not carry subnet information, lacking support for variable length subnet masks (VLSM). This limitation makes it impossible to have different-sized subnets inside the same network class. In other words, all subnets in a network class must have the same size. There is also no support for router authentication, making RIP vulnerable to various attacks. RIP version 2 Due to the deficiencies of the original RIP specification, RIP version 2 (RIPv2) was developed in 1993, published in 1994, and declared Internet Standard 56 in 1998. It included the ability to carry subnet information, thus supporting Classless Inter-Domain Routing (CIDR). To maintain backward compatibility, the hop count limit of 15 remained. RIPv2 has facilities to fully interoperate with the earlier specification if all Must Be Zero protocol fields in the RIPv1 messages are properly specified. In addition, a compatibility switch feature allows fine-grained interoperability adjustments. In an effort to avoid unnecessary load on hosts that do not participate in routing, RIPv2 multicasts the entire routing table to all adjacent routers at the multicast address 224.0.0.9, as opposed to RIPv1, which uses broadcast. Unicast addressing is still allowed for special applications. MD5 authentication for RIP was introduced in 1997. Route tags were also added in RIP version 2. This functionality allows a distinction between routes learned from the RIP protocol and routes learned from other protocols. RIPng RIPng (RIP next generation) is an extension of RIPv2 for support of IPv6, the next generation Internet Protocol. The main differences between RIPv2 and RIPng are: Support of IPv6 networking.
While RIPv2 supports RIPv1 update authentication, RIPng does not. IPv6 routers were, at the time, supposed to use IPsec for authentication. RIPv2 encodes the next-hop into each route entry, while RIPng requires specific encoding of the next hop for a set of route entries. RIPng sends updates on UDP port 521 using the multicast group FF02::9. RIP messages between routers RIP messages use the User Datagram Protocol on port 520 and all RIP messages exchanged between routers are encapsulated in a UDP segment. RIPv1 Messages RIP defined two types of messages: Request Message Asking a neighbouring RIPv1 enabled router to send its routing table. Response Message Carries the routing table of a router. Timers The Routing Information Protocol uses the following timers as part of its operation: Update Timer Controls the interval between two gratuitous Response Messages. By default the value is 30 seconds. The response message is broadcast on all of the router's RIP enabled interfaces. Invalid Timer The invalid timer specifies how long a routing entry can be in the routing table without being updated. This is also called the expiration timer. By default, the value is 180 seconds. After the timer expires, the hop count of the routing entry will be set to 16, marking the destination as unreachable. Flush Timer The flush timer controls the time between a route being invalidated, or marked as unreachable, and the removal of its entry from the routing table. By default the value is 240 seconds. This is 60 seconds longer than the invalid timer, so for 60 seconds the router will be advertising this unreachable route to all its neighbours. This timer must be set to a higher value than the invalid timer. Holddown Timer The hold-down timer is started per route entry when the hop count changes from a lower value to a higher value. This allows the route to stabilize; during this time no update can be made to that routing entry. This is not part of RFC 1058; it is Cisco's implementation. The default value of this timer is 180 seconds. Limitations The hop count cannot exceed 15, or routes will be dropped. Variable Length Subnet Masks are not supported by RIP version 1 (which is obsolete). RIP has slow convergence and count-to-infinity problems. Implementations Cisco IOS, software used in Cisco routers (supports version 1, version 2 and RIPng) Cisco NX-OS software used in Cisco Nexus data center switches (supports RIPv2 only) Junos software used in Juniper routers, switches, and firewalls (supports RIPv1 and RIPv2) Routing and Remote Access, a Windows Server feature, contains RIP support Quagga, a free open source software routing suite based on GNU Zebra BIRD, a free open source software routing suite Zeroshell, a free open source software routing suite A RIP implementation first introduced in 4.2BSD, routed, survives in several of its descendants, including FreeBSD and NetBSD. OpenBSD introduced a new implementation, ripd, in version 4.1 and retired routed in version 4.4. Netgear routers commonly offer a choice of two implementations of RIPv2; these are labelled RIP_2M and RIP_2B. RIP_2M is the standard RIPv2 implementation using multicasting, which requires all routers on the network to support RIPv2 and multicasting, whereas RIP_2B sends RIPv2 packets using subnet broadcasting, making it more compatible with routers that do not support multicasting, including RIPv1 routers. Huawei HG633 ADSL/VDSL routers support passive and active routing with RIP v1 and v2 on the LAN and WAN side.
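Continuing the sketch above (again a hypothetical illustration rather than code from any implementation listed here), a received RIPv1 response can be decoded the same way, and the update rule described earlier applied to it: accept a destination that is not yet in the table, or replace an existing entry when the advertised metric plus one hop is lower, treating a metric of 16 as unreachable.

    import ipaddress
    import struct

    INFINITY = 16   # RIP's "unreachable" metric

    def parse_v1_response(payload: bytes):
        """Yield (destination, metric) pairs from a RIPv1 response payload."""
        command, version, _ = struct.unpack("!BBH", payload[:4])
        if command != 2 or version != 1:
            raise ValueError("not a RIPv1 response")
        for offset in range(4, len(payload), 20):
            afi, _, addr, _, _, metric = struct.unpack("!HHIIII", payload[offset:offset + 20])
            if afi == 2:  # address family identifier 2 = IP
                yield str(ipaddress.IPv4Address(addr)), metric

    def apply_update(table: dict, payload: bytes, next_hop: str) -> None:
        """Distance-vector update: keep the lowest hop count per destination."""
        for dest, metric in parse_v1_response(payload):
            new_metric = min(metric + 1, INFINITY)   # one extra hop to reach it via next_hop
            current = table.get(dest)
            if current is None or new_metric < current[0]:
                table[dest] = (new_metric, next_hop)  # (hop count, next hop router)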
Similar protocols Cisco's proprietary Interior Gateway Routing Protocol (IGRP) was a somewhat more capable protocol than RIP. It belongs to the same basic family of distance-vector routing protocols. Cisco has ceased support and distribution of IGRP in their router software. It was replaced by the Enhanced Interior Gateway Routing Protocol (EIGRP) which is a completely new design. While EIGRP still uses a distance-vector model, it relates to IGRP only in using the same routing metrics. IGRP supports multiple metrics for each route, including bandwidth, delay, load, MTU, and reliability. See also Convergence (routing) References Further reading Malkin, Gary Scott (2000). RIP: An Intra-Domain Routing Protocol. Addison-Wesley Longman. . Edward A. Taft, Gateway Information Protocol (revised) (Xerox Parc, Palo Alto, May, 1979) Xerox System Integration Standard - Internet Transport Protocols (Xerox, Stamford, 1981) Internet Standards Internet protocols Routing protocols
15395806
https://en.wikipedia.org/wiki/Process%20management%20%28computing%29
Process management (computing)
A process is a program in execution. An integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes and enable synchronization among processes. To meet these requirements, the OS must maintain a data structure for each process, which describes the state and resource ownership of that process, and which enables the OS to exert control over each process. Multiprogramming In any modern operating system there can be more than one instance of a program loaded in memory at the same time. For example, more than one user could be executing the same program, each user having separate copies of the program loaded into memory. With some programs, it is possible to have one copy loaded into memory, while several users have shared access to it so that they each can execute the same program-code. Such a program is said to be re-entrant. The processor at any instant can only be executing one instruction from one program but several processes can be sustained over a period of time by assigning each process to the processor at intervals while the remainder become temporarily inactive. A number of processes being executed over a period of time instead of at the same time is called concurrent execution. A multiprogramming or multitasking OS is a system executing many processes concurrently. Multiprogramming requires that the processor be allocated to each process for a period of time and de-allocated at an appropriate moment. If the processor is de-allocated during the execution of a process, it must be done in such a way that it can be restarted later as easily as possible. There are two possible ways for an OS to regain control of the processor during a program’s execution in order for the OS to perform de-allocation or allocation: The process issues a system call (sometimes called a software interrupt); for example, an I/O request occurs requesting to access a file on hard disk. A hardware interrupt occurs; for example, a key was pressed on the keyboard, or a timer runs out (used in pre-emptive multitasking). The stopping of one process and starting (or restarting) of another process is called a context switch or context change. In many modern operating systems, processes can consist of many sub-processes. This introduces the concept of a thread. A thread may be viewed as a sub-process; that is, a separate, independent sequence of execution within the code of one process. Threads are becoming increasingly important in the design of distributed and client–server systems and in software run on multi-processor systems. How multiprogramming increases efficiency A common trait observed among processes associated with most computer programs, is that they alternate between CPU cycles and I/O cycles. For the portion of the time required for CPU cycles, the process is being executed; i.e. is occupying the CPU. During the time required for I/O cycles, the process is not using the processor. Instead, it is either waiting to perform Input/Output, or is actually performing Input/Output. An example of this is the reading from or writing to a file on disk. Prior to the advent of multiprogramming, computers operated as single-user systems. Users of such systems quickly became aware that for much of the time that a computer was allocated to a single user, the processor was idle; when the user was entering information or debugging programs for example. 
Computer scientists observed that overall performance of the machine could be improved by letting a different process use the processor whenever one process was waiting for input/output. In a uni-programming system, if N users were to execute programs with individual execution times of t1, t2, ..., tN, then the total time, tuni, to service the N processes (consecutively) of all N users would be: tuni = t1 + t2 + ... + tN. However, because each process consumes both CPU cycles and I/O cycles, the time during which each process actually uses the CPU is a very small fraction of the total execution time for the process. So, for process i: ti (processor) ≪ ti (execution) where ti (processor) is the time process i spends using the CPU, and ti (execution) is the total execution time for the process; i.e. the time for CPU cycles plus I/O cycles to be carried out (executed) until completion of the process. In fact, the sum of the processor time used by the N processes rarely exceeds a small fraction of the time to execute any one of the processes; therefore, in uni-programming systems, the processor lay idle for a considerable proportion of the time. To overcome this inefficiency, multiprogramming is now implemented in modern operating systems such as Linux, UNIX and Microsoft Windows. This enables the processor to switch from one process, X, to another, Y, whenever X is involved in the I/O phase of its execution. Since the processing time is much less than a single job's runtime, the total time to service all N users with a multiprogramming system can be reduced to approximately: tmulti = max(t1, t2, ..., tN) Process creation Operating systems need some way to create processes. In a very simple system designed for running only a single application (e.g., the controller in a microwave oven), it may be possible to have all the processes that will ever be needed be present when the system comes up. In general-purpose systems, however, some way is needed to create and terminate processes as needed during operation. There are four principal events that cause a process to be created: System initialization. Execution of a process creation system call by a running process. A user request to create a new process. Initiation of a batch job. When an operating system is booted, typically several processes are created. Some of these are foreground processes that interact with a (human) user and perform work for them. Others are background processes, which are not associated with particular users, but instead have some specific function. For example, one background process may be designed to accept incoming e-mails, sleeping most of the day but suddenly springing to life when an incoming e-mail arrives. Another background process may be designed to accept incoming requests for web pages hosted on the machine, waking up when a request arrives to service that request. Process creation in UNIX and Linux is done through the fork() or clone() system calls. There are several steps involved in process creation. The first step is the validation of whether the parent process has sufficient authorization to create a process. Upon successful validation, the parent process is copied almost entirely, with changes only to the unique process id, parent process, and user-space. Each new process gets its own user space. Process creation in Windows is done through the CreateProcessA() system call. A new process runs in the security context of the calling process, but otherwise runs independently of the calling process.
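As a minimal, hypothetical illustration of the two creation paths just mentioned (it is not taken from the sources cited in this article), the sketch below uses Python's standard library: os.fork() maps onto the UNIX fork() call, while the subprocess module is a portable wrapper whose Windows implementation is ultimately serviced by CreateProcess.

    import os
    import subprocess
    import sys

    def unix_style_creation():
        """fork() duplicates the caller; parent and child continue from the same point."""
        pid = os.fork()                 # available on UNIX-like systems only
        if pid == 0:
            print(f"child: pid={os.getpid()}, parent={os.getppid()}")
            os._exit(0)                 # end the child without running the parent's cleanup
        os.waitpid(pid, 0)              # parent reaps the child to avoid a zombie
        print(f"parent: created child {pid}")

    def portable_creation():
        """Spawn a separate program; on Windows this path uses CreateProcess."""
        result = subprocess.run([sys.executable, "-c", "print('hello from a new process')"])
        print(f"child exited with status {result.returncode}")

    if __name__ == "__main__":
        portable_creation()
        if hasattr(os, "fork"):
            unix_style_creation()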
Methods exist to alter the security context in which a new process runs. New processes are assigned identifiers by which they can be accessed. Functions are provided to synchronize calling threads to newly created processes. Process termination There are many reasons for process termination: Batch job issues halt instruction User logs off Process executes a service request to terminate Error and fault conditions Normal completion Time limit exceeded Memory unavailable Bounds violation; for example: attempted access of (non-existent) 11th element of a 10-element array Protection error; for example: attempted write to read-only file Arithmetic error; for example: attempted division by zero Time overrun; for example: process waited longer than a specified maximum for an event I/O failure Invalid instruction; for example: when a process tries to execute data (text) Privileged instruction Data misuse Operating system intervention; for example: to resolve a deadlock Parent terminates so child processes terminate (cascading termination) Parent request Two-state process management model The operating system’s principal responsibility is controlling the execution of processes. This includes determining the interleaving pattern for execution and allocation of resources to processes. One part of designing an OS is to describe the behaviour that we would like each process to exhibit. The simplest model is based on the fact that a process is either being executed by a processor or it is not. Thus, a process may be considered to be in one of two states, RUNNING or NOT RUNNING. When the operating system creates a new process, that process is initially labeled as NOT RUNNING, and is placed into a queue in the system in the NOT RUNNING state. The process (or some portion of it) then exists in main memory, and it waits in the queue for an opportunity to be executed. After some period of time, the currently RUNNING process will be interrupted, and moved from the RUNNING state to the NOT RUNNING state, making the processor available for a different process. The dispatch portion of the OS will then select, from the queue of NOT RUNNING processes, one of the waiting processes to transfer to the processor. The chosen process is then relabeled from a NOT RUNNING state to a RUNNING state, and its execution is either begun if it is a new process, or is resumed if it is a process which was interrupted at an earlier time. From this model we can identify some design elements of the OS: The need to represent, and keep track of, each process. The state of a process. The queuing of NOT RUNNING processes. Three-state process management model Although the two-state process management model is a perfectly valid design for an operating system, the absence of a BLOCKED state means that the processor lies idle when the active process changes from CPU cycles to I/O cycles. This design does not make efficient use of the processor. The three-state process management model is designed to overcome this problem, by introducing a new state called the BLOCKED state. This state describes any process which is waiting for an I/O event to take place. In this case, an I/O event can mean the use of some device or a signal from another process. The three states in this model are: RUNNING: The process that is currently being executed. READY: A process that is queuing and prepared to execute when given the opportunity. BLOCKED: A process that cannot execute until some event occurs, such as the completion of an I/O operation.
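The three states just listed can be pictured as a small state machine. The following sketch is an illustrative Python enum (not from the cited sources) encoding the states and the legal transitions discussed in this and the following paragraphs: dispatch (READY to RUNNING), time-out or preemption (RUNNING to READY), waiting for I/O or an event (RUNNING to BLOCKED), and the awaited event occurring (BLOCKED to READY).

    from enum import Enum, auto

    class ProcState(Enum):
        RUNNING = auto()   # currently executing on the processor
        READY = auto()     # in main memory, waiting to be dispatched
        BLOCKED = auto()   # waiting for an I/O operation or another event

    # Legal transitions in the three-state model.
    TRANSITIONS = {
        (ProcState.READY, ProcState.RUNNING): "dispatch",
        (ProcState.RUNNING, ProcState.READY): "time-out or preemption",
        (ProcState.RUNNING, ProcState.BLOCKED): "wait for I/O or an event",
        (ProcState.BLOCKED, ProcState.READY): "event occurs",
    }

    def move(state: ProcState, target: ProcState) -> ProcState:
        """Allow only the transitions defined by the model."""
        if (state, target) not in TRANSITIONS:
            raise ValueError(f"illegal transition {state.name} -> {target.name}")
        return target

    if __name__ == "__main__":
        s = ProcState.READY
        for nxt in (ProcState.RUNNING, ProcState.BLOCKED, ProcState.READY, ProcState.RUNNING):
            s = move(s, nxt)
            print(s.name)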
At any instant, a process is in one and only one of the three states. For a single processor computer, only one process can be in the RUNNING state at any one instant. There can be many processes in the READY and BLOCKED states, and each of these states will have an associated queue for processes. Processes entering the system must go initially into the READY state, processes can only enter the RUNNING state via the READY state. Processes normally leave the system from the RUNNING state. For each of the three states, the process occupies space in main memory. While the reason for most transitions from one state to another might be obvious, some may not be so clear. RUNNING → READY The most common reason for this transition is that the running process has reached the maximum allowable time for uninterrupted execution; i.e. time-out occurs. Other reasons can be the imposition of priority levels as determined by the scheduling policy used for the Low Level Scheduler, and the arrival of a higher priority process into the READY state. RUNNING → BLOCKED A process is put into the BLOCKED state if it requests something for which it must wait. A request to the OS is usually in the form of a system call, (i.e. a call from the running process to a function that is part of the OS code). For example, requesting a file from disk or a saving a section of code or data from memory to a file on disk. Process description and control Each process in the system is represented by a data structure called a Process Control Block (PCB), or Process Descriptor in Linux, which performs the same function as a traveller's passport. The PCB contains the basic information about the job including: What it is Where it is going How much of its processing has been completed Where it is stored How much it has “spent” in using resources Process Identification: Each process is uniquely identified by the user’s identification and a pointer connecting it to its descriptor. Process Status: This indicates the current status of the process; READY, RUNNING, BLOCKED, READY SUSPEND, BLOCKED SUSPEND. Process State: This contains all of the information needed to indicate the current state of the job. Accounting: This contains information used mainly for billing purposes and for performance measurement. It indicates what kind of resources the process has used and for how long. Processor modes Contemporary processors incorporate a mode bit to define the execution capability of a program in the processor. This bit can be set to kernel mode or user mode. Kernel mode is also commonly referred to as supervisor mode, monitor mode or ring 0. In kernel mode, the processor can execute every instruction in its hardware repertoire, whereas in user mode, it can only execute a subset of the instructions. Instructions that can be executed only in kernel mode are called kernel, privileged or protected instructions to distinguish them from the user mode instructions. For example, I/O instructions are privileged. So, if an application program executes in user mode, it cannot perform its own I/O. Instead, it must request the OS to perform I/O on its behalf. The computer architecture may logically extend the mode bit to define areas of memory to be used when the processor is in kernel mode versus user mode. If the mode bit is set to kernel mode, the process executing in the processor can access either the kernel or user partition of the memory. However, if user mode is set, the process can reference only the user memory space. 
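As a small, hypothetical illustration of the point above that a user-mode program must ask the OS to perform I/O on its behalf (it is not taken from the cited sources), even a single line of output in Python is serviced by the kernel through a system call:

    import os

    # A user-mode process cannot drive the terminal or disk hardware directly.
    # os.write() is a thin wrapper around the write() system call: the process
    # traps into kernel mode, the kernel performs the privileged I/O, and
    # control returns to the process in user mode along with the result.
    message = b"written via the write() system call\n"
    bytes_written = os.write(1, message)   # file descriptor 1 is standard output

    print(f"the kernel reported {bytes_written} bytes written")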
We frequently refer to two classes of memory user space and system space (or kernel, supervisor or protected space). In general, the mode bit extends the operating system's protection rights. The mode bit is set by the user mode trap instruction, also called a Supervisor Call instruction. This instruction sets the mode bit, and branches to a fixed location in the system space. Since only system code is loaded in the system space, only system code can be invoked via a trap. When the OS has completed the supervisor call, it resets the mode bit to user mode prior to the return. The Kernel system concept The parts of the OS critical to its correct operation execute in kernel mode, while other software (such as generic system software) and all application programs execute in user mode. This fundamental distinction is usually the irrefutable distinction between the operating system and other system software. The part of the system executing in kernel supervisor state is called the kernel, or nucleus, of the operating system. The kernel operates as trusted software, meaning that when it was designed and implemented, it was intended to implement protection mechanisms that could not be covertly changed through the actions of untrusted software executing in user space. Extensions to the OS execute in user mode, so the OS does not rely on the correctness of those parts of the system software for correct operation of the OS. Hence, a fundamental design decision for any function to be incorporated into the OS is whether it needs to be implemented in the kernel. If it is implemented in the kernel, it will execute in kernel (supervisor) space, and have access to other parts of the kernel. It will also be trusted software by the other parts of the kernel. If the function is implemented to execute in user mode, it will have no access to kernel data structures. However, the advantage is that it will normally require very limited effort to invoke the function. While kernel-implemented functions may be easy to implement, the trap mechanism and authentication at the time of the call are usually relatively expensive. The kernel code runs fast, but there is a large performance overhead in the actual call. This is a subtle, but important point. Requesting system services There are two techniques by which a program executing in user mode can request the kernel's services: System call Message passing Operating systems are designed with one or the other of these two facilities, but not both. First, assume that a user process wishes to invoke a particular target system function. For the system call approach, the user process uses the trap instruction. The idea is that the system call should appear to be an ordinary procedure call to the application program; the OS provides a library of user functions with names corresponding to each actual system call. Each of these stub functions contains a trap to the OS function. When the application program calls the stub, it executes the trap instruction, which switches the CPU to kernel mode, and then branches (indirectly through an OS table), to the entry point of the function which is to be invoked. When the function completes, it switches the processor to user mode and then returns control to the user process; thus simulating a normal procedure return. In the message passing approach, the user process constructs a message, that describes the desired service. Then it uses a trusted send function to pass the message to a trusted OS process. 
The send function serves the same purpose as the trap; that is, it carefully checks the message, switches the processor to kernel mode, and then delivers the message to a process that implements the target functions. Meanwhile, the user process waits for the result of the service request with a message receive operation. When the OS process completes the operation, it sends a message back to the user process. The distinction between the two approaches has important consequences regarding the relative independence of the OS behavior from the application process behavior, and the resulting performance. As a rule of thumb, operating systems based on a system call interface can be made more efficient than those requiring messages to be exchanged between distinct processes. This is the case, even though the system call must be implemented with a trap instruction; that is, even though the trap is relatively expensive to perform, it is more efficient than the message passing approach, where there are generally higher costs associated with process multiplexing, message formation and message copying. The system call approach has the interesting property that there is not necessarily any OS process. Instead, a process executing in user mode changes to kernel mode when it is executing kernel code, and switches back to user mode when it returns from the OS call. If, on the other hand, the OS is designed as a set of separate processes, it is usually easier to design it so that it gets control of the machine in special situations, than if the kernel is simply a collection of functions executed by user processes in kernel mode. Even procedure-based operating systems usually find it necessary to include at least a few system processes (called daemons in UNIX) to handle situations in which the machine is otherwise idle, such as scheduling and handling the network. See also Process isolation References Sources Operating System incorporating Windows and UNIX, Colin Ritchie. Operating Systems, William Stallings, Prentice Hall, (4th Edition, 2000) Multiprogramming, Process Description and Control Operating Systems – A Modern Perspective, Gary Nutt, Addison Wesley, (2nd Edition, 2001). Process Management Models, Scheduling, UNIX System V Release 4: Modern Operating Systems, Andrew Tanenbaum, Prentice Hall, (2nd Edition, 2001). Operating System Concepts, Silberschatz & Galvin & Gagne (http://codex.cs.yale.edu/avi/os-book/OS9/slide-dir/), John Wiley & Sons, (6th Edition, 2003) Process (computing) Operating system technology
12730
https://en.wikipedia.org/wiki/General%20Electric
General Electric
General Electric Company (GE) is an American multinational conglomerate incorporated in New York State and headquartered in Boston. Until 2021, the company operated in sectors including aviation, power, renewable energy, digital industry, weapons manufacturing, locomotives, and venture capital and finance, but has since divested from several areas, now primarily consisting of the first four segments. In 2020, GE ranked among the Fortune 500 as the 33rd largest firm in the United States by gross revenue. In 2011, GE ranked among the Fortune 20 as the 14th most profitable company, but later very severely underperformed the market (by about 75%) as its profitability collapsed. Two employees of GE—Irving Langmuir (1932) and Ivar Giaever (1973)—have been awarded the Nobel Prize. On November 9, 2021, the company announced it would divide into three public companies. The new companies will be focused on aviation, healthcare, and energy (renewable energy, power and digital) respectively. The first spinoff of the healthcare division is planned for 2023 and to be followed by the spinoff of the energy division in 2024. History Formation During 1889, Thomas Edison had business interests in many electricity-related companies, including Edison Lamp Company, a lamp manufacturer in East Newark, New Jersey; Edison Machine Works, a manufacturer of dynamos and large electric motors in Schenectady, New York; Bergmann & Company, a manufacturer of electric lighting fixtures, sockets, and other electric lighting devices; and Edison Electric Light Company, the patent-holding company and the financial arm backed by J. P. Morgan and the Vanderbilt family for Edison's lighting experiments. In 1889, Drexel, Morgan & Co., a company founded by J.P. Morgan and Anthony J. Drexel, financed Edison's research and helped merge those companies under one corporation to form Edison General Electric Company, which was incorporated in New York on April 24, 1889. The new company also acquired Sprague Electric Railway & Motor Company in the same year. The consolidation did not involve all of the companies established by Edison; notably, the Edison Illuminating Company, which would later become Consolidated Edison, was not part of the merger. In 1880, Gerald Waldo Hart formed the American Electric Company of New Britain, Connecticut, which merged a few years later with Thomson-Houston Electric Company, led by Charles Coffin. In 1887, Hart left to become superintendent of the Edison Electric Company of Kansas City, Missouri. General Electric was formed through the 1892 merger of Edison General Electric Company of Schenectady, New York, and Thomson-Houston Electric Company of Lynn, Massachusetts, with the support of Drexel, Morgan & Co. Both plants continue to operate under the GE banner to this day. The company was incorporated in New York, with the Schenectady plant used as headquarters for many years thereafter. Around the same time, General Electric's Canadian counterpart, Canadian General Electric, was formed. In 1893, General Electric bought the business of Rudolf Eickemeyer in Yonkers, New York, along with all of its patents and designs. One of the employees was Charles Proteus Steinmetz. Only recently arrived in the United States, Steinmetz was already publishing in the field of magnetic hysteresis and had earned worldwide professional recognition. Led by Steinmetz, Eickemeyer's firm had developed transformers for use in the transmission of electrical power among many other mechanical and electrical devices. 
Steinmetz quickly became known as the engineering wizard in GE's engineering community. Public company In 1896, General Electric was one of the original 12 companies listed on the newly formed Dow Jones Industrial Average, where it remained a part of the index for 122 years, though not continuously. In 1911, General Electric absorbed the National Electric Lamp Association (NELA) into its lighting business. GE established its lighting division headquarters at Nela Park in East Cleveland, Ohio. The lighting division has since remained in the same location. RCA and NBC Owen D. Young, through GE, founded the Radio Corporation of America (RCA) in 1919, after purchasing the Marconi Wireless Telegraph Company of America. He aimed to expand international radio communications. GE used RCA as its retail arm for radio sales. In 1926, RCA co-founded the National Broadcasting Company (NBC), which built two radio broadcasting networks. In 1930, General Electric was charged with antitrust violations and was ordered to divest itself of RCA. Television In 1927, Ernst Alexanderson of GE made the first demonstration of television broadcast reception at his General Electric Realty Plot home at 1132 Adams Rd, Schenectady, New York. On January 13, 1928, he made what was said to be the first broadcast to the public in the United States on GE's W2XAD: the pictures were picked up on 1.5 square inch (9.7 square centimeter) screens in the homes of four GE executives. The sound was broadcast on GE's WGY (AM). Experimental television station W2XAD evolved into the station WRGB which, along with WGY and WGFM (now WRVE), was owned and operated by General Electric until 1983. In 1965, the company expanded into cable with the launch of a franchise, which was awarded to a non-exclusive franchise in Schenectady through subsidiary General Electric Cablevision Corporation. On February 15, 1965, General Electric expanded its holdings in order to acquire more television stations to met the maximum limit of the FCC, and more cable holdings through subsidiaries General Electric Broadcasting Company and General Electric Cablevision Corporation. The company also owned television stations such as KOA-TV (now KCNC-TV) in Denver and WSIX-TV (later WNGE-TV, now WKRN) in Nashville, but like WRGB, General Electric sold off most of its broadcasting holdings, but held on to the Denver television station until in 1986, when General Electric bought out RCA and made it into an owned-and-operated station by NBC. It even stayed on until 1995 when it was transferred to a joint venture between CBS and Group W in a swap deal, alongside KUTV in Salt Lake City for longtime CBS O&O in Philadelphia, WCAU-TV. Power generation Led by Sanford Alexander Moss, GE moved into the new field of aircraft turbo superchargers. This technology also led to the development of industrial gas turbine engines used for power production. GE introduced the first set of superchargers during World War I, and continued to develop them during the interwar period. Superchargers became indispensable in the years immediately prior to World War II. GE supplied 300,000 turbo superchargers for use in fighter and bomber engines. This work led the U.S. Army Air Corps to select GE to develop the nation's first jet engine during the war. This experience, in turn, made GE a natural selection to develop the Whittle W.1 jet engine that was demonstrated in the United States in 1941. GE was ranked ninth among United States corporations in the value of wartime production contracts. 
However, GE's early work with Whittle's designs was later handed to the Allison Engine Company. GE Aviation then emerged as one of the world's largest engine manufacturers, surpassing the British company Rolls-Royce plc. Some consumers boycotted GE light bulbs, refrigerators and other products during the 1980s and 1990s. The purpose of the boycott was to protest against GE's role in nuclear weapons production. In 2002, GE acquired the wind power assets of Enron during its bankruptcy proceedings. Enron Wind was the only surviving U.S. manufacturer of large wind turbines at the time; GE increased engineering resources and supplies for the wind division and doubled annual sales to $1.2 billion in 2003. It acquired ScanWind in 2009. In 2018, GE Power garnered press attention when a model 7HA gas turbine in Texas was shut down for two months due to the fracture of a turbine blade. This model uses similar blade technology to GE's newest and most efficient model, the 9HA. After the failure, GE developed new protective coatings and heat treatment methods. Gas turbines represent a significant portion of GE Power's revenue, and also represent a significant portion of the power generation fleet of several utility companies in the United States. Chubu Electric of Japan and Électricité de France also had units that were impacted. Initially, GE did not realize that the turbine blade issue of the 9FB unit would impact the new HA units. Computing GE was one of the eight major computer companies of the 1960s, along with IBM, Burroughs, NCR, Control Data Corporation, Honeywell, RCA, and UNIVAC. GE had a line of general purpose and special purpose computers, including the GE 200, GE 400, and GE 600 series general purpose computers, the GE 4010, GE 4020, and GE 4060 real-time process control computers, and the DATANET-30 and Datanet 355 message switching computers (DATANET-30 and 355 were also used as front end processors for GE mainframe computers). A Datanet 500 computer was designed, but never sold. In 1962, GE started developing its GECOS (later renamed GCOS) operating system, originally for batch processing, but later extended to timesharing and transaction processing. Versions of GCOS are still in use today. From 1964 to 1969, GE and Bell Laboratories (which soon dropped out) joined with MIT to develop the Multics operating system on the GE 645 mainframe computer. The project took longer than expected and was not a major commercial success, but it demonstrated concepts such as single-level storage, dynamic linking, the hierarchical file system, and ring-oriented security. Active development of Multics continued until 1985. GE entered computer manufacturing because, in the 1950s, it was the largest user of computers outside the United States federal government, and it had been the first business in the world to own a computer. Its major appliance manufacturing plant "Appliance Park" was the first non-governmental site to host one. However, in 1970, GE sold its computer division to Honeywell, exiting the computer manufacturing industry, though it retained its timesharing operations for some years afterwards. GE was a major provider of computer time-sharing services, through General Electric Information Services (GEIS, now GXS), offering online computing services that included GEnie. In 2000, when United Technologies Corp. planned to buy Honeywell, GE made a counter-offer that was approved by Honeywell. On July 3, 2001, the European Union issued a statement that it would "prohibit the proposed acquisition by General Electric Co.
of Honeywell Inc.". The reasons given were it "would create or strengthen dominant positions on several markets and that the remedies proposed by GE were insufficient to resolve the competition concerns resulting from the proposed acquisition of Honeywell". On June 27, 2014, GE partnered with collaborative design company Quirky to announce its connected LED bulb called Link. The Link bulb is designed to communicate with smartphones and tablets using a mobile app called Wink. Acquisitions and divestments In December 1985, GE reacquired RCA, primarily for the NBC television network (also parent of Telemundo Communications Group) for $6.28 billion; this merger surpassed the Capital Cities/ABC merger that happened earlier that year as the largest non-oil merger in world business history. The remainder was sold to various companies, including Bertelsmann (Bertelsmann acquired RCA Records) and Thomson SA, which traces its roots to Thomson-Houston, one of the original components of GE. Also in 1986, Kidder, Peabody & Co., a U.S.-based securities firm, was sold to GE and following heavy losses was sold to PaineWebber in 1994. In 2002, Francisco Partners and Norwest Venture Partners acquired a division of GE called GE Information Systems (GEIS). The new company, named GXS, is based in Gaithersburg, Maryland. GXS is a provider of B2B e-Commerce solutions. GE maintains a minority stake in GXS. Also in 2002, GE Wind Energy was formed when GE bought the wind turbine manufacturing assets of Enron Wind after the Enron scandals. In 2004, GE bought 80% of Vivendi Universal Entertainment, the parent of Universal Pictures from Vivendi. Vivendi bought 20% of NBC forming the company NBCUniversal. GE then owned 80% of NBCUniversal and Vivendi owned 20%. In 2004, GE completed the spin-off of most of its mortgage and life insurance assets into an independent company, Genworth Financial, based in Richmond, Virginia. Genpact formerly known as GE Capital International Services (GECIS) was established by GE in late 1997 as its captive India-based BPO. GE sold 60% stake in Genpact to General Atlantic and Oak Hill Capital Partners in 2005 and hived off Genpact into an independent business. GE is still a major client to Genpact today, for services in customer service, finance, information technology, and analytics. In May 2007, GE acquired Smiths Aerospace for $4.8 billion. Also in 2007, GE Oil & Gas acquired Vetco Gray for $1.9 billion, followed by the acquisition of Hydril Pressure & Control in 2008 for $1.1 billion. GE Plastics was sold in 2008 to SABIC (Saudi Arabia Basic Industries Corporation). In May 2008, GE announced it was exploring options for divesting the bulk of its consumer and industrial business. On December 3, 2009, it was announced that NBCUniversal would become a joint venture between GE and cable television operator Comcast. Comcast would hold a controlling interest in the company, while GE would retain a 49% stake and would buy out shares owned by Vivendi. Vivendi would sell its 20% stake in NBCUniversal to GE for US$5.8 billion. Vivendi would sell 7.66% of NBCUniversal to GE for US$2 billion if the GE/Comcast deal was not completed by September 2010 and then sell the remaining 12.34% stake of NBCUniversal to GE for US$3.8 billion when the deal was completed or to the public via an IPO if the deal was not completed. On March 1, 2010, GE announced plans to sell its 20.85% stake in Turkey-based Garanti Bank. 
In August 2010, GE Healthcare signed a strategic partnership to bring cardiovascular Computed Tomography (CT) technology from start-up Arineta Ltd. of Israel to the hospital market. In October 2010, GE acquired gas engine manufacturer Dresser Industries in a $3 billion deal and also bought a $1.6 billion portfolio of retail credit cards from Citigroup Inc. On October 14, 2010, GE announced the acquisition of data migration & SCADA simulation specialists Opal Software. In December 2010, for the second time that year (after the Dresser acquisition), GE bought the oil sector company Wellstream, an oil pipe maker, for 800 million pounds ($1.3 billion). In March 2011, GE announced that it had completed the acquisition of privately held Lineage Power Holdings from The Gores Group. In April 2011, GE announced it had completed its purchase of John Wood plc's Well Support Division for $2.8 billion. In 2011, GE Capital sold its $2 billion Mexican assets to Santander for $162 million and exited the business in Mexico. Santander additionally assumed the portfolio debts of GE Capital in the country. Following this, GE Capital focused on its core business and shed its non-core assets. In June 2012, CEO and President of GE Jeff Immelt said that the company would invest ₹3 billion to accelerate its businesses in Karnataka. In October 2012, GE acquired $7 billion worth of bank deposits from MetLife Inc. On March 19, 2013, Comcast bought GE's shares in NBCU for $16.7 billion, ending the company's longtime stake in television and film media. In April 2013, GE acquired oilfield pump maker Lufkin Industries for $2.98 billion. In April 2014, it was announced that GE was in talks to acquire the global power division of French engineering group Alstom for a figure of around $13 billion. A rival joint bid was submitted in June 2014 by Siemens and Mitsubishi Heavy Industries (MHI), with Siemens seeking to acquire Alstom's gas turbine business for €3.9 billion, and MHI proposing a joint venture in steam turbines, plus a €3.1 billion cash investment. In June 2014, a formal offer from GE worth $17 billion was agreed by the Alstom board. Part of the transaction involved the French government taking a 20% stake in Alstom to help secure France's energy and transport interests and French jobs. A rival offer from Siemens-Mitsubishi Heavy Industries was rejected. The acquisition was expected to be completed in 2015. In October 2014, GE announced it was considering the sale of its Polish banking business Bank BPH. Later in 2014, General Electric announced plans to open its global operations center in Cincinnati, Ohio. The Global Operations Center opened in October 2016 as home to GE's multifunctional shared services organization. It supports the company's finance/accounting, human resources, information technology, supply chain, legal and commercial operations, and is one of GE's four multifunctional shared services centers worldwide, alongside those in Pudong, China; Budapest, Hungary; and Monterrey, Mexico. In April 2015, GE announced its intention to sell off its property portfolio, worth $26.5 billion, to Wells Fargo and The Blackstone Group. It was announced in April 2015 that GE would sell most of its finance unit and return around $90 billion to shareholders as the firm looked to trim down on its holdings and rid itself of its image of a "hybrid" company, working in both banking and manufacturing. In August 2015, GE Capital agreed to sell its Healthcare Financial Services business to Capital One for US$9 billion.
The transaction involved US$8.5 billion of loans made to a wide array of sectors including senior housing, hospitals, medical offices, outpatient services, pharmaceuticals and medical devices. Also in August 2015, GE Capital agreed to sell GE Capital Bank's on-line deposit platform to Goldman Sachs. Terms of the transaction were not disclosed, but the sale included US$8 billion of on-line deposits and another US$8 billion of brokered certificates of deposit. The sale was part of GE's strategic plan to exit the U.S. banking sector and to free itself from tightening banking regulations. GE also aimed to shed its status as a "systemically important financial institution". In September 2015, GE Capital agreed to sell its transportation-finance unit to Canada's Bank of Montreal. The unit sold had US$8.7 billion (CA$11.5 billion) of assets, 600 employees and 15 offices in the U.S. and Canada. Exact terms of the sale were not disclosed, but the final price would be based on the value of the assets at closing, plus a premium according to the parties. In October 2015, activist investor Nelson Peltz's fund Trian bought a $2.5 billion stake in the company. In January 2016, Haier acquired GE's appliance division for $5.4 billion. In October 2016, GE Renewable Energy agreed to pay €1.5 billion to Doughty Hanson & Co for LM Wind Power, with the deal to be completed during 2017. At the end of October 2016, it was announced that GE was in negotiations over a deal valued at about $30 billion to combine GE Oil & Gas with Baker Hughes. The transaction would create a publicly traded entity controlled by GE. It was announced that GE Oil & Gas would sell off its water treatment business, GE Water & Process Technologies, as part of its divestment agreement with Baker Hughes. The deal was cleared by the EU in May 2017, and by the United States Department of Justice in June 2017. The merger agreement was approved by shareholders at the end of June 2017. On July 3, 2017, the transaction was completed and Baker Hughes became a GE company, renamed Baker Hughes, a GE Company (BHGE). In November 2018, GE reduced its stake in Baker Hughes to 50.4%. On October 18, 2019, GE reduced its stake to 36.8% and the company was renamed back to Baker Hughes. In May 2017, GE signed $15 billion of business deals with Saudi Arabia. Saudi Arabia is one of GE's largest customers. In September 2017, GE announced the sale of its Industrial Solutions Business to ABB. The deal closed on June 30, 2018. Fraud allegations and notice of possible SEC civil action On August 15, 2019, Harry Markopolos, a financial fraud investigator known for his discovery of the Ponzi scheme run by Bernard Madoff, accused General Electric of being a "bigger fraud than Enron", alleging $38 billion in accounting fraud. GE denied wrongdoing. On October 6, 2020, General Electric reported it received a Wells notice from the Securities and Exchange Commission stating the SEC may take civil action for possible violations of securities laws. Insufficient reserves for long-term care policies It is alleged that GE is "hiding" (i.e., has under-reserved for) $29 billion in losses related to its long-term care business. According to an August 2019 Fitch Ratings report, there are concerns that GE has not set aside enough money to cover its long-term care liabilities.
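How much money must be set aside for obligations like these depends heavily on the discount rate used to value decades of future payouts, which is why reserve adequacy is so contested. As a rough, purely illustrative sketch (the payout stream, horizon, and rates below are hypothetical and are not GE's actuarial assumptions), a simple present-value calculation shows how sensitive such a liability figure is to that single assumption:

```python
# Illustrative sketch only: hypothetical figures, not GE's actuarial model.
# It shows why the reported size of a long-dated liability shrinks when the
# assumed discount rate rises, and grows when the rate falls.

def present_value(annual_payout: float, years: int, rate: float) -> float:
    """Discount a level stream of end-of-year payouts back to today."""
    return sum(annual_payout / (1.0 + rate) ** t for t in range(1, years + 1))

payout = 1_000_000_000   # hypothetical $1 billion paid out each year
horizon = 30             # hypothetical 30-year payout horizon

for rate in (0.035, 0.045):
    liability = present_value(payout, horizon, rate)
    print(f"Discount rate {rate:.2%}: present value of payouts = ${liability / 1e9:.1f} billion")
```

In this toy example a one-percentage-point difference in the assumed rate moves the stated liability by roughly ten percent, which is why regulators, rating agencies, and critics scrutinize the assumption so closely.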
In 2018, a lawsuit (the Bezio case) was filed in New York state court on behalf of participants in GE's 401(k) plan and shareowners, alleging violations of Section 11 of the Securities Act of 1933 based on alleged misstatements and omissions related to insurance reserves and the performance of GE's business segments. The Kansas Insurance Department (KID) is requiring General Electric to make $14.5 billion of capital contributions for its insurance contracts during the 7-year period ending in 2024. GE reported that the total liability related to its insurance contracts increased significantly from 2016 to 2019:
December 31, 2016: $26.1 billion
December 31, 2017: $38.6 billion
December 31, 2018: $35.6 billion
December 31, 2019: $39.6 billion
In 2018, GE announced that the new standard issued by the Financial Accounting Standards Board (FASB) regarding Financial Services - Insurance (Topic 944) will materially affect its financial statements. Mr. Markopolos estimated there will be a US$10.5 billion charge when the new accounting standard is adopted in the first quarter of 2021. Anticipated $8 billion loss upon disposition of Baker Hughes In 2017, GE acquired a 62.5% interest in Baker Hughes (BHGE) when it combined its oil & gas business with Baker Hughes Incorporated. In 2018, GE reduced its interest to 50.4%, resulting in the realization of a $2.1 billion loss. GE is planning to divest its remaining interest and has warned that the divestment will result in an additional loss of $8.4 billion (assuming a BHGE share price of $23.57 per share). In response to the fraud allegations, GE noted the amount of the loss would be $7.4 billion if the divestment occurred on July 26, 2019. Mr. Markopolos noted that BHGE is an asset available for sale and that mark-to-market accounting is therefore required. He also noted that GE's current ratio was only 0.67, and expressed concerns that GE may file for bankruptcy if there is a recession. Other In 2018, the GE Pension Plan reported losses of US$3.3 billion on plan assets. In 2018, General Electric changed the discount rate used to calculate the actuarial liabilities of its pension plans. The rate was increased from 3.64% to 4.34%. Consequently, the reported liability for the underfunded pension plans decreased by $7 billion year-over-year, from $34.2 billion in 2017 to $27.2 billion in 2018. In October 2018, General Electric announced it would "freeze pensions" for about 20,000 salaried U.S. employees. The employees will be moved to a defined-contribution retirement plan in 2021. On March 30, 2020, General Electric factory workers protested, calling for jet engine factories to be converted to make ventilators during the COVID-19 crisis. In June 2020, GE made an agreement to sell its Lighting business to Savant Systems, Inc., an industry leader in the professional smart home space. Financial details of the transaction were not disclosed. In November 2020, General Electric warned it would be cutting jobs as it waited for a recovery from the COVID-19 pandemic. Financial performance Dividends In 2018, GE reduced its quarterly dividend from $0.12 to $0.01 per share. Stock As a publicly traded company on the New York Stock Exchange, GE stock was one of the 30 components of the Dow Jones Industrial Average from 1907 to 2018, the longest continuous presence of any company on the index, and during this time the only company which was part of the original Dow Jones Industrial Index created in 1896.
In August 2000, the company had a market capitalization of $601 billion, and was the most valuable company in the world. On June 26, 2018, the stock was removed from the index and replaced with Walgreens Boots Alliance. In the years leading up to its removal, GE was the worst performing stock in the Dow, falling more than 55 percent year on year and more than 25 percent year to date. The company continued to lose value after being removed from the index. Bribery In July 2010, General Electric agreed to pay $23.4 million to settle an SEC complaint alleging that GE had bribed Iraqi government officials to win contracts under the U.N. oil-for-food program. Corporate affairs In 1959, General Electric was accused of promoting the largest illegal cartel in the United States since the adoption of the Sherman Antitrust Act (1890) in order to maintain artificially high prices. In total, 29 companies and 45 executives would be convicted. Subsequent congressional inquiries revealed that "white-collar crime" was by far the most costly form of crime for the United States. GE is a multinational conglomerate headquartered in Boston, Massachusetts. However, its main offices are located at 30 Rockefeller Plaza at Rockefeller Center in New York City, known now as the Comcast Building. It was formerly known as the GE Building for the prominent GE logo on the roof; NBC's headquarters and main studios are also located in the building. Through its RCA subsidiary, it has been associated with the center since its construction in the 1930s. GE moved its corporate headquarters from the GE Building on Lexington Avenue to Fairfield, Connecticut in 1974. In 2016, GE announced a move to the South Boston Waterfront neighborhood of Boston, Massachusetts, partly as a result of an incentive package provided by state and city governments. The first group of workers arrived in the summer of 2016, and the full move was to be completed by 2018. Due to poor financial performance and corporate downsizing, GE sold the land it planned to build its new headquarters building on, instead choosing to occupy neighboring leased buildings. GE's tax return is the largest return filed in the United States; the 2005 return was approximately 24,000 pages when printed out, and 237 megabytes when submitted electronically. As of 2011, the company spent more on U.S. lobbying than any other company. In 2005, GE launched its "Ecomagination" initiative in an attempt to position itself as a "green" company. GE is one of the biggest players in the wind power industry and is developing environment-friendly products such as hybrid locomotives, desalination and water reuse solutions, and photovoltaic cells. The company "plans to build the largest solar-panel-making factory in the U.S.," and has set goals for its subsidiaries to lower their greenhouse gas emissions. On May 21, 2007, GE announced it would sell its GE Plastics division to petrochemicals manufacturer SABIC for net proceeds of $11.6 billion. The transaction took place on August 31, 2007, and the company name changed to SABIC Innovative Plastics, with Brian Gladden as CEO. In February 2017, GE announced that the company intends to close the gender gap by promising to hire and place 20,000 women in technical roles by 2020. The company is also seeking 50:50 male-to-female gender representation in all entry-level technical programs. In October 2017, GE announced it would be closing research and development centers in Shanghai, Munich and Rio de Janeiro.
The company had spent $5 billion on R&D in the preceding year. On February 25, 2019, GE sold its diesel locomotive business to Wabtec. CEO In October 2018, John L. Flannery was replaced by H. Lawrence Culp Jr. as chairman and CEO in a unanimous vote of the GE Board of Directors.
Charles A. Coffin (1913–1922)
Owen D. Young (1922–1939, 1942–1945)
Philip D. Reed (1940–1942, 1945–1958)
Ralph J. Cordiner (1958–1963)
Gerald L. Phillippe (1963–1972)
Fred J. Borch (1967–1972)
Reginald H. Jones (1972–1981)
Jack Welch (1981–2001)
Jeff Immelt (2001–2017)
John L. Flannery (2017–2018)
H. Lawrence Culp Jr. (2018–present)
Corporate recognition and rankings In 2011, Fortune ranked GE the sixth-largest firm in the U.S., and the 14th-most profitable. Other rankings for 2011–2012 include the following:
#18 company for leaders (Fortune)
#82 green company (Newsweek)
#91 most admired company (Fortune)
#19 most innovative company (Fast Company)
In 2012, GE's brand was valued at $28.8 billion. CEO Jeff Immelt had a set of changes in the presentation of the brand commissioned in 2004, after he took the reins as chairman, to unify the diversified businesses of GE. Chermayeff & Geismar, along with colleagues Bill Brown and Ivan Chermayeff, had created the modernized 1980 logo; Tom Geismar later stated that, looking back at the logos of the 1910s, 1920s, and 1930s, one can clearly judge that they are old-fashioned, arguing that while the old logos were good in their day, they now look out of date. The changes included a new corporate color palette, small modifications to the GE logo, a new customized font (GE Inspira) and a new slogan, "Imagination at work", composed by David Lucas, to replace the slogan "We Bring Good Things to Life" used since 1979. The standard requires many headlines to be lowercased and adds visual "white space" to documents and advertising. The changes were designed by Wolff Olins and are used on GE's marketing, literature, and website. In 2014, a second typeface family was introduced: GE Sans and Serif, designed by Bold Monday under art direction by Wolff Olins. GE had appeared on the Fortune 500 list for 22 years and held the 11th rank. GE was removed from the Dow Jones Industrial Average on June 26, 2018, after its value had dropped below 1% of the index's weight. Businesses GE's primary business divisions are:
GE Additive
GE Aviation
GE Capital
GE Digital
GE Healthcare
GE Power
GE Renewable Energy
GE Research
Through these businesses, GE participates in markets that include the generation, transmission and distribution of electricity (e.g. nuclear, gas and solar), industrial automation, medical imaging equipment, motors, aircraft jet engines, and aviation services. Through GE Commercial Finance, GE Consumer Finance, GE Equipment Services, and GE Insurance, it offers a range of financial services. It has a presence in over 100 countries. General Imaging manufactures GE digital cameras. Even though the first wave of conglomerates (such as ITT Corporation, Ling-Temco-Vought, Tenneco, etc.) fell by the wayside by the mid-1980s, in the late 1990s, another wave (consisting of Westinghouse, Tyco, and others) tried and failed to emulate GE's success. GE is planning to set up a silicon carbide chip packaging R&D center in collaboration with SUNY Polytechnic Institute in Utica, New York. The project will create 470 jobs with the potential to grow to 820 jobs within 10 years. On September 14, 2015, GE announced the creation of a new unit: GE Digital, which will bring together its software and IT capabilities.
The new business unit will be headed by Bill Ruh, who joined GE in 2011 from Cisco Systems and has since worked on GE's software efforts. Former divisions GE Industrial was a division providing appliances, lighting and industrial products; factory automation systems; plastics, silicones and quartz products; security and sensors technology; and equipment financing, management and operating services. As of 2007 it had 70,000 employees generating $17.7 billion in revenue. After some major realignments in late 2007, GE Industrial was organized into two main sub-businesses:
GE Consumer & Industrial: Appliances, Electrical Distribution, Lighting
GE Enterprise Solutions: Digital Energy, GE Fanuc Intelligent Platforms, Security, Sensing & Inspection Technologies
The former GE Plastics division was sold in August 2007 and is now SABIC Innovative Plastics. On May 4, 2008, it was announced that GE would auction off its appliances business for an expected sale of $5–8 billion. However, this plan fell through as a result of the recession. The former GE Appliances and Lighting segment was dissolved in 2014, when GE attempted to sell its appliance division to Electrolux; after an antitrust filing blocked that deal, the division was eventually sold to Haier for $5.4 billion in June 2016. GE Lighting (consumer lighting) and the newly created Current, powered by GE, which deals in commercial LED, solar, EV, and energy storage, became stand-alone businesses within the company, until the sale of the latter to American Industrial Partners in April 2019. The former GE Transportation division merged with Wabtec on February 25, 2019, leaving GE with a 24.9% holding in Wabtec. On July 1, 2020, GE Lighting was acquired by Savant Systems and remains headquartered at Nela Park in East Cleveland, Ohio. Environmental record Carbon footprint General Electric Company reported total CO2e emissions (direct and indirect) for the twelve months ending 31 December 2020 of 2,080 kt, a decrease of 310 kt (13%) year-over-year. There has been a consistent declining trend in reported emissions since 2016. Pollution Some of GE's activities have given rise to large-scale air and water pollution. Based on data from 2000, researchers at the Political Economy Research Institute listed the corporation as the fourth-largest corporate producer of air pollution in the United States (behind only E. I. Du Pont de Nemours & Co., United States Steel Corp., and ConocoPhillips), with more than 4.4 million pounds per year (2,000 tons) of toxic chemicals released into the air. GE has also been implicated in the creation of toxic waste. According to EPA documents, only the United States Government, Honeywell, and Chevron Corporation are responsible for producing more Superfund toxic waste sites. In 1983, New York State Attorney General Robert Abrams filed suit in the United States District Court for the Northern District of New York to compel GE to pay for the clean-up of what was claimed to be more than 100,000 tons of chemicals dumped from its plant in Waterford, New York. In 1999, the company agreed to pay a $250 million settlement in connection with claims it polluted the Housatonic River (Pittsfield, Massachusetts) and other sites with polychlorinated biphenyls (PCBs) and other hazardous substances.
In 2003, acting on concerns that the plan proposed by GE did not "provide for adequate protection of public health and the environment," the United States Environmental Protection Agency issued a unilateral administrative order for the company to "address cleanup at the GE site" in Rome, Georgia, also contaminated with PCBs. The nuclear reactors involved in the 2011 crisis at Fukushima I in Japan were GE designs, and the architectural designs were done by Ebasco, formerly owned by GE. Concerns over the design and safety of these reactors were raised as early as 1972, but tsunami danger was not discussed at that time. Nuclear power reactors of the same GE design are still operating in the US; however, as of May 31, 2019, the controversial Pilgrim Nuclear Generating Station, in Plymouth, Massachusetts, has been shut down and is in the process of decommissioning. Pollution of the Hudson River GE heavily contaminated the Hudson River with polychlorinated biphenyls (PCBs) between 1947 and 1977. This pollution caused a range of harmful effects to wildlife and people who eat fish from the river or drink the water. In response to the contamination, activists protested in various ways. Musician Pete Seeger founded the Hudson River Sloop Clearwater and the Clearwater Festival to draw attention to the problem. In 1983, the United States Environmental Protection Agency (EPA) declared a 200-mile (320 km) stretch of the river, from Hudson Falls to New York City, to be a Superfund site requiring cleanup. This Superfund site is considered to be one of the largest in the nation. Other sources of pollution, including mercury contamination and sewage dumping, have also contributed to problems in the Hudson River watershed. Pollution of the Housatonic River Until 1977, GE polluted the Housatonic River with PCB discharges from its plant at Pittsfield, Massachusetts. EPA designated the Pittsfield plant and several miles of the Housatonic to be a Superfund site in 1997, and ordered GE to remediate the site. Aroclor 1254 and Aroclor 1260, made by Monsanto, were the primary contaminants of the pollution. The highest concentrations of PCBs in the Housatonic River are found in Woods Pond in Lenox, Massachusetts, just south of Pittsfield, where they have been measured at up to 110 mg/kg in the sediment. About 50% of all the PCBs currently in the river are estimated to be retained in the sediment behind Woods Pond dam. Former filled oxbows are also polluted. Waterfowl and fish that live in and around the river contain significant levels of PCBs and can present health risks if consumed. Environmental initiatives On June 6, 2011, GE announced that it had licensed solar thermal technology from California-based eSolar for use in power plants that use both solar and natural gas. On May 26, 2011, GE unveiled its EV Solar Carport, a carport that incorporates solar panels on its roof, with electric vehicle charging stations under its cover. In May 2005, GE announced the launch of a program called "Ecomagination", intended, in the words of CEO Jeff Immelt, "to develop tomorrow's solutions such as solar energy, hybrid locomotives, fuel cells, lower-emission aircraft engines, lighter and stronger durable materials, efficient lighting, and water purification technology". The announcement prompted an op-ed piece in The New York Times to observe that, "while General Electric's increased emphasis on clean technology will probably result in improved products and benefit its bottom line, Mr.
Immelt's credibility as a spokesman on national environmental policy is fatally flawed because of his company's intransigence in cleaning up its own toxic legacy." GE has said that it will invest $1.4 billion in clean technology research and development in 2008 as part of its Ecomagination initiative. As of October 2008, the scheme had resulted in 70 green products being brought to market, ranging from halogen lamps to biogas engines. In 2007, GE raised the annual revenue target for its Ecomagination initiative from $20 billion in 2010 to $25 billion following positive market response to its new product lines. In 2010, GE continued to raise its investment by adding $10 billion into Ecomagination over the next five years. GE Energy's renewable energy business has expanded greatly, to keep up with growing U.S. and global demand for clean energy. Since entering the renewable energy industry in 2002, GE has invested more than $850 million in renewable energy commercialization. In August 2008, it acquired Kelman Ltd, a Northern Ireland-based company specializing in advanced monitoring and diagnostics technologies for transformers used in renewable energy generation and announced an expansion of its business in Northern Ireland in May 2010. In 2009, GE's renewable energy initiatives, which include solar power, wind power and GE Jenbacher gas engines using renewable and non-renewable methane-based gases, employ more than 4,900 people globally and have created more than 10,000 supporting jobs. GE Energy and Orion New Zealand (Orion) have announced the implementation of the first phase of a GE network management system to help improve power reliability for customers. GE's ENMAC Distribution Management System is the foundation of Orion's initiative. The system of smart grid technologies will significantly improve the network company's ability to manage big network emergencies and help it to restore power faster when outages occur. In June 2018, GE Volunteers, an internal group of GE Employees, along with Malaysian Nature Society, transplanted more than 270 plants from the Taman Tugu forest reserve so that they may be replanted in the forest trail which is under construction. Educational initiatives GE Healthcare is collaborating with The Wayne State University School of Medicine and the Medical University of South Carolina to offer an integrated radiology curriculum during their respective MD Programs led by investigators of the Advanced Diagnostic Ultrasound in micro-gravity study. GE has donated over one million dollars of Logiq E Ultrasound equipment to these two institutions. Marketing initiatives Between September 2011 and April 2013, GE ran a content marketing campaign dedicated to telling the stories of "innovators—people who are reshaping the world through act or invention". The initiative included 30 3-minute films from leading documentary film directors (Albert Maysles, Jessica Yu, Leslie Iwerks, Steve James, Alex Gibney, Lixin Fan, Gary Hustwit and others), and a user-generated competition that received over 600 submissions, out of which 20 finalists were chosen. Short Films, Big Ideas was launched at the 2011 Toronto International Film Festival in partnership with cinelan. Stories included breakthroughs in Slingshot (water vapor distillation system), cancer research, energy production, pain management and food access. Each of the 30 films received world premiere screenings at a major international film festival, including the Sundance Film Festival and the Tribeca Film Festival. 
The winning amateur director film, The Cyborg Foundation, was awarded a prize at the 2013 at Sundance Film Festival. According to GE, the campaign garnered more than 1.5 billion total media impressions, 14 million online views, and was seen in 156 countries. In January 2017, GE signed an estimated $7 million deal with the Boston Celtics to have its corporate logo put on the NBA team's jersey. Political affiliation In the 1950s, GE sponsored Ronald Reagan's TV career and launched him on the lecture circuit. GE has also designed social programs, supported civil rights organizations, and funds minority education programs. Notable appearances in media In the early 1950s, Kurt Vonnegut was a writer for GE. A number of his novels and stories (notably Cat's Cradle and Player Piano) refer to the fictional city of Ilium, which appears to be loosely based on Schenectady, New York. The Ilium Works is the setting for the short story "Deer in the Works". In 1981, GE won a Clio award for its :30 Soft White Light Bulbs commercial, We Bring Good Things to Life. The slogan "We Bring Good Things to Life" was created by Phil Dusenberry at the ad agency BBDO. GE was the primary focus of a 1991 short subject Academy Award-winning documentary, Deadly Deception: General Electric, Nuclear Weapons, and Our Environment, that juxtaposed GE's "We Bring Good Things To Life" commercials with the true stories of workers and neighbors whose lives have been affected by the company's activities involving nuclear weapons. In 2013, GE received a National Jefferson Award for Outstanding Service by a Major Corporation. See also GE Technology Infrastructure Knolls Atomic Power Laboratory List of assets owned by General Electric Phoebus cartel Top 100 US Federal Contractors References Further reading Carlson, W. Bernard. Innovation as a Social Process: Elihu Thomson and the Rise of General Electric, 1870–1900 (Cambridge: Cambridge University Press, 1991). Woodbury, David O. Elihu Thomson, Beloved Scientist (Boston: Museum of Science, 1944) Haney, John L. The Elihu Thomson Collection American Philosophical Society Yearbook 1944. Hammond, John W. Men and Volts: The Story of General Electric, published 1941, 436 pages. Mill, John M. Men and Volts at War: The Story of General Electric in World War II, published 1947. Irmer, Thomas. Gerard Swope. In Immigrant Entrepreneurship: German-American Business Biographies, 1720 to the Present, vol. 4, edited by Jeffrey Fear. German Historical Institute. 
External links 1892 establishments in New York (state) Aircraft engine manufacturers of the United States American companies established in 1892 Companies listed on the New York Stock Exchange Conglomerate companies of the United States Conglomerate companies established in 1892 Electric power companies of the United States Electrical engineering companies of the United States Electrical wiring and construction supplies manufacturers Electronics companies established in 1892 Former components of the Dow Jones Industrial Average GIS companies Guitar amplification tubes Lighting brands Manufacturing companies based in Boston Manufacturing companies established in 1892 Marine engine manufacturers Military equipment of the United States Multinational companies headquartered in the United States Photography companies of the United States RCA Schenectady, New York Superfund sites in Washington (state) Thomas Edison Time-sharing companies Transportation companies of the United States Transportation companies based in New York (state) Electric motor manufacturers Pump manufacturers
11354344
https://en.wikipedia.org/wiki/Distributed%20networking
Distributed networking
Distributed networking is a distributed computing network system in which components of a program and its data are spread across, and depend on, multiple sources. Overview Distributed networking, used in distributed computing, is the network system over which software and its data are spread out across more than one computer; the components communicate complex messages through their nodes (computers) and are dependent upon each other. The goal of a distributed network is to share resources, typically to accomplish a single or similar goal. Usually, this takes place over a computer network; however, internet-based computing is rising in popularity. Typically, a distributed networking system is composed of processes, threads, agents, and distributed objects. Merely having physically distributed components is not enough to constitute a distributed network; typically, distributed networking uses concurrent program execution. Client/server Client/server computing is a type of distributed computing where one computer, a client, requests data from the server, a primary computing center, which responds to the client directly with the requested data, sometimes through an agent. Client/server distributed networking is also popular in web-based computing. Client/server is the principle that a client computer can provide certain capabilities for a user and request others from other computers that provide services for the client. The Web's Hypertext Transfer Protocol (HTTP) is essentially all client/server; a minimal sketch of this request and response pattern appears below. Agent-based A distributed network can also be agent-based, where what controls the agent or component is loosely defined, and the components can have either pre-configured or dynamic settings. Decentralized Decentralization means that each computer on the network can be used for the computing task at hand, the opposite of the client/server model. Typically, only idle computers are used, which is thought to make such networks more efficient. Peer-to-peer (P2P) computation is based on a decentralized, distributed network, as are distributed ledger technologies such as blockchain. Mesh Mesh networking is a local network of devices (nodes), originally designed to communicate through radio waves, that allows for different types of devices; each node is able to communicate with every other node on the network. Advantages of distributed networking Prior to the widespread availability of low-cost desktop computers in the 1980s, computing was typically centralized on a single machine. Today, computing resources (computers or servers) are typically physically distributed across many places, a situation that distributed networking handles well. Some types of computing do not scale well past a certain level of parallelism or beyond the gains offered by superior hardware components, and thus become bottlenecked; very long instruction word (VLIW) architectures are one example. By increasing the number of computers rather than the power of their components, these bottlenecks are overcome. Situations where resource sharing becomes an issue, or where higher fault tolerance is needed, also benefit from distributed networking. Distributed networking is also very supportive of higher levels of anonymity. Cloud computing Enterprises with rapid growth and scaling needs may find it challenging to maintain their own distributed network under the traditional client/server computing model. Cloud computing delivers distributed computing as a utility through Internet-based applications, storage, and computing services.
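The request and response pattern described in the Client/server section above can be sketched in a few lines. The following is a minimal, illustrative example rather than a description of any particular system; the port number and message contents are arbitrary choices. A server process listens for a request and answers it, while a client process sends the request and reads the reply over a TCP socket:

```python
# Minimal illustrative client/server sketch (not tied to any specific system).
# The local endpoint and message contents are arbitrary demonstration values.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050  # hypothetical local endpoint

def handle_one_request(srv: socket.socket) -> None:
    """Server side: accept one connection and answer a single request."""
    conn, _addr = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"server response to: {request}".encode())

if __name__ == "__main__":
    # The server socket is created (bound and listening) before the client
    # connects, so the request cannot arrive before the server is ready.
    with socket.create_server((HOST, PORT)) as srv:
        worker = threading.Thread(target=handle_one_request, args=(srv,))
        worker.start()
        with socket.create_connection((HOST, PORT)) as sock:  # client side
            sock.sendall(b"request for data")
            print(sock.recv(1024).decode())
        worker.join()
```

Real deployments add concurrency, error handling, and an application protocol such as HTTP on top of this pattern, but the division of roles is the same: the client asks, the server answers.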
A cloud is a cluster of computers or servers that are closely connected to provide scalable, high-capacity computing or related tasks. See also Cloud computing Data center Distributed data store Distributed file system Distributed computing Peer-to-peer References File sharing networks Distributed data storage
20102550
https://en.wikipedia.org/wiki/Aladdin%20%28food%20%26%20beverage%20containers%29
Aladdin (food & beverage containers)
Aladdin is a brand notable for its line of character lunchboxes including Hopalong Cassidy, Superman, Mickey Mouse and The Jetsons. Today, the Aladdin food and beverage products brand is owned by Pacific Market International, LLC of Seattle, Washington, while the Aladdin kerosene lamp and wick products brand is owned by Hattersley Aladdin Ltd of the United Kingdom. Aladdin Industries Aladdin Industries was a vendor of lunchboxes, kerosene lamps, stoves and thermal food storage containers. It was founded in Chicago in 1908 by Victor Samuel Johnson, Sr. and incorporated as the Mantle Lamp Company. Aladdin Industries was created as a subsidiary of Mantle Lamp Company in 1914, specifically to manufacture vacuum bottles. The company was further diversified under Johnson's leadership as president. It was the maker of the first character lunchbox, using images of Hopalong Cassidy, in 1950. In 1908, Johnson Sr., a Chicago soap salesman, became interested in kerosene mantle burners. Dissatisfied with the available kerosene lamps of the time, Johnson began selling U.S.-made mantle lamps. He incorporated his lamp sales business and called the company the Mantle Lamp Company of America. In 1912, the company began manufacturing mantle lamps that gave off a steady white light without smoke. They called these lamps Aladdin lamps after the magical lamp and wish-granting genie in the children's story. In 1917, Johnson Sr. diversified the company's offerings and began producing insulated cooking dishes, known at the time as Aladdin Thermalware jars. These Thermalware jars were the company's first venture into heat- and cold-retaining dishes and are early cousins of the products in use today. In 1919, Johnson moved these jars into a new subsidiary he called Aladdin Industries. This subsidiary manufactured and sold thermalware jars and vacuum ware successfully from 1919 to 1943. In 1943, Johnson Sr. died and his son Victor S. Johnson, Jr. took over as president of Aladdin Industries, Inc. In 1949, in an effort to centralize operations, Johnson Jr. moved Aladdin's offices and manufacturing facilities to Nashville, Tennessee. Under Johnson Jr.'s management, Aladdin began producing metal lunch boxes in the 1940s. By the 1950s, Aladdin was an industry leader in this category and would remain so for the next 30 years. Aladdin's dominance in lunch products resulted from a strategic move in the early 1950s to license popular character images on its products. Hopalong Cassidy was the first licensed character product, and in its first year sales went from 50,000 units to 600,000 units. Subsequent branding included Superman, Mickey Mouse and The Jetsons. In 1965, Aladdin Industries expanded its product line through the acquisition of the Stanley Bottle operation. This move helped solidify the company's position in the food and beverage container category by deepening its line of steel offerings. In 1968, Aladdin introduced the insulated thermal tray, which revolutionized meal distribution for airlines, and later for hospitals and other mass-feeding institutions, which could at last keep hot foods hot and cold foods cold for long periods of time. Aladdin Industries incorporated Aladdin Synergetics as a new division for healthcare foodservice products. In 1998, this subsidiary was sold to Welbilt Corporation and was renamed Aladdin Temp-Rite. In 2002, Aladdin Temp-Rite was acquired by the Ali Group.
1980s – 2002 During the 1980s and 1990s, Aladdin continued to grow, and by the mid-1990s its Nashville operation employed 1,100 people. At this time, foam-insulated mugs grew in popularity, and Aladdin's products were sold in grocery chains nationwide. Aladdin closed its Nashville factory on Murfreesboro Road, which produced its last thermal products in July 2002. Seattle-based company Pacific Market International, LLC purchased the Aladdin brand in 2002. Current Aladdin brand The Aladdin brand is now owned by the privately held Pacific Market International (PMI), and is headquartered in Seattle, Washington, with offices in Europe, Asia and Australia. As of 2009, PMI sells vacuum flasks and other thermal products manufactured under the Aladdin name. Literary references The protagonist in Penelope Fitzgerald's Booker Prize-shortlisted novel The Gate of Angels, set in 1912, uses one of the company's lamps (an "Aladdin") in the fictional college of St Angelicus, where the use of electricity or gas is not permitted. See also Lunch box Aladdin Industries Stanley bottle References External links Aladdin Industries from The Tennessee Encyclopedia of History and Culture A History of Aladdin Lamps — TeriAnn's Guide to Aladdin Mantle Lamps Manufacturing companies based in Seattle Defunct manufacturing companies based in Tennessee Manufacturing companies established in 1908 1908 establishments in Illinois Manufacturing companies disestablished in 2002 2002 disestablishments in Tennessee Vacuum flasks American companies established in 1908
20094331
https://en.wikipedia.org/wiki/Protected%20Streaming
Protected Streaming
Protected Streaming is a DRM technology by Adobe. The aim of the technology is to protect digital content (video or audio) from unauthorized use. Protected Streaming consists of many different techniques; basically there are two main components: encryption and SWF verification. This technique is used by the Hulu desktop player and the RTÉ Player. Fifa.com also uses this technique to serve the videos on the official site. Some videos on YouTube also use RTMPE, including those uploaded there by BBC Worldwide. Encryption Streamed content is encrypted by the Flash Media Server "on the fly", so that the source file itself does not need to be encrypted (a significant difference from Microsoft's DRM). For transmission ("streaming"), a special protocol is required, either RTMPE or RTMPS. RTMPS uses SSL-encryption. In contrast, RTMPE is designed to be simpler than RTMPS, by removing the need to acquire a SSL Certificate. RTMPE makes use of well-known industry standard cryptographic primitives, consisting of Diffie–Hellman key exchange and HMACSHA256, generating a pair of RC4 keys, one of which is then used to encrypt the media data sent by the server (the audio or video stream), while the other key is used to encrypt any data sent to the server. RTMPE caused less CPU-load than RTMPS on the Flash Media Server. Adobe fixed the security issue in January 2009, but did not fix the security holes in the design of the RTMPE algorithm itself. Analysis of the algorithm shows that it relies on security through obscurity. For example, this renders RTMPE vulnerable to Man in the Middle attacks. Tools which have a copy of the well-known constants extracted from the Adobe Flash Player are able to capture RTMPE streams, a form of the trusted client problem. Adobe issued DMCA takedowns on RTMPE recording tools, including rtmpdump, to try to limit their distribution. In the case of rtmpdump, however, this led to a Streisand effect. SWF verification The Adobe Flash Player uses a well-known constant, appended to information derived from the SWF file (a hash of the file and its size), as input to HMACSHA256. The HMACSHA256 key is the last 32 bytes of the server's first handshake packet. The Flash Media Server uses this to limit access to those clients which have access to the SWF file (or have been given a copy of the hash and the size of the SWF file). All officially allowed clients (which are in fact *.swf files) need to be placed on the Flash Media Server streaming the file. Any other client requesting a connection will receive a "connection reject". The combination of both techniques is intended to ensure streams cannot be sniffed and stored into a local file, as SWF verification is intended to prevent third party clients from accessing the content. However, it does not achieve this goal. Third party clients are free to write the decrypted content to a local file simply by knowing the hash of the SWF file and its size. In practice, therefore, Adobe's own implementation of the Macromedia Flash Player is the only client which does not allow saving to a local file. The only possible way to restrict connections to a Flash Media Server is to use a list of known hosts, to avoid the whole player (the Flash client) being placed on an unauthorised website. Even this has no real benefit for mass-distributed files, as any one of the known hosts could take a copy of the data and re-distribute it at will. Thus "known host" security is only useful when the known hosts can be trusted not to re-distribute the data. 
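The SWF verification calculation described above can be sketched as follows. This is an illustrative reconstruction based only on the description in this article: the constant, SWF bytes, and handshake bytes are placeholders, and the real protocol's exact byte layout and constant are not reproduced here.

```python
# Illustrative sketch of the SWF-verification computation described above.
# The constant, SWF bytes, and handshake bytes are placeholders; the real
# protocol's exact byte layout and constant are not reproduced here.
import hashlib
import hmac
import struct

WELL_KNOWN_CONSTANT = b"placeholder-client-constant"   # hypothetical value

def swf_verification_token(swf_bytes: bytes, server_handshake: bytes) -> bytes:
    """Combine the SWF digest and size with a constant, then HMAC the result
    using the last 32 bytes of the server's first handshake packet as the key."""
    swf_digest = hashlib.sha256(swf_bytes).digest()
    swf_size = struct.pack(">I", len(swf_bytes))        # 4-byte big-endian size
    message = WELL_KNOWN_CONSTANT + swf_digest + swf_size
    key = server_handshake[-32:]                        # last 32 handshake bytes
    return hmac.new(key, message, hashlib.sha256).digest()

# Example with dummy data: any client that holds the SWF hash and size (plus
# the handshake bytes) can produce the same token, which is why the scheme
# does not keep third-party clients out.
token = swf_verification_token(b"dummy swf contents", b"\x00" * 1536)
print(token.hex())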
Notes References Whitepaper by Adobe RTMPE (Adobe LiveDocs) RTMPS (Adobe LiveDocs) rtmpdump 2.1+ (Source code and binaries) Source code of rtmpdump v1.6 by Andrej Stepanchuk RTMPE specification, generated from the rtmpdump source code RTMFP encryption mechanism (DRAFT), reverse engineered from scratch Multimedia Network protocols
2356105
https://en.wikipedia.org/wiki/Microsoft%20Cluster%20Server
Microsoft Cluster Server
Microsoft Cluster Server (MSCS) is a computer program that allows server computers to work together as a computer cluster, to provide failover and increased availability of applications, or parallel calculating power in the case of high-performance computing (HPC) clusters (as in supercomputing). Microsoft has three technologies for clustering: Microsoft Cluster Service (MSCS, an HA clustering service), Component Load Balancing (CLB) (part of Application Center 2000), and Network Load Balancing Services (NLB). With the release of Windows Server 2008, the MSCS service was renamed Windows Server Failover Clustering (WSFC), and the Component Load Balancing (CLB) feature became deprecated. Prior to Windows Server 2008, clustering required (per Microsoft knowledge base articles) that all nodes in a cluster be as identical as possible, from hardware, drivers, and firmware all the way to software. After Windows Server 2008, however, Microsoft relaxed the requirements so that only the operating system needs to be at the same level (such as patch level). Background Cluster Server was codenamed "Wolfpack" during its development. Windows NT Server 4.0, Enterprise Edition was the first version of Windows to include the MSCS software. The software has since been updated with each new server release. The cluster software evaluates the resources of servers in the cluster and chooses which are used based on criteria set in the administration module. In June 2006, Microsoft released Windows Compute Cluster Server 2003, the first high-performance computing (HPC) cluster technology offering from Microsoft. History Microsoft's first attempt at a cluster server, originally priced at $10,000, ran into problems: buggy software caused unintended failovers that forced the workload of two servers onto a single server. This resulted in poor allocation of resources, poor server performance, and very poor reviews from analysts. In 1998, Microsoft announced an update to the Cluster Server software, promising new features in 1999, the next addition to the Windows NT line in the form of Windows NT 5.0 Enterprise Edition, and support for four nodes after the release of NT 5.0. Microsoft first promoted its compute cluster software at the 2005 Supercomputing conference in Seattle, while the new software under development, Windows Compute Cluster Server 2003 (Windows CCS 2003), was still in beta. On May 8, 2006, Microsoft announced that the full-featured Windows Compute Cluster Server 2003 (for industrial use) and Windows Compute Cluster Server 2003 R2 (for small businesses) would be released to the public for purchase in summer 2006. References External links Microsoft Clustering Services Cluster Server Cluster computing High-availability cluster computing
39327151
https://en.wikipedia.org/wiki/Wolfenstein%3A%20The%20New%20Order
Wolfenstein: The New Order
Wolfenstein: The New Order is a 2014 action-adventure first-person shooter video game developed by MachineGames and published by Bethesda Softworks. It was released on 20 May 2014 for Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, and Xbox One. The game is the seventh main entry in the Wolfenstein series and the successor to 2009's Wolfenstein, set in an alternate history 1960s Europe where the Nazis won the Second World War. The story follows war veteran William "B.J." Blazkowicz and his efforts to stop the Nazis from ruling over the world. The game is played from a first-person perspective and most of its levels are navigated on foot. The story is arranged in chapters, which players complete in order to progress. A morality choice in the prologue alters the game's storyline; some characters and small plot points are replaced throughout the two timelines. The game features a variety of weapons, most of which can be dual wielded. A cover system is present. Development began in 2010, soon after id Software gave MachineGames the rights for the franchise. The development team envisioned Wolfenstein: The New Order as a first-person action-adventure game, taking inspiration from previous games in the series and particularly focusing on the combat and adventure elements. The game attempts to delve into character development of Blazkowicz, unlike its predecessors—a choice from the developers to interest players in the story. They aimed to portray him in a heroic fashion. At release, Wolfenstein: The New Order received generally positive reviews, with praise particularly directed at the combat and the narrative of the game. Critics considered it a positive change to the series and nominated it for multiple year-end accolades, including Game of the Year and Best Shooter awards from several gaming publications. A stand-alone expansion, Wolfenstein: The Old Blood, was released in May 2015 and is set before the events of the game. A sequel, Wolfenstein II: The New Colossus, was released in October 2017. Gameplay Wolfenstein: The New Order is an action-adventure and first-person shooter video game played from a first-person perspective. To progress through the story, players fight enemies throughout levels. The game utilizes a health system in which health is divided into separate sections that regenerate; if an entire section is lost, players must use a health pack to replenish the missing health. Players use melee attacks, firearms, and explosives to fight enemies, and may run, jump, and occasionally swim to navigate through the locations. Melee attacks can be used to silently take down enemies without being detected. Alternatively, players can ambush enemies, which often results in an intense firefight between the two parties. A cover system can be used in combat as assistance against enemies. Players have the ability to lean around, over, and under cover, which can be used as a tactical advantage during shootouts and stealth levels. The game gives players a wide variety of weapon options; they can be found on the ground, retrieved from dead enemies, or removed from their stationary position and carried around. Weapon ammunition must be manually retrieved from the ground or from dead enemies. Players have access to a weapon inventory, which allows them to carry as many weapons as they find. With some of these weapons, players have the ability to dual wield, giving them an advantage over enemies by dealing twice as much damage. 
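The segmented, regenerating health mechanic described above can be modelled in a short sketch. This is purely illustrative Python based on the description in this section; the section sizes, counts, and amounts are invented and are not taken from the game's actual implementation:

```python
# Purely illustrative model of a segmented, regenerating health system as
# described above; the numbers are made up and are not taken from the game.

class SegmentedHealth:
    def __init__(self, sections: int = 4, section_size: int = 25):
        self.section_size = section_size
        self.max_health = sections * section_size
        self.health = self.max_health

    def take_damage(self, amount: int) -> None:
        self.health = max(0, self.health - amount)

    def regenerate(self, amount: int) -> None:
        """Health regenerates only up to the top of the current section;
        fully lost sections stay empty until a health pack is used."""
        if self.health == 0:
            return
        section_top = -(-self.health // self.section_size) * self.section_size
        self.health = min(section_top, self.health + amount)

    def use_health_pack(self, amount: int) -> None:
        """Health packs can restore health across lost sections."""
        self.health = min(self.max_health, self.health + amount)

hp = SegmentedHealth()
hp.take_damage(30)      # drops from 100 to 70, into the third section
hp.regenerate(50)       # regenerates only back to 75, the top of that section
hp.use_health_pack(25)  # a health pack restores the lost section -> 100
print(hp.health)
```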
Players can customize weapons through the use of upgrades; for example, a rocket launcher can be attached to the side of an assault rifle, and a wire cutting tool can be upgraded to a laser gun. Synopsis Setting and characters The New Order is set in an alternate universe where Nazi Germany have managed to deploy advanced technologies, enabling them to turn the tide against the Allies and ultimately win World War II. Its storyline is loosely connected to 2009's Wolfenstein and features returning characters Kreisau Circle leader Caroline Becker (Bonita Friedericy) and SS-Oberst-Gruppenfuhrer Wilhelm "Deathshead" Strasse (Dwight Schultz), the nemesis of series protagonist, U.S. special forces operative Captain William "B.J." Blazkowicz (Brian Bloom). The New Order has a branching narrative: during the prologue chapter, Deathshead forces Blazkowicz to decide the fate of one of his comrades. The player's choice as Blazkowicz will create two timeline versions of the game's storyline, where alternate characters are established as replacements for characters who otherwise would have significant roles in the plot. At the conclusion of the prologue chapter, either Scottish pilot Fergus Reid (Gideon Emery) or U.S. Army Private Probst Wyatt III (A.J. Trauth) survives and escapes Deathshead's compound. Blazkowicz suffers a severe head injury during the escape attempt and lapses into a persistent vegetative state. He is brought to a psychiatric asylum in Poland, where he is cared for by its head nurse Anya Oliwa (Alicja Bachleda) and her parents, who run the facility under the Nazi regime. Blazkowicz watches as Anya's parents are regularly forced to hand patients over to Nazi authorities, who deem them Untermenschen for their mental disabilities and take them to Deathshead for unknown experimentation. Blazkowicz and Anya enter into a romantic relationship over the course of the game's narrative. Other major characters include Frau Engel (Nina Franoszek), the Commandant of an extermination camp in northern Croatia known as Camp Belica; Set Roth (Mark Ivanir), a member of a Jewish mystical secret society known as the Da'at Yichud who is incarcerated at Camp Belica; Bombate (Peter Macon), a Namibian prisoner of Camp Belica who assists Blazkowicz; and Max Hass (Alex Solowitz), a seemingly brain-damaged member of the Resistance who is looked after by former Nazi member Klaus Kreutz. Plot In July 1946, Blazkowicz and his comrades take part in an air raid against a fortress and weapons laboratory run by Deathshead, but are captured and brought to a human experimentation laboratory. Blazkowicz escapes from the laboratory's emergency incinerator, although he is severely injured. He is admitted to a Polish psychiatric asylum where he remains in a catatonic state. In 1960, the Nazi regime orders the asylum to be "shut down", and executes Anya's parents when they resist. Blazkowicz awakens from his vegetative state and eliminates the extermination squad before escaping with Anya. Blazkowicz and Anya drive to her grandparents' farm, where they inform him that the Nazis had defeated the United States in 1948 and that the members of the ensuing anti-Nazi Resistance have been captured. Blazkowicz interrogates a captured officer from the asylum, learning that the top members of the Resistance are imprisoned in Berlin's Eisenwald Prison. Anya's grandparents smuggle her and Blazkowicz through a checkpoint in Stettin before they travel to Berlin. During the train ride, Blazkowicz encounters Frau Engel for the first time. 
When they arrive, Anya helps Blazkowicz break into Eisenwald Prison, where he rescues the person he spared fourteen years prior (Fergus or Wyatt) and finds that the Resistance movement is led by Caroline, who was left paralyzed due to an incident at Isenstadt in 2009's Wolfenstein. The Resistance execute an attack on a Nazi research facility in London, bombing the facility and stealing secret documents and prototype stealth helicopters. The documents reveal the Nazis are relying on reverse-engineered technology derived from the Da'at Yichud, which created such inventions as energy weapons, computer artificial intelligence, and super concrete; however, it is revealed that someone is tampering with the super concrete's formula, making it susceptible to mold deterioration. The Resistance discover a match with Set, who is imprisoned in Camp Belica. Blazkowicz agrees to go undercover inside Camp Belica and meets Set, who tells him that the Nazis have co-opted Da'at Yichud technology to mass-produce and control robots, and offers to help the Resistance in return for the destruction of the camp. Blazkowicz finds a battery for a device that controls Camp Belica's robots, which he and Set then use to incapacitate Engel, destroy the camp, and liberate its prisoners. Set reveals that the Nazis' discovery of one of the Da'at Yichud caches, which included advanced technology centuries ahead of its time, is what allowed Germany to surpass the Allies in military might. Set agrees to assist the Resistance by revealing the location of one such cache, but states that the Resistance requires a U-boat to access it. Blazkowicz obtains a U-boat, but discovers that it is the flagship of the Nazis' submarine fleet, and is equipped with a cannon designed to fire nuclear warheads, which requires keycodes from the Nazi lunar research facility to operate. Blazkowicz steals the identity of a Nazi lunar scientist and infiltrates the Lunar Base. He succeeds at obtaining the keycodes, but upon returning to Earth, he discovers that Engel has mounted an assault on the Resistance base, capturing some of its members on behalf of Deathshead. The Resistance use the Spindly Torque—a Da'at Yichud spherical device capable of destroying super concrete—to break open Deathshead's compound. After liberating the compound's captives, Blazkowicz travels to the top of the tower, where Deathshead's workshop is located. Inside, Deathshead reveals to Blazkowicz that he possesses the preserved brain of the soldier that Blazkowicz chose to die, and puts it in a robot. The robot comes alive and assaults Blazkowicz, who defeats it and puts his friend to rest by destroying the brain. Commandeering a larger robot mecha, Deathshead attacks Blazkowicz, who gets the upper hand and destroys the robot. He drags Deathshead out of the wreckage and attacks him; Deathshead pulls out and arms a grenade, which explodes, killing Deathshead and gravely wounding Blazkowicz. As he crawls towards a window, Blazkowicz mentally recites "The New Colossus" as he watches the Resistance survivors board a helicopter. Believing that they have reached safety, Blazkowicz gives instructions to fire the nuclear cannon. After the credits, a helicopter is heard approaching. Development After developer MachineGames was founded, the employees began brainstorming ideas and pitching them to publishers. In June 2009, MachineGames owner ZeniMax Media acquired id Software and all of its property, including Doom, Quake and Wolfenstein.
Bethesda Softworks, who had previously declined a pitch from MachineGames, suggested that they develop a new game from a franchise acquired by ZeniMax. MachineGames inquired about developing a new game in the Wolfenstein series; the studio visited id Software, who approved of MachineGames' request for a new Wolfenstein game. By November 2010, paperwork was signed, allowing MachineGames to develop Wolfenstein: The New Order. Preliminary development lasted approximately three years. The existence of Wolfenstein: The New Order was first acknowledged by Bethesda Softworks on 7 May 2013, through the release of an announcement trailer. Prior to this, Bethesda teased the upcoming project by releasing three images with the caption "1960". Though originally due for release in late 2013, the game was delayed to 2014 so that the developers could further "polish" it. In February 2014, it was announced that The New Order would launch on 20 May 2014 in North America, on 22 May 2014 in Australia, and on 23 May 2014 in Europe. The Australian and European release dates were later pushed forward, resulting in a worldwide launch on 20 May 2014. All pre-orders of the game granted the purchaser an access code to the Doom beta, developed by id Software. In accordance with Strafgesetzbuch section 86a, the German release of The New Order had all Nazi symbols and references removed. The German software ratings board, Unterhaltungssoftware Selbstkontrolle, later introduced the "social adequacy clause", which allowed the use of such imagery in relevant scenarios, reviewed on a case-by-case basis. Bethesda made the uncensored international version, which lacks German as a language option, available for purchase in Germany on 22 November 2019, while continuing to sell the censored and localised version separately. Following the game's release, MachineGames began developing Wolfenstein: The Old Blood, a standalone expansion pack set before the events of The New Order. It was released in May 2015. Gameplay design The initial inspiration for Wolfenstein: The New Order came from previous games in the franchise. Senior gameplay designer Andreas Öjerfors said that it was the "super intense immersive combat" that defined the previous games, so MachineGames ensured that this element was included in The New Order. The development team refer to the game as a "first-person action adventure", naming this one of the unique defining points of the game. "It is the David vs Goliath theme", Öjerfors explained. "B.J. against a global empire of Nazis." Öjerfors acknowledged that many aspects of the game's narrative are exaggerated elements of the Nazi Party: "The larger than life leaders, strange technology, strange experiments." The team viewed the game as a "dark-roasted blend of drama, mystery, humor". Creative director Jens Matthies explained that they "take perhaps the most iconic first-person shooter franchise in history and push it into a strange new world". Wolfenstein: The New Order is the second game to use id Software's id Tech 5 engine, after Rage (2011). The game utilizes the engine to add a large amount of detail to the game world. The team often found it difficult to develop the game with 1080p resolution at 60 frames per second, particularly in complex environments, but "we always made it work somehow", said Matthies.
He has said that the main advantages of the engine are its speed and detailing, while its biggest disadvantage is dynamic lighting; "on the other hand the static light rendering is really awesome, so you have full radiosity and can do really spectacular-looking things using that," he added. Senior concept artist Axel Torvenius said that one of the main inspirations for the art design of the game was films from the 1960s, calling out the James Bond movies. The design for the Nazis in the game was influenced by the aesthetics of the Nazis at the end of World War II; "it's blended with the style of the 1960s and the fashion ideals of how to express yourself visually", Öjerfors explained. This viewpoint is influenced by the element of exaggeration, which is common throughout the game's design and has been acknowledged by the team as a development inspiration. Character models can be covered in up to a 256k texture; however, this is not used often in the game on individual characters, due to the difficulty of seeing it from a distance. Wolfenstein: The New Order only features a single-player mode. The team felt that dividing focus and resources across both a single-player and an online multiplayer mode would be less efficient. When questioned about the lack of an online multiplayer mode, Öjerfors explained that the decision was simple. "If we could take every bit of energy and sweat the studio has and pour all that into the single-player campaign, it gives us the resources to make something very, very cool, compared to if we would also have to divert some of our resources to making multiplayer." Executive producer Jerk Gustafsson attributed the choice to the style of game the team is familiar with, stating that MachineGames is "a single-player studio". Characters and setting The team attempted to develop characters that offer a unique experience to the game. "The overarching goal for us was about building an ensemble of genuinely interesting characters we wanted to interact with", said Matthies. They strived to connect the thoughts and actions of all characters to the human experience, allowing players to know "why a person is doing what they are doing". Matthies feels that all characters, particularly the allies, contain some dimension of his own personality. "They're an expression of something that is part of me that I think is interesting to explore", he said. The game's playable character, William "B.J." Blazkowicz, has previously been featured as the playable protagonist of all Wolfenstein games. When developing the character of Blazkowicz for The New Order, MachineGames considered his appearances in previous games in the series. When doing this, they realised that the character had never really developed at all throughout the games; "He's just the guy that you play", said Pete Hines, Vice President of PR and Marketing for Bethesda. The team discovered that they were interested in exploring his story, which is what they later invested in. Throughout the game, Blazkowicz communicates some of his inner thoughts through short monologues, many of which reveal that he has been traumatized by some of his experiences. "We always loved the idea of a prototypical action hero exterior juxtaposed with a rich and vulnerable interior psychology", said Matthies.
One of the largest priorities for the team when developing the character of Blazkowicz was to "reveal whatever needs to be revealed to [Blazkowicz] and the player" simultaneously; Matthies felt that, despite the simplicity of this concept, it is rarely used in games. Prior to developing The New Order, the team had primarily worked on games that involved antihero protagonists. However, id Software wished Blazkowicz to be portrayed differently in the game. Matthies said, "It's really important to [id] that BJ is a hero, and not an anti-hero." The team attempted to develop Blazkowicz into a character that players could relate to, as they felt that players are generally unable to relate to video game protagonists. "The goal is not to have a protagonist that's so neutral that you can project yourself into them; the goal is to have a protagonist that is so relatable that you become them", said Matthies. They tried to make players become "emotionally in sync" with Blazkowicz, using the morality choice in the game's prologue to do so. Wilhelm "Deathshead" Strasse, the game's main antagonist, has been previously featured as an antagonist of Return to Castle Wolfenstein (2001) and main antagonist of Wolfenstein (2009). For The New Order, the team achieved closure on his story; to do so in an effective way, they wanted to find an interesting angle to portray him: his personality is full of enthusiasm, and he appreciates life after his near-death experience in the previous game. When developing the Nazis, Matthies states that the team "didn't want to cartoon-ify them", instead opting to treat them seriously. Gideon Emery, who portrayed Fergus Reid, auditioned for his role in the game. He described Fergus as "a tough as nails soldier, who gives [Blazkowicz] both support and a pretty hard time in the process". Matthies felt that Fergus is a type of father figure to Blazkowicz, and that he "only gives negative reinforcement". Conversely, he saw Wyatt as a "sort of son surrogate", as Blazkowicz is tasked as being his protector and mentor, and that he gives "positive reinforcement". Max Hass was inspired by the character of Garp from John Irving's novel The World According to Garp. "Max was the most challenging character to cast, which seems counter-intuitive because he's a pretty simple guy on paper, but it took a tremendous actor to pull that off and a long time to find him", Matthies said. A large aspect of the game is the alternate history in which it is set, where the Nazis won the Second World War. The team saw this aspect as an opportunity to create everything at a very large scale, with very few limitations; "so many things that we can create, and work with, and expand on. So, I never really felt that we were limited", said Öjerfors. Music production Wolfenstein: The New Order makes use of an original score that reflects the alternate universe depicted in the game. "We wanted to identify with different sounds that were kind of iconic, 1960s sounds, and then do our own twist on them to make a sound authentic enough that it felt realistic", said Hines. The team placed a high importance on the game's music. During the game's development, composer Mick Gordon traveled to Sweden to meet with the team, and he spotted the game over three days, partly collaborating with both Fredrik Thordendal and Richard Devine. 
Gordon described how composing the soundtrack for Wolfenstein: The New Order differed from his work on other games: "Usually you sign onto a project and then you're given a list of 150 battle cues to do." The team began searching for a genre on which to base the soundtrack. They initially sought inspiration from the music of Richard Wagner, who was admired by Nazi Party leader Adolf Hitler. After studying Wagner's work, however, the team discovered that it did not necessarily fit with the game's tone. They searched for a style of music that would suit the Nazis, ultimately selecting distortion. "There's lots of analogue distortion types, there's all sorts of different pedals and valves and things that are really breaking up", said Gordon. They also took inspiration from 1960s music, using analogue equipment such as tape machines and reel-to-reel machines. Gordon has said that the soundtrack is "a tribute to all things guitar". Working together, the team of musicians composed over six hours of music to score the game. Matthies said, "A lot of the score features odd time signatures yet it's all very groovy." Bethesda, AKQA, and COPILOT Music and Sound collaborated on the marketing campaign for Wolfenstein: The New Order to invent the fictional state-owned German record label Neumond Recording Company. The campaign was crafted to introduce the video game's alternate history in the form of pop music from the 1960s. The label promoted fictional German pop artists with ten singles: seven original songs and three covers reworked into German from their original versions. Each artist was given a full biography, and the singles were packaged with album cover artwork. The covered songs were featured in trailers but omitted from the game because the songs' owners did not want their work to be associated with Nazi imagery. The original songs created for the Neumond label were initially written in English to ensure that the lyrics reflected Wolfenstein's alternate history without creating content that could be used for actual propaganda outside of the game, given the sensitive nature of the game's subject matter. Reception Critical response Wolfenstein: The New Order was released to mostly positive reviews. Metacritic calculated an average score of 81 out of 100 based on 23 reviews for the Windows version, 79 out of 100 based on 18 reviews for the Xbox One version, and 79 out of 100 based on 73 reviews for the PlayStation 4 version. Reviewers liked the game's concept, narrative and combat mechanics. The combat mechanics of the game received praise. Daniel Hindes of GameSpot felt that the intensity and variety of the combat in the game granted the series "a breath of fresh air", and believed that it managed to fulfill his nostalgic expectations of the series. Ryan Taljonick of GamesRadar called it "satisfying". Simon Miller of VideoGamer.com lauded the game's shooting and stealth mechanics, naming the former as "solid". Similarly, GameSpot's Hindes noted that the stealth was "simple but effective", and named it one of the best things about the game. Steve Boxer of The Guardian also called out the stealth, calling it "decent". Colin Moriarty of IGN considered the narrative and characters one of the best features, stating that it's where the game "really shines". Matt Sakuraoka-Gilman of Computer and Video Games called the narrative "intelligently written, brilliantly voiced and highly polished".
Kotaku's Mike Fahey felt somewhat divided about the story, initially finding the attempts at emotion too obvious, but ultimately feeling satisfied, calling it "spectacular". He also praised the characterization of Blazkowicz in the game. GamesRadar's Taljonick also felt mixed about the game's characters, finding Blazkowicz interesting, but feeling as though the supporting characters were quite undeveloped, leaving players to forget about them during gameplay. Conversely, Matt Bertz of Game Informer noted that the attempts to give Blazkowicz more depth feel odd in reflection to his brutal actions during other parts of the game. VideoGamer.com's Miller also felt negatively about the narrative, calling it "awful". Joystiq's Ludwig Kietzmann commented on the drastic changes in the narrative's pacing, feeling that it "dragged down" whenever the player is forced to search for ammunition; Steven O'Donnell of Good Game believed otherwise, feeling like he was "gearing up and patching up" after each fight. The game's use of an alternate history concept, with the Axis victory in World War II, was commended by many reviewers. IGN's Moriarty and GameSpot's Hindes called it "interesting", with the former naming it one of the standout points of the game. Jason Hill of The Sydney Morning Herald called the concept "absorbing", while Owen Anslow of The Mirror called it "intriguing". Destructoid's Chris Carter felt that the development team "went all the way" and spent a lot of time on the game's concept. The graphical design of the game received commentary from reviewers. GameSpot's Hindes praised the visual design, noting that it accurately captured the time period, while effectively depicting the alternate storyline in which the game is set. Taljonick of GamesRadar stated that the game's level design contributes to his enjoyment of the shooting sequences. He also praised the size of the levels, enjoying the possibility of participating in a large gunfight "with some sort of plan". Kotaku's Fahey praised the level design for similar reasons, admiring the degree of detail in the game. Digital Spy's Liam Martin shared mixed commentary on the design, noting that the character models are animated well, but the game is "hardly a shining example of next-gen graphical potential". ABC's Alex Walker criticized the game's graphical design, commenting that the developers "focus[ed] their attention" on other aspects of the game. Most critics and commentators shared the opinion that The New Order was better than they were expecting from a Wolfenstein game. Jon Blyth of Official Xbox Magazine called the game an "unexpected gem", while ABC's Walker said that he "never expected [to] enjoy [the game] so much". The Sydney Morning Heralds Hill said that the game ensures that the series is "a relevant force again", while Destructoid's Carter felt that the game "does wonders for essentially rebooting the franchise without rendering all the previous stories moot". Edge agreed, calling the developers "brave". Tom Watson wrote in New Statesman that The New Order was "the big surprise of the year" for "modernis[ing] this old classic", praising its graphics, game play, and plot. Sales Within a week of its release, Wolfenstein: The New Order became the second best-selling game of 2014 in the United Kingdom, behind Titanfall. The game topped the weekly UK charts in its first week, totaling a quarter of all games sold in the region and accounting for 36% of revenue. 
According to MCV, it was the 22nd best-selling game of 2014 in the UK. In the United States, the game was the fourth and seventh best-selling game of May and June 2014, respectively. The game was ranked the fifth and fourteenth best-selling digital PlayStation 4 game of May and June 2014, respectively. In its first week in Japan, the PlayStation 3 and PlayStation 4 versions of the game were placed on the charts at 15th and 8th, respectively, collectively selling over 11,000 units. By June 2014, the game had sold almost 400,000 physical units in Europe, equating to over €21 million. Awards Wolfenstein: The New Order received multiple nominations and awards from gaming publications. The game won Game of the Year from Classic Game Room, received nominations from the Golden Joystick Awards, Good Game Game Informer, and IGN Australia, and received runner-up from Polygon. It was also placed on various lists of the best games of 2014: USA Today placed it at 9th, Eurogamer at 10th, and Ars Technica at 6th. The game also received nominations for Best Shooter from The Escapist, The Game Awards, Game Informer, GameTrailers, Hardcore Gamer and IGN. It received nominations signifying excellence in storytelling from The Game Awards, the Golden Joystick Awards, IGN Australia and the SXSW Gaming Awards. It achieved runner-up for Biggest Surprise awards from both Giant Bomb and the readers of Kotaku. It was also nominated for Best PC Game by IGN Australia, receiving runner-up by Kotaku readers. The game was also nominated for Best Multiplatform from Hardcore Gamer, Best Console Game from IGN Australia, and Best PlayStation 3 Game, Best Xbox 360 Game, and Best Xbox One Game from IGN. Sequel At E3 2017, Bethesda announced Wolfenstein II: The New Colossus, a sequel to The New Order. It was released on 27 October 2017. Notes References External links 2014 video games Action-adventure games Bethesda Softworks games Dystopian video games Experimental medical treatments in fiction First-person shooters Id Tech games MachineGames games Nazis in fiction Nazism in fiction PlayStation 3 games PlayStation 4 games Retrofuturistic video games Single-player video games Terrorism in fiction Video games about Nazi Germany Video games developed in Sweden Video games scored by Mick Gordon Video games set in 1946 Video games set in 1960 Video games set in Berlin Video games set in Croatia Video games set in Germany Video games set in Gibraltar Video games set in London Video games set in Poland Video games set in psychiatric hospitals Video games set on the Moon Video games with expansion packs Windows games Wolfenstein Video games about World War II alternate histories Xbox 360 games Xbox Cloud Gaming games Xbox One games
1587371
https://en.wikipedia.org/wiki/Ultimate%20Mortal%20Kombat%203
Ultimate Mortal Kombat 3
Ultimate Mortal Kombat 3 is a fighting game in the Mortal Kombat series, developed and released by Midway to arcades in 1995. It is a standalone update of Mortal Kombat 3, released earlier that year, with an altered gameplay system, additional characters such as the returning favorites Kitana and Scorpion who were missing from Mortal Kombat 3, and some new features. Several home ports of the game were released soon after the arcade original. Although none were completely identical to the arcade version, the Sega Saturn port came close to the arcade original, and some later home versions followed it with even more accuracy. Some versions were released under different titles: Mortal Kombat Advance for the Game Boy Advance in 2001 and Ultimate Mortal Kombat for the Nintendo DS in 2007. An iOS version recreating the game using a 3D graphics engine was released by Electronic Arts in 2010. Ultimate Mortal Kombat 3 was mostly well-received and has been considered a high point for the Mortal Kombat series. However, the iOS remake and some other home versions were received poorly. Ultimate Mortal Kombat 3 was updated to include more content from previous games in the series as Mortal Kombat Trilogy in 1996. The 2011 compilation Mortal Kombat Arcade Kollection includes an emulation of UMK3 as well as the first Mortal Kombat and Mortal Kombat II. Gameplay Two new gameplay modes have been introduced since the original Mortal Kombat 3: the 2-on-2 mode, which was similar to an Endurance match but with as many as three human players in a given round on both sides (these had not been seen in the series since the first Mortal Kombat), and a new eight-player Tournament mode. An extra Master difficulty is present. Shao Kahn's Lost Treasures – selectable prizes, of which some are extra fights and others lead to various cutscenes or other things – are introduced after either the main game or the eight-player Tournament is completed. To balance the gameplay, some characters were given new moves and some existing moves were altered. Some characters were given extra combos and some combos were made to cause less damage. Chain combos could be started by using a jump punch (vertical or angled) or a vertical jump kick, which creates more opportunities to use combos. Combos that knock opponents in the air no longer send one's opponent to the level above in multi-layered levels; only regular uppercuts do this. The computer-controlled opponent AI was improved in the game. However, three new flaws were introduced along with the revisions: while backflipping away from an opponent, if the player performs a jump kick, the AI character will always throw a projectile; this leaves the computer character vulnerable to some attacks and can easily lead into a devastating combo. If the player walks back-and-forth within a certain range of the AI character, the opponent will mimic the player's walking movements for the whole round and never attack. If the computer opponent is cornered, the player can repeatedly perform punches without the AI character stumbling back, thus allowing the player to win easily.
UMK3 features several new backgrounds: Scorpion's Lair/Hell (this stage also contains a new Stage Fatality, where an uppercut can send the opponent into a river of lava); Jade's Desert (in a reference to his MK3 ending, Cyrax is seen stuck waist-deep in sand in the background); River Kombat/The Waterfront; Kahn's Kave/The Cavern; Blue Portal/Lost (a combination of the background from the UMK3 "Choose Your Destiny" screen, the Pit 3 bridge, and the mountains and bridge from the Pit II in Mortal Kombat II); Noob's Dorfen (based on the Balcony stage, which can now be played using a Kombat Kode without having to fight Noob Saibot to see it as in MK3). Before reaching any of the original MK3 backgrounds in 1- or 2-player mode, the game must cycle through all of the UMK3 exclusive backgrounds twice. Scorpion's Lair, Secret Cave and Abandoned River stages are selectable by using a password while on the missing Bank Stage cycle. In Scorpion's Lair, fighters can uppercut each other into Kahn's Kave. The original red portal background used for the "Choose Your Destiny" screen is now blue. Some elements from MK3 are missing in UMK3. The only biographies featured are those of Kitana, Jade, Scorpion and Reptile (the ninja characters who were not included in MK3), which are the only four shown during attract mode, while all of the biographies and the full-body portraits of the MK3 characters are missing. The biographies that do appear in the game are presented differently from those in MK3, as are the endings. The storyline images and text do not appear. Finally, the Bank and Hidden Portal stages from MK3 were removed (Jade's Desert serves as a placeholder where The Bank stage used to appear once the player reaches the original MK3 level cycle). Characters The arcade version features all playable characters from Mortal Kombat 3, who were portrayed by the same actors: Cyrax (Sal Divita), Liu Kang (Eddie Wong), Kabal (Richard Divizio), Kano (Richard Divizio), Kung Lao (Tony Marquez), Stryker (Michael O'Brien), Jax Briggs (John Parrish), Nightwolf (Sal Divita), Sektor (Sal Divita), Shang Tsung (John Turk), Sheeva (stop motion), Sindel (Lia Montelongo), Smoke (Sal Divita), Sonya Blade (Kerri Hoskins) and Sub-Zero (John Turk). The boss and sub-boss from MK3, Motaro (stop motion) and Shao Kahn (Brian Glynn, voiced by Steve Ritchie), also return. Shang Tsung's transformations are accompanied by announcements of the name of the character he is changing into. There are four additional characters that are playable from the start: Several ninja characters from the first two games that have been absent from Mortal Kombat 3 return in Ultimate Mortal Kombat 3, including Kitana, Jade, Reptile and Scorpion on the prototype version; a new Ultimate Kombat Kode was added in revision 1.0 to enable Mileena, Ermac, and Classic Sub-Zero as secret characters. Jade (Becky Gable) – After the renegade princess Kitana killed her evil twin Mileena and escaped from Outworld to Earth, her close friend Jade was appointed by the emperor Shao Kahn to find and bring her back alive. Kitana (Becky Gable) – She is accused of treason after killing Mileena; she now attempts to reach queen Sindel to warn her of their true past. Reptile (John Turk) – As one of Shao Kahn's most trusted servants, Reptile assists Jade in the hunt for Kitana, but with secret orders enabling him to kill her if necessary. Scorpion (John Turk) – Scorpion escapes from Earth's hell after Shao Kahn's failed attempt at stealing the souls of Earthrealm. 
He eventually joins the struggle against Outworld. More are unlockable via the Ultimate Kombat Kode: Classic Sub-Zero (John Turk) – Having been seemingly killed in the first game, Sub-Zero mysteriously returns to again attempt the assassination of Shang Tsung. Ermac (John Turk) – A mysterious warrior that exists as a life force of the souls of dead Outworld warriors in Shao Kahn's possession. Mileena (Becky Gable) – After she was killed by Kitana, Mileena was brought back to life by Shao Kahn to help him defeat Earth's warriors with her combat skills and a mind-reading connection to her sister. Finally, Smoke's human form can be unlocked via a code entered right before a match. Returning characters were warmly welcomed by critics as an improvement to the "lackluster roster" of MK3 with "the greatly missed" Kitana, Mileena, Reptile, and especially Scorpion. The female ninja characters (Mileena, Kitana and Jade), returning from Mortal Kombat II, were portrayed by a different actress, Becky Gable, due to the lawsuit issued by Katalin Zamiar and some of the other MKII actors against Midway; they were also given a different set of outfits and hairstyles, which were again identical for all of them (in the game there are just three palette swapped character models for male, female and cyborg ninjas, not counting the MK3 Sub-Zero but including Classic Sub-Zero). There are also two new hidden opponents and console exclusives: Noob Saibot (John Turk) and Rain (John Turk). Although Noob Saibot was featured in the original MK3, he is no longer a palette swap of Kano but instead of a ninja; as before, he is fought via a Kombat Kode. Rain is featured in the game's opening montage (except on the Sega Saturn), but he is actually a fake hidden character that is not found in the arcade game. Both Noob Saibot and Rain were made playable for the 16-bit console versions, although Sheeva was removed, and the two boss characters are playable via a cheat code. Release Like previous Mortal Kombat games released up to that point, Ultimate Mortal Kombat 3 debuted in arcades. It first appeared in select arcades in early October 1995. Arcade owners who already owned Mortal Kombat 3 were provided with the option to upgrade to Ultimate Mortal Kombat 3 at no cost. In 2008, the Mortal Kombat series co-creator, designer and producer Ed Boon said that UMK3 is his favorite 2D Mortal Kombat title. It was also the last game he has programmed himself. Ultimate Mortal Kombat 3 was ported to many home consoles with varying results, including home (Super NES, Genesis and Sega Saturn) and portable consoles (Game Boy Advance and Nintendo DS), the Xbox Live Arcade, and iOS-based mobile devices and mobile phones. The game was also bundled with the premium version of Mortal Kombat: Armageddon for the PlayStation 2 and included in compilation release Mortal Kombat Arcade Kollection for the PC, PlayStation 3 and Xbox 360. The developers and publishers of the various releases included Acclaim Entertainment, Avalanche Software, Electronic Arts, Eurocom, Warner Bros. Interactive Entertainment, and Williams Entertainment. The later versions usually feature online play and other improvements over the arcade version, and in some cases even 3D graphics. A port for the 3DO Interactive Multiplayer was being developed by New Level but was canceled in 1996. 
Cited reasons for the cancellation include development delays which pushed the release date too far beyond the peak of Mortal Kombat 3s popularity and the fact that the Mortal Kombat franchise had no established presence on the console. Ultimate Mortal Kombat 3 Wave Net Ultimate Mortal Kombat 3 Wave Net (an abbreviation for Williams Action Video Entertainment Network) was a rare network version of the game that allowed for online multiplayer matches. It was tested only in the Chicago and San Francisco areas that used a dedicated T1 line, connected directly to Midway's Chicago headquarters. It is highly unlikely that any Wave Net test games were ever released to the public after the infrastructure was dismantled, and so there are no known ROM image dumps of this version. One of the reasons this version was not widely adopted was the cost of T1 lines at the time: the setup cost several thousand dollars per arcade installation, plus a few hundred dollars for each cabinet using the hardware. Williams' plan was to use WaveNet to upload new games and game updates, which they would provide to arcade owners for free in exchange for a cut of the games' revenues. Super NES The Super Nintendo Entertainment System (SNES) version was developed by Avalanche Software and published by Williams Entertainment in June, 1996 in North America, and by Acclaim Entertainment on November 28, 1996, in Europe. This version of the game uses the code from Sculptured Software's prior port of the original MK3 released a year earlier. The limitations of the system led to many cuts being made to fit everything in the SNES cartridge: the announcer no longer says the characters' names, Sheeva was removed, only the five new arcade backgrounds are featured, and Shao Kahn's Lost Treasures chest has only 10 boxes instead of 12. Also, many changes affect the game's finishing moves: Rain and Noob Saibot have no regular Fatalities or other finishing moves; Kitana's "Kiss of Death" only inflates the opponent's heads, reusing the effect from Kabal's "Air Pump" Fatality; Sonya Blade's Friendship from MK3 is used, as opposed to her Friendship from the arcade version of UMK3; Ermac's head punch Fatality is altered; Scorpion's "Hellraiser" Fatality is different (he takes the opponent back to the Hell stage, where the opponent simply burns to ash) and is no longer censored like the arcade one. Animality finishing moves were also removed, ironically keeping the Mercy move, which originally served as a requirement for Animalities in MK3. On the other hand, Brutalities were introduced; a finishing move in which the player attacks their opponent with a series of kicks and punches which result in the victim exploding. At the same time, some changes were actually improvements over the arcade version. Rain and Noob Saibot are made into playable characters for the first time. Mileena, Ermac, and Classic Sub-Zero are playable out of the box. Motaro and Shao Kahn are unlockable characters for two-player fights, although only one player can choose a boss at a given time. A cheat code allows access to three separate cheat menus, where the player can drastically alter gameplay, access hidden content or view the characters' endings, among many other things. Sega Genesis The Sega Genesis version was developed by Avalanche Software and published by Williams Entertainment on October 11, 1996, in North America and by Acclaim Entertainment on November 28, 1996, in Europe (Mega Drive version). 
Much like the SNES port, this version of the game uses the code from Sculptured Software's prior port of the original MK3 released a year earlier. Due to the limitations of the system's hardware, the Sega Genesis port featured inferior graphics and sound to those of the SNES port. Like the SNES version, Sheeva was removed, Shao Kahn's treasure chest has only 10 boxes, the announcer no longer says the characters' names, Kitana's "Kiss of Death" only inflates heads, Scorpion's "Hellraiser" Fatality is different, and Sonya's Friendship from Mortal Kombat 3 is used. There were, however, several differences. Unlike the SNES version, the Genesis version features more stages: in addition to the five new ones, it also features six of the original stages from MK3, including the Subway, Bank, Rooftop, Soul Chamber, The Temple, and The Pit 3. There are several additional cuts regarding special and finishing moves: both Animalities and Mercy were removed; Rain and Noob were given a Brutality, but no other finishing moves; Human Smoke shares Scorpion's combos, rather than having unique ones; in Stryker's Friendship, the running characters are replaced by dogs. It did, however, have exclusive features in comparison to the arcade. Again, like the SNES port, Rain and Noob Saibot are made playable characters along with bosses Motaro and Shao Kahn, and Mileena, Ermac, and Classic Sub-Zero are playable without any need for codes; Brutalities are also included in this version. Shang Tsung can morph into Robot Smoke, Noob Saibot, and Rain, which is not possible in the arcades, while Nightwolf is given the Red Shadow shoulder move that was later used in MKT. This version also features a rendition of Pong entitled MK4, which is the same as the one that appeared in the Genesis/Mega Drive port of MK3.
The GBA control system features two fewer buttons than those used in UMK3, which results in many special moves' button sequences being consolidated or changed. The violence in this game was toned down due to a younger fanbase using the GBA (though the game is still rated "M for Mature") and there is less blood. GameSpot named Mortal Kombat Advance the worst Game Boy Advance game of 2002, dubbing it "violently broken". PlayStation 2 On all "Premium Edition" copies of the PlayStation 2 version of 2006's Mortal Kombat: Armageddon, a near arcade-perfect version of the game is included in the first disc. However, it is impossible to save unlocked characters in this version without accessing the EJB menu. Xbox Live Arcade The Xbox Live Arcade version has very few differences from the arcade original. There are some minor glitches in the network play on Xbox Live and there is no option to save the Kombat Kode unlocked characters. Online leaderboards were created to keep track of all time network stats and friends, the screen size was adjustable for anything between 4:3 and 16:9 televisions, and unlockable Achievements were also included. The game was accidentally released by Warner Bros. Interactive on the digital download service on the Friday evening of October 20, 2006, but was quickly pulled about 20 minutes later. According to Xbox Live director of programming, Major Nelson, an emergency meeting was called to discuss what to do about the game's release, knowing some keen users had already purchased the game. The decision was made to go on and release the game on the next morning, four days before its scheduled release date. As of 2010, it remained as the only post-launch XBLA game to be released on any day other than Wednesday. As of June 2010, the game cannot be downloaded as it was removed from XBLA due to "publisher evolving rights and permissions". Those who have purchased the game before this date can re-download and play online. Nintendo DS On June 27, 2007, MK co-creator Ed Boon officially confirmed a Nintendo DS port entitled Ultimate Mortal Kombat. The game, developed by Other Ocean Interactive and published by Midway games on November 12, 2007, in North America and on December 7, 2007, in Europe, is an arcade-perfect port of UMK3, and includes Wi-Fi play and brings back the "Puzzle Kombat" minigame from Mortal Kombat: Deception. Additionally, when unlocking Ermac, Mileena and Classic Sub-Zero with Kombat Kodes on the VS screen, they will remain unlocked, thanks to the inclusion of player profiles. Mobile (J2ME) In December 2010, EA Mobile released a Java-based port of the game for mobile phones. The game features only six playable fighters (Cyrax, Liu Kang, Scorpion, Sub-Zero, Sonya, Kitana) and a single boss character (Shao Kahn). iOS In December 2010, Electronic Arts published their remake of the game for iOS. It features a wireless two-player mode that could function over either Wi-Fi or Bluetooth connections. Although the gameplay remains true to the 2D original, the digitized sprites of the arcade machine were replaced with 3D rendered graphics. Control was implemented via an on-screen joystick and buttons, utilizing the iOS-based devices' capacitive touchscreen. Network communication allowed for scores to be posted online, and a simplified control scheme was also included to improve accessibility. The character roster was incomplete, featuring only nine playable characters (Sub-Zero, Scorpion, Kitana, Nightwolf, Jax, Sheeva, Sonya, Liu Kang, and Stryker). 
Success at playing the game would unlock two additional fighters (Ermac and Jade). Both boss characters were included as CPU-only opponents. The game also features achievements. In June 2011, EA updated it to include the full roster and six new arenas. Mortal Kombat Arcade Kollection The game is a part of the digital release package Mortal Kombat Arcade Kollection, developed by Other Ocean Interactive and NetherRealm Studios and published by Warner Bros. Interactive for the PC, PlayStation 3 and Xbox 360 in 2012. Arcade Kollection also includes the first Mortal Kombat and Mortal Kombat II. Arcade1Up In 2019, Arcade1Up released a home arcade cabinet that included Ultimate Mortal Kombat 3, alongside Mortal Kombat and Mortal Kombat II. Reception Critical response Reviewing the arcade version, a Next Generation critic expressed concern that the Mortal Kombat series was headed for the same rut Street Fighter had fallen into, in which unnecessary updates of the same game replaced new installments. He remarked that even the biggest change the game made, the four new characters, was rendered uninteresting by their recycling of the graphic sets of previous characters. However, he added that "To be fair, there is none of the MK quality, detail, or gameplay missing, just about everything you want is there." According to a later IGN retrospective, "the revision helped to win over some frustrated fans, but followers of Johnny Cage, Raiden, and Baraka remained perturbed." Critical reception of the game has varied depending on the version under review. The initial releases were generally well-received by critics, especially the 32-bit Sega Saturn version. EGM named it their "Game of the Month", commenting that it is a "near-perfect" translation of the arcade version, with the only problem being the long loading times. VideoGames rated this port a review score of 8/10, calling it "simply a great game" and stating that "if there was ever a definitive MK game, this is it." In GamePro, Major Mike summarized that "Saturn owners left out in the cold when MK 3 hit the PlayStation can now gloat: Ultimate has arrived, and it offers more fighters, moves, fatalities, and secrets than MK 3." While he criticized some elements of the game itself, such as the weak fatalities, he held that the Saturn conversion faithfully replicates the arcade game in every respect. A reviewer for Next Generation agreed that the Saturn version is an impeccable conversion apart from the "miserable necessity" of load times during Shang Tsung's morphs, but criticized Ultimate Mortal Kombat 3 for offering too little improvement over the original Mortal Kombat 3. While noting that since the original MK3 had never been released for the Saturn, the publishers could not be accused of trying to sell consumers the same game twice, he felt MK3 was a slapdash and unexciting entry in the Mortal Kombat series. Rich Leadbetter of Maximum commented that while Ultimate Mortal Kombat 3 does not measure up to contemporary Capcom fighters in terms of gameplay, it is unsurpassed in its huge number of secrets and replayability. He also praised Eurocom's conversion, saying it is a superior effort to Williams' conversion of MK3 for the PlayStation. Rad Automatic of Sega Saturn Magazine, like EGM and GamePro, praised the game's retention of the full content and quality of the arcade version, but also added, "Capcom have just released three bona fide awesome 2D beat 'em ups onto the Saturn, and ... I couldn't honestly say that I rate MK3 above them." 
A review by Computer and Video Games called it an "excellent conversion of a great coin-op", as well as "[e]ssential for fans, and something well worth consideration from all Saturn owners." Reviewing the Genesis version, GamePros Bruised Lee said the graphics and controls are solid by 16-bit standards, but the arcade version's voices and music are poorly reproduced, and the game offers too little beyond the previous installments of the series, all of which had already been released for Genesis. He summarized, "Mortal Kombat fans looking for a quick fix should enjoy UMK3, and players new to MK will find this game a treat. If you're looking for a new fighting game experience, however, you'll have to wait for MK4." He scored the Super NES version lower in fun factor but higher in graphics and sound, stating that this version duplicates the arcade game's voices and music very well. However, he repeated the central point that the game is essentially a slightly modified retread of Mortal Kombat 3. The four reviewers of Electronic Gaming Monthly likewise praised the quality of the Super NES conversion while noting that it offered little new for fans of the series. Dan Hsu in particular remarked, "Does anyone else feel a little cheated? After all, Mortal Kombat 3 was released for the SNES just a year ago. Now, we're getting Ultimate MK3 (a decent improvement over MK3) while a couple of other systems are getting Mortal Kombat Trilogy. Perhaps SNES carts can't hold enough memory to handle Trilogy. Even so, I wouldn't want to buy UMK3 knowing that a better MK package exists." The SNES version was nominated for Nintendo Power Awards '96 in the category "Best Tournament Fighting Game". Years later, Ultimate Mortal Kombat 3 was also named as the best retro Mortal Kombat game by Alex Langley of Arcade Sushi in 2013. On the other hand, Mortal Kombat Advance, the later port for the Game Boy Advance, was panned by critics. It was given a review score of 2.9 by GameSpot's Jeff Gerstmann for how it "plays little to nothing like the game it's based on," and has a rating of only 34% at GameRankings. EGM editor Dan Hsu gave the game the first "0" rating in the magazine's history, and it tied with three other titles for the "Flat-out Worst Game" award by GameSpot in 2002. Advance was included among the worst games of all time by GamesRadar in 2014. IGN rated the game 97th on its "Top 100 SNES Games of All Time". In 2018, Complex listed the game 68th in their "The Best Super Nintendo Games of All Time" writing: "Although not as jaw-dropping as MK2 was when it first came out on the SNES, this was still a great port of the arcade classic." Ultimate Mortal Kombat for the Nintendo DS was considered much better than the GBA game. It was given a review score of 7.8 out of 10 from IGN's Greg Miller, who wrote that "if all you want is a really solid, fun version of Mortal Kombat 3 that can go online, that's what you're going to get. It's good stuff all around." GameSpot's "Best and Worst of 2006" included the XBLA version among the five best fighting games of the year. Ed Boon, one of the creators of the series, has stated Ultimate Mortal Kombat 3 to be his favorite Mortal Kombat arcade title. Legal controversy U.S. Appeals Court Judge Richard Posner considered Ultimate Mortal Kombat 3 to be "a feminist violent video game". 
Finding that Indianapolis' attempt to ban Ultimate Mortal Kombat 3 violated the First Amendment, Judge Posner wrote "the game is feminist in depicting a woman as fully capable of holding her own in violent combat with heavily armed men. It thus has a message, even an "ideology," just as books and movies do." Judge Posner further marveled that "The woman wins all the duels. She is as strong as the men, she is more skillful, more determined, and she does not flinch at the sight of blood." Sales Due in part to the Genesis and Super NES versions being delayed until the 1996 Christmas season, spawning rumors that they would never be released, those versions met with disappointing sales. Legacy Mortal Kombat Trilogy was released by Midway in 1996 as a follow-up to Ultimate Mortal Kombat 3. Unlike Ultimate Mortal Kombat 3, Mortal Kombat Trilogy was not released in arcades but was instead released for the Sony PlayStation, Nintendo 64, Sega Saturn and PC, as well as for the Game.com and R-Zone. Mortal Kombat Trilogy features the same gameplay and story, but includes all of the characters from the first three games, adding several completely new ones. Also, it introduces features such as the "Aggressor" bar, a meter that fills during the course of the match to make a player character faster and stronger for a short time, and the Brutality finishing moves that were introduced in the 16-bit versions of Ultimate Mortal Kombat 3. Ultimate Mortal Kombat 3 was also later remastered to be released as part of the Mortal Kombat Arcade Kollection in 2011. Notes References External links (iOS remake) Ultimate Mortal Kombat 3 - The Mortal Kombat Wiki UMK3 at Mortal Kombat Online Dinosaurs in video games 1995 video games 1990s fighting video games Arcade video games Cancelled 3DO Interactive Multiplayer games Game Boy Advance games IOS games IPod games Mortal Kombat games Nintendo DS games Post-apocalyptic video games PlayStation Network games Sega Genesis games Sega Saturn games Super Nintendo Entertainment System games Midway video games Fighting games Video games with digitized sprites Xbox 360 Live Arcade games Multiplayer and single-player video games Video games developed in the United States Video games scored by Dan Forden Williams video games GT Interactive Software games Eurocom games Electronic Arts games Avalanche Software games Acclaim Entertainment games
36388210
https://en.wikipedia.org/wiki/SIE%20%28file%20format%29
SIE (file format)
The SIE format is an open standard for transferring accounting data between software packages from different suppliers. SIE can be used to transfer data between programs on the same computer, but also to exchange data between organisations, for example between a company, its accountant and its auditor. It can also be used to transfer data from pre-accounting systems such as accounts payable, accounts receivable and payroll systems into the general ledger, as well as from the accounting system to specialised tax administration applications. File format SIE is a tagged text file format, not an XML format like XBRL GL or UN/CEFACT accounting. Because it predates XML it could not build on that technology, but the resulting files are roughly 20 times more compact. A SIE file is divided into five sections: general information; a declaration of the chart of accounts used; a declaration of the dimensions/objects (identifiers) used in the accounting, which makes object-related analysis of the SIE data possible; account balances (for the current year, the previous year and possibly earlier years, in total and per object); and the accounting entries of the current year. History A non-profit organization (The SIE Group) was formed in 1992 by leading Swedish accounting software vendors and accounting specialists (accountants' and auditors' interest groups). The accounting data interchange format rapidly gained market support, and is now implemented in all accounting software on the Swedish market. It is also used by government bodies, such as the Swedish Tax Authority, Statistics Sweden and the Economic Crimes Authority. SIE is closely related to BAS (accounting), the organisation behind the comprehensive Swedish standard chart of accounts, and SIE is one of the owning members of BAS. However, the SIE file format works with any chart of accounts and is technically independent of BAS. Together, the SIE file format and the standard chart of accounts form a simple and efficient framework for accounting information interchange. Since 2012 SIE has cooperated closely with XBRL Sweden. SIE is not an official national standard of SIS (the Swedish Standards Institute, the Swedish member body of ISO); the SIE Group provides the SIE file format as an open standard and is itself a member of SIS. A large Swedish domestic market implementation rate Because the SIE standard is so widely implemented by software vendors in Sweden, it has become a de facto standard for transferring accounting data there. The format is open to everyone, but only SIE members can get their software approved. A vendor interest group, not a standards organisation As a vendor interest group, SIE is not bound by the neutrality requirements towards proprietary solutions that apply to formal standards organisations. This allows the SIE specification to include an implementation instruction layer that would not be possible in the work of official standards bodies. The implementation instruction layer has helped limit dialects and ensures that information interchange works in practice; together with the fact that the vendors themselves are members, this is a key reason for the format's high implementation rate in the Swedish domestic market. International aspect The file format was originally developed for the Swedish market only, and the record tags in the file format were given Swedish names. To address this, a revised, XML-based version of the file format (SIE 5) was released in 2018. The new file format has, however, not yet replaced the older file formats.
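To make the tagged structure concrete, the sketch below is a minimal Python reader for a SIE-style file; it is illustrative only, not an official SIE implementation. The tag names #KONTO (account declaration), #VER (journal entry) and #TRANS (transaction row), the cp437 text encoding, and the assumption of an empty object list ({}) on #TRANS rows are taken from common descriptions of the SIE 4 format; the authoritative field definitions are in the SIE Group's specification.

import shlex
from collections import defaultdict

def parse_sie(path):
    # Read a tagged SIE-style text file and collect account names and
    # transaction amounts. Each line starts with a tag such as #KONTO or
    # #TRANS; the #TRANS rows belonging to a #VER entry are grouped by { and }.
    accounts = {}       # account number -> account name
    transactions = []   # list of (account number, amount)
    # SIE 4 files traditionally use the IBM PC 8-bit character set (cp437);
    # adjust the encoding if the files at hand differ.
    with open(path, encoding="cp437") as f:
        for raw in f:
            line = raw.strip()
            if not line or line in ("{", "}"):
                continue  # skip blank lines and the grouping braces
            fields = shlex.split(line)  # tag followed by space-separated, quotable fields
            tag = fields[0].upper()
            if tag == "#KONTO" and len(fields) >= 3:
                accounts[fields[1]] = fields[2]
            elif tag == "#TRANS" and len(fields) >= 4:
                # Assumes an empty object list in fields[2]; amounts use a
                # decimal point, e.g. "#TRANS 1910 {} 1250.00".
                transactions.append((fields[1], float(fields[3])))
    return accounts, transactions

def balances(transactions):
    # Sum the transaction amounts per account.
    totals = defaultdict(float)
    for account, amount in transactions:
        totals[account] += amount
    return dict(totals)

Calling parse_sie on an exported file and passing the result to balances gives a quick per-account total that can be checked against the balance section of the same file; a full reader would also handle the balance tags and non-empty object lists.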
See also XBRL GL SAF-T UN/CEFACT External links SIE XBRL UN/CEFACT BAS Swedish standard chart of accounts References Accounting software Computer file formats
36724181
https://en.wikipedia.org/wiki/Digital%20commons%20%28economics%29
Digital commons (economics)
The digital commons are a form of commons involving the distribution and communal ownership of informational resources and technology. Resources are typically designed to be used by the community by which they are created. Examples of the digital commons include wikis, open-source software, and open-source licensing. The distinction between digital commons and other digital resources is that the community of people building them can intervene in the governing of their interaction processes and of their shared resources. The digital commons provides the community with free and easy access to information. Typically, information created in the digital commons is designed to stay in the digital commons by using various forms of licensing, including the GNU General Public License and various Creative Commons licenses. Early development One of the first examples of digital commons is the Free Software movement, founded in the 1980s by Richard Stallman as an organized attempt to create a digital software commons. Inspired by the 70s programmer culture of improving software through mutual help, Stallman's movement was designed to encourage the use and distribution of free software. To prevent the misuse of software created by the movement, Stallman founded the GNU General Public License. Free software released under this license, even if it is improved or modified, must also be released under the same license, ensuring the software stays in the digital commons, free to use. Today Today the digital commons takes the form of the Internet. With the internet come radical new ways to share information and software, enabling the rapid growth of the digital commons to the level enjoyed today. People and organisations can share their software, photos, general information, and ideas extremely easily due to the digital commons. Mayo Fuster Morell proposed a definition of digital commons as "information and knowledge resources that are collectively created and owned or shared between or among a community and that tend to be non-exclusive, that is, be (generally freely) available to third parties. Thus, they are oriented to favor use and reuse, rather than to exchange as a commodity. Additionally, the community of people building them can intervene in the governing of their interaction processes and of their shared resources". The Foundation for P2P Alternatives explicitly aims to "creates a new public domain, an information commons, which should be protected and extended, especially in the domain of common knowledge creation" and actively promotes extending Creative Commons Licenses. Modern examples Creative Commons Creative Commons (CC) is a non-profit organization that provides many free copyright licenses with which contributors to the digital commons can license their work. Creative Commons is focused on the expansion of flexible copyright. For example, popular image sharing sites like Flickr and Pixabay, provide access to hundreds of millions of Creative Commons licensed images, freely available within the digital commons. Creators of content in the digital commons can choose the type of Creative Commons license to apply to their works, which specifies the types of rights available to other users. Typically, Creative Commons licenses are used to restrict the work to non-commercial use. Wikis Wikis are a huge contribution to the digital commons, serving information while allowing members of the community to create and edit content. 
Through wikis, knowledge can be pooled and compiled, generating a wealth of information from which the community can draw. Public software repositories Following in the spirit of the Free Software movement, public software repositories are a system in which communities can work together on open-source software projects, typically through version control systems such as Git and Subversion. Public software repositories allow individuals to make contributions to the same project, allowing the project to grow bigger than the sum of its parts. A popular platform hosting public and open source software repositories is GitHub. City Top Level Domains Top Level Domains or TLDs are Internet resources that facilitate finding the numbered computers on the Internet. The largest and most familiar TLD is .com. Beginning in 2012, ICANN, the California not-for-profit controlling access to the Domain Name System, began issuing names to cities. More than 30 cities applied for their TLDs, with .paris, .london, .nyc, .tokyo having been issued as of May 2015. Commons-related names within the .nyc TLD include neighborhood names, voter-related names, and civic names. Precious Plastic Precious Plastic is an open source project which promotes recycling of plastic through the use of hardware and business models which are available for free under a Creative Commons license. It collaboratively designs and publishes designs, code, source materials and business models which can be used by any person or group to start a plastic recycling project of their own. The online platform also includes an online shop where hardware and recycled plastic products can be bought. As of January 2020, more than 80,000 people from around the world are working on some type of Precious Plastic project. Digital Commons as a policy In October 2020 the European Commission adopted its new Open Source Software Strategy 2020–2023. The main goal of the strategy is to achieve Europe-wide digital sovereignty. It has been recognized that open source impacts the digital autonomy of Europe, and it is likely that open source can give Europe a chance to create and maintain its own, independent digital approach and stay in control of its processes, its information, and its technology. The digital strategy makes it clear that ‘collaborative working methods will be the norm within the Commission’s IT community to foster the sharing of code, data and solutions’. The principal working methods encouraged by this open-source strategy are open, inclusive and co-creative. Wherever it makes sense to do so, the Commission will share the source code of its future IT projects. For publication of these projects, the European Union Public Licence (EUPL) will be preferred. The Commission will focus these efforts on an EU-centric digital government code repository (for example, one in Estonia). In addition, the developers will be free to make occasional contributions to closely related open-source projects. The principles and actions of the new open-source strategy will make it easier to obtain permission to share code with the outside world. Open data Both definitions of Open Data and Commons revolve around the concept of shared resources with a low barrier to access. Open Data is usually linked to data produced by governments and made available to the public, but it can come from many sources, such as science, non-profit organizations and society in general.
Issues Opportunity of digital commons The usage of digital commons has led to the disruption of industries that benefited from publishing (authors and publishers) while providing potential to other industries. Many wikis help pass on knowledge so that it can be used in a productive manner. They have also increased opportunities in education, healthcare, manufacturing, governance, finance, science, etc. Massive open online courses (MOOCs) are a good example of the opportunities that digital commons bring, giving many people access to high-quality education. Mayo Clinic is another example of making medical knowledge publicly available. Nowadays most scientific journals have an online presence as well. Gender imbalance in digital commons The traditional under-representation of women and the lack of gender diversity in the field of STEM and in the programmer culture are also present in digital commons-based initiatives and open-source-software projects like Wikipedia or OpenStreetMap. Smaller initiatives, like hackerspaces, makerspaces or fab labs, are also characterized by a considerable gender gap among their participants. There are different initiatives trying to face these challenges and bridge this gap by providing and creating empowering spaces where women and non-binary persons can experiment, exchange and learn with and from each other. Feminist hackerspaces were founded as a reaction to women's experiences of sexism, harassment and misogyny shown by the brogrammer culture in hackerspaces. Besides closing the gender gap among participants and creating safe spaces for female and non-binary persons, some projects additionally want to visualize the under-representation and lack of gender-related topics in the movements and in the outcome of their work. The collective Geochicas, for example, is engaged in the OpenStreetMap community, looking at maps through a feminist lens and visualizing data linked to gender and feminism. One project launched during 2016 and 2017 aimed to map cancer clinics and feminicides in Nicaragua. In the same years Geochicas created visibility campaigns on Twitter under the hashtag "#MujeresMapeandoElMundo" and the “International Survey on Gender Representation”. In 2018 they created a virtual map by analyzing data from OpenStreetMap to raise awareness of the lack of representation of women's names on the streets of cities in Latin America and Spain. Tragedy of digital commons Based on the tragedy of the commons and the digital divide, Gian Maria Greco and Luciano Floridi have described the "tragedy of digital commons". As with the tragedy of the commons, the problem of the tragedy of the digital commons lies in the population and arises in two ways: the average user of information technology (the infosphere) behaves the way Hardin's herdsmen behave, exploiting common resources until they can no longer recover, meaning that users do not pay attention to the consequences of their behaviour (for example, by overloading traffic in trying to access web pages as quickly as possible). the pollution of the infosphere, i.e., the indiscriminate and improper usage of technology and digital resources and the overproduction of data. This brings excess information that leads to corruption of communication and information overload. An example of this is spam, which accounts for up to 45% of email traffic.
The tragedy of the digital commons also considers other artificial agents, like worms, that can self-replicate and spread within computer systems leading to digital pollution. See also Knowledge commons Commons Tragedy of the commons References External links 1st International Forum on digital commons Computer law Copyleft Digital rights Economics of intellectual property Public commons
8083806
https://en.wikipedia.org/wiki/Fleet%20management%20software
Fleet management software
Fleet management software (FMS) is computer software that enables people to accomplish a series of specific tasks in the management of any or all aspects relating to a fleet of vehicles operated by a company, government, or other organisation. These specific tasks encompass all operations from vehicle acquisition through maintenance to disposal. Fleet management software functions The main function of fleet management software is to accumulate, store, process, monitor, report on and export information. Information can be imported from external sources such as gas pump processors, territorial authorities for managing vehicle registration (for example DVLA and VOSA), financial institutions, insurance databases, vehicle specification databases, mapping systems and from internal sources such as Human Resources and Finance. Vehicle management Fleet management software should be able to manage processes, tasks and events related to all kinds of vehicles - cars, trucks, earth-moving machinery, buses, forklift trucks, trailers and specialist equipment, including: Vehicle inventory - the number and type of vehicles Vehicle maintenance - specific routine maintenance and scheduled maintenance, and ad hoc requirements Licensing, registration, MOT and tax Vehicle insurance including due dates and restrictions Cost management and analysis Vehicle disposal Driver management Driver license management, including special provisions Logging of penalty points and infringements against a licence Booking system for pool vehicles Passenger safety (SOS) Incident management Accidents and fines, plus apportioning costs to drivers Tracking Telematics Route planning Logbooks and work time Alerts Fleet management metrics to track Identification Metrics such as vehicle ID, company ID, location ID Utilization Metrics such as mileage and fuel data Behavioral Metrics like average speed and harsh acceleration Trip Metrics like number of trips, average duration Maintenance Metrics like maintenance costs and number of diagnostics Software procurement and development Fleet management software can be developed in-house by the company or organisation using it, or be purchased from a third party. It varies greatly in its complexity and cost. Fleet management software is directly related to fleet management. It originated on mainframe computers in the 1970s and shifted to personal computers in the 1980s when it became practical. In later years, however, fleet management software has been provided more efficiently as SaaS. Fleet management software has become increasingly necessary and complex as increasing amounts of vehicle-related legislation have been introduced. See also Vehicle tracking References Business software Transportation planning Automotive software Fleet management
1507752
https://en.wikipedia.org/wiki/DNS%20spoofing
DNS spoofing
DNS spoofing, also referred to as DNS cache poisoning, is a form of computer security hacking in which corrupt Domain Name System data is introduced into the DNS resolver's cache, causing the name server to return an incorrect result record, e.g. an IP address. This results in traffic being diverted to the attacker's computer (or any other computer). Overview of the Domain Name System A Domain Name System server translates a human-readable domain name (such as example.com) into a numerical IP address that is used to route communications between nodes. Normally if the server does not know a requested translation it will ask another server, and the process continues recursively. To increase performance, a server will typically remember (cache) these translations for a certain amount of time. This means if it receives another request for the same translation, it can reply without needing to ask any other servers, until that cache expires. When a DNS server has received a false translation and caches it for performance optimization, it is considered poisoned, and it supplies the false data to clients. If a DNS server is poisoned, it may return an incorrect IP address, diverting traffic to another computer (often an attacker's). Cache poisoning attacks Normally, a networked computer uses a DNS server provided by an Internet service provider (ISP) or the computer user's organization. DNS servers are used in an organization's network to improve resolution response performance by caching previously obtained query results. Poisoning attacks on a single DNS server can affect the users serviced directly by the compromised server or those serviced indirectly by its downstream server(s) if applicable. To perform a cache poisoning attack, the attacker exploits flaws in the DNS software. A server should correctly validate DNS responses to ensure that they are from an authoritative source (for example by using DNSSEC); otherwise the server might end up caching the incorrect entries locally and serve them to other users that make the same request. This attack can be used to redirect users from a website to another site of the attacker's choosing. For example, an attacker spoofs the IP address DNS entries for a target website on a given DNS server and replaces them with the IP address of a server under their control. The attacker then creates files on the server under their control with names matching those on the target server. These files usually contain malicious content, such as computer worms or viruses. A user whose computer has referenced the poisoned DNS server gets tricked into accepting content coming from a non-authentic server and unknowingly downloads the malicious content. This technique can also be used for phishing attacks, where a fake version of a genuine website is created to gather personal details such as bank and credit/debit card details. Variants In the following variants, the entries for the server would be poisoned and redirected to the attacker's name server at IP address . These attacks assume that the name server for is . To accomplish the attacks, the attacker must force the target DNS server to make a request for a domain controlled by one of the attacker's nameservers. Redirect the target domain's name server The first variant of DNS cache poisoning involves redirecting the name server of the attacker's domain to the name server of the target domain, then assigning that name server an IP address specified by the attacker. 
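The request and response transcripts for both variants are shown below. As a rough, illustrative sketch of why such answers succeed (not a real resolver implementation), the following Python snippet contrasts a naive cache that stores any record found in a response with one that applies a bailiwick check, i.e. it only accepts records that fall inside the zone the query was sent for; the record values mirror the first variant's transcript.

# Sketch: why out-of-bailiwick records enable cache poisoning.
# Records are (name, type, value) tuples taken from the transcripts below.
def in_bailiwick(name, zone):
    """True if 'name' lies inside 'zone' (e.g. ns.attacker.example in attacker.example)."""
    return name == zone or name.endswith("." + zone)

def cache_response(cache, queried_zone, records, enforce_bailiwick):
    for name, rtype, value in records:
        if enforce_bailiwick and not in_bailiwick(name, queried_zone):
            continue  # discard records the queried server has no authority over
        cache[(name, rtype)] = value

# Additional-section record from the first variant's response (attacker-supplied glue):
response_records = [("ns.target.example", "A", "w.x.y.z")]

naive, hardened = {}, {}
cache_response(naive, "attacker.example", response_records, enforce_bailiwick=False)
cache_response(hardened, "attacker.example", response_records, enforce_bailiwick=True)

print(naive)     # {('ns.target.example', 'A'): 'w.x.y.z'}  -- poisoned entry cached
print(hardened)  # {}                                       -- out-of-bailiwick record rejected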
DNS server's request: what are the address records for subdomain.attacker.example?
subdomain.attacker.example. IN A
Attacker's response:
Answer: (no response)
Authority section: attacker.example. 3600 IN NS ns.target.example.
Additional section: ns.target.example. IN A w.x.y.z
A vulnerable server would cache the additional A-record (IP address) for ns.target.example, allowing the attacker to resolve queries to the entire target.example domain. Redirect the NS record to another target domain The second variant of DNS cache poisoning involves redirecting the nameserver of another domain unrelated to the original request to an IP address specified by the attacker.
DNS server's request: what are the address records for subdomain.attacker.example?
subdomain.attacker.example. IN A
Attacker's response:
Answer: (no response)
Authority section: target.example. 3600 IN NS ns.attacker.example.
Additional section: ns.attacker.example. IN A w.x.y.z
A vulnerable server would cache the unrelated authority information for target.example's NS-record (nameserver entry), allowing the attacker to resolve queries to the entire target.example domain. Prevention and mitigation Many cache poisoning attacks against DNS servers can be prevented by being less trusting of the information passed to them by other DNS servers, and ignoring any DNS records passed back which are not directly relevant to the query. For example, versions of BIND 9.5.0-P1 and above perform these checks. Source port randomization for DNS requests, combined with the use of cryptographically secure random numbers for selecting both the source port and the 16-bit cryptographic nonce, can greatly reduce the probability of successful DNS race attacks. However, when routers, firewalls, proxies, and other gateway devices perform network address translation (NAT), or more specifically, port address translation (PAT), they may rewrite source ports in order to track connection state. When modifying source ports, PAT devices may remove source port randomness implemented by nameservers and stub resolvers. Secure DNS (DNSSEC) uses cryptographic digital signatures signed with a trusted public key certificate to determine the authenticity of data. DNSSEC can counter cache poisoning attacks. In 2010 DNSSEC was implemented in the Internet root zone servers, but it needs to be deployed on all top level domain servers as well. The DNSSEC readiness of these is shown in the list of Internet top-level domains. As of 2020, all of the original TLDs support DNSSEC, as do the country code TLDs of most large countries, but many country code TLDs still do not. This kind of attack can be mitigated at the transport layer or application layer by performing end-to-end validation once a connection is established. A common example of this is the use of Transport Layer Security and digital signatures. For example, by using HTTPS (the secure version of HTTP), users may check whether the server's digital certificate is valid and belongs to a website's expected owner. Similarly, the secure shell remote login program checks digital certificates at endpoints (if known) before proceeding with the session. For applications that download updates automatically, the application can embed a copy of the signing certificate locally and validate the signature stored in the software update against the embedded certificate. See also DNS hijacking DNS rebinding Mausezahn Pharming Root name server Dan Kaminsky References Computer security exploits Domain Name System Hacking (computer security) Internet security Internet ethics Internet service providers Types of cyberattacks
216381
https://en.wikipedia.org/wiki/Boot%20sector
Boot sector
A boot sector is the sector of a persistent data storage device (e.g., hard disk, floppy disk, optical disc, etc.) which contains machine code to be loaded into random-access memory (RAM) and then executed by a computer system's built-in firmware (e.g., the BIOS). Usually, the very first sector of the hard disk is the boot sector, regardless of sector size (512 or 4096 bytes) and partitioning flavor (MBR or GPT). The purpose of defining one particular sector as the boot sector is interoperability between various firmwares and various operating systems. The purpose of chainloading first a firmware (e.g., the BIOS), then some code contained in the boot sector, and then, for example, an operating system, is maximal flexibility. The IBM PC and compatible computers On an IBM PC compatible machine, the BIOS selects a boot device, then copies the first sector from the device (which may be an MBR, VBR or any executable code), into physical memory at memory address 0x7C00. On other systems, the process may be quite different. Unified Extensible Firmware Interface (UEFI) UEFI (as opposed to legacy boot via the CSM) does not rely on boot sectors; a UEFI system loads the boot loader (an EFI application file on a USB disk or in the EFI system partition) directly. Additionally, the UEFI specification contains "secure boot", which essentially requires UEFI boot code to be digitally signed. Damage to the boot sector If a boot sector receives physical damage, the hard disk will no longer be bootable, unless it is used with a custom BIOS that defines a non-damaged sector as the boot sector. However, since the very first sector additionally contains data regarding the partitioning of the hard disk, the hard disk will become entirely unusable, except when used in conjunction with custom software. Partition tables A disk can be partitioned into multiple partitions and, on conventional systems, it is expected to be. There are two conventions for storing the partitioning information: A master boot record (MBR) is the first sector of a data storage device that has been partitioned. The MBR sector may contain code to locate the active partition and invoke its Volume Boot Record. A volume boot record (VBR) is the first sector of a data storage device that has not been partitioned, or the first sector of an individual partition on a data storage device that has been partitioned. It may contain code to load an operating system (or other standalone program) installed on that device or within that partition. The presence of an IBM PC compatible boot loader for x86-CPUs in the boot sector is by convention indicated by a two-byte hexadecimal sequence 0x55 0xAA (called the boot sector signature) at the end of the boot sector (offsets 0x1FE and 0x1FF). This signature indicates the presence of at least a dummy boot loader which is safe to execute, even if it may not actually be able to load an operating system. It does not indicate a particular (or even the presence of a) file system or operating system, although some old versions of DOS 3 relied on it in their process to detect FAT-formatted media (newer versions do not). Boot code for other platforms or CPUs should not use this signature, since this may lead to a crash when the BIOS passes execution to the boot sector assuming that it contains valid executable code. Nevertheless, some media for other platforms erroneously contain the signature anyway, rendering this check not 100% reliable in practice.
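As a rough sketch of the layout just described, the following Python snippet checks a 512-byte sector image for the 0x55 0xAA signature at offsets 0x1FE–0x1FF and lists the four 16-byte MBR partition entries. The partition-table offset 0x1BE and the 0x80 "active" flag are standard MBR conventions added here for illustration (not taken from the text above), and the file name sector.bin is hypothetical.

# Sketch: inspect a 512-byte MBR image (sector.bin is a hypothetical file name).
import struct

def inspect_mbr(sector: bytes):
    if len(sector) < 512:
        raise ValueError("need a full 512-byte sector")
    # Boot sector signature at offsets 0x1FE-0x1FF must be 0x55 0xAA.
    if sector[0x1FE:0x200] != b"\x55\xAA":
        print("no boot signature - firmware would skip this device")
        return
    print("boot signature present")
    # Classic MBR partition table: four 16-byte entries starting at offset 0x1BE.
    for i in range(4):
        entry = sector[0x1BE + 16 * i : 0x1BE + 16 * (i + 1)]
        status, ptype = entry[0], entry[4]
        lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
        active = "active (bootable)" if status == 0x80 else "inactive"
        print(f"partition {i + 1}: type=0x{ptype:02X}, start LBA={lba_start}, "
              f"sectors={num_sectors}, {active}")

with open("sector.bin", "rb") as f:  # hypothetical image of the first sector
    inspect_mbr(f.read(512))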
The signature is checked for by most system BIOSes since (at least) the IBM PC/AT (but not by the original IBM PC and some other machines). Even more so, it is also checked by most MBR boot loaders before passing control to the boot sector. Some BIOSes (like the IBM PC/AT) perform the check only for fixed disk/removable drives, while for floppies and superfloppies, it is enough to start with a byte greater or equal to 06h and the first nine words not to contain the same value, before the boot sector is accepted as valid, thereby avoiding the explicit test for 0x55, 0xAA on floppies. Since old boot sectors (e.g., very old CP/M-86 and DOS media) sometimes do not feature this signature despite the fact that they can be booted successfully, the check can be disabled in some environments. If the BIOS or MBR code does not detect a valid boot sector and therefore cannot pass execution to the boot sector code, it will try the next boot device in the row. If they all fail it will typically display an error message and invoke INT 18h. This will either start up optional resident software in ROM (ROM BASIC), reboot the system via INT 19h after user confirmation or cause the system to halt the bootstrapping process until the next power-up. Systems not following the above described design are: CD-ROMs usually have their own structure of boot sectors; for IBM PC compatible systems this is subject to El Torito specifications. C128 or C64 software on Commodore DOS disks where data on Track 1, Sector 0 began with a magic number corresponding to string "CBM". IBM mainframe computers place a small amount of boot code in the first and second track of the first cylinder of the disk, and the root directory, called the Volume Table of Contents, is also located at the fixed location of the third track of the first cylinder of the disk. Other (non IBM-compatible) PC systems may have different boot sector formats on their disk devices. Operation On IBM PC compatible machines, the BIOS is ignorant of the distinction between VBRs and MBRs, and of partitioning. The firmware simply loads and runs the first sector of the storage device. If the device is a floppy or USB flash drive, that will be a VBR. If the device is a hard disk, that will be an MBR. It is the code in the MBR which generally understands disk partitioning, and in turn, is responsible for loading and running the VBR of whichever primary partition is set to boot (the active partition). The VBR then loads a second-stage bootloader from another location on the disk. Furthermore, whatever is stored in the first sector of a floppy diskette, USB device, hard disk or any other bootable storage device, is not required to immediately load any bootstrap code for an OS, if ever. The BIOS merely passes control to whatever exists there, as long as the sector meets the very simple qualification of having the boot record signature of 0x55, 0xAA in its last two bytes. This is why it is easy to replace the usual bootstrap code found in an MBR with more complex loaders, even large multi-functional boot managers (programs stored elsewhere on the device which can run without an operating system), allowing users a number of choices in what occurs next. With this kind of freedom, abuse often occurs in the form of boot sector viruses. Boot sector viruses Since code in the boot sector is executed automatically, boot sectors have historically been a common attack vector for computer viruses. 
To combat this behavior, the system BIOS often includes an option to prevent software from writing to the first sector of any attached hard drives; it could thereby protect the master boot record containing the partition table from being overwritten accidentally, but not the volume boot records in the bootable partitions. Depending on the BIOS, attempts to write to the protected sector may be blocked with or without user interaction. Most BIOSes, however, will display a popup message giving the user a chance to override the setting. The BIOS option is disabled by default because the message may not be displayed correctly in graphics mode and blocking access to the MBR may cause problems with operating system setup programs or disk access, encryption or partitioning tools like FDISK, which may not have been written to be aware of that possibility, causing them to abort ungracefully and possibly leaving the disk partitioning in an inconsistent state. As an example, the malware NotPetya attempts to gain administrative privileges on an operating system, and then would attempt to overwrite the boot sector of a computer. The CIA has also developed malware that attempts to modify the boot sector in order to load additional drivers to be used by other malware. See also Master boot record (MBR) Volume boot record (VBR) Notes References External links Computer file systems BIOS Booting
25771130
https://en.wikipedia.org/wiki/VVVVVV
VVVVVV
VVVVVV is a 2010 puzzle-platform game created by Terry Cavanagh. In the game, the player controls Captain Viridian, who must rescue their spacecrew after a teleporter malfunction caused them to be separated in Dimension VVVVVV. The gameplay is characterized by the inability of the player to jump, instead opting on controlling the direction of gravity, causing the player to fall upwards or downwards. The game consists of more than 400 individual rooms, and also supports the creation of user-created levels. The game was built in Adobe Flash and released on January 11, 2010, for Microsoft Windows and OS X. The game was ported to C++ by Simon Roth in 2011, and released as part of the Humble Indie Bundle #3. The port to C++ allowed the porting of the game to other platforms such as Linux, Pandora, Nintendo 3DS, and Nintendo Switch. Gameplay Unlike most platforming games, in VVVVVV the player is not able to jump, but instead can reverse the direction of gravity when standing on a surface, causing Captain Viridian to fall either upwards or downwards. This feature was first seen in the 1986 8-bit game Terminus. The player uses this mechanic to traverse the game's environment and avoid various hazards, including stationary spikes and moving enemies. Later areas introduce new mechanics such as moving floors or rooms which, upon touching one edge of the screen, cause the player character to appear on the other side. VVVVVV contains eight main levels, including an intro level, four levels which can be accessed in a non-linear sequence, two intermission levels, and one final level, only seen outside Dimension VVVVVV (in a "polar dimension"). These are situated inside a large open world for the player to explore, spanning more than 400 individual rooms. Due to its high level of difficulty, the game world contains many checkpoints, to which the player's character is reset upon death. Plot The player controls Captain Viridian, who at the outset of VVVVVV must evacuate the spaceship along with the captain's crew, when the ship becomes affected by "dimensional interference". The crew escapes through a teleporter on the ship; however, Captain Viridian becomes separated from the rest of the crew on the other end of the teleporter. Upon returning to the ship, the Captain learns that the ship is trapped in an alternative dimension (referred to as Dimension VVVVVV), and that the ship's crew has been scattered throughout this dimension. The player's goal, as Captain Viridian, is to rescue the missing crew members and find the cause of the dimensional interference. Development The gravity-flipping mechanic of VVVVVV is based on an earlier game designed by Cavanagh titled Sine Wave Ninja. In an interview with IndieGames.com, Cavanagh said that he was interested in using this idea as a core concept of a game, something he felt other games which include a gravity-flipping mechanism had never done before. Cavanagh first unveiled VVVVVV on his blog in June 2009. The game had been in development for two weeks, and Cavanagh estimated that the game would be finished in another two, "but hopefully not much longer." A follow-up post published in July 2009 included screenshots of the game and an explanation of the game's gravity-flipping mechanic. Cavanagh wrote that VVVVVV, unlike some of his previous work such as Judith and Pathways, would not be a "storytelling experiment", but rather "focused on the level design". 
The game was first shown publicly at the 2009 Eurogamer Expo, which gave Cavanagh the opportunity to collect feedback from players. In December 2009, a beta version of VVVVVV which had been given to donors was leaked on 4chan. The visual style of VVVVVV is heavily inspired by games released for the Commodore 64 8-bit computer from the 1980s, especially Jet Set Willy and Monty on the Run which is referenced by the element of collecting difficult-to-reach shiny objects and most notably the naming of each room; Cavanagh aimed to create a game "that looked and felt like the C64 games I grew up with." He eventually entrusted naming the rooms to QWOP developer Bennett Foddy, who created every room name in the final version. The graphical style of VVVVVV is heavily influenced by Commodore 64 games such as Monty on the Run; similarly, the game's music is heavily dependent on chiptune elements. Swedish composer Magnus Pålsson scored the game, and released the original soundtrack in 2010, titled PPPPPP. Cavanagh also considered this game an opportunity to indulge in his "retro fetish". He has said because he lacks the technical prowess to make more modern-looking games, he instead focuses on making them visually interesting; additionally, he finds this to be made easier by "work[ing] within narrow limits". VVVVVV was the first game which Cavanagh sold commercially. While his previous games were all released as freeware, due to the size of VVVVVV compared to his previous work, Cavanagh felt that he "couldn't see [himself] going down that route." VVVVVV contains many strange visual elements, most notable of which being the sad elephant, sometimes also called the elephant in the room, which is a large elephant with a tear dropping from its eye. It spans four rooms near the Space Station area of Dimension VVVVVV, flickering constantly from color to color. If the player stands near the elephant for a short period of time, it will cause Captain Viridian to become sad. The elephant serves no function to the game, but has served to provoke much discussion about its meaning or symbolism amongst fans of the game. Similarly to many of the enemies in the game, the sad elephant originated in dream journals kept by creator Terry Cavanagh and not from Jet Set Willy as once believed. Release VVVVVV was released on January 11, 2010, for Microsoft Windows and Mac OS X. A trial version of the game is playable on the website Kongregate. A Linux version was in development, but a number of technical difficulties arose in the porting process, which led Cavanagh to cancel it for the time being. The game was rewritten in C++ by games developer Simon Roth in 2011, allowing Linux support to be successfully implemented. This formed version 2.0 of VVVVVV, launched on July 24, 2011, as part of the third Humble Indie Bundle. Version 2.0 also features support for custom levels, and a level editor. The C++ port also allowed for the implementation of new graphics modes and various speed improvements. Version 2.0, however, does not support saved games from the original Flash version of VVVVVV; many players received this update via Steam without warning, and hence were unable to continue their existing saved games. A save-file exporter is in development. Based on this source code it was also ported in 2011 for the Open Pandora, which requires the data files from the Microsoft Windows, Mac OS X or Linux version of the game to work. 
On October 7, 2011, it was announced that a version of the game was being made for Nintendo 3DS by Nicalis. It was released on December 29, 2011, in North America and May 10, 2012, in Europe. The Nintendo 3DS version was eventually released in Japan on October 12, 2016, courtesy of Japanese publisher Pikii. In 2010, a demo of the game's early levels was ported to the Commodore 64 by programmer Paul Koller, with Cavanagh's assistance. In April 2017 a complete port to the Commodore 64 was released by developer Laxity. On the 10th anniversary of the game's release in January 2020, Cavanagh made its source code publicly available on GitHub. Soundtrack The soundtrack of VVVVVV was composed by chiptune musician Magnus Pålsson (also known as SoulEye). Cavanagh approached Pålsson to compose VVVVVV after playing Space Phallus, an indie game by Charlie's Games, which featured a song by him. Pålsson wrote on the Distractionware blog that, upon playing Cavanagh's previous games, he was "amazed at the depth that came with the games, even though they were small and short." In writing the music for VVVVVV, Pålsson aimed to make "uptempo happy songs that would ingrain themselves into your minds whether you want to or not, hopefully so much so that you’d go humming on them when not playing, and making you want to come back to the game even more." The complete soundtrack, titled PPPPPP, was released on January 10, 2010, alongside VVVVVV and is sold as a music download or CD on Pålsson's personal website. On June 12, 2014, Pålsson released a power metal version of the soundtrack titled MMMMMM which was arranged and performed by guitarist Jules Conroy. The album contains a mod file to replace the in-game music with the metal tracks. Video game record label Materia Collective released a 180g picture disc vinyl LP edition on January 11, 2020. Reception VVVVVV has been generally well received by critics, earning a score of 81/100 for the Windows version and 83/100 for the 3DS version on review aggregator website Metacritic. The game was noted for being the first important independent release of 2010; Kieron Gillen of Rock, Paper, Shotgun called it "the first great Indie game of the year", while Michael Rose, writing for IndieGames.com, noted that the release of VVVVVV followed a year which "some may argue...didn't really deliver an outstanding indie title which showed the mainstream that independent developers mean business." The level design of VVVVVV was lauded by critics: Rose considered the game to have no filler content, which he found to be "one of the game's strongest points". Michael McWhertor of gaming news blog Kotaku wrote that the game's areas contained "a surprising amount of variation throughout...ensuring that VVVVVV never feels like its designer failed to explore the gameplay possibilities." Most reviewers wrote of VVVVVV high level of difficulty. McWhertor found that "the game's trial and error moments can seriously test one's patience." However, several critics noted that the game's challenge is made less frustrating due to its numerous checkpoints, as well as the player's ability to retry after dying as many times as needed. These additions made VVVVVV "not unforgiving", according to IGN staff writer Samuel Claiborn, while still being "old-school in its demands of player dedication". 
Independent reviewer Declan Tyson said that he feels "victimised by the game's criminally unforgiving collision detection and over-enthusiastically sensitive controls" but that "it's all worth it for when you reach the next checkpoint and feel that split second of relief". IGN’s Matthew Adler named VVVVVV the 12th hardest modern game, saying it requires quick reflexes in order to survive and that players will likely die many times. The price of VVVVVV when it was originally released was $15. This was seen by McWhertor as being the game's "one unfortunate barrier" to entry: "While there's plenty to see and do after blazing through the game's core campaign, the steeper than expected asking price will probably turn some off." Likewise, Gillen wrote in his review that the cost "does strike you as a lot for an Indie lo-fi platformer", while insisting that "it is worth the money". Since its original release, the price of VVVVVV has been reduced to $5. On his blog, Cavanagh said that the decision was difficult to make, but added, "I know that the original price of $15 was off putting for a lot of people". VVVVVV was awarded the IndieCade 2010 award for "Most Fun and Compelling" game in October 2010. Game development blog Gamasutra honored VVVVVV in its year-end independent games awards, which earned second place in 2009 and an honorable mention in 2010. The game's protagonist, Captain Viridian, is a playable character in the Windows version of the platform game Super Meat Boy. Notes References External links 2010 video games Linux games MacOS games Android (operating system) games Ouya games IOS games Flash games Puzzle-platform games Windows games Indie video games Metroidvania games Nintendo 3DS eShop games Nintendo Switch games PlayStation 4 games PlayStation Network games PlayStation Vita games Retro-style video games Video games developed in Ireland IndieCade winners Commercial video games with freely available source code Nicalis games
1214601
https://en.wikipedia.org/wiki/Cognitive%20walkthrough
Cognitive walkthrough
The cognitive walkthrough method is a usability inspection method used to identify usability issues in interactive systems, focusing on how easy it is for new users to accomplish tasks with the system. A cognitive walkthrough is task-specific, whereas heuristic evaluation takes a holistic view to catch problems not caught by this and other usability inspection methods. The method is rooted in the notion that users typically prefer to learn a system by using it to accomplish tasks, rather than, for example, studying a manual. The method is prized for its ability to generate results quickly with low cost, especially when compared to usability testing, as well as the ability to apply the method early in the design phases, before coding even begins (a common trait with usability testing). Introduction A cognitive walkthrough starts with a task analysis that specifies the sequence of steps or actions required by a user to accomplish a task, and the system responses to those actions. The designers and developers of the software then walk through the steps as a group, asking themselves a set of questions at each step. Data is gathered during the walkthrough, and afterwards a report of potential issues is compiled. Finally, the software is redesigned to address the issues identified. The effectiveness of methods such as cognitive walkthroughs is hard to measure in applied settings, as there is very limited opportunity for controlled experiments while developing software. Typically, measurements involve comparing the number of usability problems found by applying different methods. However, Gray and Salzman called into question the validity of those studies in their dramatic 1998 paper "Damaged Merchandise", demonstrating how very difficult it is to measure the effectiveness of usability inspection methods. The consensus in the usability community is that the cognitive walkthrough method works well in a variety of settings and applications. Streamlined cognitive walkthrough procedure After the task analysis has been made, the participants perform the walkthrough: Define inputs to the walkthrough: a usability specialist lays out the scenarios and produces an analysis of said scenarios through explanation of the actions required to accomplish the task. Identify users Create a sample task for evaluation Create action sequences for completing the tasks Implementation of interface Convene the walkthrough: What are the goals of the walkthrough? What will be done during the walkthrough What will not be done during the walkthrough Post ground rules Some common ground rules No designing No defending a design No debating cognitive theory The usability specialist is the leader of the session Assign roles Appeal for submission to leadership Walk through the action sequences for each task Participants perform the walkthrough by asking themselves a set of questions for each subtask. Typically four questions are asked: Will the user try to achieve the effect that the subtask has? E.g. does the user understand that this subtask is needed to reach the user's goal? Will the user notice that the correct action is available? E.g. is the button visible? Will the user understand that the wanted subtask can be achieved by the action? E.g. the right button is visible but the user does not understand the text and will therefore not click on it. Does the user get appropriate feedback? Will the user know that they have done the right thing after performing the action?
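As a rough illustration of how answers to those four questions might be recorded during a session (not part of the method's specification), a simple per-subtask record could look like the following Python sketch; the question wording is paraphrased and the example subtask and notes are invented.

# Sketch: recording answers to the four walkthrough questions per subtask.
# Question wording is paraphrased; the subtask and notes are invented examples.
from dataclasses import dataclass, field

QUESTIONS = (
    "Will the user try to achieve the effect the subtask has?",
    "Will the user notice that the correct action is available?",
    "Will the user understand that the action achieves the subtask?",
    "Does the user get appropriate feedback after the action?",
)

@dataclass
class SubtaskRecord:
    subtask: str
    answers: dict = field(default_factory=dict)  # question -> (yes/no, note)

    def failures(self):
        """Return the questions answered 'no' - each is a potential usability problem."""
        return [q for q, (ok, _) in self.answers.items() if not ok]

record = SubtaskRecord("Enter a delivery address")
record.answers[QUESTIONS[1]] = (False, "The 'Edit address' link is hidden in a collapsed panel")
record.answers[QUESTIONS[3]] = (True, "Confirmation banner shown")
print(record.failures())  # -> the one 'no' answer, a problem to note in the report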
By answering the questions for each subtask, usability problems will be noticed. Record any important information Learnability problems Design ideas and gaps Problems with analysis of the task Revise the interface using what was learned in the walkthrough to resolve the problems. The CW method does not take several social attributes into account. The method can only be successful if the usability specialist takes care to prepare the team for all possibilities during the cognitive walkthrough. This tends to enhance the ground rules and avoid the pitfalls that come with an ill-prepared team. Common shortcomings In teaching people to use the walkthrough method, Lewis & Rieman have found that there are two common misunderstandings: The evaluator doesn't know how to perform the task themself, so they stumble through the interface trying to discover the correct sequence of actions—and then they evaluate the stumbling process. (The user should identify and perform the optimal action sequence.) The walkthrough method does not test real users on the system. The walkthrough will often identify many more problems than you would find with a single, unique user in a single test session. There are social constraints that inhibit the cognitive walkthrough process. These include time pressure, lengthy design discussions and design defensiveness. Time pressure is caused when design iterations occur late in the development process, when a development team usually feels considerable pressure to actually implement specifications, and may not think they have the time to evaluate them properly. Many developers feel that CWs are not efficient because of the amount of time they take and the time pressures that they are facing. A design team spends its time trying to resolve the problem during the CW instead of after the results have been formulated. Evaluation time is spent re-designing; this inhibits the effectiveness of the walkthrough and leads to lengthy design discussions. Many times, designers may feel personally offended that their work is even being evaluated. Because a walkthrough would likely lead to more work on a project that they are already under pressure to complete in the allowed time, designers will over-defend their design during the walkthrough. They are more likely to be argumentative and reject changes that seem obvious. History The method was developed in the early nineties by Wharton, et al., and reached a large usability audience when it was published as a chapter in Jakob Nielsen's seminal book on usability, "Usability Inspection Methods". The Wharton, et al. method required asking four questions at each step, along with extensive documentation of the analysis. In 2000 there was a resurgence in interest in the method in response to a CHI paper by Spencer, who described modifications to the method to make it effective in a real software development setting. Spencer's streamlined method required asking only two questions at each step, and involved creating less documentation. Spencer's paper followed the example set by Rowley, et al., who described the modifications to the method that they made based on their experience applying the methods in their 1992 CHI paper "The Cognitive Jogthrough". The method was originally designed as a tool to evaluate interactive systems, such as postal kiosks, automated teller machines (ATMs), and interactive museum exhibits, where users would have little to no experience with using such new technology.
However, since its creation, the method has been applied with success to complex systems like CAD software and some software development tools to understand the first experience of new users. See also Cognitive dimensions, a framework for identifying and evaluating elements that affect the usability of an interface Comparison of usability evaluation methods References Further reading Blackmon, M. H. Polson, P.G. Muneo, K & Lewis, C. (2002) Cognitive Walkthrough for the Web CHI 2002 vol. 4 No. 1 pp. 463–470 Blackmon, M. H. Polson, Kitajima, M. (2003) Repairing Usability Problems Identified by the Cognitive Walkthrough for the Web CHI 2003 pp497–504. Dix, A., Finlay, J., Abowd, G., D., & Beale, R. (2004). Human-computer interaction (3rd ed.). Harlow, England: Pearson Education Limited. p321. Gabrielli, S. Mirabella, V. Kimani, S. Catarci, T. (2005) Supporting Cognitive Walkthrough with Video Data: A Mobile Learning Evaluation Study MobileHCI ’05 pp77–82. Goillau, P., Woodward, V., Kelly, C. & Banks, G. (1998) Evaluation of virtual prototypes for air traffic control - the MACAW technique. In, M. Hanson (Ed.) Contemporary Ergonomics 1998. Good, N. S. & Krekelberg, A. (2003) Usability and Privacy: a study of KaZaA P2P file-sharing CHI 2003 Vol.5 no.1 pp137–144. Gray, W. & Salzman, M. (1998). Damaged merchandise? A review of experiments that compare usability evaluation methods, Human-Computer Interaction vol.13 no.3, 203–61. Gray, W.D. & Salzman, M.C. (1998) Repairing Damaged Merchandise: A rejoinder. Human-Computer Interaction vol.13 no.3 pp325–335. Hornbaek, K. & Frokjaer, E. (2005) Comparing Usability Problems and Redesign Proposal as Input to Practical Systems Development CHI 2005 391–400. Jeffries, R. Miller, J. R. Wharton, C. Uyeda, K. M. (1991) User Interface Evaluation in the Real World: A comparison of Four Techniques Conference on Human Factors in Computing Systems pp 119 – 124 Lewis, C. Polson, P, Wharton, C. & Rieman, J. (1990) Testing a Walkthrough Methodology for Theory-Based Design of Walk-Up-and-Use Interfaces Chi ’90 Proceedings pp235–242. Mahatody, Thomas / Sagar, Mouldi / Kolski, Christophe (2010). State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions, International Journal of Human-Computer Interaction, 2, 8 741–785. Rizzo, A., Marchigiani, E., & Andreadis, A. (1997). The AVANTI project: prototyping and evaluation with a cognitive walkthrough based on the Norman's model of action. In Proceedings of the 2nd conference on Designing interactive systems: processes, practices, methods, and techniques (pp. 305-309). Rowley, David E., and Rhoades, David G (1992). The Cognitive Jogthrough: A Fast-Paced User Interface Evaluation Procedure. Proceedings of CHI '92, 389–395. Sears, A. (1998) The Effect of Task Description Detail on Evaluator Performance with Cognitive Walkthroughs CHI 1998 pp259–260. Spencer, R. (2000) The Streamlined Cognitive Walkthrough Method, Working Around Social Constraints Encountered in a Software Development Company CHI 2000 vol.2 issue 1 pp353–359. Wharton, C. Bradford, J. Jeffries, J. Franzke, M. Applying Cognitive Walkthroughs to more Complex User Interfaces: Experiences, Issues and Recommendations CHI ’92 pp381–388. External links Cognitive Walkthrough Usability inspection
45272069
https://en.wikipedia.org/wiki/Archibus
Archibus
Archibus is an Integrated Workplace Management System (IWMS) platform developed by Archibus, Inc. The platform is integrated bi-directionally with building information modeling and CAD design software. Archibus software is reportedly used to manage around 15 million properties around the world. Archibus software can be integrated with mobile, GIS, and ERP systems such as Oracle, SAP, Sage and others. Products Archibus is a platform that includes a range of infrastructure and facilities management solutions, and is available in both web-based and Microsoft Windows-based versions. The software can also be delivered in the cloud (SaaS). Platforms: Archibus Web Central Archibus Smart Client Archibus Mobile Framework Delivery Models: Archibus On-Premises ― software installed on the organisation's own premises. Archibus Hosted Services ― software installed at and used from a remote location. Archibus SaaS ― a software as a service model Archibus Cloud ― a software as a service model Modules: Real Property Capital Projects Space Assets Sustainability & Risk Maintenance Workplace Services Technology Extensions Professional Services History Founded in Boston in 1983, Archibus is the originator of IWMS software. On December 5, 2018, JMI Equity, a growth equity firm focused on investing in leading software companies, made a strategic growth investment in both Archibus and Serraview. References See also Autodesk Revit ArchiCAD Building information modeling
50499089
https://en.wikipedia.org/wiki/ManicTime
ManicTime
ManicTime is automatic time tracking software, which tracks application and web page usage. Tracked data helps users keep track of time spent on various projects and tasks. It was developed by Finkit d.o.o., a company based in Slovenia. Details ManicTime Client runs in the background and records the applications, documents and web sites used by the user. Collected data can then be used to keep track of time spent on various projects and tasks. All data is stored locally in a SQL Server Compact database. ManicTime Server ManicTime Server was introduced in 2011. It collects data from ManicTime Clients and generates reports, which can be viewed with a web browser. ManicTime Server is on-premises software and stores data in either SQLite, PostgreSQL or Microsoft SQL Server. Other applications can interact with ManicTime Server through SQL or a JSON web service. See also Comparison of time tracking software Project management software References External links Official site Time-tracking software Proprietary software
49415982
https://en.wikipedia.org/wiki/Screencheat
Screencheat
Screencheat is a first-person shooter video game developed by Samurai Punk and published by Surprise Attack. The game was released for Microsoft Windows, OS X, and Linux in October 2014 and was released for PlayStation 4 and Xbox One in March 2016. The game was later ported to Nintendo Switch, with enhanced graphics and an updated interface, under the name Screencheat: Unplugged, in November 2018. Gameplay Screencheat is a multiplayer first-person shooter video game, but in functionality it is a second-person shooter, because every player's character model is invisible. Since the viewpoints of all players are shown on the screen, players are required to look at others' screens to deduce their opponents' location, hence the name of the game. The maps are brightly colored in order to make it easier to figure out where a player is. Screencheat includes several different gamemodes. Some, including Deathmatch, King of the Hill, and Capture the Fun (where players fight for possession of a piñata), are multiplayer FPS standards. The game also has a handful of unique gamemodes, such as One Shot, where each player is limited to one shot and cannot reload until a certain amount of time has passed, and Murder Mystery, in which players have to kill a specific opponent with a particular weapon. Development and release Screencheat was developed by Australia-based studio Samurai Punk and published by Surprise Attack. The game's inception came from the 2014 Global Game Jam, where it received several awards and honourable mentions. The game was released for Microsoft Windows, OS X, and Linux on 21 October 2014. Prior to release, a free public beta period ran from 4 August 2014 to 3 September 2014. It was released for PlayStation 4 and Xbox One on 1 March 2016. On 29 November 2018, Samurai Punk released Screencheat: Unplugged, optimized for Nintendo Switch with remastered graphics and a new weapon-unlocking system. Reception Screencheat received an average reception from critics upon release. Critics praised the game for building an enjoyable experience around a single novel idea, but they also criticised the game's lack of depth and limited replayability. References External links 2014 video games First-person shooters Game jam video games Linux games MacOS games Nintendo Switch games PlayStation 4 games Video games developed in Australia Windows games Xbox One games
3763004
https://en.wikipedia.org/wiki/West%20Sussex%20Football%20League
West Sussex Football League
The West Sussex Football League is a football competition in England. It was formed in 1896. The League has 10 divisions of which the highest, the Premier Division, sits at level 12 of the English football league system. It is a feeder to the Southern Combination League Division Two. Membership is not limited to clubs from West Sussex. Currently the league also includes teams from Surrey and Hampshire though these are from places that are close to the county boundaries. Cup Competitions The West Sussex Football League operates one pre-season opener, three league cups and five charity cup competitions. Walter Rossiter Memorial Trophy The pre-season opening match of the season is the Walter Rossiter Trophy which is contested between the previous season's league winner and Centenary Cup winners. The current holder is West Chiltington FC. League Cups The league's main cup is the Centenary Cup which includes teams from the league's top three tiers, from the Premier Division to Division 2. The current holder of the cup is Horsham Trinity FC. The league's two other cups are the Tony Kopp Cup for the northern section of Divisions 3, 4 and 5 and the Bareham Trophy for the southern sections of the same lower divisions. The current holder of the Tony Kopp Cup is AFC Gatwick and the holder of the Bareham Trophy is Barnham Trojans FC. Charity Cups The remaining competitions within the league are the charity cups, these operate laterally across the northern and southern section of each division. These cups are: 2018-19 members Premier Division Angmering | Ashington Rovers | Capel | Henfield | Lavant | Newtown Villa | Nyetimber Pirates | Southwater | TD Shipley | The Unicorn | West Chiltington Championship North Alfold Reserves | Barns Green | Billingshurst Reserves | Broadbridge Heath Reserves | Holbrook | Horsham Crusaders | Horsham Trinity | Partridge Green | Pulborough | Steyning Town Community FC Reserves | Westcott 1935 | Wisborough Green Championship South Coal Exchange | East Dean | Lavant Reserves | Nyetimber Pirates Reserves | Petworth | Predators | Sidlesham Reserves | Sompting | Stedham United | Worthing Borough Division Two North Cowfold Reserves | Ewhurst | Henfield Reserves | Horsham Baptists & Ambassadors | Horsham Olympic | Newdigate | Rowfant Village | Rudgwick | Slinfold | Upper Beeding Reserves Division Two South AFC Southbourne | Felpham Colts | Hunston Community Club FC | Milland | Newtown Villa Reserves | Storrington Community FC Reserves | The Crown | Watersfield | Whyke United | Yapton Division Three North Billingshurst 3rds | Capel Reserves | Cowfold 3rds | Ewhurst Reserves | Holbrook Reserves | Horsham Crusaders Reserves | Southwater Reserves | Southwater Royals | Westcott 1935 Reserves | Wisborough Green Reserves Division Three South AFC Littlehampton | Ambassadors | Angmering Seniors Reserves | Barnham Trojans | Beaumont Park | Boxgrove | Chapel | Harting | Littlehampton United Reserves | Yapton Reserves Division Four North Brockham | Fittleworth | Horsham Baptists & Ambassadors Reserves | Horsham Trinity Reserves | Plaistow | Pulborough Patriots | Rudgwick Reserves | Southwater 3rds | TD Shipley Reserves | Watersfield Reserves Division Four South Barnham Trojans Reserves | Del United | Fenhurst Sports | Predators Reserves | Nyetimber Pirates 3rds | Selsey Reserves | Stedham United Reserves | Wittering United Division Five North AFC Gatwick | Barns Green Reserves | Brockham Reserves | Horsham Olympic Reserves | Petworth Reserves | Partridge Green Reserves | Pulborough 
Reserves | Slinfold Reserves | Thakeham Village | Watersfield 3rds Division Five South Angmering Reserves | Arun Church | Broadwater Athletic | Felpham Colts Reserves | Flansham Park Rangers | Lodsworth | The Unicorn Reserves Past divisional champions The league ran 11 divisions up until 2015 when Division 5 Central was dropped. In 2016, Division One was split into North and South to increase back to 11 divisions. External links Official site Football in West Sussex Football leagues in England Sports leagues established in 1896 1896 establishments in England
3494476
https://en.wikipedia.org/wiki/TextMate
TextMate
TextMate is a general-purpose GUI text editor for macOS created by Allan Odgaard. TextMate features declarative customizations, tabs for open documents, recordable macros, folding sections, snippets, shell integration, and an extensible bundle system. History TextMate 1.0 was released on 5 October 2004, after five months of development, followed by version 1.0.1 on 21 October 2004. The release focused on implementing a small feature set well; it did not have a preference window or a toolbar, did not integrate FTP, and had no options for printing. At first only a small number of programming languages were supported, as only a few “language bundles” had been created. Even so, some developers found this early and incomplete version of TextMate a welcome change in a market considered to have stagnated under the decade-long dominance of BBEdit. TextMate 1.0.2 came out on 10 December 2004. In the series of TextMate 1.1 betas, TextMate gained features: a preferences window with a GUI for creating and editing themes; a status bar with a symbol list; menus for choosing language and tab settings; and a “bundle editor” for editing language-specific customizations. On 6 January 2006, Odgaard released TextMate 1.5, the first “stable release” since 1.0.2. Reviews were positive, in contrast to earlier versions that had been criticised. TextMate continued to develop through mid-2006. On 8 August 2006, TextMate was awarded the Apple Design Award for Best Developer Tool at Apple's Worldwide Developers Conference in San Francisco, California, to “raucous applause.” In February 2006, the TextMate blog expressed intentions for future directions, including improved project management, with a plug-in system to support remote file systems such as FTP, and revision control systems such as Subversion. Throughout 2007, the core application changed only minimally, though its “language bundles” continued to advance. In June 2009, TextMate 2 was announced as being about 90 percent complete, but with an undisclosed final feature list. A public alpha was made available for download on the TextMate blog in December 2011, followed by a release candidate at the end of 2016. In September 2019, a final version was released. In August 2012, TextMate 2's source code was published on GitHub under the terms of GPL-3.0-or-later, an attempt by the developer to counteract restrictions Apple placed on software distributed through the Mac App Store. Odgaard stated that he prefers receiving patches as public domain because this preserves his ability to release a future version under a more permissive license, or to make a version available on the Mac App Store. Odgaard also stated that he has a friend who uses some of TextMate's frameworks in a closed-source project and could not incorporate patches released under the GPL. Features Language Grammars TextMate language grammars allow users to create their own arbitrarily complex syntax highlighting modes by assigning each document keyword a unique name. It uses a modified version of the Apple ASCII property list format to define language grammars. These grammars allow nesting rules to be defined using the Oniguruma regular expression library, and then assigned specific “scopes”: compound labels which identify them for coloration. Each point of a document is assigned one or more scopes, which define where in the document the point is, how it should be colored, and what the behavior of TextMate should be at that point.
For instance, the title of one of the links in the “External links” section has the scope: text.html.mediawiki markup.list.mediawiki meta.link.inline.external.mediawiki string.other.link.title.external.mediawiki This scope indicates a link title within a link within a list within a MediaWiki document. TextMate themes can mark up any scope at varying levels of precision. For instance, one theme may decide to color every constant (constant.*) identically, while another may decide that numerical constants (constant.numeric.*) should be colored differently than escaped characters (constant.character.escape.*). The hierarchical scope syntax gives language authors and theme authors various levels of coverage, so that each can opt for simplicity or comprehensiveness, as desired. The TextMate documentation provides a list of naming conventions commonly used across different programming languages for interoperability between bundles. Commands TextMate supports user-defined and user-editable commands that are interpreted by bash or the interpreter specified with a shebang. Commands can be sent many kinds of input by TextMate (the current document, selected text, the current word, etc.) in addition to environment variables, and their output can similarly be handled by TextMate in a variety of ways. At its simplest, a command might receive the selected text, transform it, and re-insert it into the document, replacing the selection. Other commands might simply show a tool tip, create a new document for their output, or display it as a web page using TextMate's built-in HTML renderer. Many language-specific bundles, such as those for bash, PHP or Ruby, contain commands for compiling and/or running the current document or project. In many cases the STDOUT and STDERR of the code's process will be displayed in a window in TextMate. Snippets At their simplest, TextMate “snippets” are pieces of text which can be inserted into the document at the current location via a context-sensitive key stroke or tab completion. Snippets are "intelligent", supporting "tab stops", dynamic updating, access to environment variables, and the ability to run inline scripts. This allows complicated behaviors. Tab stops can be cycled through using the "tab" key and support default text and drop-downs to complete elements of the snippet. The results of these tab stops can be dynamically changed in another portion of the snippet, as the user fills in a stop. TextMate environment variables can be used, providing information such as the current scope, line number, or author name. Snippets also have the ability to run inline shell scripts. Bundles TextMate language grammars, snippets, macros, commands, and templates can be grouped into “bundles” of functionality. Any snippet, macro, or command can be executed by pressing a keyboard shortcut, by typing a particular word and then pressing the "tab" key (so-called "tab triggers"), or by selecting the command from a menu. Tab triggers are particularly useful; the combination of tab triggers and snippets greatly eases coding in verbose languages, or languages with commonly typed patterns. Snippets, macros, and commands can be limited to a particular scope, so that, for instance, the "close HTML tag" command does not work in a Python script, freeing up that keyboard shortcut to be used for something else. This allows individual languages, and even individual scopes, to override built-in commands such as "Reformat Paragraph" with more specialized versions.
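The prefix-based behaviour of scope selectors described above can be sketched in a few lines of code. The following Python fragment is only a simplified model of the idea, not TextMate's actual implementation, and the theme colors and scope names are invented for the example.

```python
# Simplified model of theme scope selectors: a selector matches a scope when
# its dotted parts are a prefix of the scope's parts, and the most specific
# (longest) matching selector wins.

def selector_matches(selector: str, scope: str) -> bool:
    sel_parts, scope_parts = selector.split("."), scope.split(".")
    return scope_parts[:len(sel_parts)] == sel_parts

def pick_style(theme: dict, scope: str):
    matches = [sel for sel in theme if selector_matches(sel, scope)]
    return theme[max(matches, key=len)] if matches else None

theme = {
    "constant": "blue",                     # colors every constant the same...
    "constant.numeric": "purple",           # ...unless a more specific rule applies
    "constant.character.escape": "orange",
}

print(pick_style(theme, "constant.numeric.integer.decimal.python"))  # purple
print(pick_style(theme, "constant.character.escape.python"))         # orange
```

This mirrors how one theme can treat every constant identically while another distinguishes numerical constants from escaped characters.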
Even special keys such as the return key and spacebar can be overridden. A Subversion repository is available containing many more bundles than are shipped with the editor package, for everything from Markdown to blogging to MIPS assembly language. Project management Several documents or folders can be opened at once in a TextMate project window, which provides a drawer along its side listing file and folder names, and a series of tabs across the top. In TextMate 1.5, this drawer provides a means for users to organize files and folders from across the file system, as well as the ability to create virtual folders for further organization. This feature was removed from TextMate 2 and replaced with an ordinary file browser. Search and replace can be undertaken across an entire project, and commands can interact with the selected files or folders in the drawer. Bundles for CVS, Subversion, darcs, and other revision control systems allow TextMate to manage versioned code. Other features TextMate has many features common to programming editors: Folding code sections can be used to hide areas of a document not currently being edited, for a more compact view of code structure or to avoid distraction. The sections to be folded can be selected by hand, or the structure of the document itself can be used to determine foldings. Regular-expression–based search and replace speeds complicated text manipulations. TextMate uses the Oniguruma regular expression library developed by K. Kosako. A function pop-up provides a list of sections or functions in the current document. Clipboard history allows users to cut many sections of text at once, and then paste them. Column editing mode allows adding the same text to several rows of text, and is very useful for manipulating tabular data. "rmate" support allows TextMate to be launched as the editor for files on remote servers, a marked improvement over the workarounds needed in version 1. In addition, TextMate supports features to integrate well with the OS X graphical environment: Clipboard graphical history supports pasting from previous copies, including prior launches. Find and replace supports an analogous graphical history. Editing is further enhanced by multiple cursors (insertion points), and the ability to extend the current selection to additional instances, creating multiple cursors. A WebKit-based HTML view window shows live updates as an HTML document is edited. VoiceOver and Zoom users can use TextMate thanks to its accessibility support. Limitations TextMate does have a few limitations when compared to other editors in its class: Because TextMate is not tightly coupled to a scripting language, as Emacs is to Emacs Lisp, it is impossible for users to have complete control over the program's configuration and behavior. Allan Odgaard explained his thoughts on the subject in an email to the TextMate mailing list, advocating for "platform-recommended" solutions. No built-in HTML validator — because TextMate uses the W3C validator for HTML validation, users must have an active network connection to validate HTML using the standard functionality. Lack of code-completion feature: despite its substantial support for macros, commands, and snippets, TextMate has no built-in support for code-hinting or guided code-completion, so text editors that support these features may prove to be a better choice when learning the syntax of a new language or coding in verbose languages. However, code and word suggestions can be obtained by typing one or more letters and repeatedly pressing the Escape key.
The suggestions are words that occur in the current document. TextMate is not binary safe. It is explicitly text only, and does not guarantee that arbitrary binary data in a file will be preserved through a load/save cycle, regardless of whether that data is edited. Awards TextMate 1.5 won the Apple Design Award for best developer tool in 2006. See also Comparison of text editors References Further reading External links MacOS-only software MacOS text editors Proprietary software Free text editors
43819349
https://en.wikipedia.org/wiki/Seafile
Seafile
Seafile is an open-source, cross-platform file-hosting software system. Files are stored on a central server and can be synchronized with personal computers and mobile devices through apps. Files on the Seafile server can also be accessed directly via the server's web interface. Seafile's functionality is similar to other popular file hosting services such as Dropbox and Google Drive. The primary difference between Seafile and Dropbox/Google Drive is that Seafile is a self-hosted file sharing solution for private cloud applications. In private clouds, storage space and client connection limits are determined exclusively by the users' own infrastructure and settings rather than the terms and conditions of a cloud service provider. Additionally, organizations whose data privacy policies bar them from using public cloud services can use Seafile to build a file sharing system of their own. History In 2009, Daniel Pan and other former students of Tsinghua University, Beijing, embarked on a project aimed at building peer-to-peer file sharing software, that is, a system that does not rely on a centralized server. Seafile was the name chosen for their software project. The development team decided in 2010 to abandon this initial goal and refocused on building file syncing software with a more traditional client-server architecture – the architecture also used by Dropbox and other file hosting service providers. In 2012, Daniel Pan, Jonathan Xu and other key developers of the project established Seafile Ltd. with the objective of developing and distributing the file hosting software. At the beginning of 2015, the distribution company Seafile GmbH was founded by Silja and Alexander Jackson to promote Seafile in Germany. Seafile Ltd., which did not take an equity stake in Seafile GmbH, granted usage rights for the Seafile brand and provided funding in the form of an interest-free loan to the new company. The partnership was abruptly terminated in July 2016 due to disagreements between the two companies over, among other things, product development and intellectual property rights. An amicable resolution to the dispute was reached in March 2017. The Mainz-based company datamate GmbH & Co. KG has since taken over distribution and support in Europe. Editions and versions Seafile has two editions: a free community edition and a professional edition with additional features for enterprise environments. The community edition is released under the terms of the GNU Affero General Public License v3. The professional edition is released under a proprietary license. Most Seafile installations – community as well as enterprise – are private cloud installations and serve a clearly defined user group, i.e., the members of an organisation. There are also some public file hosting services based on Seafile. Features The feature sets of the community and professional editions differ. Both editions share these features: Multi-platform file synchronisation Public link sharing (upload and download) Client-side encryption Per-folder access control Version control Two-factor authentication The additional features of the professional edition include: File locking Full text search MS Office document preview and office web app integration Activity logging Distributed storage Antivirus integration Platforms Seafile Server Community Edition can be installed on various Linux platforms. Seafile Ltd. maintains installation packages for Debian, Ubuntu, CentOS, and Red Hat Enterprise Linux.
Additionally, the developer provides a Docker container. The Seafile Server for Windows was discontinued with version 6.0.7, though it is still available for download on the developer's download site. Users interested in installing Seafile on a Windows computer are referred to Docker. FreeBSD and Raspbian are two more supported platforms. Their install packages are community maintained. Seafile Server Professional Edition is available for Debian, Ubuntu, CentOS and RHEL. A Docker image is available too. Owing to Seafile Professional's proprietary nature, these packages are all maintained by Seafile Ltd. Both server editions offer a choice of MySQL/MariaDB or SQLite as the database and support either the file system or distributed storage for data storage. Desktop clients are available for personal computers running Windows, macOS, and Linux. Mobile clients are available for iOS, Windows Phone 8 and Android. Files can also be viewed on, downloaded from, and uploaded to the Seafile Server without the client via Seafile's web interface. Disputes Seafile Ltd and Seafile GmbH In July 2016 a dispute came to light between Seafile Ltd. (the original company, from China) and Seafile GmbH (a German company established from JacksonIT by Silja Jackson and Alexander Jackson in 2015). Seafile Ltd. had funded Seafile GmbH to be a European partner. They then agreed to merge the main operations and license the cloud provision to a new company, but an agreement could not be reached on the number of shares to be allocated. Seafile Ltd. alleges that Seafile GmbH and its predecessors had attempted to register the Seafile trademark in the US and had taken steps to present itself in place of Seafile Ltd. Seafile Ltd. also alleges that Seafile GmbH had misused the source code and was committing copyright infringement. Seafile GmbH stated it would create a fork based upon the most recent professional version and continue developing the file sharing software independently under the brand name Seafile, for which the company claims it holds the intellectual property rights in Europe and North America. Seafile GmbH has not released a new Seafile server version since the announcement. In March 2017, it was announced that an amicable resolution to the dispute between Seafile Ltd. and Seafile GmbH had been reached. All Seafile trademarks held by Seafile GmbH and the domain "seafile.de" would be transferred to Seafile Ltd. Seafile GmbH would continue to do business under the new name Syncwerk GmbH, and Syncwerk GmbH would continue to provide software updates, support, and SaaS services to its existing customers, based on Seafile Professional Edition 5.1.8. New customers interested in purchasing Seafile Professional Edition need to contact Seafile Ltd. Seafile GmbH / Syncwerk GmbH will no longer offer the Seafile Professional Edition (or software derived from it) to new customers who first contacted them after 10 March 2017. PayPal In June 2016, Seafile GmbH had its PayPal payment services removed. PayPal had accused Seafile of facilitating the illegal sharing of files and demanded that it monitor file transfers and provide statistical information to PayPal, which Seafile refused to do. Some days later, PayPal reverted its decision and apologised to Seafile, but Seafile said it would drop PayPal in favour of other payment options.
See also Nextcloud (FOSS client-server software for file storage and transfer) Comparison of file hosting services Comparison of file synchronization software Comparison of online backup services References External links GitHub Seafile Manual for Seafile administrators Free software for cloud computing Free software programmed in C Free software programmed in Python Free and open-source Android software
87405
https://en.wikipedia.org/wiki/ChatZilla
ChatZilla
ChatZilla was an IRC client that is part of SeaMonkey. It was previously an extension for Mozilla-based browsers such as Firefox, introduced in 2000. It is cross-platform open source software which has been noted for its consistent appearance across platforms, CSS appearance customization and scripting. Early history On April 20, 1999, it was reported that Mozilla, at the time the open-source arm of AOL's Netscape Communications division, had announced the commencement of "an instant messaging and chat project" with the stated goal of supporting a wide variety of chat protocols, including "the venerable Internet Relay Chat". Other companies were also developing chat systems. "We recognize that there's a lot of interest in the instant messaging space," said AOL spokesperson Catherine Corre, referring to the Mozilla project. "This is a recognition of the interest in that area." At the time, the new chat client proposal was reported as being "competition" to AOL's own AOL Instant Messenger chat client, and on April 21, 1999, the announcement was rescinded "pending further review by Netscape." Independently, programmer Robert Ginda developed an IRC client and submitted it to the Mozilla project, which as of September 1999 planned to include it in the upcoming release of the Mozilla browser. Named "ChatZilla", the client was available in development form in May 2000 for the Netscape 6.01 browser and Mozilla 0.8. Features ChatZilla runs on any platform on which a Mozilla-based browser can run, including OS X, Linux, and Microsoft Windows, and provides a "consistent user interface across the board." It can also be used as a standalone app using XULRunner. It contains most of the general features of IRC clients, including connecting to multiple servers at once, maintaining a built-in list of standard networks, searching and sorting of available channels, chat logging, Direct Client-to-Client ("DCC") chat and file transfers, and user customization of the interface. ChatZilla includes automatic completion of nicknames with the Tab key, and appends a comma if the nickname is the first word on a line. It also provides completion of /commands with the Tab key, and a "quick double-Tab" presents a list of available commands based on what has been typed so far. The text entry window can be "single line", in which the Enter key sends the composed text, or "multiline", which allows composing larger text sections with line breaks, in which case the Ctrl-Enter key combination sends the text block. JavaScript is used for running scripts, and messages are styled with CSS, which can be controlled by the user: by selecting from the View menu, dragging a link to a .css file to the message window, or with the /motif command. DCC is supported, which allows users to transfer files and chat directly with one another. The sender of each message is shown to the left of the text as a link—clicking the link opens a private chat window to that user. ChatZilla is included with SeaMonkey and was available for download for other Mozilla-based browsers such as Firefox as an extension. It could also be run in a tab in Firefox. Plugins ChatZilla offers many plugins, which extend the functionality and user experience of the add-on.
Some of these plugins include: TinyURL – replaces long URLs (typically those with more than 80 characters) with TinyURL links googleapi – searches Google and displays the top result cZiRATE – shares the song the user is currently listening to on iRATE Radio WebExtension The introduction of Firefox Quantum (version 57) dropped support for legacy add-ons, which stopped ChatZilla from working inside Firefox. Work has begun to move the code to a WebExtension. Reception Reviews of ChatZilla have varied from enthusiastic, in the case of users familiar with IRC, to unimpressed, for reviewers more accustomed to other chat client user interfaces. A 2003 review of Mozilla 1.0 in Computers for Doctors referred to IRC client applications as "not very user-friendly, and the same goes for ChatZilla. You won't find any pop-up icons, or happy little noises telling you somebody wants to chat." In 2004, Jennifer Golbeck, writing in IRC Hacks, pointed out its cross-platform consistency, found it "quick and easy to start using", and noted that it has "great support for changing the appearance of chat windows with motifs...(CSS files)." In a 2008 overview of extensions for Firefox in Linux Journal, Dan Sawyer described ChatZilla as an "oldie-but-goodie", "venerable", "with all the trimmings", which "handsomely organizes chat channels, logs, has an extensive built-in list of available channels, supports DCC chats and file transfers, and has its own plugin and theming architecture." The application "implements all the standards very well, and for those who prefer to keep desktop clutter to a minimum but still enjoy fighting with random strangers on IRC, ChatZilla is a must-have." Forks Ambassador Ambassador is a fork of ChatZilla compatible with Pale Moon, Basilisk, and Interlink Mail & News. See also Comparison of Internet Relay Chat clients List of Firefox extensions List of IRC commands References External links ChatZilla homepage Running ChatZilla on XULRunner Chatzilla networks.txt generator Mozilla Application Suite Free Internet Relay Chat clients Unix Internet Relay Chat clients Internet Relay Chat clients Windows Internet software Classic Mac OS software Instant messaging clients for Linux Unix Internet software Cross-platform software 2000 software Software using the Mozilla license Software that uses XUL Free Firefox legacy extensions
2421675
https://en.wikipedia.org/wiki/Transcription%20%28music%29
Transcription (music)
In music, transcription is the practice of notating a piece or a sound that was previously unnotated or not popularly available as written music, for example, a jazz improvisation or a video game soundtrack. When a musician creates sheet music from a recording by writing down the notes that make up the piece in music notation, they are said to have created a musical transcription of that recording. Transcription may also mean rewriting a piece of music, either solo or ensemble, for an instrument or instruments other than those for which it was originally intended. The Beethoven Symphonies transcribed for solo piano by Franz Liszt are an example. Transcription in this sense is sometimes called arrangement, although strictly speaking transcriptions are faithful adaptations, whereas arrangements change significant aspects of the original piece. Further examples of music transcription include ethnomusicological notation of oral traditions of folk music, such as Béla Bartók's and Ralph Vaughan Williams' collections of the national folk music of Hungary and England respectively. The French composer Olivier Messiaen transcribed birdsong in the wild, and incorporated it into many of his compositions, for example his Catalogue d'oiseaux for solo piano. Transcription of this nature involves scale degree recognition and harmonic analysis, both of which the transcriber will need relative or perfect pitch to perform. In popular music and rock, there are two forms of transcription. Individual performers copy a note-for-note guitar solo or other melodic line. In addition, music publishers transcribe entire recordings of guitar solos and bass lines and sell the sheet music in bound books. Music publishers also publish PVG (piano/vocal/guitar) transcriptions of popular music, where the melody line is transcribed, and then the accompaniment on the recording is arranged as a piano part. The guitar aspect of the PVG label is achieved through guitar chords written above the melody. Lyrics are also included below the melody. Adaptation Some composers have rendered homage to other composers by creating "identical" versions of the earlier composers' pieces while adding their own creativity through the use of completely new sounds arising from the difference in instrumentation. The most widely known example of this is Ravel's arrangement for orchestra of Mussorgsky's piano piece Pictures at an Exhibition. Webern used his transcription for orchestra of the six-part ricercar from Bach's The Musical Offering to analyze the structure of the Bach piece, by using different instruments to play different subordinate motifs of Bach's themes and melodies. In transcription of this form, the new piece can simultaneously imitate the original sounds while recomposing them with all the technical skills of an expert composer in such a way that it seems that the piece was originally written for the new medium. But some transcriptions and arrangements have been done for purely pragmatic or contextual reasons. For example, in Mozart's time, the overtures and songs from his popular operas were transcribed for small wind ensemble simply because such ensembles were common ways of providing popular entertainment in public places. Mozart himself did this in his opera Don Giovanni, transcribing for small wind ensemble several arias from other operas, including one from his own opera The Marriage of Figaro.
A more contemporary example is Stravinsky's transcription of The Rite of Spring for piano four hands, made for use in the ballet's rehearsals. Today musicians who play in cafes or restaurants will sometimes play transcriptions or arrangements of pieces written for a larger group of instruments. Other examples of this type of transcription include Bach's arrangement of Vivaldi's four-violin concerti for four keyboard instruments and orchestra; Mozart's arrangement of some Bach fugues from The Well-Tempered Clavier for string trio; Beethoven's arrangement of his Große Fuge, originally written for string quartet, for piano duet, and his arrangement of his Violin Concerto as a piano concerto; Franz Liszt's piano arrangements of the works of many composers, including the symphonies of Beethoven; Tchaikovsky's arrangement of four Mozart piano pieces into an orchestral suite called "Mozartiana"; Mahler's re-orchestration of Schumann symphonies; and Schoenberg's arrangement for orchestra of Brahms's piano quintet and Bach's "St. Anne" Prelude and Fugue for organ. Since the piano became a popular instrument, a large literature has sprung up of transcriptions and arrangements for piano of works for orchestra or chamber music ensemble. These are sometimes called "piano reductions", because the multiplicity of orchestral parts—in an orchestral piece there may be as many as two dozen separate instrumental parts being played simultaneously—has to be reduced to what a single pianist (or occasionally two pianists, on one or two pianos, such as the different arrangements for George Gershwin's Rhapsody in Blue) can manage to play. Piano reductions are frequently made of orchestral accompaniments to choral works, for the purposes of rehearsal or of performance with keyboard alone. Many orchestral pieces have been transcribed for concert band. Transcription aids Notation software Since the advent of desktop publishing, musicians can acquire music notation software, which can receive the user's mental analysis of notes and then store and format those notes into standard music notation for personal printing or professional publishing of sheet music. Some notation software can accept a Standard MIDI File (SMF) or MIDI performance as input instead of manual note entry. These notation applications can export their scores in a variety of formats such as EPS, PNG, and SVG. Often the software contains a sound library that allows the user's score to be played aloud by the application for verification. Slow-down software Prior to the invention of digital transcription aids, musicians would slow down a record or a tape recording to be able to hear the melodic lines and chords at a slower, more digestible pace. The problem with this approach was that it also changed the pitches, so once a piece was transcribed, it would then have to be transposed into the correct key. Software designed to slow down the tempo of music without changing the pitch of the music can be very helpful for recognizing pitches, melodies, chords, rhythms and lyrics when transcribing music. However, unlike the slow-down effect of a record player, the pitch and original octave of the notes stay the same rather than descending. This technology is simple enough that it is available in many free software applications. The software generally goes through a two-step process to accomplish this. First, the audio file is played back at a lower sample rate than that of the original file.
This has the same effect as playing a tape or vinyl record at slower speed – the pitch is lowered meaning the music can sound like it is in a different key. The second step is to use Digital Signal Processing (or DSP) to shift the pitch back up to the original pitch level or musical key. Pitch tracking software As mentioned in the Automatic music transcription section, some commercial software can roughly track the pitch of dominant melodies in polyphonic musical recordings. The note scans are not exact, and often need to be manually edited by the user before saving to file in either a proprietary file format or in Standard MIDI File Format. Some pitch tracking software also allows the scanned note lists to be animated during audio playback. Automatic music transcription The term "automatic music transcription" was first used by audio researchers James A. Moorer, Martin Piszczalski, and Bernard Galler in 1977. With their knowledge of digital audio engineering, these researchers believed that a computer could be programmed to analyze a digital recording of music such that the pitches of melody lines and chord patterns could be detected, along with the rhythmic accents of percussion instruments. The task of automatic music transcription concerns two separate activities: making an analysis of a musical piece, and printing out a score from that analysis. This was not a simple goal, but one that would encourage academic research for at least another three decades. Because of the close scientific relationship of speech to music, much academic and commercial research that was directed toward the more financially resourced speech recognition technology would be recycled into research about music recognition technology. While many musicians and educators insist that manually doing transcriptions is a valuable exercise for developing musicians, the motivation for automatic music transcription remains the same as the motivation for sheet music: musicians who do not have intuitive transcription skills will search for sheet music or a chord chart, so that they may quickly learn how to play a song. A collection of tools created by this ongoing research could be of great aid to musicians. Since much recorded music does not have available sheet music, an automatic transcription device could also offer transcriptions that are otherwise unavailable in sheet music. To date, no software application can yet completely fulfill James Moorer’s definition of automatic music transcription. However, the pursuit of automatic music transcription has spawned the creation of many software applications that can aid in manual transcription. Some can slow down music while maintaining original pitch and octave, some can track the pitch of melodies, some can track the chord changes, and others can track the beat of music. Automatic transcription most fundamentally involves identifying the pitch and duration of the performed notes. This entails tracking pitch and identifying note onsets. After capturing those physical measurements, this information is mapped into traditional music notation, i.e., the sheet music. Digital Signal Processing is the branch of engineering that provides software engineers with the tools and algorithms needed to analyze a digital recording in terms of pitch (note detection of melodic instruments), and the energy content of un-pitched sounds (detection of percussion instruments). 
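In practice, the two-step slow-down approach described above under "Slow-down software" is often implemented with a phase vocoder, which time-stretches the audio directly while preserving pitch. The sketch below is only an illustration of the idea: the choice of the librosa and soundfile libraries, the half-speed factor, and the file names are assumptions for the example rather than tools or settings prescribed by the sources.

```python
# Minimal sketch: slow a recording to half speed without changing its pitch.
# "song.wav" is a placeholder file name.
import librosa
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None)           # keep the file's native sample rate

# Phase-vocoder time stretch: rate=0.5 plays at half speed at the original pitch,
# the net effect of "resample lower, then shift the pitch back up".
y_slow = librosa.effects.time_stretch(y, rate=0.5)

sf.write("song_half_speed.wav", y_slow, sr)
```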
Musical recordings are sampled at a given recording rate and its frequency data is stored in any digital wave format in the computer. Such format represents sound by digital sampling. Pitch detection Pitch detection is often the detection of individual notes that might make up a melody in music, or the notes in a chord. When a single key is pressed upon a piano, what we hear is not just one frequency of sound vibration, but a composite of multiple sound vibrations occurring at different mathematically related frequencies. The elements of this composite of vibrations at differing frequencies are referred to as harmonics or partials. For instance, if we press the Middle C key on the piano, the individual frequencies of the composite's harmonics will start at 261.6 Hz as the fundamental frequency, 523 Hz would be the 2nd Harmonic, 785 Hz would be the 3rd Harmonic, 1046 Hz would be the 4th Harmonic, etc. The later harmonics are integer multiples of the fundamental frequency, 261.6 Hz ( ex: 2 x 261.6 = 523, 3 x 261.6 = 785, 4 x 261.6 = 1046 ). While only about eight harmonics are really needed to audibly recreate the note, the total number of harmonics in this mathematical series can be large, although the higher the harmonic's numeral the weaker the magnitude and contribution of that harmonic. Contrary to intuition, a musical recording at its lowest physical level is not a collection of individual notes, but is really a collection of individual harmonics. That is why very similar-sounding recordings can be created with differing collections of instruments and their assigned notes. As long as the total harmonics of the recording are recreated to some degree, it does not really matter which instruments or which notes were used. A first step in the detection of notes is the transformation of the sound file's digital data from the time domain into the frequency domain, which enables the measurement of various frequencies over time. The graphic image of an audio recording in the frequency domain is called a spectrogram or sonogram. A musical note, as a composite of various harmonics, appears in a spectrogram like a vertically placed comb, with the individual teeth of the comb representing the various harmonics and their differing frequency values. A Fourier Transform is the mathematical procedure that is used to create the spectrogram from the sound file’s digital data. The task of many note detection algorithms is to search the spectrogram for the occurrence of such comb patterns (a composite of harmonics) caused by individual notes. Once the pattern of a note's particular comb shape of harmonics is detected, the note's pitch can be measured by the vertical position of the comb pattern upon the spectrogram. There are basically two different types of music which create very different demands for a pitch detection algorithm: monophonic music and polyphonic music. Monophonic music is a passage with only one instrument playing one note at a time, while polyphonic music can have multiple instruments and vocals playing at once. Pitch detection upon a monophonic recording was a relatively simple task, and its technology enabled the invention of guitar tuners in the 1970s. However, pitch detection upon polyphonic music becomes a much more difficult task because the image of its spectrogram now appears as a vague cloud due to a multitude of overlapping comb patterns, caused by each note's multiple harmonics. 
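The comb of harmonics described above can be reproduced in a few lines of code. The sketch below is only an illustration of the principle, not the algorithm of any particular transcription product; NumPy and SciPy are assumed as convenient tools, and a synthesized tone stands in for a real recording.

```python
# Synthesize a Middle C-like tone from its first eight harmonics, then locate
# the "comb" of spectral peaks at integer multiples of the 261.6 Hz fundamental.
import numpy as np
from scipy.signal import find_peaks

sr = 44100                              # sample rate in Hz
t = np.arange(sr) / sr                  # one second of samples
f0 = 261.6                              # fundamental frequency of Middle C

# Composite of eight harmonics, each weaker than the one below it.
tone = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 9))

spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1.0 / sr)

# Keep only clear local maxima; they land near 261.6, 523.2, 784.8 Hz, and so on.
peaks, _ = find_peaks(spectrum, height=0.05 * spectrum.max())
print(np.round(freqs[peaks], 1))
```

A note-detection algorithm of the kind described above searches a spectrogram for exactly this kind of evenly spaced pattern; in polyphonic music, several such combs overlap, which is what makes the problem hard.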
Another method of pitch detection was invented by Martin Piszczalski in conjunction with Bernard Galler in the 1970s and has since been widely followed. It targets monophonic music. Central to this method is how pitch is determined by the human ear. The process attempts to roughly mimic the biology of the human inner ear by finding only a few of the loudest harmonics at a given instant. That small set of found harmonics is in turn compared against all the possible resultant pitches' harmonic sets, to hypothesize what the most probable pitch could be given that particular set of harmonics. To date, the complete note detection of polyphonic recordings remains a mystery to audio engineers, although they continue to make progress by inventing algorithms which can partially detect some of the notes of a polyphonic recording, such as a melody or bass line. Beat detection Beat tracking is the determination of a repeating time interval between perceived pulses in music. Beat can also be described as 'foot tapping' or 'hand clapping' in time with the music. The beat is often a predictable basic unit in time for the musical piece, and may only vary slightly during the performance. Songs are frequently measured in beats per minute (BPM) to determine the tempo of the music, whether fast or slow. Since notes frequently begin on a beat, or on a simple subdivision of the beat's time interval, beat tracking software has the potential to better resolve note onsets that may have been detected in a crude fashion. Beat tracking is often the first step in the detection of percussion instruments. Despite the intuitive nature of 'foot tapping', of which most humans are capable, developing an algorithm to detect those beats is difficult. Most of the current software algorithms for beat detection use a group of competing hypotheses for beats-per-minute, as the algorithm progressively finds and resolves local peaks in volume, roughly corresponding to the foot-taps of the music. How automatic music transcription works To transcribe music automatically, several problems must be solved: 1. Notes must be recognized – this is typically done by changing from the time domain into the frequency domain. This can be accomplished through the Fourier transform. Computer algorithms for doing this are common. The fast Fourier transform algorithm computes the frequency content of a signal, and is useful in processing musical excerpts. 2. A beat and tempo need to be detected (beat detection) – this is a difficult, many-faceted problem. The method proposed in Costantini et al. 2009 focuses on note events and their main characteristics: the attack instant, the pitch and the final instant. Onset detection exploits a binary time-frequency representation of the audio signal. Note classification and offset detection are based on the constant Q transform (CQT) and support vector machines (SVMs). This in turn leads to a "pitch contour", namely a continuously time-varying line that corresponds to what humans refer to as melody. The next step is to segment this continuous melodic stream to identify the beginning and end of each note. After that, each "note unit" is expressed in physical terms (e.g., 442 Hz, .52 seconds). The final step is then to map this physical information into familiar music-notation-like terms for each note (e.g., an A4, quarter note).
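The final mapping step can be expressed as a small formula: in twelve-tone equal temperament with A4 tuned to 440 Hz, the nearest note to a measured frequency f is the nearest whole number to 69 + 12 × log2(f / 440) on the MIDI scale. The short sketch below illustrates this; the tuning reference and the helper function name are assumptions made for the example.

```python
# Map a measured frequency (e.g. 442 Hz) to the nearest note name,
# assuming twelve-tone equal temperament with A4 = 440 Hz.
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def frequency_to_note(freq_hz: float) -> str:
    """Return the nearest note name (e.g. 'A4') for a frequency in hertz."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))   # 69 is A4 in MIDI numbering
    octave = midi // 12 - 1                              # MIDI octave convention
    return f"{NOTE_NAMES[midi % 12]}{octave}"

print(frequency_to_note(442.0))    # -> A4 (slightly sharp of 440 Hz)
print(frequency_to_note(261.6))    # -> C4, Middle C
```

Durations such as "quarter note" cannot be read from the frequency alone; they additionally require the beat grid produced by the beat-tracking step.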
Detailed computer steps behind automatic music transcription In terms of actual computer processing, the principal steps are to 1) digitize the performed, analog music, 2) perform successive short-term fast Fourier transforms (FFTs) to obtain the time-varying spectra, 3) identify the peaks in each spectrum, 4) analyze the spectral peaks to get pitch candidates, 5) connect the strongest individual pitch candidates to get the most likely time-varying pitch contour, and 6) map this physical data into the closest music-notation terms. These fundamental steps, originated by Piszczalski in the 1970s, became the foundation of automatic music transcription. The most controversial and difficult step in this process is detecting pitch. The most successful pitch methods operate in the frequency domain, not the time domain. While time-domain methods have been proposed, they can break down for real-world musical instruments played in typically reverberant rooms. The pitch-detection method invented by Piszczalski again mimics human hearing. It follows how only certain sets of partials "fuse" together in human listening. These are the sets that create the perception of a single pitch only. Fusion occurs only when two partials are within 1.5% of being a perfect harmonic pair (i.e., their frequencies approximate a low-integer ratio such as 1:2 or 5:8). This near-harmonic match is required of all the partials in order for a human to hear them as only a single pitch. See also Orchestration Timbre Composer tributes (classical music) :Category:Scorewriters Reduction (music) References Musical notation Musical tributes
3152502
https://en.wikipedia.org/wiki/Internet%20governance
Internet governance
Internet governance is the development and application of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet. This article describes how the Internet was and is currently governed, some of the controversies that occurred along the way, and the ongoing debates about how the Internet should or should not be governed in the future. Internet governance should not be confused with e-governance, which refers to governments' use of technology to carry out their governing duties. Background No one person, company, organization or government runs the Internet. It is a globally distributed network comprising many voluntarily interconnected autonomous networks. It operates without a central governing body, with each constituent network setting and enforcing its own policies. Its governance is conducted by a decentralized and international multistakeholder network of interconnected autonomous groups drawing from civil society, the private sector, governments, the academic and research communities, and national and international organizations. They work cooperatively from their respective roles to create shared policies and standards that maintain the Internet's global interoperability for the public good. However, to help ensure interoperability, several key technical and policy aspects of the underlying core infrastructure and the principal namespaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), which is headquartered in Los Angeles, California. ICANN oversees the assignment of globally unique identifiers on the Internet, including domain names, Internet protocol addresses, application port numbers in the transport protocols, and many other parameters. This seeks to create a globally unified namespace to ensure the global reach of the Internet. ICANN is governed by an international board of directors drawn from across the Internet's technical, business, academic, and other non-commercial communities. There has been a long-held dispute over the management of the DNS root zone, whose final control fell under the supervision of the National Telecommunications and Information Administration (NTIA), an agency of the U.S. Department of Commerce. Considering that the U.S. Department of Commerce could unilaterally terminate the Affirmation of Commitments with ICANN, the authority of DNS administration was likewise seen as revocable and derived from a single State, namely the United States. NTIA's involvement started in 1998 and was intended to be temporary, but it was not until April 2014, at an ICANN meeting held in Brazil and partly heated by the Snowden revelations, that this situation changed, resulting in an important shift of control: the transition of administrative duties over the DNS root zone from NTIA to the Internet Assigned Numbers Authority (IANA) during a period that ended in September 2016. The technical underpinning and standardization of the Internet's core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. On 16 November 2005, the United Nations-sponsored World Summit on the Information Society (WSIS), held in Tunis, established the Internet Governance Forum (IGF) to open an ongoing, non-binding conversation among multiple stakeholders about the future of Internet governance.
Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues. Definition The definition of Internet governance has been contested by differing groups across political and ideological lines. One of the main debates concerns the authority and participation of certain actors, such as national governments, corporate entities and civil society, to play a role in the Internet's governance. A working group established after a UN-initiated World Summit on the Information Society (WSIS) proposed the following definition of Internet governance as part of its June 2005 report: Internet governance is the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet. Law professor Yochai Benkler developed a conceptualization of Internet governance by the idea of three "layers" of governance: Physical infrastructure layer (through which information travels) Code or logical layer (controls the infrastructure) Content layer (contains the information signaled through the network) Professors Jovan Kurbalija and Laura DeNardis also offer comprehensive definitions to "Internet Governance". According to Kurbalija, the broad approach to Internet Governance goes "beyond Internet infrastructural aspects and address other legal, economic, developmental, and sociocultural issues"; along similar lines, DeNardis argues that "Internet Governance generally refers to policy and technical coordination issues related to the exchange of information over the Internet". One of the more policy-relevant questions today is exactly whether the regulatory responses are appropriate to police the content delivered through the Internet: it includes important rules for the improvement of Internet safety and for dealing with threats such as cyber-bullying, copyright infringement, data protection and other illegal or disruptive activities. History The original ARPANET is one of the components which eventually evolved to become the Internet. As its name suggests the ARPANET was sponsored by the Defense Advanced Research Projects Agency within the U.S. Department of Defense. During the development of ARPANET, a numbered series of Request for Comments (RFCs) memos documented technical decisions and methods of working as they evolved. The standards of today's Internet are still documented by RFCs. Between 1984 and 1986 the U.S. National Science Foundation (NSF) created the NSFNET backbone, using TCP/IP, to connect their supercomputing facilities. NSFNET became a general-purpose research network, a hub to connect the supercomputing centers to each other and to the regional research and education networks that would in turn connect campus networks. The combined networks became generally known as the Internet. By the end of 1989, Australia, Germany, Israel, Italy, Japan, Mexico, the Netherlands, New Zealand, and the UK were connected to the Internet, which had grown to contain more than 160,000 hosts. In 1990, the ARPANET was formally terminated. In 1991 the NSF began to relax its restrictions on commercial use on NSFNET and commercial network providers began to interconnect. The final restrictions on carrying commercial traffic ended on 30 April 1995, when the NSF ended its sponsorship of the NSFNET Backbone Service and the service ended. 
Today almost all Internet infrastructure in the United States, and large portion in other countries, is provided and owned by the private sector. Traffic is exchanged between these networks, at major interconnection points, in accordance with established Internet standards and commercial agreements. Governors During 1979 the Internet Configuration Control Board was founded by DARPA to oversee the network's development. During 1984 it was renamed the Internet Advisory Board (IAB), and during 1986 it became the Internet Activities Board. The Internet Engineering Task Force (IETF) was formed during 1986 by the U.S. government to develop and promote Internet standards. It consisted initially of researchers, but by the end of the year participation was available to anyone, and its business was performed largely by email. From the early days of the network until his death during 1998, Jon Postel oversaw address allocation and other Internet protocol numbering and assignments in his capacity as Director of the Computer Networks Division at the Information Sciences Institute of the University of Southern California, under a contract from the Department of Defense. This function eventually became known as the Internet Assigned Numbers Authority (IANA), and as it expanded to include management of the global Domain Name System (DNS) root servers, a small organization grew. Postel also served as RFC Editor. Allocation of IP addresses was delegated to five regional Internet registries (RIRs): American Registry for Internet Numbers (ARIN) for North America Réseaux IP Européens - Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region African Network Information Center (AfriNIC) was created in 2004 to manage allocations for Africa After Jon Postel's death in 1998, IANA became part of ICANN, a California nonprofit established in September 1998 by the U.S. government and awarded a contract by the U.S. Department of Commerce. Initially two board members were elected by the Internet community at large, though this was changed by the rest of the board in 2002 in a poorly attended public meeting in Accra, Ghana. In 1992 the Internet Society (ISOC) was founded, with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. The IAB was renamed the Internet Architecture Board, and became part of ISOC. The Internet Engineering Task Force also became part of the ISOC. The IETF is overseen currently by the Internet Engineering Steering Group (IESG), and longer-term research is carried on by the Internet Research Task Force and overseen by the Internet Research Steering Group. At the first World Summit on the Information Society in Geneva in 2003, the topic of Internet governance was discussed. ICANN's status as a private corporation under contract to the U.S. government created controversy among other governments, especially Brazil, China, South Africa, and some Arab states. 
Since no general agreement existed even on the definition of what comprised Internet governance, United Nations Secretary General Kofi Annan initiated a Working Group on Internet Governance (WGIG) to clarify the issues and report before the second part of the World Summit on the Information Society (WSIS) in Tunis 2005. After much controversial debate, during which the U.S. delegation refused to consider surrendering the U.S. control of the Root Zone file, participants agreed on a compromise to allow for wider international debate on the policy principles. They agreed to establish an Internet Governance Forum (IGF), to be convened by the United Nations Secretary General before the end of the second quarter of 2006. The Greek government volunteered to host the first such meeting. Annual global IGFs have been held since 2006, with the Forum renewed for five years by the United Nations General Assembly in December 2010. In addition to the annual global IGF, regional IGFs have been organized in Africa, the Arab region, Asia-Pacific, and Latin America and the Caribbean, as well as in sub-regions. In December 2015, the United Nations General Assembly renewed the IGF for another ten years, in the context of the WSIS 10-year overall review. Media Freedom Media, freedom of expression and freedom of information have long been recognized as principles of internet governance, included in the 2003 Geneva Declaration and 2005 Tunis Commitment of the World Summit on the Information Society (WSIS). Given the cross-border, decentralized nature of the internet, an enabling environment for media freedom in the digital age requires global multi-stakeholder cooperation and shared respect for human rights. In broad terms, two different visions have been seen to shape global internet governance debates in recent years: fragmentation versus common principles. On the one hand, some national governments, particularly in the Central and Eastern European and Asia-Pacific regions, have emphasized state sovereignty as an organizing premise of national and global internet governance. In some regions, data localization laws (requiring that data be stored, processed and circulated within a given jurisdiction) have been introduced to keep citizens' personal data in the country, both to retain regulatory authority over such data and to strengthen the case for greater jurisdiction. Countries in the Central and Eastern European, Asia-Pacific, and African regions all have legislation requiring some localization. Data localization requirements increase the likelihood of multiple standards and the fragmentation of the internet, limiting the free flow of information, and in some cases increasing the potential for surveillance, which in turn impacts on freedom of expression. On the other hand, the dominant practice has been towards a unified, universal internet with broadly shared norms and principles. The NETmundial meeting, held in Brazil in 2014, produced a multistakeholder statement that the 'internet should continue to be a globally coherent, interconnected, stable, unfragmented, scalable and accessible network-of-networks.' In 2015, UNESCO's General Conference endorsed the concept of Internet Universality and the 'ROAM Principles', which state that the internet should be '(i) Human Rights-based (ii) Open, (iii) Accessible to all, and (iv) Nurtured by Multistakeholder participation'.
The ROAM Principles combine standards for process (multi-stakeholderism to avoid potential capture of the internet by a single power center with corresponding risks) with recommendations about substance (what those principles should be). The fundamental position is for a global internet where ROAM principles frame regional, national and local diversities. In this context, significant objectives are media freedom, network interoperability, net neutrality and the free flow of information (minimal barriers to the rights to receive and impart information across borders, and any limitations to accord with international standards). In a study of 30 key initiatives aimed at establishing a bill of rights online during the period between 1999 and 2015, researchers at Harvard's Berkman Klein Center found that the right to freedom of expression online was protected in more documents (26) than any other right. The UN General Assembly committed itself to multistakeholderism in December 2015 through a resolution extending the WSIS process and IGF mandate for an additional decade. It further underlined the importance of human rights and media-related issues such as the safety of journalists. Growing support for the multistakeholder model was also observed in the Internet Assigned Numbers Authority (IANA) stewardship transition, in which oversight of the internet's addressing system shifted from a contract with the United States Department of Commerce to a new private sector entity with new multi-stakeholder accountability mechanisms. Further support for the multistakeholder approach has come from the Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, the updated and considerably expanded second edition of the 2013 Tallinn Manual on the International Law Applicable to Cyber Warfare. The annual conferences linked to the Budapest Convention on Cybercrime and meetings of the United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, mandated by the United Nations General Assembly, have deliberated on norms such as protection of critical infrastructure and the application of international law to cyberspace. In the period 2012–2016, the African Union passed the Convention on Cyber Security and Personal Data Protection and the Commonwealth Secretariat adopted the Report of the Working Group of Experts on Cybercrime. The Economic Community of West African States (ECOWAS) compelled all 15 member states to implement data protection laws and authorities through the adoption of the Supplementary Act on Personal Data Protection in 2010. In 2011, ECOWAS adopted a Directive on Fighting Cybercrime to combat growing cybercrime activity in the West African region. In response to the growing need for ICT infrastructure and cybersecurity, and to rising cybercrime, ECOWAS adopted a regional strategy for cybersecurity and the fight against cybercrime on 18 January 2021. In a bid to unify data protection across Europe and give data subjects autonomy over their data, the European Union implemented the General Data Protection Regulation (GDPR) on 25 May 2018. It replaced the 1995 Data Protection Directive, which had come to be seen as insufficient. The EU describes it as the "toughest privacy and security law" globally. Under the GDPR, data subjects have rights of access, rectification, erasure, restriction of processing, objection to automated processing and profiling, and data portability.
Internet Encryption Privacy and security online have been of paramount concern to internet users amid growing cybercrime and cyberattacks worldwide. A 2019 poll by Safety Monitor found that 13 percent of people aged 15 and above in the Netherlands had been victims of cybercrimes such as identity fraud, hacking, and cyberbullying. INTERPOL recommends using encrypted internet connections to stay safe online. Encryption technology is a means of ensuring privacy and security online, and one of the strongest tools available to help internet users stay secure, particularly with respect to data protection. However, criminals can exploit the privacy, security, and confidentiality that encryption provides to perpetrate cybercrimes and sometimes to evade their legal consequences. This has sparked debate between internet governance bodies and the governments of various countries on whether strong encryption should be preserved or its use restricted. In May 2021 the UK Government proposed the Online Safety Bill, a new regulatory framework to address cyberattacks and cybercrime in the UK, but one without protections for strong encryption. The stated aim is to make the UK the safest place in the world to use the internet and to curb the damaging effects of harmful content shared online, including child pornography. The Internet Society, however, argues that a lack of strong encryption exposes internet users to even greater risks of cyberattacks and cybercrime, and that it undermines data protection laws. Globalization and governance controversy Role of ICANN and the U.S. Department of Commerce The position of the U.S. Department of Commerce as the controller of some aspects of the Internet gradually attracted criticism from those who felt that control should be more international. A hands-off philosophy by the Department of Commerce helped limit this criticism, but this was undermined in 2005 when the Bush administration intervened to help kill the .xxx top-level domain proposal, and, much more severely, following the 2013 disclosures of mass surveillance by the U.S. government. When the IANA functions were handed over to ICANN, a new U.S. nonprofit, controversy increased. ICANN's decision-making process was criticised by some observers as being secretive and unaccountable. When the directors' posts which had previously been elected by the "at-large" community of Internet users were abolished, some feared that ICANN would lose legitimacy as a neutral governing body. ICANN stated that it was merely streamlining decision-making, and developing a structure suitable for the modern Internet. On 1 October 2016, following a community-led process spanning months, the stewardship of the IANA functions was transitioned to the global Internet community. Other topics of controversy included the creation and control of generic top-level domains (.com, .org, and possible new ones, such as .biz or .xxx), the control of country-code domains, recent proposals for a large increase in ICANN's budget and responsibilities, and a proposed "domain tax" to pay for the increase. There were also suggestions that individual governments should have more control, or that the International Telecommunication Union or the United Nations should have a function in Internet governance.
IBSA proposal (2011) One controversial proposal to this effect, resulting from a September 2011 summit among India, Brazil, and South Africa (IBSA), sought to move Internet governance into a "UN Committee on Internet-Related Policy" (UN-CIRP). The move was a reaction to a perception that the principles of the 2005 Tunis Agenda for the Information Society had not been met. The statement called for the subordination of independent technical organizations such as ICANN and the ITU to a political organization operating under the auspices of the United Nations. After outrage from India's civil society and media, the Indian government backed away from the proposal. Montevideo Statement on the Future of Internet Cooperation (2013) On 7 October 2013 the Montevideo Statement on the Future of Internet Cooperation was released by the leaders of a number of organizations involved in coordinating the Internet's global technical infrastructure, loosely known as the "I*" (or "I-star") group. Among other things, the statement "expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance" and "called for accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing". This desire to move away from a United States centric approach was seen as a reaction to the ongoing NSA surveillance scandal. The statement was signed by the heads of the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force, the Internet Architecture Board, the World Wide Web Consortium, the Internet Society, and the five regional Internet address registries (African Network Information Center, American Registry for Internet Numbers, Asia-Pacific Network Information Centre, Latin America and Caribbean Internet Addresses Registry, and Réseaux IP Européens Network Coordination Centre). Global Multistakeholder Meeting on the Future of Internet Governance (NetMundial) (2013) In October 2013, Fadi Chehadé, former President and CEO of ICANN, met with Brazilian President Dilma Rousseff in Brasilia. Upon Chehadé's invitation, the two announced that Brazil would host an international summit on Internet governance in April 2014. The announcement came after the 2013 disclosures of mass surveillance by the U.S. government, and President Rousseff's speech at the opening session of the 2013 United Nations General Assembly, where she strongly criticized the U.S. surveillance program as a "breach of international law". The "Global Multistakeholder Meeting on the Future of Internet Governance (NETMundial)" included representatives of government, industry, civil society, and academia. At the IGF VIII meeting in Bali in October 2013, a commentator noted that Brazil intended the meeting to be a "summit" in the sense that it would be high level with decision-making authority. The organizers of the "NETmundial" meeting decided that an online forum called "/1net", set up by the I* group, would be a major conduit of non-governmental input into the three committees preparing for the meeting in April. NetMundial managed to convene a large number of global actors to produce a consensus statement on internet governance principles and a roadmap for the future evolution of the internet governance ecosystem.
The NETmundial Multistakeholder Statement, the outcome of the meeting, was elaborated in an open and participatory manner by means of successive consultations. This consensus should be qualified: even though the statement was adopted by consensus, some participants, specifically the Russian Federation, India, Cuba, and ARTICLE 19 (representing parts of civil society), expressed dissent with its contents and with the process. NetMundial Initiative (2014) The NetMundial Initiative is an initiative by ICANN CEO Fadi Chehade along with representatives of the World Economic Forum (WEF) and the Brazilian Internet Steering Committee (Comitê Gestor da Internet no Brasil), commonly referred to as "CGI.br", which was inspired by the 2014 NetMundial meeting. Brazil's close involvement derived from accusations of digital espionage against then-president Dilma Rousseff. A month later, the Panel On Global Internet Cooperation and Governance Mechanisms (convened by the Internet Corporation for Assigned Names and Numbers (ICANN) and the World Economic Forum (WEF) with assistance from The Annenberg Foundation) supported and included the NetMundial statement in its own report. End of U.S. Department of Commerce oversight On 1 October 2016 ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), marking a historic moment for the Internet. The contract between ICANN and NTIA for performance of the Internet Assigned Numbers Authority, or IANA, functions drew its roots from the earliest days of the Internet. Initially the contract was seen as a temporary measure, according to Lawrence Strickling, U.S. Assistant Secretary of Commerce for Communications and Information from 2009 to 2017. Internet users saw no change in their experience online as a result of what ICANN and others called the IANA Stewardship Transition. As Stephen D. Crocker, ICANN Board Chair from 2011 to 2017, said in a news release at the time of the contract expiration, "This community validated the multistakeholder model of Internet governance. It has shown that a governance model defined by the inclusion of all voices, including business, academics, technical experts, civil society, governments and many others is the best way to assure that the Internet of tomorrow remains as free, open, and accessible as the Internet of today." The concerted effort began in March 2014, when NTIA asked ICANN to convene the global multistakeholder community (made up of private-sector representatives, technical experts, academics, civil society, governments and individual Internet end users) to come together and create a proposal to replace NTIA's historic stewardship role. The community, in response to the NTIA's request for a proposal, said that they wanted to enhance ICANN's accountability mechanisms as well. NTIA later agreed to consider proposals for both together. People involved in global Internet governance worked for nearly two years to develop two consensus-based proposals. Stakeholders spent more than 26,000 working hours on the proposal, exchanged more than 33,000 messages on mailing lists, held more than 600 meetings and calls, and incurred millions of dollars of legal fees to develop the plan, which the community completed, and ICANN submitted to NTIA for review in March 2016. On 24 May 2016, the U.S.
Senate Commerce Committee held its oversight hearing on "Examining the Multistakeholder Plan for Transitioning the Internet Assigned Number Authority." Though the Senators present expressed support for the transition, a few raised concerns that the accountability mechanisms in the proposal should be tested during an extension of the NTIA's contract with ICANN. Two weeks later, U.S. Senator Ted Cruz introduced the "Protecting Internet Freedom Act," a bill to prohibit NTIA from allowing the IANA functions contract to lapse unless authorized by Congress. The bill never left the Senate Committee on Commerce, Science, and Transportation. On 9 June 2016, NTIA, after working with other U.S. Government agencies to conduct a thorough review, announced that the proposal package developed by the global Internet multistakeholder community met the criteria it had outlined in March 2014. In summary, NTIA found that the proposal package: Supported and enhanced the multistakeholder model because it was developed by a multistakeholder process that engaged Internet stakeholders around the world, and built on existing multistakeholder arrangements, processes, and concepts. Maintained the security, stability, and resiliency of the Internet DNS because it relied on ICANN's current operational practices to perform the IANA functions. The proposed accountability and oversight provisions bolstered the ability of Internet stakeholders to ensure ongoing security, stability, and resiliency. Met the needs and expectations of the global customers and partners of the IANA services because it was directly created by those customers and partners of the IANA functions. The accountability recommendations ensured that ICANN would perform in accordance with the will of the multistakeholder community. Maintained the openness of the Internet because it required that the IANA functions, databases, operations, and related policymaking remain fully open and accessible, just as they were prior to the transition. The proposals required various changes to ICANN's structure and Bylaws, which ICANN and its various stakeholder groups completed in advance of 30 September 2016, the date at which the IANA functions contract was set to expire. Paris Call for Trust and Security in Cyberspace On 12 November 2018 at the Internet Governance Forum (IGF) meeting in Paris, French President Emmanuel Macron launched the Paris Call for Trust and Security in Cyberspace. This high-level declaration presents a framework of common principles for regulating the Internet and fighting back against cyber attacks, hate speech and other cyber threats. Internet Shutdowns An internet shutdown occurs when state authorities deliberately shut down internet access. The term can also describe intentional acts by state authorities to slow internet connections down. Other terms used to describe internet shutdowns include 'blanket shutdown,' 'kill switch,' 'blackout,' and 'digital curfew.' Shutdowns can last for only a few hours, or for days, weeks, and sometimes months. Governments often justify internet shutdowns on grounds of public safety, prevention of mass hysteria, hate speech, fake news, national security, and sometimes the transparency of an ongoing electoral process. However, reports indicate that shutdowns are a deliberate attempt at internet censorship by governments.
Apart from harming internet freedom, internet shutdowns damage public health, economies, educational systems, internet advancement, vulnerable groups, and democratic societies, because they impede public communication over the internet and bring many activities to a standstill. In recent years, no fewer than 35 countries have experienced internet shutdowns. According to reports by Access Now, a non-profit digital rights group, 25 countries across the globe experienced government-induced internet shutdowns 196 times in 2018. In 2019, Access Now reported that 33 countries experienced government-induced internet shutdowns 213 times. The group's 2020 report indicated that 29 countries deliberately shut down their internet 155 times. With the growing trend of internet shutdowns, digital rights groups including the Internet Society, Access Now, the #KeepItOn Coalition, and others have condemned the practice, calling it an 'infringement on digital rights' of netizens. These groups have also been at the forefront of tracking and reporting shutdowns in real time as well as analyzing their impact on internet advancement, internet freedom, and societies. Internet bodies Global Commission on Internet Governance, launched in January 2014 by two international think tanks, the Centre for International Governance Innovation and Chatham House, to make recommendations about the future of global Internet governance. International Organization for Standardization, Maintenance Agency (ISO 3166 MA): Defines names and codes of countries, dependent territories, and special areas of geographic significance. To date it has only played a minor role in developing Internet standards. Internet Architecture Board (IAB): Oversees the technical and engineering development of the IETF and IRTF. Internet Corporation for Assigned Names and Numbers (ICANN): Coordinates the Internet's systems of unique identifiers: IP addresses, Protocol-Parameter registries, top-level domain space (DNS root zone). Performs Internet Assigned Numbers Authority (IANA) functions for the global Internet community. Internet Engineering Task Force (IETF): Develops and promotes a wide range of Internet standards dealing in particular with standards of the Internet protocol suite. Their technical documents influence the way people design, use and manage the Internet. Internet Governance Forum (IGF): A multistakeholder forum for policy dialogue. Internet Research Task Force (IRTF): Promotes research of the evolution of the Internet by creating focused, long-term research groups working on Internet protocols, applications, architecture, and technology. Internet network operators' groups (NOGs): Informal groups established to provide forums for network operators to discuss matters of mutual interest. Internet Society (ISOC): Assures the open development, evolution, and use of the Internet for the benefit of all people throughout the world. Currently ISOC has over 90 chapters in around 80 countries. Number Resource Organization (NRO): Established in October 2003, the NRO is an unincorporated organization uniting the five regional Internet registries. Regional Internet registries (RIRs): There are five regional Internet registries. They manage the allocation and registration of Internet number resources, such as IP addresses, within geographic regions of the world.
(Africa: www.afrinic.net; Asia Pacific: www.apnic.net; Canada and United States: www.arin.net; Latin America & Caribbean: www.lacnic.net; Europe, the Middle East and parts of Central Asia: www.ripe.net) World Wide Web Consortium (W3C): Creates standards for the World Wide Web that enable an Open Web Platform, for example, by focusing on issues of accessibility, internationalization, and mobile web solutions. United Nations bodies Internet Governance Forum (IGF) IGF regional, national, and subject area initiatives Commission on Science and Technology for Development (CSTD) Working Group on Improvements to the IGF (CSTDWG), active from February 2011 to May 2012. International Telecommunication Union (ITU) World Conference on International Telecommunications (WCIT), a treaty-level conference facilitated by the ITU to address international telecommunications regulations, held in December 2012 in Dubai. World Summit on the Information Society (WSIS), summits held in 2003 (Geneva) and 2005 (Tunis). WSIS Forum, annual meetings held in Geneva starting in 2006 as a follow up of the WSIS Geneva Plan of Action. WSIS + 10, a high-level event and extended version of the WSIS Forum to take stock of achievements in the last 10 years and develop proposals for a new vision beyond 2015, held from 13 to 17 April 2014 in Sharm el-Sheikh, Egypt. Working Group on Internet Governance (WGIG), active from September 2004 to November 2005. See also History of the Internet Internet law Internet organizations Internet censorship Internet multistakeholder governance Sources References Further reading Roadmap for the Internet of Things - Its Impact, Architecture and Future Governance, by Mark Fell, Carré & Strauss. 2014. Manifesto for Smarter Intervention in Complex Systems, by Mark Fell, Carré & Strauss. 2013. What is Internet Governance? A primer from the Council on Foreign Relations An Introduction to Internet Governance by Dr Jovan Kurbalija, 2016, DiploFoundation (7th edition), paperback. In English, Spanish, Russian, Chinese and Turkish. The Global War for Internet Governance, Laura DeNardis, Yale University Press, 2014. Explains global power dynamics around technical and political governance of the Internet. "Ruling the Root: Internet Governance and the Taming of Cyberspace" by Milton Mueller, MIT Press, 2002. The definitive study of DNS and ICANN's early history. "IP addressing and the migration to IPv6." "One History of DNS" by Ross W. Rader. April 2001. Article contains historic facts about DNS and explains the reasons behind the so-called "dns war". "The Emerging Field of Internet Governance", by Laura DeNardis. Suggests a framework for understanding problems in Internet governance. "Researching Internet Governance: Methods, Frameworks, Futures" edited by Laura DeNardis, Derrick Cogburn, Nanette S. Levinson, Francesca Musiani. September 2020. Open access. "Transnational Advocacy Networks in the Information Society: Partners or Pawns?" by Derrick L. Cogburn, 2017. Launching the DNS War: Dot-Com Privatization and the Rise of Global Internet Governance by Craig Simon. December 2006. PhD dissertation containing an extensive history of events which sparked the so-called "dns war". "Habermas@discourse.net: Toward a Critical Theory of Cyberspace", by A. Michael Froomkin, 116 Harv. L. Rev. 749 (2003). Argues that the Internet standards process undertaken by the IETF fulfils Jürgen Habermas's conditions for the best practical discourse.
In "In search of internet governance: Performing order in digitally networked environments", New Media & Society 16 (2014): pp. 306–322, Malte Ziewitz and Christian Pentzold provide an overview of definitions of Internet Governance and approaches to its study. External links APC Internet Rights Charter, Association for Progressive Communications, November 2006 Electronic Frontier Foundation, website The Future of Global Internet governance, Institute of Informatics and Telematics - Consiglio Nazionale delle Ricerche (IIT-CNR), Pisa Global Commission on Internet Governance, website Global Internet Governance Academic Network (GigaNet) ICANN - Internet Corporation for Assigned Names and Numbers Internet Governance Forum (IGF) Internet Governance Project Internet Society, website "United States cedes control of the internet - but what now? - Review of an extraordinary meeting", Kieren McCarthy, The Register, 27 July 2006 World Summit on the Information Society: Geneva 2003 and Tunis 2005 CircleID: Internet Governance "The Politics and Issues of Internet Governance", Milton L. Mueller, April 2007, analysis from the Institute for Research and Debate on Governance (Institut de recherche et débat sur la gouvernance) Governance History of the Internet
19332380
https://en.wikipedia.org/wiki/NetCDF
NetCDF
NetCDF (Network Common Data Form) is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The project homepage is hosted by the Unidata program at the University Corporation for Atmospheric Research (UCAR). They are also the chief source of netCDF software, standards development, updates, etc. The format is an open standard. NetCDF Classic and 64-bit Offset Format are an international standard of the Open Geospatial Consortium. The project started in 1988 and is still actively supported by UCAR. The original netCDF binary format (released in 1990, now known as "netCDF classic format") is still widely used across the world and continues to be fully supported in all netCDF releases. Version 4.0 (released in 2008) allowed the use of the HDF5 data file format. Version 4.1 (2010) added support for C and Fortran client access to specified subsets of remote data via OPeNDAP. Version 4.3.0 (2012) added a CMake build system for Windows builds. Version 4.7.0 (2019) added support for reading Amazon S3 objects. Further releases are planned to improve performance, add features, and fix bugs. History The format was originally based on the conceptual model of the Common Data Format developed by NASA, but has since diverged and is not compatible with it. Format description The netCDF libraries support multiple different binary formats for netCDF files: The classic format was used in the first netCDF release, and is still the default format for file creation. The 64-bit offset format was introduced in version 3.6.0, and it supports larger variable and file sizes. The netCDF-4/HDF5 format was introduced in version 4.0; it is the HDF5 data format, with some restrictions. The HDF4 SD format is supported for read-only access. The CDF5 format is supported, in coordination with the parallel-netcdf project. All formats are "self-describing". This means that there is a header which describes the layout of the rest of the file, in particular the data arrays, as well as arbitrary file metadata in the form of name/value attributes. The format is platform independent, with issues such as endianness being addressed in the software libraries. The data are stored in a fashion that allows efficient subsetting. Starting with version 4.0, the netCDF API allows the use of the HDF5 data format. NetCDF users can create HDF5 files with benefits not available with the netCDF format, such as much larger files and multiple unlimited dimensions. Full backward compatibility in accessing old netCDF files and using previous versions of the C and Fortran APIs is supported. Software Access libraries The software libraries supplied by UCAR provide read-write access to netCDF files, encoding and decoding the necessary arrays and metadata. The core library is written in C, and provides an API for C, C++ and two APIs for Fortran applications, one for Fortran 77, and one for Fortran 90. An independent implementation, also developed and maintained by Unidata, is written in 100% Java, which extends the core data model and adds additional functionality. Interfaces to netCDF based on the C library are also available in other languages including R (ncdf, ncvar and RNetCDF packages), Perl, Python, Ruby, Haskell, Mathematica, MATLAB, IDL, Julia and Octave. The specification of the API calls is very similar across the different languages, apart from inevitable differences of syntax. 
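As an illustration of how these language bindings are typically used, the following is a minimal sketch based on the third-party netCDF4-python module mentioned later in this article; the file name, dimension and variable names, and attribute values are arbitrary examples chosen for the sketch, not prescribed by the format.

from netCDF4 import Dataset  # third-party netCDF4-python binding
import numpy as np

# Create a file; "NETCDF4" selects the HDF5-based format, while
# "NETCDF3_CLASSIC" would select the original classic format.
with Dataset("example.nc", "w", format="NETCDF4") as nc:
    nc.title = "Minimal example"          # global attribute (file metadata)
    nc.createDimension("time", None)      # unlimited dimension
    nc.createDimension("x", 4)
    temp = nc.createVariable("temperature", "f4", ("time", "x"))
    temp.units = "K"                      # per-variable attribute
    temp[0, :] = np.array([280.0, 281.5, 279.9, 282.1])

# Read it back: the self-describing header exposes dimensions,
# variables, and attributes without any external schema.
with Dataset("example.nc", "r") as nc:
    print(list(nc.dimensions), nc.variables["temperature"].units)
    print(nc.variables["temperature"][0, :])

Equivalent calls exist in the C and Fortran APIs; only the syntax differs.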
The API calls for version 2 were rather different from those in version 3, but are also supported by versions 3 and 4 for backward compatibility. Application programmers using supported languages need not normally be concerned with the file structure itself, even though it is available as an open format. Applications A wide range of application software has been written which makes use of netCDF files. These range from command line utilities to graphical visualization packages. A number are listed below, and a longer list is on the UCAR website. A commonly used set of Unix command line utilities for netCDF files is the NetCDF Operators (NCO) suite, which provides a range of commands for manipulation and analysis of netCDF files including basic record concatenation, array slicing and averaging. ncBrowse is a generic netCDF file viewer that includes Java graphics, animations and 3D visualizations for a wide range of netCDF file conventions. ncview is a visual browser for netCDF format files. This program is a simple, fast, GUI-based tool for visualising fields in a netCDF file. One can browse through the various dimensions of a data array, taking a look at the raw data values. It is also possible to change color maps, invert the data, etc. Panoply is a netCDF file viewer developed at the NASA Goddard Institute for Space Studies which focuses on presentation of geo-gridded data. It is written in Java and thus platform independent. Although its feature set overlaps with ncBrowse and ncview, Panoply is distinguished by offering a wide variety of map projections and the ability to work with different scale color tables. The NCAR Command Language (NCL) is used to analyze and visualize data in netCDF files (among other formats). The Python programming language can access netCDF files with the PyNIO module (which also facilitates access to a variety of other data formats). netCDF files can also be read with the Python module netCDF4-python, or read into a pandas-like DataFrame with the xarray module. Ferret is an interactive computer visualization and analysis environment designed to meet the needs of oceanographers and meteorologists analyzing large and complex gridded data sets. Ferret offers a Mathematica-like approach to analysis; new variables may be defined interactively as mathematical expressions involving data set variables. Calculations may be applied over arbitrarily shaped regions. Fully documented graphics are produced with a single command. The Grid Analysis and Display System (GrADS) is an interactive desktop tool that is used for easy access, manipulation, and visualization of earth science data. GrADS has been implemented worldwide on a variety of commonly used operating systems and is freely distributed over the Internet. nCDF_Browser is a visual nCDF browser, written in the IDL programming language. Variables, attributes, and dimensions can be immediately downloaded to the IDL command line for further processing. All the Coyote Library files necessary to run nCDF_Browser are available in the zip file. ArcGIS versions after 9.2 support netCDF files that follow the Climate and Forecast Metadata Conventions and contain rectilinear grids with equally-spaced coordinates. The Multidimensional Tools toolbox can be used to create raster layers, feature layers, and table views from netCDF data in ArcMap, or convert feature, raster, and table data to netCDF. OriginPro version 2021b supports the netCDF CF Convention.
Averaging can be performed during import to allow handling of large datasets in a GUI software. The Geospatial Data Abstraction Library provides support for read and write access to netCDF data. netCDF Explorer is multi-platform graphical browser for netCDF files. netCDF Explorer can browse files locally or remotely, by means of OPeNDAP R supports netCDF through packages such as ncdf4 (including HDF5 support) or RNetCDF (no HDF5 support). HDFql enables users to manage netCDF-4/HDF5 files through a high-level language (similar to SQL) in C, C++, Java, Python, C#, Fortran and R. ECMWF's Metview workstation and batch system can handle NetCDF together with GRIB and BUFR. OpenChrom ships a converter under the terms of the Eclipse Public License Common uses It is commonly used in climatology, meteorology and oceanography applications (e.g., weather forecasting, climate change) and GIS applications. It is an input/output format for many GIS applications, and for general scientific data exchange. To quote from their site: "NetCDF (network Common Data Form) is a set of interfaces for array-oriented data access and a freely-distributed collection of data access libraries for C, Fortran, C++, Java, and other languages. The netCDF libraries support a machine-independent format for representing scientific data. Together, the interfaces, libraries, and format support the creation, access, and sharing of scientific data." Conventions The Climate and Forecast (CF) conventions are metadata conventions for earth science data, intended to promote the processing and sharing of files created with the NetCDF Application Programmer Interface (API). The conventions define metadata that are included in the same file as the data (thus making the file "self-describing"), that provide a definitive description of what the data in each variable represents, and of the spatial and temporal properties of the data (including information about grids, such as grid cell bounds and cell averaging methods). This enables users of data from different sources to decide which data are comparable, and allows building applications with powerful extraction, regridding, and display capabilities. Parallel-NetCDF An extension of netCDF for parallel computing called Parallel-NetCDF (or PnetCDF) has been developed by Argonne National Laboratory and Northwestern University. This is built upon MPI-IO, the I/O extension to MPI communications. Using the high-level netCDF data structures, the Parallel-NetCDF libraries can make use of optimizations to efficiently distribute the file read and write applications between multiple processors. The Parallel-NetCDF package can read/write only classic and 64-bit offset formats. Parallel-NetCDF cannot read or write the HDF5-based format available with netCDF-4.0. The Parallel-NetCDF package uses different, but similar APIs in Fortran and C. Parallel I/O in the Unidata netCDF library has been supported since release 4.0, for HDF5 data files. Since version 4.1.1 the Unidata NetCDF C library supports parallel I/O to classic and 64-bit offset files using the Parallel-NetCDF library, but with the NetCDF API. Interoperability of C/Fortran/C++ libraries with other formats The netCDF C library, and the libraries based on it (Fortran 77 and Fortran 90, C++, and all third-party libraries) can, starting with version 4.1.1, read some data in other data formats. Data in the HDF5 format can be read, with some restrictions. 
Data in the HDF4 format can be read by the netCDF C library if created using the HDF4 Scientific Data (SD) API. NetCDF-Java common data model The NetCDF-Java library currently reads the following file formats and remote access protocols: BUFR Format Documentation (ongoing development) CINRAD level II (Chinese Radar format) DMSP (Defense Meteorological Satellite Program) DORADE radar file format GINI (GOES Ingest and NOAAPORT Interface) image format GEMPAK gridded data GRIB version 1 and version 2 (ongoing work on tables) GTOPO 30-sec elevation dataset (USGS) Hierarchical Data Format (HDF4, HDF-EOS2, HDF5, HDF-EOS5) NetCDF (classic and large format) NetCDF-4 (built on HDF5) NEXRAD Radar level 2 and level 3. There are a number of other formats in development. Since each of these is accessed transparently through the NetCDF API, the NetCDF-Java library is said to implement a Common Data Model for scientific datasets. The Common Data Model has three layers, which build on top of each other to add successively richer semantics: The data access layer, also known as the syntactic layer, handles data reading. The coordinate system layer identifies the coordinates of the data arrays. Coordinates are a completely general concept for scientific data; specialized georeferencing coordinate systems, important to the Earth Science community, are specially annotated. The scientific data type layer identifies specific types of data, such as grids, images, and point data, and adds specialized methods for each kind of data. The data model of the data access layer is a generalization of the NetCDF-3 data model, and substantially the same as the NetCDF-4 data model. The coordinate system layer implements and extends the concepts in the Climate and Forecast Metadata Conventions. The scientific data type layer allows data to be manipulated in coordinate space, analogous to the Open Geospatial Consortium specifications. The identification of coordinate systems and data typing is ongoing, but users can plug in their own classes at runtime for specialized processing. See also Common Data Format (CDF) CGNS (CFD General Notation System) EAS3 (Ein-Ausgabe-System) FITS (Flexible Image Transport System) GRIB (GRIdded Binary) Hierarchical Data Format (HDF) OPeNDAP client-server protocols Tecplot binary files XDMF (eXtensible Data Model Format) XMDF (eXtensible Model Data Format) References External links NetCDF User's Guide — describes the file format "An Introduction to Distributed Visualization"; section 4.2 contains a comparison of CDF, HDF, and netCDF. Animating NetCDF Data in ArcMap List of software utilities using netCDF files Computer file formats Earth sciences data formats Earth sciences graphics software Meteorological data and networks
24936492
https://en.wikipedia.org/wiki/Simple%20Firmware%20Interface
Simple Firmware Interface
Simple Firmware Interface (SFI) is a lightweight method developed by Intel Corporation for firmware to export static tables to the operating system. It is supported by Intel's hand-held Moorestown platform. SFI tables are data structures in memory, and all SFI tables share a common table header format. The operating system finds the system table by searching on 16-byte boundaries between physical addresses 0x000E0000 and 0x000FFFFF. SFI defines CPU, APIC, Memory Map, Idle, Frequency, M-Timer, M-RTC, OEMx, Wake Vector, I²C Device, and SPI Device tables. SFI provides access to a standard ACPI XSDT (Extended System Description Table). XSDT is used by SFI to prevent namespace collision between SFI and ACPI. It can access standard ACPI tables such as the PCI Memory Configuration Table (MCFG). SFI support was merged into Linux kernel 2.6.32-rc1; the core SFI patch is about 1,000 lines of code. Linux is the first operating system with an SFI implementation. Linux kernel 5.6 marked SFI as obsolete. SFI support was removed in Linux kernel 5.12. References External links Firmware Intel
51781891
https://en.wikipedia.org/wiki/Sergey%20Aslanyan%20%28entrepreneur%29
Sergey Aslanyan (entrepreneur)
Sergey Aslanyan (born September 18, 1973, in Yerevan, USSR) is a Russian businessman and executive manager. He is the founder and Chairman of the Board at MaximaTelecom. Biography Sergey Aslanyan was born in Yerevan, Armenia, and graduated from the Faculty of Computational Mathematics and Cybernetics at Lomonosov Moscow State University. In 1996 he received a degree in Applied Mathematics. Aslanyan started his career in 1997 as a senior consultant at the auditing company Coopers & Lybrand, which soon merged with Price Waterhouse to form PricewaterhouseCoopers. In 2001, he joined TNK-BP Management as deputy head of the information technology unit. In December 2003, Aslanyan was invited to join Mobile TeleSystems as Vice President of Information Technology, a position created specially for him. In July 2006 he was appointed Vice President of Engineering and Information Technology. He supervised the telecommunications operator's transition to a new billing system, the implementation of an Oracle ERP system and a workflow automation system, and the integration of acquired companies into the MTS network. In October 2007 Aslanyan became president of the Sitronics group, which, like Mobile TeleSystems, belonged to the investment holding JSFC Sistema. Aslanyan presented a new development strategy for the group, aimed at scaling back unprofitable areas (e.g. consumer electronics), funding scientific developments prioritized by the state (e.g. microelectronics), and optimizing manufacturing. From 2007 to 2012 Aslanyan was a minority shareholder of Sitronics with a share of 0.687% (up to 2010 – 0.25%). In January 2013, he left the company. In January 2013, together with a group of private investors, Aslanyan acquired the company MaximaTelecom, which had been owned by JSFC Sistema's systems integrator, and became the head of its board of directors. In July 2013 MaximaTelecom took part in the Moscow Metro's auction for the right to create and operate a Wi-Fi network in subway trains and, as the only bidder, signed the contract. Investments in the project amounted to more than 2 billion rubles. Since 1 December 2014, free Wi-Fi has been available on all subway lines. Awards and recognition Aslanyan twice won the IT-Leader award, in 2004 and 2006, in the categories Telecommunication Companies and Mobile Operators respectively. He is a winner of the Aristos award, established by the Russian Association of Managers and the publishing house Kommersant, in the category IT Director (2006). In 2009 his name was included in the biographical reference book Armenian business elite of Russia, published by the scientific and educational foundation Erevank. In May 2011 Aslanyan was listed among the best IT managers of the Russian Internet by the online periodical CNews. In 2011, he received the highest index of personal reputation among top managers of telecommunication companies in the ranking developed by TASS-Telecom. In September 2012 the newspaper Kommersant included Aslanyan in its top-10 list of the best managers in the field of information technology in the XIII annual ranking of Russian top managers. Personal life Sergey Aslanyan enjoys playing tennis and squash. References 1973 births Living people Russian chief executives Russian businesspeople Businesspeople from Yerevan Moscow State University alumni
12794
https://en.wikipedia.org/wiki/Gopher%20%28protocol%29
Gopher (protocol)
The Gopher protocol is a communication protocol designed for distributing, searching, and retrieving documents in Internet Protocol networks. The design of the Gopher protocol and user interface is menu-driven, and it presented an alternative to the World Wide Web in its early stages, but ultimately fell into disfavor, yielding to HTTP. The Gopher ecosystem is often regarded as the effective predecessor of the World Wide Web. The protocol was invented by a team led by Mark P. McCahill at the University of Minnesota. It offers some features not natively supported by the Web and imposes a much stronger hierarchy on the documents it stores. Its text menu interface is well-suited to computing environments that rely heavily on remote text-oriented computer terminals, which were still common at the time of its creation in 1991, and the simplicity of its protocol facilitated a wide variety of client implementations. More recent Gopher revisions and graphical clients added support for multimedia. Gopher's hierarchical structure provided a platform for the first large-scale electronic library connections. The Gopher protocol is still in use by enthusiasts, and although it has been almost entirely supplanted by the Web, a small population of actively-maintained servers remains. Origins The Gopher system was released in mid-1991 by Mark P. McCahill, Farhad Anklesaria, Paul Lindner, Daniel Torrey, and Bob Alberti of the University of Minnesota in the United States. Its central goals were, as stated in RFC 1436: A file-like hierarchical arrangement that would be familiar to users. A simple syntax. A system that can be created quickly and inexpensively. Extensibility of the file system metaphor; allowing addition of searches for example. Gopher combines document hierarchies with collections of services, including WAIS, the Archie and Veronica search engines, and gateways to other information systems such as File Transfer Protocol (FTP) and Usenet. The general interest in campus-wide information systems (CWISs) in higher education at the time, and the ease of setting up Gopher servers to create an instant CWIS with links to other sites' online directories and resources, were the factors contributing to Gopher's rapid adoption. The name was coined by Anklesaria as a play on several meanings of the word "gopher". The University of Minnesota mascot is the gopher, a gofer is an assistant who "goes for" things, and a gopher burrows through the ground to reach a desired location. Decline The World Wide Web was in its infancy in 1991, and Gopher services quickly became established. By the late 1990s, Gopher had ceased expanding. Several factors contributed to Gopher's stagnation: In February 1993, the University of Minnesota announced that it would charge licensing fees for the use of its implementation of the Gopher server. Users became concerned that fees might also be charged for independent implementations. Gopher expansion stagnated, to the advantage of the World Wide Web, to which CERN disclaimed ownership. In September 2000, the University of Minnesota re-licensed its Gopher software under the GNU General Public License. Gopher client functionality was quickly duplicated by the early Mosaic web browser, which subsumed its protocol. Gopher has a more rigid structure than the free-form HTML of the Web. Every Gopher document has a defined format and type, and the typical user navigates through a single server-defined menu system to get to a particular document.
This can be quite different from the way a user finds documents on the Web. Gopher remains in active use by its enthusiasts, and there have been attempts to revive Gopher on modern platforms and mobile devices. One attempt is The Overbite Project, which hosts various browser extensions and modern clients. Server census As of 2012, there remained about 160 gopher servers indexed by Veronica-2, reflecting a slow growth from 2007 when there were fewer than 100. They are typically infrequently updated. On these servers Veronica indexed approximately 2.5 million unique selectors. A handful of new servers were being set up every year by hobbyists, with over 50 having been set up and added to Floodgap's list since 1999. A snapshot of Gopherspace in 2007 circulated on BitTorrent and was still available in 2010. Due to the simplicity of the Gopher protocol, setting up new servers or adding Gopher support to browsers is often done in a tongue-in-cheek manner, principally on April Fools' Day. In November 2014 Veronica indexed 144 gopher servers, reflecting a small drop from 2012, but within these servers Veronica indexed approximately 3 million unique selectors. In March 2016 Veronica indexed 135 gopher servers, within which it indexed approximately 4 million unique selectors. In March 2017 Veronica indexed 133 gopher servers, within which it indexed approximately 4.9 million unique selectors. In May 2018 Veronica indexed 260 gopher servers, within which it indexed approximately 3.7 million unique selectors. In May 2019 Veronica indexed 320 gopher servers, within which it indexed approximately 4.2 million unique selectors. In January 2020 Veronica indexed 395 gopher servers, within which it indexed approximately 4.5 million unique selectors. In February 2021 Veronica indexed 361 gopher servers, within which it indexed approximately 6 million unique selectors. In February 2022 Veronica indexed 325 gopher servers, within which it indexed approximately 5 million unique selectors. Technical details The conceptualization of knowledge in "Gopher space" or a "cloud" as specific information in a particular file, and the prominence of FTP, influenced the technology and the resulting functionality of Gopher. Gopher characteristics Gopher is designed to function and to appear much like a mountable read-only global network file system (and software, such as gopherfs, is available that can actually mount a Gopher server as a FUSE resource). At a minimum, whatever can be done with data files on a CD-ROM can be done on Gopher. A Gopher system consists of a series of hierarchical hyperlinkable menus. The choice of menu items and titles is controlled by the administrator of the server. Similar to a file on a Web server, a file on a Gopher server can be linked to as a menu item from any other Gopher server. Many servers take advantage of this inter-server linking to provide a directory of other servers that the user can access. Protocol The Gopher protocol was first described in RFC 1436. IANA has assigned TCP port 70 to the Gopher protocol. The protocol is simple to negotiate, making it possible to browse without using a client. User request First, the client establishes a TCP connection with the server on port 70, the standard gopher port. The client then sends a string followed by a carriage return followed by a line feed (a "CR + LF" sequence). This is the selector, which identifies the document to be retrieved. If the item selector were an empty line, the default directory would be selected.
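Because the request side is this simple, a working client fits in a few lines. The following Python sketch sends a selector and reads the reply until the server closes the connection, as described in the next section; the host name and selector are placeholders rather than real resources.

import socket

def gopher_fetch(host, selector="", port=70):
    """Send a selector to a Gopher server and return the raw reply bytes."""
    with socket.create_connection((host, port)) as sock:
        # The request is just the selector terminated by CR + LF;
        # an empty selector asks for the server's default directory.
        sock.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks)

# Example with a placeholder host: print a server's default menu.
print(gopher_fetch("gopher.example.org").decode("utf-8", "replace"))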
Server response The server then replies with the requested item and closes the connection. According to the protocol, before the connection is closed, the server should send a full-stop (i.e., a period character) on a line by itself. However, not all servers conform to this part of the protocol and the server may close the connection without returning the final full-stop. The main type of reply from the server is a text or binary resource. Alternatively, the resource can be a menu: a form of structured text resource providing references to other resources. Because of the simplicity of the Gopher protocol, tools such as netcat make it possible to download Gopher content easily from the command line: echo jacks/jack.exe | nc gopher.example.org 70 > jack.exe The protocol is also supported by cURL as of 7.21.2-DEV. Search request The selector string in the request can optionally be followed by a tab character and a search string. This is used by item type 7. Source code of a menu Gopher menu items are defined by lines of tab-separated values in a text file. This file is sometimes called a gophermap. As the source code to a gopher menu, a gophermap is roughly analogous to an HTML file for a web page. Each tab-separated line (called a selector line) gives the client software a description of the menu item: what it is, what it's called, and where it leads. The client displays the menu items in the order that they appear in the gophermap. The first character in a selector line indicates the item type, which tells the client what kind of file or protocol the menu item points to. This helps the client decide what to do with it. Gopher's item types are a more basic precursor to the media type system used by the Web and email attachments. The item type is followed by the user display string (a description or label that represents the item in the menu); the selector (a path or other string for the resource on the server); the hostname (the domain name or IP address of the server), and the network port. All lines in a gopher menu are terminated by "CR + LF". For example: The following selector line generates a link to the "/home" directory at the subdomain gopher.floodgap.com, on port 70. The item type of 1 indicates that the resource is a Gopher menu. The string "Floodgap Home" is what the user sees in the menu. 1Floodgap Home /home gopher.floodgap.com 70 Item types In a Gopher menu's source code, a one-character code indicates what kind of content the client should expect. This code may either be a digit or a letter of the alphabet; letters are case-sensitive. The technical specification for Gopher, RFC 1436, defines 14 item types. The later Gopher+ specification defined an additional 3 types. Item type 3 is an error code for exception handling. Gopher client authors improvised item types h (HTML), i (informational message), and s (sound file) after the publication of RFC 1436. Browsers like Netscape Navigator and early versions of Microsoft Internet Explorer would prepend the item type code to the selector as described in RFC 4266, so that the type of the gopher item could be determined by the URL itself. Most gopher browsers still available use these prefixes in their URLs.
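Since each selector line is just tab-separated text, parsing a menu is straightforward. The following Python sketch splits selector lines of the kind shown above into item type, display string, selector, host, and port; it is illustrative only and ignores the optional Gopher+ marker that may follow the port.

def parse_menu(menu_text):
    """Parse Gopher menu source into a list of item dictionaries."""
    items = []
    for line in menu_text.split("\r\n"):
        if line in ("", "."):              # a lone full-stop ends the menu
            continue
        item_type, rest = line[0], line[1:]
        fields = rest.split("\t")
        if len(fields) < 4:                # not a well-formed selector line
            continue
        items.append({
            "type": item_type,             # e.g. "0" text, "1" menu, "7" search
            "display": fields[0],
            "selector": fields[1],
            "host": fields[2],
            "port": int(fields[3]),
        })
    return items

example = "1Floodgap Home\t/home\tgopher.floodgap.com\t70\r\n.\r\n"
for item in parse_menu(example):
    print(item["type"], item["display"], "->", item["host"], item["port"])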
Here is an example gopher session where the user requests a gopher menu (the /Reference selector on the first line): /Reference 1CIA World Factbook /Archives/mirrors/textfiles.com/politics/CIA gopher.quux.org 70 0Jargon 4.2.0 /Reference/Jargon 4.2.0 gopher.quux.org 70 + 1Online Libraries /Reference/Online Libraries gopher.quux.org 70 + 1RFCs: Internet Standards /Computers/Standards and Specs/RFC gopher.quux.org 70 1U.S. Gazetteer /Reference/U.S. Gazetteer gopher.quux.org 70 + iThis file contains information on United States fake (NULL) 0 icities, counties, and geographical areas. It has fake (NULL) 0 ilatitude/longitude, population, land and water area, fake (NULL) 0 iand ZIP codes. fake (NULL) 0 i fake (NULL) 0 iTo search for a city, enter the city's name. To search fake (NULL) 0 ifor a county, use the name plus County -- for instance, fake (NULL) 0 iDallas County. fake (NULL) 0 The gopher menu sent back from the server is a sequence of lines, each of which describes an item that can be retrieved. Most clients will display these as hypertext links, and so allow the user to navigate through gopherspace by following the links. This menu includes a text resource (itemtype 0 on the third line), multiple links to submenus (itemtype 1, on the second line as well as lines 4-6) and a non-standard information message (from line 7 on), broken down into multiple lines by providing dummy values for selector, host and port. Web links Historically, to create a link to a Web server, "GET /" was used as a pseudo-selector to emulate an HTTP GET request. John Goerzen created an addition to the Gopher protocol, commonly referred to as "URL links", that allows links to any protocol that supports URLs. For example, to create a link to http://gopher.quux.org/, the item type is h, the display string is the title of the link, the item selector is "URL:http://gopher.quux.org/", and the domain and port are that of the originating Gopher server (so that clients that do not support URL links will query the server and receive an HTML redirection page). Related technology Gopher+ Gopher+ is a forward-compatible enhancement to the Gopher protocol. Gopher+ works by sending metadata between the client and the server. The enhancement was never widely adopted by Gopher servers. How it works The client sends a tab followed by a +. A Gopher+ server will respond with a status line followed by the content the client requested. An item is marked as supporting Gopher+ in the Gopher directory listing by a tab followed by a + after the port (this is the case for some of the items in the example above). Other features Other features of Gopher+ include: Item attributes, which can include the items Administrator Last date of modification Different views of the file, like PostScript or plain text, or different languages Abstract, or description of the item Interactive queries Search Engines Veronica The master Gopherspace search engine is Veronica. Veronica offers a keyword search of all the public Internet Gopher server menu titles. A Veronica search produces a menu of Gopher items, each of which is a direct pointer to a Gopher data source. Individual Gopher servers may also use localized search engines specific to their content such as Jughead and Jugtail. Jugtail Jugtail (formerly Jughead) is a search engine system for the Gopher protocol. It is distinct from Veronica in that it searches a single server at a time. GopherVR GopherVR is a 3D virtual reality variant of the original Gopher system.
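On the wire, the Gopher+ exchange described above differs from a plain request only in a tab and a plus sign appended to the selector. The sketch below is a hedged illustration (the host is a placeholder, and the reply handling is deliberately naive; real Gopher+ status lines also encode a length or an error indicator):

import socket

def gopher_plus_fetch(host, selector, port=70):
    """Request an item with Gopher+ semantics: selector, TAB, '+', CR+LF."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(selector.encode("ascii") + b"\t+\r\n")
        reply = b""
        while True:
            data = sock.recv(4096)
            if not data:
                break
            reply += data
    # A Gopher+ server prefixes the content with a one-line status;
    # here we simply split it off and return both parts.
    status, _, body = reply.partition(b"\r\n")
    return status, body

# Hypothetical usage against a Gopher+ capable server:
# status, body = gopher_plus_fetch("gopher.example.org", "/Reference")

Clients that do not understand Gopher+ simply never send the trailing tab and plus, and receive the plain Gopher reply instead.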
Client software Gopher clients These are clients, libraries, and utilities primarily designed to access gopher resources. Web clients Web clients are browsers, libraries, and utilities primarily designed to access world wide web resources, but which maintain gopher support. Browsers that do not natively support Gopher can still access servers using one of the available Gopher to HTTP gateways. Gopher support was disabled in Internet Explorer versions 5.x and 6 for Windows in August 2002 by a patch meant to fix a security vulnerability in the browser's Gopher protocol handler and reduce the attack surface; the patch was included in IE6 SP1. However, Gopher support can be re-enabled by editing the Windows registry. In Internet Explorer 7, Gopher support was removed at the WinINET level. Gopher browser extensions For Mozilla Firefox and SeaMonkey, Overbite extensions extend Gopher browsing and support the current versions of the browsers (Firefox Quantum v ≥57 and equivalent versions of SeaMonkey): OverbiteWX redirects gopher:// URLs to a proxy; OverbiteNX adds native-like support; for Firefox up to 56.*, and equivalent versions of SeaMonkey, OverbiteFF adds native-like support, but it is no longer maintained. OverbiteWX includes support for accessing Gopher servers not on port 70 using a whitelist and for CSO/ph queries. OverbiteFF always uses port 70. For Chromium and Google Chrome, Burrow is available. It redirects gopher:// URLs to a proxy. In the past an Overbite proxy-based extension for these browsers was available but is no longer maintained and does not work with the current (>23) releases. For Konqueror, Kio gopher is available. Gopher over HTTP gateways Users of Web browsers that have incomplete or no support for Gopher can access content on Gopher servers via a server gateway or proxy server that converts Gopher menus into HTML; known proxies are the Floodgap Public Gopher proxy and Gopher Proxy. Similarly, certain server packages such as GN and PyGopherd have built-in Gopher to HTTP interfaces. The Squid proxy software can act as a gateway for any gopher:// URL, serving it as HTTP content and enabling any browser or web agent to access gopher content easily. Gopher clients for mobile devices Some have suggested that the bandwidth-sparing simple interface of Gopher would be a good match for mobile phones and personal digital assistants (PDAs), but so far, mobile adaptations of HTML and XML and other simplified content have proven more popular. The PyGopherd server provides a built-in WML front-end to Gopher sites served with it. The early 2010s saw a renewed interest in native Gopher clients for popular smartphones: Overbite, an open source client for Android 1.5+, was released in alpha stage in 2010. PocketGopher was also released in 2010, along with its source code, for several Java ME compatible devices. Gopher Client was released in 2016 as a proprietary client for iPhone and iPad devices and is currently maintained. Other Gopher clients Gopher popularity was at its height at a time when there were still many equally competing computer architectures and operating systems. As a result, there are several Gopher clients available for Acorn RISC OS, AmigaOS, Atari MiNT, CMS, DOS, classic Mac OS, MVS, NeXT, OS/2 Warp, most UNIX-like operating systems, VMS, Windows 3.x, and Windows 9x. GopherVR was a client designed for 3D visualization, and there is even a Gopher client in MOO. The majority of these clients are hard-coded to work on TCP port 70.
Server software Because the protocol is trivial to implement in a basic fashion, there are many server packages still available, and some are still maintained. See also References External links List of public Gopher servers (Gopher link) (proxied link) An announcement of Gopher on the Usenet 8 October 1991 Why is Gopher Still Relevant? — a position statement on Gopher's survival The Web may have won, but Gopher tunnels on — an article published by the technology discussion site Ars Technica about the Gopher community of enthusiasts as of 5 November 2009 History of Gopher — Article in MinnPost Gopherpedia — Gopher interface for Wikipedia (Gopher link) (proxied link, by another proxy) Mark McCahill and Farhad Anklesaria – gopher inventors – explain the evolution of gopher: part 1, part 2 Proposed Gopher+ Specification (gopher link) History of the Internet Internet Standards University of Minnesota software URI schemes
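To illustrate the point in the Server software section above that the protocol is trivial to implement, here is a minimal, hypothetical server sketch (Python standard library only; it serves a single hard-coded menu on a non-privileged port and is not any of the packages referred to above):

import socketserver

HOST, PORT = "localhost", 7070  # port 70 normally requires elevated privileges

# A hard-coded menu of tab-separated selector lines, terminated by a lone "."
MENU = (
    "iWelcome to a toy Gopher server\tfake\t(NULL)\t0\r\n"
    f"0About this server\t/about.txt\t{HOST}\t{PORT}\r\n"
    ".\r\n"
)

DOCUMENTS = {"/about.txt": "This is a minimal demonstration server.\r\n.\r\n"}

class GopherHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # A request is a single selector line terminated by CR + LF.
        selector = self.rfile.readline().decode("ascii", "replace").strip()
        if selector == "":
            reply = MENU                      # empty selector: default menu
        else:
            reply = DOCUMENTS.get(selector,   # unknown selector: error item (type 3)
                                  "3Not found\tfake\t(NULL)\t0\r\n.\r\n")
        self.wfile.write(reply.encode("ascii"))

if __name__ == "__main__":
    with socketserver.TCPServer((HOST, PORT), GopherHandler) as server:
        server.serve_forever()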
51887628
https://en.wikipedia.org/wiki/Witold%20Kosi%C5%84ski
Witold Kosiński
Witold Kosiński (August 13, 1946 in Kraków – March 14, 2014 in Warsaw) was a Polish mathematician and computer scientist. He was the lead inventor and main propagator of Ordered Fuzzy Numbers (now named after him: Kosiński's Fuzzy Numbers). For many years Professor Witold Kosiński was associated with the Institute of Fundamental Technological Research of the Polish Academy of Sciences. He also served as the Vice-Chancellor of the Polish-Japanese Institute of Information Technology (PJIIT, now called the Polish-Japanese Academy of Information Technology) in Warsaw and as the Head of the Artificial Systems Division at the PJIIT. Finally, he was a lecturer at the Faculty of Mathematics, Physics and Technical Sciences of the Kazimierz Wielki University in Bydgoszcz. Professor Kosiński was a researcher specialising in continuum mechanics, thermodynamics, and wave propagation as well as in the mathematical foundations of information technology, particularly artificial intelligence, fuzzy logic and evolutionary algorithms. His fields of research also comprised applied mathematics and partial differential equations of hyperbolic type as well as neural networks and computational intelligence. He was a scientist, a mentor to scientific staff and several generations of students, and an active athlete. Education and career Professor Kosiński defended his Master's Thesis, "On the existence of functions of two variables satisfying some differential inequality", at the Faculty of Mathematics and Mechanics at the University of Warsaw in 1969. Three years later, in 1972, he obtained a Doctor of Science degree and then in 1984 a further dr hab. ("doktor habilitowany") degree (see: Habilitation) at the Institute of Fundamental Technological Research of the Polish Academy of Sciences (IPPT PAN). He was elevated to the degree of Professor in 1993 through a formal nomination by the President of the Republic of Poland. For over 25 years (1973–1999) he worked at the Institute of Fundamental Technological Research of the Polish Academy of Sciences in Warsaw, first as an Assistant, later as an Associate Professor and finally (in 1993) as a Full Professor. Between 1986 and 1999 he headed the Division of Optical and Computer Methods in Mechanics IPPT PAN (SPOKoMM). In 1999 he obtained the position of Vice-Chancellor for scientific affairs ("Vice-Rektor") at the Polish-Japanese Institute of Information Technology (PJIIT) in Warsaw, a position that he held until 2005. At the PJIIT he was also the Head of the Artificial Systems Division and of the Research Center. In addition he was a member of the PJIIT Senate and of the Council of the Faculty of Information Technology PJIIT. In 1996 he joined the Department of Environmental Mechanics at the Higher Pedagogical School in Bydgoszcz. In 2005, with the establishment of the Kazimierz Wielki University in Bydgoszcz, Kosiński became Head of the Department of Database Systems and Computational Intelligence at the Faculty of Mathematics, Physics and Technical Sciences at the Institute of Mechanics and Applied Computer Science at that university. In 2009, he became Chairman of the Council of this Institute. He managed several scientific projects financed by the Polish State Committee for Scientific Research (KBN). He participated in numerous international conferences and worked as a contract lecturer in Poland (e.g. at Bialystok University of Technology and Warsaw University of Life Sciences) and abroad.
International collaboration Between 1975 and 1976 he was in the US as a National Science Foundation post-doctoral research fellow at the Division of Materials Engineering, the University of Iowa. Later, as a research fellow of the Alexander von Humboldt Foundation, he undertook several research visits to German scientific institutes, including the Institute for Applied Mathematics of the University of Bonn (1983–1985), the Institute for Applied Mathematics of Heidelberg University and the Institute of Mechanics of the Technische Universität Darmstadt (1988). Subsequently, he became a visiting professor at the following institutions: LMM, Universite Pierre et Marie Curie (Paris VI) and Universite d'Aix – Marseille III, France; the Department of Mathematical and Computer Sciences, Loyola University New Orleans, USA; the Dublin Institute for Advanced Studies, Ireland; Nagoya University, Japan; and the departments of mathematics of the universities of Genova, Ferrara, Catania, Napoli, Potenza, Univ. Roma "La Sapienza" and Terza, Italy, and Rostock, Germany. In addition, he was a research fellow of the Japan International Cooperation Agency (JICA) and participated in several research and training programs in Japan. Books and work as a supervisor Kosiński was an editor of many volumes of collective works and conference materials and an author of two monographs: W. Kosiński: Field Singularities and Wave Analysis in Continuum Mechanics. Ellis Horwood Series: Mathematics and Its Applications, Ellis Horwood Ltd., Chichester, Halsted Press: a Division of John Wiley & Sons, New York Chichester Brisbane Toronto, PWN – Polish Scientific Publishers, Warsaw (1986) W. Kosiński: Wstęp do teorii osobliwości pola i analizy fal. PWN, Warsaw – Poznań (1981) as well as over 230 other scientific publications. He was a supervisor of 11 Ph.D. theses (10 of which dealt with informatics) and a number of Engineering Diploma works and Master's Theses. He was a member of editorial boards of several journals as well as a member of numerous Polish and international scientific associations. Between 2000 and 2011 he was the Editor-in-Chief of the Annales Societatis Mathematicae Polonae Series III. Mathematica Applicanda (a journal of the Polish Mathematical Society). References Recipients of the Silver Cross of Merit (Poland) Burials at Powązki Cemetery 1946 births 2014 deaths Polish mathematicians
604658
https://en.wikipedia.org/wiki/Structured%20systems%20analysis%20and%20design%20method
Structured systems analysis and design method
Structured Systems Analysis and Design Method (SSADM), originally released as a methodology, is a systems approach to the analysis and design of information systems. SSADM was produced for the Central Computer and Telecommunications Agency, a UK government office concerned with the use of technology in government, from 1980 onwards. Overview SSADM is a waterfall method for the analysis and design of information systems. SSADM can be thought to represent a pinnacle of the rigorous document-led approach to system design, and contrasts with more contemporary agile methods such as DSDM or Scrum. SSADM is one particular implementation and builds on the work of different schools of structured analysis and development methods, such as Peter Checkland's soft systems methodology, Larry Constantine's structured design, Edward Yourdon's Yourdon Structured Method, Michael A. Jackson's Jackson Structured Programming, and Tom DeMarco's structured analysis. The names "Structured Systems Analysis and Design Method" and "SSADM" are registered trademarks of the Office of Government Commerce (OGC), which is an office of the United Kingdom's Treasury. History The principal stages in the development of the Structured Systems Analysis and Design Method were: 1980: Central Computer and Telecommunications Agency (CCTA) evaluate analysis and design methods. 1981: Consultants working for Learmonth & Burchett Management Systems, led by John Hall, chosen to develop SSADM v1. 1982: John Hall and Keith Robinson left to found Model Systems Ltd; LBMS later developed LSDM, their proprietary version. 1983: SSADM made mandatory for all new information system developments 1984: Version 2 of SSADM released 1986: Version 3 of SSADM released, adopted by NCC 1988: SSADM Certificate of Proficiency launched, SSADM promoted as ‘open’ standard 1989: Moves towards Euromethod, launch of CASE products certification scheme 1990: Version 4 launched 1993: SSADM V4 Standard and Tools Conformance Scheme 1995: SSADM V4+ announced, V4.2 launched 2000: CCTA renamed SSADM as "Business System Development". The method was repackaged into 15 modules and another 6 modules were added. SSADM techniques The three most important techniques that are used in SSADM are as follows: Logical Data Modelling The process of identifying, modelling and documenting the data requirements of the system being designed. The result is a data model containing entities (things about which a business needs to record information), attributes (facts about the entities) and relationships (associations between the entities). Data Flow Modelling The process of identifying, modelling and documenting how data moves around an information system. Data Flow Modelling examines processes (activities that transform data from one form to another), data stores (the holding areas for data), external entities (what sends data into a system or receives data from a system), and data flows (routes by which data can flow). Entity Event Modelling A two-stranded process: Entity Behaviour Modelling, identifying, modelling and documenting the events that affect each entity and the sequence (or life history) in which these events occur, and Event Modelling, designing for each event the process to coordinate entity life histories. Stages The SSADM method involves the application of a sequence of analysis, documentation and design tasks concerned with the following.
Stage 0 – Feasibility study In order to determine whether or not a given project is feasible, there must be some form of investigation into the goals and implications of the project. For very small scale projects this may not be necessary at all, as the scope of the project is easily understood. In larger projects, the feasibility study may be done but in an informal sense, either because there is no time for a formal study or because the project is a “must-have” and will have to be done one way or the other. A data flow diagram is used to describe how the current system works and to visualize the known problems. When a feasibility study is carried out, there are four main areas of consideration: Technical – is the project technically possible? Financial – can the business afford to carry out the project? Organizational – will the new system be compatible with existing practices? Ethical – is the impact of the new system socially acceptable? To answer these questions, the feasibility study is effectively a condensed version of a comprehensive systems analysis and design. The requirements and usages are analyzed to some extent, some business options are drawn up and even some details of the technical implementation. The product of this stage is a formal feasibility study document. SSADM specifies the sections that the study should contain, including any preliminary models that have been constructed and also details of rejected options and the reasons for their rejection. Stage 1 – Investigation of the current environment The developers of SSADM understood that in almost all cases there is some form of current system, even if it is entirely composed of people and paper. Through a combination of interviewing employees, circulating questionnaires, observations and existing documentation, the analyst comes to a full understanding of the system as it is at the start of the project. This serves many purposes. Stage 2 – Business system options Having investigated the current system, the analyst must decide on the overall design of the new system. To do this, he or she, using the outputs of the previous stage, develops a set of business system options. These are different ways in which the new system could be produced, varying from doing nothing to throwing out the old system entirely and building an entirely new one. The analyst may hold a brainstorming session so that as many and as varied ideas as possible are generated. The ideas are then collected into options which are presented to the user. The options consider the following: the degree of automation the boundary between the system and the users the distribution of the system, for example, is it centralized to one office or spread out across several? cost/benefit impact of the new system Where necessary, the option will be documented with a logical data structure and a level 1 data-flow diagram. The users and analyst together choose a single business option. This may be one of the ones already defined or may be a synthesis of different aspects of the existing options. The output of this stage is the single selected business option together with all the outputs of the feasibility stage. Stage 3 – Requirements specification This is probably the most complex stage in SSADM. Using the requirements developed in stage 1 and working within the framework of the selected business option, the analyst must develop a full logical specification of what the new system must do. The specification must be free from error, ambiguity and inconsistency.
By logical, we mean that the specification does not say how the system will be implemented but rather describes what the system will do. To produce the logical specification, the analyst builds the required logical models for both the data-flow diagrams (DFDs) and the Logical Data Model (LDM), consisting of the Logical Data Structure (referred to in other methods as entity relationship diagrams) and full descriptions of the data and its relationships. These are used to produce function definitions of every function which the users will require of the system, Entity Life-Histories (ELHs) which describe all events through the life of an entity, and Effect Correspondence Diagrams (ECDs) which describe how each event interacts with all relevant entities. These are continually matched against the requirements and, where necessary, the requirements are added to and completed. The product of this stage is a complete requirements specification document which is made up of: the updated data catalogue the updated requirements catalogue the processing specification which in turn is made up of user role/function matrix function definitions required logical data model entity life-histories effect correspondence diagrams Stage 4 – Technical system options This stage is the first step towards a physical implementation of the new system. As with the Business System Options, in this stage a large number of options for the implementation of the new system are generated. This is narrowed down to two or three to present to the user, from which the final option is chosen or synthesized. However, the considerations are quite different, being: the hardware architectures the software to use the cost of the implementation the staffing required the physical limitations, such as the space occupied by the system the distribution, including any networks that it may require the overall format of the human computer interface All of these aspects must also conform to any constraints imposed by the business, such as available money and standardization of hardware and software. The output of this stage is a chosen technical system option. Stage 5 – Logical design Though the previous stage specifies details of the implementation, the outputs of this stage are implementation-independent and concentrate on the requirements for the human computer interface. The logical design specifies the main methods of interaction in terms of menu structures and command structures. One area of activity is the definition of the user dialogues. These are the main interfaces with which the users will interact with the system. Other activities are concerned with analyzing both the effects of events in updating the system and the need to make inquiries about the data on the system. Both of these use the events, function descriptions and effect correspondence diagrams produced in stage 3 to determine precisely how to update and read data in a consistent and secure way. The product of this stage is the logical design, which is made up of: Data catalogue Required logical data structure Logical process model – includes dialogues and model for the update and inquiry processes Stage 6 – Physical design This is the final stage where all the logical specifications of the system are converted to descriptions of the system in terms of real hardware and software. This is a very technical stage and a simple overview is presented here. The logical data structure is converted into a physical architecture in terms of database structures.
The exact structure of the functions and how they are implemented is specified. The physical data structure is optimized where necessary to meet size and performance requirements. The product is a complete Physical Design that can tell software engineers how to build the system in terms of specific hardware and software and to the appropriate standards. References Keith Robinson, Graham Berrisford: Object-oriented SSADM, Prentice Hall International (UK), Hemel Hempstead. External links What is SSADM? at webopedia.com Introduction to Methodologies and SSADM Case study using pragmatic SSADM Structured Analysis Wiki Information systems Software design Systems analysis
608668
https://en.wikipedia.org/wiki/Proof%20of%20concept
Proof of concept
Proof of concept (POC), also known as proof of principle, is a realization of a certain method or idea in order to demonstrate its feasibility, or a demonstration in principle with the aim of verifying that some concept or theory has practical potential. A proof of concept is usually small and may or may not be complete. These collaborative trials aim to test the feasibility of business concepts and proposals to solve business problems and accelerate business innovation goals. Usage history The term has been in use since 1967. In a 1969 hearing of the Committee on Science and Astronautics, Subcommittee on Advanced Research and Technology, proof of concept was defined as follows: One definition of the term "proof of concept" was by Bruce Carsten in the context of a "proof-of-concept prototype" in his magazine column "Carsten's Corner" (1989): The column also provided definitions for the related but distinct terms 'breadboard' (a term used since 1940), 'prototype', 'engineering prototype', and 'brassboard'. Examples Filmmaking Sky Captain and the World of Tomorrow, 300, and Sin City were all shot in front of a greenscreen with almost all backgrounds and props computer-generated. All three used proof-of-concept short films. In the case of Sin City, the short film became the prologue of the final film. Pixar sometimes creates short animated films that use a difficult or untested technique. Their short film Geri's Game used techniques for the animation of cloth and of human facial expressions later used in Toy Story 2. Similarly, Pixar created several short films as proofs of concept for new techniques for water motion, sea anemone tentacles, and a slowly appearing whale in preparation for the production of Finding Nemo. Engineering In engineering and technology, a rough prototype of a new idea is often constructed as a "proof of concept". For example, a working concept of an electrical device may be constructed using a breadboard. A patent application often requires a demonstration of functionality prior to being filed. Some universities have proof of concept centers to "fill the 'funding gap'" for "seed-stage investing" and "accelerate the commercialization of university innovations". Proof of concept centers provide "seed funding to novel, early stage research that most often would not be funded by any other conventional source". Business development In the field of business development and sales, a vendor may allow a prospective customer to try a product. This use of proof of concept helps establish viability, isolate technical issues, and suggest an overall direction, as well as provide feedback for budgeting and other forms of internal decision-making processes. In these cases, the proof of concept may mean the use of specialized sales engineers to ensure that the vendor makes a best-possible effort. Security In both computer security and encryption, proof of concept refers to a demonstration that in principle shows how a system may be protected or compromised, without the necessity of building a complete working vehicle for that purpose. Winzapper was a proof of concept which possessed the bare minimum of capabilities needed to selectively remove an item from the Windows Security Log, but it was not optimized in any way.
Software development In software development, the term 'proof of concept' often characterizes several distinct processes with different objectives and participant roles: vendor business roles may utilize a proof of concept to establish whether a system satisfies some aspect of the purpose it was designed for. Once a vendor is satisfied, a prototype is developed which is then used to seek funding or to demonstrate to prospective customers. The key benefits of the proof of concept in software development are: The possibility to choose the best technology stack for the software (application or web platform) A higher probability of investors' interest in the future software product The ability to simplify and improve the ease of testing and validating ideas for the software's functionality Receiving valuable feedback from a target audience (users) even before building a full-scope system Onboarding the first clients before an official software release A 'steel thread' is a technical proof of concept that touches all of the technologies in a solution. By contrast, a 'proof of technology' aims to determine the solution to some technical problem (such as how two systems might integrate) or to demonstrate that a given configuration can achieve a certain throughput. No business users need be involved in a proof of technology. A pilot project refers to an initial roll-out of a system into production, targeting a limited scope of the intended final solution. The scope may be limited by the number of users who can access the system, the business processes affected, the business partners involved, or other restrictions as appropriate to the domain. The purpose of a pilot project is to test, often in a production environment. Tech demos are designed as proofs of concept for the development of video games. They can demonstrate graphical or gameplay capabilities crucial for particular games. Drug development Although not suggested by natural language, and in contrast to usage in other areas, proof of principle and proof of concept are not synonymous in drug development. A third term, proof of mechanism, is closely related and is also described here. All of these terms lack rigorous definitions and exact usage varies between authors, between institutions and over time. The descriptions given below are intended to be informative and practically useful. The underlying principle is related to the use of biomarkers as surrogate endpoints in early clinical trials. In early development it is not practical to directly measure that a drug is effective in treating the desired disease, and a surrogate endpoint is used to guide whether or not it is appropriate to proceed with further testing. For example, although it cannot be determined early that a new antibiotic cures patients with pneumonia, early indicators would include that the drug is effective in killing bacteria in laboratory tests, or that it reduces temperature in infected patients—such a drug would merit further testing to determine the appropriate dose and duration of treatment. A new anti-hypertension drug could be shown to reduce blood pressure, indicating that it would be useful to conduct more extensive testing of long-term treatment in the expectation of showing reductions in stroke (cerebrovascular accident) or heart attack (myocardial infarction). Surrogate endpoints are often based on laboratory blood tests or imaging investigations like X-ray or CT scan.
Proof of mechanism or PoM relates to the earliest stages of drug development, often pre-clinical (i.e., before trialling the drug on humans, or before trialling with research animals). It could be based on showing that the drug interacts with the intended molecular receptor or enzyme, and/or affects cell biochemistry in the desired manner and direction. Proof of principle or PoP relates to early clinical development and typically refers to an evaluation of the effect of a new treatment on disease biomarkers, but not the clinical endpoints of the condition. Early stage clinical trials may aim to demonstrate Proof of Mechanism, or Proof of Principle, or both. A decision is made at this point as to whether to progress the drug into later development, or if it should be dropped. Proof of concept or PoC refers to early clinical drug development, conventionally divided into the phases of clinical research Phase I ("first-in-humans") and Phase IIA. Phase I is typically conducted with a small number of healthy volunteers who are given single doses or short courses of treatment (e.g., up to 2 weeks). Studies in this phase aim to show that the new drug has some of the desired clinical activity (e.g., that an experimental anti-hypertensive drug actually has some effect on reducing blood pressure), that it can be tolerated when given to humans, and to give guidance as to dose levels that are worthy of further study. Other Phase I studies aim to investigate how the new drug is absorbed, distributed, metabolised and excreted (ADME studies). Phase IIA is typically conducted in up to 100 patients with the disease of interest. Studies in this Phase aim to show that the new drug has a useful amount of the desired clinical activity (e.g., that an experimental anti-hypertensive drug reduces blood pressure by a useful amount), that it can be tolerated when given to humans in the longer term, and to investigate which dose levels might be most suitable for eventual marketing. A decision is made at this point as to whether to progress the drug into later development, or if it should be dropped. If the drug continues, it will progress into later stage clinical studies, termed Phase IIB and Phase III. Phase III studies involve larger numbers of patients—commonly multicenter trials—treated at doses and durations representative of marketed use, and in randomised comparison to placebo and/or existing active drugs. They aim to show convincing, statistically significant evidence of efficacy and to give a better assessment of safety than is possible in smaller, short-term studies. A decision is made at this point as to whether the drug is effective and safe, and if so an application is made to regulatory authorities (such as the US Food and Drug Administration FDA and the European Medicines Agency) for the drug to receive permission to be marketed for use outside of clinical trials. Clinical trials can continue after marketing authorization has been received, for example, to better delineate safety, to determine appropriate use alongside other drugs or to investigate additional uses. See also 3D printing Case study Concept car Feasibility study PoCGTFO Prototype Sanity testing Tech demo Technology readiness level Technology demonstration Trinity (nuclear test) References External links Evaluation methods
28970521
https://en.wikipedia.org/wiki/Mus2
Mus2
Mus2 is a music application for the notation of microtonal works and, specifically, Turkish maqam music. Unlike most other scorewriters, Mus2 allows the user to work in almost any tuning system with customizable accidentals and play back the score with accurate intonation. The application has also received praise for its clean interface and usability. History Mus2 (from musiki, the Ottoman Turkish word for "music") was originally the name of a music application developed by M. Kemal Karaosmanoğlu for the notation of Turkish music pieces. However, this software was never publicly released as it was not deemed ready for general use. Turkish software developer Utku Uzmen independently began working on the microtonal notation application Nihavent in September 2009, and released the first beta version in May 2010. Nihavent went through several iterations before being picked up by Data-Soft for distribution, and the application was released on September 15, 2010 under the Mus2 name. Features The foremost feature of Mus2 is its ability to re-tune a staff to any tuning system using absolute frequencies, rationals and cents. Additionally, the user can import music symbols from graphics files and fonts for use as accidentals with arbitrary cent values. Turkish music theorist and composer Ozan Yarman has used this capability of Mus2 to devise a notation system, Mandalatura, for a 79-tone kanun that he also designed himself. Microtonal support isn't limited to symbols; when a score using an alternative tuning system is played back, Mus2 performs the piece with correct intonation using acoustic and electronic instrument sound samples. The program can work with uncommon time signatures such as 7/6 and create tuplets with ratios such as 10:7. Version 2.0 of the software adds MIDI recording capabilities with a simple sequencer and is able to map the keys of a MIDI instrument to any tuning in tandem with the built-in microtonal sampler. This also opens up the software to use as a microtonal instrument. Mus2 has been noted for its simple user interface and ease of use. The notation tools in the program are presented in a tool strip which only shows the relevant options for the selected tool. When a notation symbol is placed on the score paper, its layout and position is automatically determined, usually with no need for manual adjustment by the user. Mus2 uses its own file formats for storing scores, tunings and accidentals, which can include metadata for easy indexing and cataloguing. It is also possible to export the score to a variety of audio and graphics formats. Mus2 has been developed with the Qt framework and is available for Windows and Mac OS X. See also List of music software References Scorewriters Software that uses Qt
50042361
https://en.wikipedia.org/wiki/Katie%20Moussouris
Katie Moussouris
Katie Moussouris is an American computer security researcher, entrepreneur, and pioneer in vulnerability disclosure, and is best known for her ongoing work advocating responsible security research. Previously a member of @stake, she created the bug bounty program at Microsoft and was directly involved in creating the U.S. Department of Defense's first bug bounty program for hackers. She previously served as Chief Policy Officer at HackerOne, a vulnerability disclosure company based in San Francisco, California, and currently is the founder and CEO of Luta Security. Biography Moussouris was interested in computers at a young age and learned to program in BASIC on a Commodore 64 that her mother bought her in 3rd grade. She was the first girl to take AP Computer Science at her high school. She attended Simmons College to study molecular biology and mathematics and simultaneously worked on the Human Genome Project at the MIT Whitehead Institute. While at Whitehead she transitioned from a lab assistant to a systems administrator role, and after three years she became the systems administrator for the MIT Department of Aeronautics and Astronautics, where she helped design the computer system for a new lab that was to open in 2000. During this time she also worked as the systems administrator at the Harvard School of Engineering and Applied Sciences. She moved to California to work as a Linux developer at Turbolinux and started their computer security response program. She was active within the West Coast hacker scene and formally joined @stake as a penetration tester in 2002 by invitation of Chris Wysopal. Symantec Moussouris joined Symantec in October 2004 when they acquired @stake. While there, she founded and managed Symantec Vulnerability Research in 2004, which was the first program to allow Symantec researchers to publish vulnerability research. Microsoft In May 2007, Moussouris left Symantec to join Microsoft as a security strategist. She founded the Microsoft Vulnerability Research (MSVR) program, announced at BlackHat 2008. The program has coordinated the response to several significant vulnerabilities, including Dan Kaminsky's DNS flaw, and has also actively looked for bugs in third-party software affecting Microsoft customers (subsequent examples of this include Google's Project Zero). From September 2010 until May 2014, Moussouris was the Senior Security Strategist Lead at Microsoft, where she ran the Security Community Outreach and Strategy team for Microsoft as part of the Microsoft Security Response Center (MSRC) team. She instigated the Microsoft BlueHat Prize for Advancement of Exploit Mitigations, which awarded over $260,000 in prizes to researchers at BlackHat USA 2012. The grand prize of $200,000 was at the time the largest cash payout being offered by a software vendor. She also created Microsoft's first bug bounty program, which paid over $253,000 and received 18 vulnerabilities over the course of her tenure. ISO vulnerability disclosure standard Moussouris has helped edit the ISO/IEC 29147 document since around 2008. In April 2016, ISO made the standard freely available at no charge after a request from Moussouris and the CERT Coordination Center's Art Manion. HackerOne In May 2014, Moussouris was named the Chief Policy Officer at HackerOne, a vulnerability disclosure company based in San Francisco, California. 
In this role, Moussouris was responsible for the company's vulnerability disclosure philosophy, and worked to promote and legitimize security research among organizations, legislators and policy makers. "Hack the ..." series While still at Microsoft, Moussouris began discussing a bug bounty program with the federal government; she continued these talks when she moved to HackerOne. In March 2016, Moussouris was directly involved in creating the Department of Defense's "Hack the Pentagon" pilot program, organized and vetted by HackerOne. It was the first bug bounty program in the history of the US federal government. Moussouris followed up the Pentagon program with "Hack the Air Force". HackerOne and Luta Security are partnering to deliver up to 20 bug bounty challenges over three years to the Defense Department. Luta Security In April 2016, Moussouris founded Luta Security, a consultancy to help organizations and governments work collaboratively with hackers through bug bounty programs. New America Fellow During 2015-2016 and 2016-2017, Katie Moussouris served as a Cybersecurity Fellow at New America, a U.S.-based think tank. Wassenaar Arrangement amendment In 2013, the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies was amended to include "intrusion software". Moussouris wrote an op-ed in Wired criticizing the move as harmful to the vulnerability disclosure industry due to the overly broad definition, and encouraged security experts to write in to help regulators understand how to make the right changes. She was invited as a technical expert to directly assist in the US Wassenaar Arrangement negotiations, and helped rewrite the amendment to adopt end-use decontrol exemptions based on the intent of the user. Exploit labor market research Moussouris was a visiting scholar at the MIT Sloan School of Management and an affiliate researcher at the Harvard Belfer Center for Science and International Affairs, where she conducted economic research on the labor market for security bugs. She coauthored a book chapter on the first system dynamics model of the vulnerability economy and exploit market, published by MIT Press in 2017. Congressional testimony In 2018, Moussouris testified in front of the U.S. Senate Subcommittee on Consumer Protection, Product Safety, Insurance, and Data Security about security research for defensive purposes. In 2021, Moussouris testified in front of the U.S. House Committee on Science, Space, & Technology about improving the cybersecurity of software supply chains. Anuncia Donecia Songsong Manglona Lab for Gender and Economic Equity In 2021, Moussouris donated $1 million to found the Anuncia Donecia Songsong Manglona Lab for Gender and Economic Equity at Penn State Law, named after her mother. The “Manglona Lab” will start with a gender equity litigation clinic intended to address workplace financial discrimination while promoting economic equity under the law. Awards In 2014, SC Magazine named Moussouris to its Women in IT Security list. She was also named as one of "10 Women in Information Security That Everyone Should Know," and the "One To Watch" among the 2011 Women of Influence awards. In 2018 she was featured among "America's Top 50 Women In Tech" by Forbes. Presentations Night of the Living ISO Draft on Vulnerability Disclosure, Symposium 2010. The Wolves of Vuln Street: The 1st Dynamic Systems Model of the 0day Market, RSA Conference 2015.
Panel: How the Wassenaar Arrangement's Export Control of "Intrusion Software" Affects the Security Industry, BlackHatUSA 2015 Swinging From the Cyberlier: How to Hack Like Tomorrow Doesn't Exist Without Flying Sideways of Regulations, Kiwicon 2015 Publications and articles "Not All Hackers are Evil". Time. Retrieved April 4, 2016. "Vulnerability Disclosure Deja Vu: Prosecute Crime Not Research". Dark Reading. Retrieved April 4, 2016. "Mad World: The Truth About Bug Bounties". Dark Reading. Retrieved April 4, 2016. "How I Got Here: Katie Moussouris". Threat Post. Retrieved April 6, 2016. "Hackers Can Be Helpers". The New York Times. Retrieved June 18, 2017. "Administration should continue to seek changes to international cyber export controls". The Hill. Retrieved June 18, 2017. "The Time Has Come to Hack the Planet". Threatpost. Retrieved September 24, 2017. Microsoft lawsuit In September 2015, Moussouris filed a discrimination class-action lawsuit against Microsoft in federal court in Seattle. She alleged that Microsoft hiring practices upheld a practice of sex discrimination against women in technical and engineering roles with respect to performance evaluations, pay, promotions, and other terms and conditions of employment. References External links Luta Security HackerOne Living people American technology writers People in information technology Women technology writers Microsoft people NortonLifeLock people 21st-century American non-fiction writers 21st-century American women writers American women non-fiction writers Year of birth missing (living people) Computer security specialists
61359231
https://en.wikipedia.org/wiki/Syed%20Waqar%20Jaffry
Syed Waqar Jaffry
Syed Waqar ul Qounain Jaffry (born 1978) is a Pakistani academic and researcher in the field of computer science and artificial intelligence. He is a full Professor of Artificial Intelligence, Director of the National Centre of Artificial Intelligence, and Chairman of the Department of Information Technology at the University of the Punjab, Lahore. Early life Jaffry was born in 1978 to an educated, Punjabi-speaking, Syed family in Lahore, Pakistan. He received his college degrees from Government College Lahore, and an MSc in Computer Science from the Department of Computer Science, University of the Punjab, Lahore. He earned a PhD degree, awarded with the highest accolade (excellent), in Computer Science and Artificial Intelligence at the Agent Systems Research Group, Department of Artificial Intelligence, Vrije Universiteit Amsterdam. Career Jaffry started his career as a lecturer at the University of the Punjab and was elevated to full professor. He has been associated with the Punjab University College of Information Technology (PUCIT) since 2000. He has also served in various academic and administrative positions, such as Graduate Program Coordinator, ACM Student Chapter Adviser, Chairman of the Computer Society, Secretary of the Doctoral Program Committee (DPC), and member of the Graduate Financial Support Committee, Research Advisory Committee, and Curriculum Development Committee. Jaffry is currently serving as the Director of the National Centre of Artificial Intelligence (NCAI) and Chairman of the Department of Information Technology, University of the Punjab, Lahore. Services Jaffry is serving as Vice Chair of the IEEE Computer Society, Lahore Section. He served as Technical Co-Chair of the IEEE International Conference on Advancement in Computational Sciences (ICACS). As its chief organizer, he organized the National Software Exhibition and Competition (SOFTExpo) for five consecutive years from 2013 to 2017. In this event, besides software and programming competitions, he initiated and executed the first ever National Graduate Research Symposium. In K-12 education, he is the Convener of the External Review Committee for Computer Science and Information Technology Textbooks at the Punjab Curriculum and Textbook Board (PCTB), Lahore. Besides being a textbook evaluator at the PCTB, he also serves as the Convener of the Review Committee for the Review of the Curriculum of Computer Science and Information Technology for Grades VI-XII at the Punjab Curriculum and Textbook Board, Lahore. In this committee he designed a curriculum for the subject of Computer Science and Information Technology for Grades VI-XII in Punjab province. Published works Jaffry has contributed to over 70 books, book chapters, research articles and policy drafts in well-reputed international conferences, journals and public forums. Honours and awards Senior Member, Institute of Electrical and Electronics Engineers (IEEE) Senior Member, Association for Computing Machinery (ACM) Chair, Conferences Committee, Institute of Electrical and Electronics Engineers (IEEE), Lahore Section. PhD Approved Supervisor, Higher Education Commission (HEC), Islamabad, Pakistan. Evaluator, Research Funding Proposals of Ignite, National Information and Communication Technology Research and Development (ICTRnD) Ignite Fund, Islamabad, Pakistan. Program Evaluator, National Computing Education Accreditation Council (NCEAC), Higher Education Commission (HEC), Islamabad, Pakistan. Member Board of Studies, Department of Computer Science and Information Technology, University of Engineering and Technology, Peshawar, Pakistan.
Member Board of Studies, Centre for Data Science, Government College University (GCU), Faisalabad, Pakistan. Director, Agent-based Computational Modeling Laboratory, National Center of Artificial Intelligence (NCAI), Pakistan. Member Judging Panel, National Grassroots ICT Research Initiative (NGIRI), Ignite, National Information and Communication Technology Research and Development (ICTRnD) Fund, Islamabad, Pakistan. Technical Co-Chair, IEEE International Conference on Advancements in Computational Sciences. Vice Chair, IEEE Computer Society (CS), Lahore Section, Lahore, Pakistan. Convener, External Review Committee for Computer Science and Information Technology Textbooks, Punjab Curriculum and Textbook Board, Punjab, Pakistan. Convener, Committee for Single National Curriculum (SNC) of Computer Studies Grades VI-XII, Punjab Curriculum and Textbook Board, Punjab, Pakistan. Convener, Review Committee for the review of Curriculum of Computer Science and Information Technology for Grades VI-XII, Punjab Curriculum and Textbook Board, Punjab, Pakistan. Advisor for Interview, Punjab Public Service Commission, Lahore, Pakistan. References 1978 births Living people University of the Punjab faculty University of the Punjab alumni Vrije Universiteit Amsterdam alumni
48258330
https://en.wikipedia.org/wiki/Vesess
Vesess
Vesess is a design and technology company founded in Sri Lanka in 2004. It was incorporated in the United States as Vesess Inc. in 2007. It provides consultancy services in web design and development, UI and UX design, and online strategy. Vesess also develops and manages a software-as-a-service product: the online invoicing and billing service Hiveage. Products CurdBee CurdBee was an online invoicing application initially developed as an in-house PHP application to invoice clients of Vesess. A public version was developed using Ruby on Rails and launched in June 2008. It was the first step for Vesess in moving from being a web design agency to a product company. CurdBee was mainly used by small businesses and freelancers. It had a free plan which supported unlimited invoicing and unlimited clients, with PayPal and Google Checkout as payment gateways. Paid plans had additional features such as multiple payment gateways, recurring invoices, time and expense tracking and custom domains. CurdBee was discontinued in 2014. Hiveage Hiveage is an invoicing and billing software-as-a-service that replaced CurdBee in 2014. Like its predecessor, Hiveage started as a freemium service, but changed to a fixed-plan pricing strategy in September 2016. It inherited the CurdBee user base, and has grown to cover more than 140 countries. Vgo Vgo was a software-as-a-service product launched by Vesess in September 2015, designed for taxi and logistics companies to compete with other vehicle-for-hire companies such as Uber. It had a web-based management application that supported online booking, automatic and manual dispatching, fleet, staff and customer management, and detailed reporting. Vgo also had a set of mobile applications for customers with smartphones to submit orders, and for drivers to be notified of assignments and track their trips. Vesess and a group of angel investors led by Just In Time Group had invested $1 million into Vgo. Vgo was discontinued in August 2016. Notes External links Vesess website Hiveage website Vgo website Software companies of Sri Lanka Software companies of the United States Sri Lankan companies established in 2004 Companies established in 2004
79673
https://en.wikipedia.org/wiki/Diff
Diff
In computing, the utility diff is a data comparison tool that computes and displays the differences between the contents of files. Unlike edit distance notions used for other purposes, diff is line-oriented rather than character-oriented, but it is like Levenshtein distance in that it tries to determine the smallest set of deletions and insertions to create one file from the other. The utility displays the changes in one of several standard formats, such that both humans and computers can parse the changes and use them for patching. Typically, diff is used to show the changes between two versions of the same file. Modern implementations also support binary files. The output is called a "diff", or a patch, since the output can be applied with the Unix program patch. The output of similar file comparison utilities is also called a "diff"; like the use of the word "grep" for describing the act of searching, the word diff became a generic term for calculating data difference and the results thereof. The POSIX standard specifies the behavior of the "diff" and "patch" utilities and their file formats. History diff was developed in the early 1970s on the Unix operating system, which was emerging from Bell Labs in Murray Hill, New Jersey. The first released version shipped with the 5th Edition of Unix in 1974, and was written by Douglas McIlroy and James Hunt. This research was published in a 1976 paper co-written with James W. Hunt, who developed an initial prototype of diff. The algorithm this paper described became known as the Hunt–Szymanski algorithm. McIlroy's work was preceded and influenced by Steve Johnson's comparison program on GECOS and Mike Lesk's program. Lesk's program also originated on Unix and, like diff, produced line-by-line changes and even used angle-brackets (">" and "<") for presenting line insertions and deletions in the program's output. The heuristics used in these early applications were, however, deemed unreliable. The potential usefulness of a diff tool provoked McIlroy into researching and designing a more robust tool that could be used in a variety of tasks but perform well within the processing and size limitations of the PDP-11's hardware. His approach to the problem resulted from collaboration with individuals at Bell Labs including Alfred Aho, Elliot Pinson, Jeffrey Ullman, and Harold S. Stone. In the context of Unix, the use of the ed line editor provided diff with the natural ability to create machine-usable "edit scripts". These edit scripts, when saved to a file, can, along with the original file, be reconstituted by ed into the modified file in its entirety. This greatly reduced the secondary storage necessary to maintain multiple versions of a file. McIlroy considered writing a post-processor for diff where a variety of output formats could be designed and implemented, but he found it more frugal and simpler to have diff be responsible for generating the syntax and reverse-order input accepted by the ed command. Late in 1984 Larry Wall created a separate utility, patch, releasing its source code on the mod.sources and net.sources newsgroups. This program generalized and extended the ability to modify files with output from diff. Modes in Emacs also allow for converting the format of patches and even editing patches interactively. In diff's early years, common uses included comparing changes in the source of software code and markup for technical documents, verifying program debugging output, comparing filesystem listings and analyzing computer assembly code.
The output format targeted for ed was motivated by the desire to provide compression for a sequence of modifications made to a file. The Source Code Control System (SCCS) and its ability to archive revisions emerged in the late 1970s as a consequence of storing edit scripts from diff. Algorithm The operation of diff is based on solving the longest common subsequence problem. In this problem, given two sequences of items, a b c d f g h j q z and a b c d e f g i j k r x y z, we want to find a longest sequence of items that is present in both original sequences in the same order. That is, we want to find a new sequence which can be obtained from the first original sequence by deleting some items, and from the second original sequence by deleting other items. We also want this sequence to be as long as possible. In this case it is a b c d f g j z. From a longest common subsequence it is only a small step to get diff-like output: if an item is absent in the subsequence but present in the first original sequence, it must have been deleted (as indicated by the '-' marks, below). If it is absent in the subsequence but present in the second original sequence, it must have been inserted (as indicated by the '+' marks). Listing the changed items in the order in which they occur, with their marks, gives: e (+), h (-), i (+), q (-), k (+), r (+), x (+), y (+). Usage The diff command is invoked from the command line, passing it the names of two files: diff original new. The output of the command represents the changes required to transform the original file into the new file. If original and new are directories, then diff will be run on each file that exists in both directories. An option, -r, will recursively descend any matching subdirectories to compare files between directories. The examples in this article use the following two files, original and new: original: This part of the document has stayed the same from version to version. It shouldn't be shown if it doesn't change. Otherwise, that would not be helping to compress the size of the changes. This paragraph contains text that is outdated. It will be deleted in the near future. It is important to spell check this dokument. On the other hand, a misspelled word isn't the end of the world. Nothing in the rest of this paragraph needs to be changed. Things can be added after it. new: This is an important notice! It should therefore be located at the beginning of this document! This part of the document has stayed the same from version to version. It shouldn't be shown if it doesn't change. Otherwise, that would not be helping to compress the size of the changes. It is important to spell check this document. On the other hand, a misspelled word isn't the end of the world. Nothing in the rest of this paragraph needs to be changed. Things can be added after it. This paragraph contains important new additions to this document. The command diff original new produces the following normal diff output: Here, the diff output is shown with colors to make it easier to read. The diff utility does not produce colored output; its output is plain text. However, many tools can show the output with colors by using syntax highlighting. In this traditional output format, a stands for added, d for deleted and c for changed. Line numbers of the original file appear before a/d/c and those of the new file appear after. The less-than and greater-than signs (at the beginning of lines that are added, deleted or changed) indicate which file the lines appear in. Addition lines are added to the original file to appear in the new file. Deletion lines are deleted from the original file to be missing in the new file. 
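To make the longest-common-subsequence idea described above concrete, the following short Python sketch computes an LCS table by dynamic programming and walks it back to produce '+'/'-' marked output for the example sequences. It is a minimal illustration of the principle only, not the algorithm used by any real diff implementation (production tools use more space- and time-efficient algorithms such as Myers's), and the function names are invented for this example.

def lcs_table(a, b):
    # m[i][j] = length of a longest common subsequence of a[:i] and b[:j]
    m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            m[i][j] = m[i - 1][j - 1] + 1 if x == y else max(m[i - 1][j], m[i][j - 1])
    return m

def simple_diff(a, b):
    # Return (mark, item) pairs: ' ' = common, '-' = only in a, '+' = only in b.
    m = lcs_table(a, b)
    i, j, out = len(a), len(b), []
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append((' ', a[i - 1])); i -= 1; j -= 1
        elif m[i - 1][j] >= m[i][j - 1]:
            out.append(('-', a[i - 1])); i -= 1
        else:
            out.append(('+', b[j - 1])); j -= 1
    out.extend(('-', x) for x in reversed(a[:i]))   # leftover deletions at the start
    out.extend(('+', x) for x in reversed(b[:j]))   # leftover insertions at the start
    return list(reversed(out))

old = "a b c d f g h j q z".split()
new = "a b c d e f g i j k r x y z".split()
for mark, item in simple_diff(old, new):
    print(mark, item)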
By default, lines common to both files are not shown. Lines that have moved are shown as added at their new location and as deleted from their old location. However, some diff tools highlight moved lines. Output variations Edit script An ed script can still be generated by modern versions of diff with the -e option. The resulting edit script for this example is as follows: 24a This paragraph contains important new additions to this document. . 17c check this document. On . 11,15d 0a This is an important notice! It should therefore be located at the beginning of this document! . In order to transform the content of file original into the content of file new using , we should append two lines to this diff file, one line containing a w (write) command, and one containing a q (quit) command (e.g. by ). Here we gave the diff file the name mydiff and the transformation will then happen when we run . Context format The Berkeley distribution of Unix made a point of adding the context format () and the ability to recurse on filesystem directory structures (), adding those features in 2.8 BSD, released in July 1981. The context format of diff introduced at Berkeley helped with distributing patches for source code that may have been changed minimally. In the context format, any changed lines are shown alongside unchanged lines before and after. The inclusion of any number of unchanged lines provides a context to the patch. The context consists of lines that have not changed between the two files and serve as a reference to locate the lines' place in a modified file and find the intended location for a change to be applied regardless of whether the line numbers still correspond. The context format introduces greater readability for humans and reliability when applying the patch, and an output which is accepted as input to the patch program. This intelligent behavior isn't possible with the traditional diff output. The number of unchanged lines shown above and below a change hunk can be defined by the user, even zero, but three lines is typically the default. If the context of unchanged lines in a hunk overlap with an adjacent hunk, then diff will avoid duplicating the unchanged lines and merge the hunks into a single hunk. A "" represents a change between lines that correspond in the two files, whereas a "" represents the addition of a line, and a "" the removal of a line. A blank space represents an unchanged line. At the beginning of the patch is the file information, including the full path and a time stamp delimited by a tab character. At the beginning of each hunk are the line numbers that apply for the corresponding change in the files. A number range appearing between sets of three asterisks applies to the original file, while sets of three dashes apply to the new file. The hunk ranges specify the starting and ending line numbers in the respective file. The command produces the following output: *** /path/to/original timestamp --- /path/to/new timestamp *************** *** 1,3 **** --- 1,9 ---- + This is an important + notice! It should + therefore be located at + the beginning of this + document! + This part of the document has stayed the same from version to *************** *** 8,20 **** compress the size of the changes. - This paragraph contains - text that is outdated. - It will be deleted in the - near future. It is important to spell ! check this dokument. On the other hand, a misspelled word isn't the end of the world. --- 14,21 ---- compress the size of the changes. 
It is important to spell ! check this document. On the other hand, a misspelled word isn't the end of the world. *************** *** 22,24 **** --- 23,29 ---- this paragraph needs to be changed. Things can be added after it. + + This paragraph contains + important new additions + to this document. Here, the diff output is shown with colors to make it easier to read. The diff utility does not produce colored output; its output is plain text. However, many tools can show the output with colors by using syntax highlighting. Unified format The unified format (or unidiff) inherits the technical improvements made by the context format, but produces a smaller diff with old and new text presented immediately adjacent. Unified format is usually invoked using the "-u" command line option. This output is often used as input to the patch program. Many projects specifically request that "diffs" be submitted in the unified format, making unified diff format the most common format for exchange between software developers. Unified context diffs were originally developed by Wayne Davison in August 1990 (in unidiff which appeared in Volume 14 of comp.sources.misc). Richard Stallman added unified diff support to the GNU Project's diff utility one month later, and the feature debuted in GNU diff 1.15, released in January 1991. GNU diff has since generalized the context format to allow arbitrary formatting of diffs. The format starts with the same two-line header as the context format, except that the original file is preceded by "---" and the new file is preceded by "+++". Following this are one or more change hunks that contain the line differences in the file. The unchanged, contextual lines are preceded by a space character, addition lines are preceded by a plus sign, and deletion lines are preceded by a minus sign. A hunk begins with range information and is immediately followed with the line additions, line deletions, and any number of the contextual lines. The range information is surrounded by double at signs, and combines onto a single line what appears on two lines in the context format (above). The format of the range information line is as follows: @@ -l,s +l,s @@ optional section heading The hunk range information contains two hunk ranges. The range for the hunk of the original file is preceded by a minus symbol, and the range for the new file is preceded by a plus symbol. Each hunk range is of the format l,s where l is the starting line number and s is the number of lines the change hunk applies to for each respective file. In many versions of GNU diff, each range can omit the comma and trailing value s, in which case s defaults to 1. Note that the only really interesting value is the l line number of the first range; all the other values can be computed from the diff. The hunk range for the original should be the sum of all contextual and deletion (including changed) hunk lines. The hunk range for the new file should be a sum of all contextual and addition (including changed) hunk lines. If hunk size information does not correspond with the number of lines in the hunk, then the diff could be considered invalid and be rejected. Optionally, the hunk range can be followed by the heading of the section or function that the hunk is part of. This is mainly useful to make the diff easier to read. When creating a diff with GNU diff, the heading is identified by regular expression matching. If a line is modified, it is represented as a deletion and addition. 
Since the hunks of the original and new file appear in the same hunk, such changes would appear adjacent to one another. An occurrence of this in the example below is: -check this dokument. On +check this document. On The command diff -u original new produces the following output: --- /path/to/original timestamp +++ /path/to/new timestamp @@ -1,3 +1,9 @@ +This is an important +notice! It should +therefore be located at +the beginning of this +document! + This part of the document has stayed the same from version to @@ -8,13 +14,8 @@ compress the size of the changes. -This paragraph contains -text that is outdated. -It will be deleted in the -near future. - It is important to spell -check this dokument. On +check this document. On the other hand, a misspelled word isn't the end of the world. @@ -22,3 +23,7 @@ this paragraph needs to be changed. Things can be added after it. + +This paragraph contains +important new additions +to this document. Here, the diff output is shown with colors to make it easier to read. The diff utility does not produce colored output; its output is plain text. However, many tools can show the output with colors by using syntax highlighting. Note that to successfully separate the file names from the timestamps, the delimiter between them is a tab character. This is invisible on screen and can be lost when diffs are copy/pasted from console/terminal screens. There are some modifications and extensions to the diff formats that are used and understood by certain programs and in certain contexts. For example, some revision control systems—such as Subversion—specify a version number, "working copy", or any other comment instead of or in addition to a timestamp in the diff's header section. Some tools allow diffs for several different files to be merged into one, using a header for each modified file that may look something like this: Index: path/to/file.cpp The special case of files that do not end in a newline is not handled. Neither the unidiff utility nor the POSIX diff standard define a way to handle this type of files. (Indeed, such files are not "text" files by strict POSIX definitions.) The patch program is not aware even of an implementation specific diff output. Implementations and related programs Changes since 1975 include improvements to the core algorithm, the addition of useful features to the command, and the design of new output formats. The basic algorithm is described in the papers An O(ND) Difference Algorithm and its Variations by Eugene W. Myers and in A File Comparison Program by Webb Miller and Myers. The algorithm was independently discovered and described in Algorithms for Approximate String Matching, by Esko Ukkonen. The first editions of the diff program were designed for line comparisons of text files expecting the newline character to delimit lines. By the 1980s, support for binary files resulted in a shift in the application's design and implementation. GNU diff and diff3 are included in the diffutils package with other diff and patch related utilities. Nowadays there is also a patchutils package that can combine, rearrange, compare and fix context diffs and unified diffs. Formatters and front-ends Postprocessors sdiff and diffmk render side-by-side diff listings and applied change marks to printed documents, respectively. Both were developed elsewhere in Bell Labs in or before 1981. Diff3 compares one file against two other files by reconciling two diffs. 
It was originally conceived by Paul Jensen to reconcile changes made by two people editing a common source. It is also used by revision control systems, e.g. RCS, for merging. Emacs has Ediff for showing the changes a patch would provide in a user interface that combines interactive editing and merging capabilities for patch files. Vim provides vimdiff to compare from two to eight files, with differences highlighted in color. While historically invoking the diff program, modern vim uses git's fork of xdiff library (LibXDiff) code, providing improved speed and functionality. GNU Wdiff is a front end to diff that shows the words or phrases that changed in a text document of written language even in the presence of word-wrapping or different column widths. colordiff is a Perl wrapper for 'diff' and produces the same output but with pretty 'syntax' highlighting. Algorithmic derivatives Utilities that compare source files by their syntactic structure have been built mostly as research tools for some programming languages; some are available as commercial tools. In addition, free tools that perform syntax-aware diff include: C++: zograscope, AST-based. HTML: Daisydiff, html-differ. XML: xmldiffpatch by Microsoft and xmldiffmerge for IBM. JavaScript: astii (AST-based). Multi-language: Pretty Diff (format code and then diff) spiff is a variant of diff that ignores differences in floating point calculations with roundoff errors and whitespace, both of which are generally irrelevant to source code comparison. Bellcore wrote the original version. An HPUX port is the most current public release. spiff does not support binary files. spiff outputs to the standard output in standard diff format and accepts inputs in the C, Bourne shell, Fortran, Modula-2 and Lisp programming languages. LibXDiff is an LGPL library that provides an interface to many algorithms from 1998. An improved Myers algorithm with Rabin fingerprint was originally implemented (as of the final release of 2008), but git and libgit2's fork has since expanded the repository with many of its own. One algorithm called "histogram" is generally regarded as much better than the original Myers algorithm, both in speed and quality. This is the modern version of LibXDiff used by Vim. See also Comparison of file comparison tools Delta encoding Difference operator Edit distance Levenshtein distance History of software configuration management Longest common subsequence problem Microsoft File Compare Revision control Software configuration management Other free file comparison tools cmp comm Kompare tkdiff WinMerge (Microsoft Windows) meld Pretty Diff References Further reading A technique for isolating differences between files A generic implementation of the Myers SES/LCS algorithm with the Hirschberg linear space refinement (C source code) External links JavaScript Implementation 1974 software Free file comparison tools Formal languages Pattern matching Data differencing Standard Unix programs Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands
2005452
https://en.wikipedia.org/wiki/RC%204000%20multiprogramming%20system
RC 4000 multiprogramming system
The RC 4000 Multiprogramming System (also termed Monitor or RC 4000 depending on reference) is a discontinued operating system developed for the RC 4000 minicomputer in 1969. For clarity, this article mostly uses the term Monitor. Overview The RC 4000 Multiprogramming System is historically notable for being the first attempt to break down an operating system into a group of interacting programs communicating via a message passing kernel. RC 4000 was not widely used, but was highly influential, sparking the microkernel concept that dominated operating system research through the 1970s and 1980s. Monitor was created largely by one programmer, Per Brinch Hansen, who worked at Regnecentralen where the RC 4000 was being designed. Leif Svalgaard participated in implementing and testing Monitor. Brinch Hansen found that no existing operating system was suited to the new machine, and was tired of having to adapt existing systems. He felt that a better solution was to build an underlying kernel, which he referred to as the nucleus, that could be used to build up an operating system from interacting programs. Unix, for instance, uses small interacting programs for many tasks, transferring data through a system called pipelines or pipes. However, a large amount of fundamental code is integrated into the kernel, notably things like file systems and program control. Monitor would relocate such code also, making almost the entire system a set of interacting programs, reducing the kernel (nucleus) to a communications and support system only. Monitor used a pipe-like system of shared memory as the basis of its inter-process communication (IPC). Data to be sent from one process to another was copied into an empty memory data buffer, and when the receiving program was ready, back out again. The buffer was then returned to the pool. Programs had a very simple application programming interface (API) for passing data, using an asynchronous set of four methods. Client applications send data with send message and could optionally block using wait answer. Servers used a mirroring set of calls, wait message and send answer. Note that messages had an implicit "return path" for every message sent, making the semantics more like a remote procedure call than Mach's completely input/output (I/O) based system. Monitor divided the application space in two: internal processes were the execution of traditional programs, started on request, while external processes were effectively device drivers. External processes were handled outside of user space by the nucleus, although they could be started and stopped just like any other program. Internal processes were started in the context of the parent that launched them, so each user could effectively build up their own operating system by starting and stopping programs in their own context. Scheduling was left entirely to the programs, if required at all (in the 1960s, computer multitasking was a feature of debatable value). One user could start a session in a pre-emptive multitasking environment, while another might start in a single-user mode to run batch processing at higher speed. Real-time scheduling could be supported by sending messages to a timer process that would only return at the appropriate time. These two areas have seen the vast majority of development since Monitor's release, driving newer designs to use hardware to support messaging, and supporting threads within applications to reduce launch times. 
For instance, Mach required a memory management unit to improve messaging by using the copy-on-write protocol and mapping (instead of copying) data from process to process. Mach also used threading extensively, allowing the external programs, or servers in more modern terms, to easily start up new handlers for incoming requests. Still, Mach IPC was too slow to make the microkernel approach practically useful. This only changed when Jochen Liedtke's L4 microkernel demonstrated IPC overheads reduced by an order-of-magnitude. See also THE multiprogramming system Timeline of operating systems References RC 4000 Software: Multiprogramming System RC 4000 Reference Manual at bitsavers.org Microkernel-based operating systems Microkernels 1969 software
10059597
https://en.wikipedia.org/wiki/GPS%20signals
GPS signals
GPS signals are broadcast by Global Positioning System satellites to enable satellite navigation. Receivers on or near the Earth's surface can determine location, time, and velocity using this information. The GPS satellite constellation is operated by the 2nd Space Operations Squadron (2SOPS) of Space Delta 8, United States Space Force. GPS signals include ranging signals, used to measure the distance to the satellite, and navigation messages. The navigation messages include ephemeris data, used to calculate the position of each satellite in orbit, and information about the time and status of the entire satellite constellation, called the almanac. There are four GPS signal specifications designed for civilian use. In order of date of introduction, these are: L1 C/A, L2C, L5 and L1C. L1 C/A is also called the legacy signal and is broadcast by all currently operational satellites. L2C, L5 and L1C are modernized signals, are only broadcast by newer satellites (or not yet at all), and , none are yet considered to be fully operational for civilian use. In addition, there are restricted signals with published frequencies and chip rates but encrypted coding intended to be used only by authorized parties. Some limited use of restricted signals can still be made by civilians without decryption; this is called codeless and semi-codeless access, and is officially supported. The interface to the User Segment (GPS receivers) is described in the Interface Control Documents (ICD). The format of civilian signals is described in the Interface Specification (IS) which is a subset of the ICD. Common characteristics The GPS satellites (called space vehicles in the GPS interface specification documents) transmit simultaneously several ranging codes and navigation data using binary phase-shift keying (BPSK). Only a limited number of central frequencies are used; satellites using the same frequency are distinguished by using different ranging codes; in other words, GPS uses code division multiple access. The ranging codes are also called chipping codes (in reference to CDMA/DSSS), pseudorandom noise and pseudorandom binary sequences (in reference to the fact that it is predictable, but statistically it resembles noise). Some satellites transmit several BPSK streams at the same frequency in quadrature, in a form of quadrature amplitude modulation. However, unlike typical QAM systems where a single bit stream is split in two half-symbol-rate bit streams to improve spectral efficiency, in GPS signals the in-phase and quadrature components are modulated by separate (but functionally related) bit streams. Satellites are uniquely identified by a serial number called space vehicle number (SVN) which does not change during its lifetime. In addition, all operating satellites are numbered with a space vehicle identifier (SV ID) and pseudorandom noise number (PRN number) which uniquely identifies the ranging codes that a satellite uses. There is a fixed one-to-one correspondence between SV identifiers and PRN numbers described in the interface specification. Unlike SVNs, the SV ID/PRN number of a satellite may be changed (also changing the ranging codes it uses). At any point in time, any SV ID/PRN number is in use by at most a single satellite. A single SV ID/PRN number may have been used by several satellites at different points in time and a single satellite may have used different SV ID/PRN numbers at different points in time. The current SVNs and PRN numbers for the GPS constellation may be found at NAVCEN. 
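As a small illustration of the code-division principle described above, the sketch below correlates a received chip stream against two different pseudorandom codes: the normalized correlation is near 1 only for the code that was actually transmitted and near 0 for any other code, which is what lets a receiver separate satellites sharing the same carrier frequency. The codes here are plain random ±1 sequences standing in for real GPS ranging codes, so the exact values are illustrative only.

import random

def prn_like(seed, n=1023):
    # A stand-in pseudorandom +/-1 chip sequence (not a real GPS ranging code).
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

def correlate(received, code):
    # Normalized correlation of a received chip stream with a local code replica.
    return sum(r * c for r, c in zip(received, code)) / len(code)

code_a = prn_like(1)      # "satellite A" code
code_b = prn_like(2)      # "satellite B" code
received = code_a         # idealized, noise-free reception of satellite A only

print("correlation with code A:", correlate(received, code_a))   # exactly 1.0 here
print("correlation with code B:", correlate(received, code_b))   # close to 0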
Legacy GPS signals The original GPS design contains two ranging codes: the coarse/acquisition (C/A) code, which is freely available to the public, and the restricted precision (P) code, usually reserved for military applications. Coarse/acquisition code The C/A PRN codes are Gold codes with a period of 1023 chips transmitted at 1.023 Mchip/s, causing the code to repeat every 1 millisecond. They are exclusive-ored with a 50 bit/s navigation message and the result phase modulates the carrier as previously described. These codes only match up, or strongly autocorrelate when they are almost exactly aligned. Each satellite uses a unique PRN code, which does not correlate well with any other satellite's PRN code. In other words, the PRN codes are highly orthogonal to one another. The 1 ms period of the C/A code corresponds to 299.8 km of distance, and each chip corresponds to a distance of 293 m. (Receivers track these codes well within one chip of accuracy, so measurement errors are considerably smaller than 293 m.) The C/A codes are generated by combining (using "exclusive or") 2-bit streams generated by maximal period 10 stage linear-feedback shift registers (LFSR). Different codes are obtained by selectively delaying one of those bit streams. Thus: C/Ai(t) = A(t) ⊕ B(t-Di) where: C/Ai is the code with PRN number i. A is the output of the first LFSR whose generator polynomial is x → x10 + x3 + 1, and initial state is 11111111112. B is the output of the second LFSR whose generator polynomial is x → x10 + x9 + x8 + x6 + x3 + x2 + 1 and initial state is also 11111111112. Di is a delay (by an integer number of periods) specific to each PRN number i; it is designated in the GPS interface specification. ⊕ is exclusive or. The arguments of the functions therein are the number of bits or chips since their epochs, starting at 0. The epoch of the LFSRs is the point at which they are at the initial state; and for the overall C/A codes it is the start of any UTC second plus any integer number of milliseconds. The output of LFSRs at negative arguments is defined consistent with the period which is 1,023 chips (this provision is necessary because B may have a negative argument using the above equation). The delay for PRN numbers 34 and 37 is the same; therefore their C/A codes are identical and are not transmitted at the same time (it may make one or both of those signals unusable due to mutual interference depending on the relative power levels received on each GPS receiver). Precision code The P-code is a PRN sequence much longer than the C/A code: 6.187104 · 1012 chips (773,388 MByte). Even though the P-code chip rate (10.23 Mchips/s) is ten times that of the C/A code, it repeats only once per week, eliminating range ambiguity. It was assumed that receivers could not directly acquire such a long and fast code so they would first "bootstrap" themselves with the C/A code to acquire the spacecraft ephemerides, produce an approximate time and position fix, and then acquire the P-code to refine the fix. Whereas the C/A PRNs are unique for each satellite, each satellite transmits a different segment of a master P-code sequence approximately 2.35 · 1014 chips long (235,000,000,000,000 bits, ~26.716 terabytes). Each satellite repeatedly transmits its assigned segment of the master code, restarting every Sunday at 00:00:00 GPS time. (The GPS epoch was Sunday January 6, 1980 at 00:00:00 UTC, but GPS does not follow UTC leap seconds. So GPS time is ahead of UTC by an integer number of seconds.) 
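The coarse/acquisition code construction described above (two 10-stage linear-feedback shift registers whose outputs are combined with a PRN-specific delay) can be sketched in Python directly from the stated feedback polynomials. This is an illustrative sketch only: the per-PRN delays Di are tabulated in the interface specification, and the delay value used at the bottom is a placeholder.

def lfsr_output(taps, length=1023, stages=10):
    # Output chips of a 10-stage Fibonacci LFSR started in the all-ones state.
    # `taps` are the 1-based stage numbers whose XOR forms the feedback bit;
    # the output chip is taken from stage 10, as in the generators described above.
    reg = [1] * stages
    out = []
    for _ in range(length):
        out.append(reg[-1])                    # output = stage 10
        feedback = 0
        for t in taps:
            feedback ^= reg[t - 1]
        reg = [feedback] + reg[:-1]            # shift toward stage 10, feedback into stage 1
    return out

def ca_code(delay_chips):
    # C/A code chips via C/A_i(t) = A(t) xor B(t - D_i), with D_i = delay_chips.
    a = lfsr_output([3, 10])                   # generator polynomial x^10 + x^3 + 1
    b = lfsr_output([2, 3, 6, 8, 9, 10])       # x^10 + x^9 + x^8 + x^6 + x^3 + x^2 + 1
    return [a[t] ^ b[(t - delay_chips) % 1023] for t in range(1023)]

# Placeholder delay; the actual per-PRN delays D_i are listed in the interface specification.
chips = ca_code(5)
assert len(chips) == 1023                      # one full code period: 1 ms at 1.023 Mchip/s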
The P code is public, so to prevent unauthorized users from using or potentially interfering with it through spoofing, the P-code is XORed with W-code, a cryptographically generated sequence, to produce the Y-code. The Y-code is what the satellites have been transmitting since the anti-spoofing module was set to the "on" state. The encrypted signal is referred to as the P(Y)-code. The details of the W-code are secret, but it is known that it is applied to the P-code at approximately 500 kHz, about 20 times slower than the P-code chip rate. This has led to semi-codeless approaches for tracking the P(Y) signal without knowing the W-code. Navigation message In addition to the PRN ranging codes, a receiver needs to know the time and position of each active satellite. GPS encodes this information into the navigation message and modulates it onto both the C/A and P(Y) ranging codes at 50 bit/s. The navigation message format described in this section is called LNAV data (for legacy navigation). The navigation message conveys information of three types: The GPS date and time and the satellite's status. The ephemeris: precise orbital information for the transmitting satellite. The almanac: status and low-resolution orbital information for every satellite. An ephemeris is valid for only four hours; an almanac is valid with little dilution of precision for up to two weeks. The receiver uses the almanac to acquire a set of satellites based on stored time and location. As each satellite is acquired, its ephemeris is decoded so the satellite can be used for navigation. The navigation message consists of 30-second frames 1,500 bits long, divided into five 6-second subframes of ten 30-bit words each. Each subframe has the GPS time in 6-second increments. Subframe 1 contains the GPS date (week number) and satellite clock correction information, satellite status and health. Subframes 2 and 3 together contain the transmitting satellite's ephemeris data. Subframes 4 and 5 contain page 1 through 25 of the 25-page almanac. The almanac is 15,000 bits long and takes 12.5 minutes to transmit. A frame begins at the start of the GPS week and every 30 seconds thereafter. Each week begins with the transmission of almanac page 1. There are two navigation message types: LNAV-L is used by satellites with PRN numbers 1 to 32 (called lower PRN numbers) and LNAV-U is used by satellites with PRN numbers 33 to 63 (called upper PRN numbers). The 2 types use very similar formats. Subframes 1 to 3 are the same while subframes 4 and 5 are almost the same. Each message type contains almanac data for all satellites using the same navigation message type, but not the other. Each subframe begins with a Telemetry Word (TLM) that enables the receiver to detect the beginning of a subframe and determine the receiver clock time at which the navigation subframe begins. Next is the handover word (HOW) giving the GPS time (actually the time when the first bit of the next subframe will be transmitted) and identifies the specific subframe within a complete frame. The remaining eight words of the subframe contain the actual data specific to that subframe. Each word includes 6 bits of parity generated using an algorithm based on Hamming codes, which take into account the 24 non-parity bits of that word and the last 2 bits of the previous word. After a subframe has been read and interpreted, the time the next subframe was sent can be calculated through the use of the clock correction data and the HOW. 
The receiver knows the receiver clock time of when the beginning of the next subframe was received from detection of the Telemetry Word thereby enabling computation of the transit time and thus the pseudorange. Time GPS time is expressed with a resolution of 1.5 seconds as a week number and a time of week count (TOW). Its zero point (week 0, TOW 0) is defined to be 1980-01-06T00:00Z. The TOW count is a value ranging from 0 to 403,199 whose meaning is the number of 1.5 second periods elapsed since the beginning of the GPS week. Expressing TOW count thus requires 19 bits (219 = 524,288). GPS time is a continuous time scale in that it does not include leap seconds; therefore the start/end of GPS weeks may differ from that of the corresponding UTC day by an integer number of seconds. In each subframe, each hand-over word (HOW) contains the most significant 17 bits of the TOW count corresponding to the start of the next following subframe. Note that the 2 least significant bits can be safely omitted because one HOW occurs in the navigation message every 6 seconds, which is equal to the resolution of the truncated TOW count thereof. Equivalently, the truncated TOW count is the time duration since the last GPS week start/end to the beginning of the next frame in units of 6 seconds. Each frame contains (in subframe 1) the 10 least significant bits of the corresponding GPS week number. Note that each frame is entirely within one GPS week because GPS frames do not cross GPS week boundaries. Since rollover occurs every 1,024 GPS weeks (approximately every 19.6 years; 1,024 is 210), a receiver that computes current calendar dates needs to deduce the upper week number bits or obtain them from a different source. One possible method is for the receiver to save its current date in memory when shut down, and when powered on, assume that the newly decoded truncated week number corresponds to the period of 1,024 weeks that starts at the last saved date. This method correctly deduces the full week number if the receiver is never allowed to remain shut down (or without a time and position fix) for more than 1,024 weeks (~19.6 years). Almanac The almanac consists of coarse orbit and status information for each satellite in the constellation, an ionospheric model, and information to relate GPS derived time to Coordinated Universal Time (UTC). Each frame contains a part of the almanac (in subframes 4 and 5) and the complete almanac is transmitted by each satellite in 25 frames total (requiring 12.5 minutes). The almanac serves several purposes. The first is to assist in the acquisition of satellites at power-up by allowing the receiver to generate a list of visible satellites based on stored position and time, while an ephemeris from each satellite is needed to compute position fixes using that satellite. In older hardware, lack of an almanac in a new receiver would cause long delays before providing a valid position, because the search for each satellite was a slow process. Advances in hardware have made the acquisition process much faster, so not having an almanac is no longer an issue. The second purpose is for relating time derived from the GPS (called GPS time) to the international time standard of UTC. Finally, the almanac allows a single-frequency receiver to correct for ionospheric delay error by using a global ionospheric model. The corrections are not as accurate as GNSS augmentation systems like WAAS or dual-frequency receivers. 
However, it is often better than no correction, since ionospheric error is the largest error source for a single-frequency GPS receiver. Structure of subframes 4 and 5 Data updates Satellite data is updated typically every 24 hours, with up to 60 days data loaded in case there is a disruption in the ability to make updates regularly. Typically the updates contain new ephemerides, with new almanacs uploaded less frequently. The Control Segment guarantees that during normal operations a new almanac will be uploaded at least every 6 days. Satellites broadcast a new ephemeris every two hours. The ephemeris is generally valid for 4 hours, with provisions for updates every 4 hours or longer in non-nominal conditions. The time needed to acquire the ephemeris is becoming a significant element of the delay to first position fix, because as the receiver hardware becomes more capable, the time to lock onto the satellite signals shrinks; however, the ephemeris data requires 18 to 36 seconds before it is received, due to the low data transmission rate. Frequency information For the ranging codes and navigation message to travel from the satellite to the receiver, they must be modulated onto a carrier wave. In the case of the original GPS design, two frequencies are utilized; one at 1575.42 MHz (10.23 MHz × 154) called L1; and a second at 1227.60 MHz (10.23 MHz × 120), called L2. The C/A code is transmitted on the L1 frequency as a 1.023 MHz signal using a bi-phase shift keying (BPSK) modulation technique. The P(Y)-code is transmitted on both the L1 and L2 frequencies as a 10.23 MHz signal using the same BPSK modulation, however the P(Y)-code carrier is in quadrature with the C/A carrier (meaning it is 90° out of phase). Besides redundancy and increased resistance to jamming, a critical benefit of having two frequencies transmitted from one satellite is the ability to measure directly, and therefore remove, the ionospheric delay error for that satellite. Without such a measurement, a GPS receiver must use a generic model or receive ionospheric corrections from another source (such as the Wide Area Augmentation System or WAAS). Advances in the technology used on both the GPS satellites and the GPS receivers has made ionospheric delay the largest remaining source of error in the signal. A receiver capable of performing this measurement can be significantly more accurate and is typically referred to as a dual frequency receiver. Modernization and additional GPS signals Having reached full operational capability on July 17, 1995 the GPS system had completed its original design goals. However, additional advances in technology and new demands on the existing system led to the effort to "modernize" the GPS system. Announcements from the Vice President and the White House in 1998 heralded the beginning of these changes and in 2000, the U.S. Congress reaffirmed the effort, referred to as GPS III. The project involves new ground stations and new satellites, with additional navigation signals for both civilian and military users, and aims to improve the accuracy and availability for all users. A goal of 2013 was established with incentives offered to the contractors if they can complete it by 2011. General features Modernized GPS civilian signals have two general improvements over their legacy counterparts: a dataless acquisition aid and forward error correction (FEC) coding of the NAV message. 
A dataless acquisition aid is an additional signal, called a pilot carrier in some cases, broadcast alongside the data signal. This dataless signal is designed to be easier to acquire than the data encoded and, upon successful acquisition, can be used to acquire the data signal. This technique improves acquisition of the GPS signal and boosts power levels at the correlator. The second advancement is to use forward error correction (FEC) coding on the NAV message itself. Due to the relatively slow transmission rate of NAV data (usually 50 bits per second), small interruptions can have potentially large impacts. Therefore, FEC on the NAV message is a significant improvement in overall signal robustness. L2C One of the first announcements was the addition of a new civilian-use signal, to be transmitted on a frequency other than the L1 frequency used for the coarse/acquisition (C/A) signal. Ultimately, this became the L2C signal, so called because it is broadcast on the L2 frequency. Because it requires new hardware on board the satellite, it is only transmitted by the so-called Block IIR-M and later design satellites. The L2C signal is tasked with improving accuracy of navigation, providing an easy to track signal, and acting as a redundant signal in case of localized interference. L2C signals have been broadcast beginning in April 2014 on satellites capable of broadcasting it, but are still considered pre-operational. , L2C is broadcast on 23 satellites and is expected on 24 satellites by 2023. Unlike the C/A code, L2C contains two distinct PRN code sequences to provide ranging information; the civil-moderate code (called CM), and the civil-long length code (called CL). The CM code is 10,230 bits long, repeating every 20 ms. The CL code is 767,250 bits long, repeating every 1,500 ms. Each signal is transmitted at 511,500 bits per second (bit/s); however, they are multiplexed together to form a 1,023,000-bit/s signal. CM is modulated with the CNAV Navigation Message (see below), whereas CL does not contain any modulated data and is called a dataless sequence. The long, dataless sequence provides for approximately 24 dB greater correlation (~250 times stronger) than L1 C/A-code. When compared to the C/A signal, L2C has 2.7 dB greater data recovery and 0.7 dB greater carrier-tracking, although its transmission power is 2.3 dB weaker. The current status of the L2C signal as of June 9 2021 is: Pre-operational signal with message set "healthy" Broadcasting from 23 GPS satellites (as of January 9, 2021) Began launching in 2005 with GPS Block IIR-M Available on 24 GPS satellites with ground segment control capability by 2023 (as of Jan 2020) CM and CL codes The civil-moderate and civil-long ranging codes are generated by a modular LFSR which is reset periodically to a predetermined initial state. The period of the CM and CL is determined by this resetting and not by the natural period of the LFSR (as is the case with the C/A code). The initial states are designated in the interface specification and are different for different PRN numbers and for CM/CL. The feedback polynomial/mask is the same for CM and CL. The ranging codes are thus given by: CMi(t) = A(Xi,t mod 10 230) CLi(t) = A(Yi,t mod 767 250) where: CMi and CLi are the ranging codes for PRN number i and their arguments are the integer number of chips elapsed (starting at 0) since start/end of GPS week, or equivalently since the origin of the GPS time scale (see § Time). 
A(x, t) is the output of the LFSR when initialized with initial state x after being clocked t times. Xi and Yi are the initial states for CM and CL respectively. for PRN number . mod is the remainder of division operation. t is the integer number of CM and CL chip periods since the origin of GPS time or equivalently, since any GPS second (starting from 0). The initial states are described in the GPS interface specification as numbers expressed in octal following the convention that the LFSR state is interpreted as the binary representation of a number where the output bit is the least significant bit, and the bit where new bits are shifted in is the most significant bit. Using this convention, the LFSR shifts from most significant bit to least significant bit and when seen in big endian order, it shifts to the right. The states called final state in the IS are obtained after cycles for CM and after cycles for LM (just before reset in both cases). The feedback bit mask is 1001001010010010101001111002. Again with the convention that the least significant bit is the output bit of the LFSR and the most significant bit is the shift-in bit of the LFSR, 0 means no feedback into that position, and 1 means feedback into that position. CNAV navigation message The CNAV data is an upgraded version of the original NAV navigation message. It contains higher precision representation and nominally more accurate data than the NAV data. The same type of information (time, status, ephemeris, and almanac) is still transmitted using the new CNAV format; however, instead of using a frame / subframe architecture, it uses a new pseudo-packetized format made of 12-second 300-bit messages analogous to LNAV frames. While LNAV frames have a fixed information content, CNAV messages may be of one of several defined types. The type of a frame determines its information content. Messages do not follow a fixed schedule regarding which message types will be used, allowing the Control Segment some versatility. However, for some message types there are lower bounds on how often they will be transmitted. In CNAV, at least 1 out of every 4 packets are ephemeris data and the same lower bound applies for clock data packets. The design allows for a wide variety of packet types to be transmitted. With a 32-satellite constellation, and the current requirements of what needs to be sent, less than 75% of the bandwidth is used. Only a small fraction of the available packet types have been defined; this enables the system to grow and incorporate advances without breaking compatibility. There are many important changes in the new CNAV message: It uses forward error correction (FEC) provided by a rate 1/2 convolutional code, so while the navigation message is 25-bit/s, a 50-bit/s signal is transmitted. Messages carry a 24-bit CRC, against which integrity can be checked. The GPS week number is now represented as 13 bits, or 8192 weeks, and only repeats every 157.0 years, meaning the next return to zero won't occur until the year 2137. This is longer compared to the L1 NAV message's use of a 10-bit week number, which returns to zero every 19.6 years. There is a packet that contains a GPS-to-GNSS time offset. This allows better interoperability with other global time-transfer systems, such as Galileo and GLONASS, both of which are supported. 
The extra bandwidth enables the inclusion of a packet for differential correction, to be used in a similar manner to satellite based augmentation systems and which can be used to correct the L1 NAV clock data. Every packet contains an alert flag, to be set if the satellite data can not be trusted. This means users will know within 12 seconds if a satellite is no longer usable. Such rapid notification is important for safety-of-life applications, such as aviation. Finally, the system is designed to support 63 satellites, compared with 32 in the L1 NAV message. CNAV messages begin and end at start/end of GPS week plus an integer multiple of 12 seconds. Specifically, the beginning of the first bit (with convolution encoding already applied) to contain information about a message matches the aforesaid synchronization. CNAV messages begin with an 8-bit preamble which is a fixed bit pattern and whose purpose is to enable the receiver to detect the beginning of a message. Forward error correction code The convolutional code used to encode CNAV is described by: where: and are the unordered outputs of the convolutional encoder is the raw (non FEC encoded) navigation data, consisting of the simple concatenation of the 300-bit messages. is the integer number of non FEC encoded navigation data bits elapsed since an arbitrary point in time (starting at 0). is the FEC encoded navigation data. is the integer number of FEC encoded navigation data bits elapsed since the same epoch than (likewise starting at 0). Since the FEC encoded bit stream runs at 2 times the rate than the non FEC encoded bit as already described, then . FEC encoding is performed independently of navigation message boundaries; this follows from the above equations. L2C frequency information An immediate effect of having two civilian frequencies being transmitted is the civilian receivers can now directly measure the ionospheric error in the same way as dual frequency P(Y)-code receivers. However, users utilizing the L2C signal alone, can expect 65% more position uncertainty due to ionospheric error than with the L1 signal alone. Military (M-code) A major component of the modernization process is a new military signal. Called the Military code, or M-code, it was designed to further improve the anti-jamming and secure access of the military GPS signals. Very little has been published about this new, restricted code. It contains a PRN code of unknown length transmitted at 5.115 MHz. Unlike the P(Y)-code, the M-code is designed to be autonomous, meaning that a user can calculate their position using only the M-code signal. From the P(Y)-code's original design, users had to first lock onto the C/A code and then transfer the lock to the P(Y)-code. Later, direct-acquisition techniques were developed that allowed some users to operate autonomously with the P(Y)-code. MNAV navigation message A little more is known about the new navigation message, which is called MNAV. Similar to the new CNAV, this new MNAV is packeted instead of framed, allowing for very flexible data payloads. Also like CNAV it can utilize Forward Error Correction (FEC) and advanced error detection (such as a CRC). M-code frequency information The M-code is transmitted in the same L1 and L2 frequencies already in use by the previous military code, the P(Y)-code. The new signal is shaped to place most of its energy at the edges (away from the existing P(Y) and C/A carriers). 
In a major departure from previous GPS designs, the M-code is intended to be broadcast from a high-gain directional antenna, in addition to a full-Earth antenna. This directional antenna's signal, called a spot beam, is intended to be aimed at a specific region (several hundred kilometers in diameter) and increase the local signal strength by 20 dB, or approximately 100 times stronger. A side effect of having two antennas is that the GPS satellite will appear to be two GPS satellites occupying the same position to those inside the spot beam. While the whole Earth M-code signal is available on the Block IIR-M satellites, the spot beam antennas will not be deployed until the Block III satellites are deployed, which began in December 2018. An interesting side effect of having each satellite transmit four separate signals is that the MNAV can potentially transmit four different data channels, offering increased data bandwidth. The modulation method is binary offset carrier, using a 10.23 MHz subcarrier against the 5.115 MHz code. This signal will have an overall bandwidth of approximately 24 MHz, with significantly separated sideband lobes. The sidebands can be used to improve signal reception. L5 The L5 signal provides a means of radionavigation secure and robust enough for life critical applications, such as aircraft precision approach guidance. The signal is broadcast in a frequency band protected by the ITU for aeronautical radionavigation services. It was first demonstrated from satellite USA-203 (Block IIR-M), and is available on all satellites from GPS IIF and GPS III. L5 signals have been broadcast beginning in April 2014 on satellites that support it. , 16 GPS satellites are broadcasting L5 signals, and the signals are considered pre-operational, scheduled to reach 24 satellites by approximately 2027. The L5 band provides additional robustness in the form of interference mitigation, the band being internationally protected, redundancy with existing bands, geostationary satellite augmentation, and ground-based augmentation. The added robustness of this band also benefits terrestrial applications. Two PRN ranging codes are transmitted on L5 in quadrature: the in-phase code (called I5-code) and the quadrature-phase code (called Q5-code). Both codes are 10,230 bits long, transmitted at 10.23 MHz (1 ms repetition period), and are generated identically (differing only in initial states). Then, I5 is modulated (by exclusive-or) with navigation data (called L5 CNAV) and a 10-bit Neuman-Hofman code clocked at 1 kHz. Similarly, the Q5-code is then modulated but with only a 20-bit Neuman-Hofman code that is also clocked at 1 kHz. Compared to L1 C/A and L2, these are some of the changes in L5: Improved signal structure for enhanced performance Higher transmitted power than L1/L2 signal (~3 dB, or 2× as powerful) Wider bandwidth provides a 10× processing gain, provides sharper autocorrelation (in absolute terms, not relative to chip time duration) and requires a higher sampling rate at the receiver. 
Longer spreading codes (10× longer than C/A) Uses the Aeronautical Radionavigation Services band The current status of the L5 signal as of June 9 2021 is: Pre-operational signal with message set "unhealthy" until sufficient monitoring capability established Broadcasting from 16 GPS satellites (as of January 9, 2021) Began launching in 2010 with GPS Block IIF Available on 24 GPS satellites ~2027 (as of Jan 2020) I5 and Q5 codes The I5-code and Q5-code are generated using the same structure but with different parameters. These codes are the combination (by exclusive-or) of the output of 2 differing linear-feedback shift registers (LFSRs) which are selectively reset. 5i(t) = U(t) ⊕ Vi(t) U(t) = XA((t mod 10 230) mod 8 190) Vi(t) = XBi(Xi, t mod 10 230) where: i is an ordered pair (P, n) where P ∈ {I, Q} for in-phase and quadrature-phase, and n a PRN number; both phases and a single PRN are required for the L5 signal from a single satellite. 5i is the ranging codes for i; also denoted as I5n and Q5n. U and Vi are intermediate codes, with U not depending on phase or PRN. The output of two 13-stage LFSRs with clock state t is used: XA(x,t''') has feedback polynomial x13 + x12 + x10 + x9 + 1, and initial state 11111111111112.XBi(x,t) has feedback polynomial x13 + x12 + x8 + x7 + x6 + x4 + x3 + x + 1, and initial state Xi.Xi is the initial state specified for the phase and PRN number given by i (designated in the IS).t is the integer number of chip periods since the origin of GPS time or equivalently, since any GPS second (starting from 0).A and B are maximal length LFSRs. The modulo operations correspond to resets. Note that both are reset each millisecond (synchronized with C/A code epochs). In addition, the extra modulo operation in the description of A is due to the fact it is reset 1 cycle before its natural period (which is 8,191) so that the next repetition becomes offset by 1 cycle with respect to B (otherwise, since both sequences would repeat, I5 and Q5 would repeat within any 1 ms period as well, degrading correlation characteristics). L5 navigation message The L5 CNAV data includes SV ephemerides, system time, SV clock behavior data, status messages and time information, etc. The 50 bit/s data is coded in a rate 1/2 convolution coder. The resulting 100 symbols per second (sps) symbol stream is modulo-2 added to the I5-code only; the resultant bit-train is used to modulate the L5 in-phase (I5) carrier. This combined signal is called the L5 Data signal. The L5 quadrature-phase (Q5) carrier has no data and is called the L5 Pilot signal. The format used for L5 CNAV is very similar to that of L2 CNAV. One difference is that it uses 2 times the data rate. The bit fields within each message, message types, and forward error correction code algorithm are the same as those of L2 CNAV. L5 CNAV messages begin and end at start/end of GPS week plus an integer multiple of 6 seconds (this applies to the beginning of the first bit to contain information about a message, as is the case for L2 CNAV). L5 frequency information Broadcast on the L5 frequency (1176.45 MHz, 10.23 MHz × 115), which is an aeronautical navigation band. The frequency was chosen so that the aviation community can manage interference to L5 more effectively than L2. L1C L1C is a civilian-use signal, to be broadcast on the L1 frequency (1575.42 MHz), which contains the C/A signal used by all current GPS users. The L1C signals will be broadcast from GPS III and later satellites, the first of which was launched in December 2018. 
, L1C signals are not yet broadcast, and only four operational satellites are capable of broadcasting them. L1C is expected on 24 GPS satellites in the late 2020s. L1C consists of a pilot (called L1CP) and a data (called L1CD) component. These components use carriers with the same phase (within a margin of error of 100 milliradians), instead of carriers in quadrature as with L5. The PRN codes are 10,230 bits long and transmitted at 1.023 Mbit/s. The pilot component is also modulated by an overlay code called L1CO (a secondary code that has a lower rate than the ranging code and is also predefined, like the ranging code). Of the total L1C signal power, 25% is allocated to the data and 75% to the pilot. The modulation technique used is BOC(1,1) for the data signal and TMBOC for the pilot. The time multiplexed binary offset carrier (TMBOC) is BOC(1,1) for all except 4 of 33 cycles, when it switches to BOC(6,1). Implementation will provide C/A code to ensure backward compatibility Assured of 1.5 dB increase in minimum C/A code power to mitigate any noise floor increase Data-less signal component pilot carrier improves tracking compared with L1 C/A Enables greater civil interoperability with Galileo L1 The current status of the L1C signal as of June 10 2021 is: Developmental signal with message set "unhealthy" and no navigation data Broadcasting from 4 GPS satellites (as of January 9, 2021) Began launching in 2018 with GPS III Available on 24 GPS satellites in late 2020s L1C ranging code The L1C pilot and data ranging codes are based on a Legendre sequence with length used to build an intermediate code (called a Weil code) which is expanded with a fixed 7-bit sequence to the required 10,230 bits. This 10,230-bit sequence is the ranging code and varies between PRN numbers and between the pilot and data components. The ranging codes are described by: where: is the ranging code for PRN number and component . represents a period of ; it is introduced only to allow a more clear notation. To obtain a direct formula for start from the right side of the formula for and replace all instances of with . is the integer number of L1C chip periods (which is  µs) since the origin of GPS time or equivalently, since any GPS second (starting from 0). is an ordered pair identifying a PRN number and a code (L1CP or L1CD) and is of the form or where is the PRN number of the satellite, and are symbols (not variables) that indicate the L1CP code or L1CD code, respectively. is an intermediate code: a Legendre sequence whose domain is the set of integers for which . is an intermediate code called Weil code, with the same domain as . is a 7-bit long sequence defined for 0-based indexes 0 to 6. is the 0-based insertion index of the sequence into the ranging code (specific for PRN number and code ). It is defined in the Interface Specification (IS) as a 1-based index , therefore . is the Weil index for PRN number and code designated in the IS. is the remainder of division (or modulo) operation, which differs to the notation in statements of modular congruence, also used in this article. According to the formula above and the GPS IS, the first bits (equivalently, up to the insertion point of ) of and are the first bits the corresponding Weil code; the next 7 bits are ; the remaining bits are the remaining bits of the Weil code. The IS asserts that . For clarity, the formula for does not account for the hypothetical case in which , which would cause the instance of inserted into to wrap from index to 0. 
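The Legendre/Weil construction just described can be sketched as follows. Because the Weil code is expanded with a fixed 7-bit sequence to reach 10,230 chips, the Weil code (and the underlying Legendre sequence) is 10,230 − 7 = 10,223 chips long, and 10,223 is prime as a Legendre sequence requires. Everything specific in this sketch is an assumption made for illustration: the 0/1 convention of the Legendre sequence, the particular 7-bit pad, and the Weil index and insertion point (the real per-PRN, per-component values are defined in the interface specification).

P = 10223   # Weil-code length: the required 10,230 chips minus the 7-bit expansion sequence

def legendre_sequence(p=P):
    # Binary Legendre sequence: 1 where t is a nonzero quadratic residue mod p, else 0.
    # (The exact convention used by the interface specification is assumed here.)
    residues = {(t * t) % p for t in range(1, p)}
    return [1 if t in residues else 0 for t in range(p)]

def weil_code(L, w):
    # Weil code: the Legendre sequence XORed with a copy of itself shifted by the Weil index w.
    return [L[t] ^ L[(t + w) % P] for t in range(P)]

def l1c_ranging_code(w, insert_index, pad):
    # Expand a Weil code to 10,230 chips by inserting the fixed 7-bit sequence at insert_index.
    W = weil_code(legendre_sequence(), w)
    return W[:insert_index] + list(pad) + W[insert_index:]

# Placeholder parameters; the real Weil index, insertion point and 7-bit sequence are
# per-PRN values in the interface specification, not the numbers used here.
code = l1c_ranging_code(w=100, insert_index=50, pad=(0, 1, 1, 0, 1, 0, 0))
assert len(code) == 10230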
L1C overlay code The overlay codes are 1,800 bits long and is transmitted at 100 bit/s, synchronized with the navigation message encoded in L1CD. For PRN numbers 1 to 63 they are the truncated outputs of maximal period LFSRs which vary in initial conditions and feedback polynomials. For PRN numbers 64 to 210 they are truncated Gold codes generated by combining 2 LFSR outputs ( and , where is the PRN number) whose initial state varies. has one of the 4 feedback polynomials used overall (among PRN numbers 64–210). has the same feedback polynomial for all PRN numbers in the range 64–210. CNAV-2 navigation message The L1C navigation data (called CNAV-2) is broadcast in 1,800 bits long (including FEC) frames and is transmitted at 100 bit/s. The frames of L1C are analogous to the messages of L2C and L5. While L2 CNAV and L5 CNAV use a dedicated message type for ephemeris data, all CNAV-2 frames include that information. The common structure of all messages consists of 3 frames, as listed in the adjacent table. The content of subframe 3 varies according to its page number which is analogous to the type number of L2 CNAV and L5 CNAV messages. Pages are broadcast in an arbitrary order. The time of messages (not to be confused with clock correction parameters) is expressed in a different format than the format of the previous civilian signals. Instead it consists of 3 components: The week number, with the same meaning as with the other civilian signals. Each message contains the week number modulo 8,192 or equivalently, the 13 least significant bits of the week number, allowing direct specification of any date within a cycling 157-year range. An interval time of week (ITOW): the integer number of 2 hour periods elapsed since the latest start/end of week. It has range 0 to 83 (inclusive), requiring 7 bits to encode. A time of interval (TOI): the integer number of 18 second periods elapsed since the period represented by the current ITOW to the beginning of the next message. It has range 0 to 399 (inclusive) and requires 9 bits of data. TOI is the only content of subframe 1. The week number and ITOW are contained in subframe 2 along with other information. Subframe 1 is encoded by a modified BCH code. Specifically, the 8 least significant bits are BCH encoded to generate 51 bits, then combined using exclusive or with the most significant bit and finally the most significant bit is appended as the most significant bit of the previous result to obtain the final 52 bits. Subframes 2 and 3 are individually expanded with a 24-bit CRC, then individually encoded using a low-density parity-check code, and then interleaved as a single unit using a block interleaver. Overview of frequencies All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique where the low-bitrate message data is encoded with a high-rate pseudo-random noise (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code. The P code can be encrypted as a so-called P(Y) code which is only available to military equipment with a proper decryption key. 
Both the C/A and P(Y) codes impart the precise time-of-day to the user. Each composite signal (in-phase and quadrature phase) becomes: where and represent signal powers; and represent codes with/without data . This is a formula for the ideal case (which is not attained in practice) as it does not model timing errors, noise, amplitude mismatch between components or quadrature error (when components are not exactly in quadrature). Demodulation and decoding A GPS receiver processes the GPS signals received on its antenna to determine position, velocity and/or timing. The signal at antenna is amplified, down converted to baseband or intermediate frequency, filtered (to remove frequencies outside the intended frequency range for the digital signal that would alias into it) and digitalized; these steps may be chained in a different order. Note that aliasing is sometimes intentional (specifically, when undersampling is used) but filtering is still required to discard frequencies not intended to be present in the digital representation. For each satellite used by the receiver, the receiver must first acquire the signal and then track it as long as that satellite is in use; both are performed in the digital domain in by far most (if not all) receivers. Acquiring a signal is the process of determining the frequency and code phase (both relative to receiver time) when it was previously unknown. Code phase must be determined within an accuracy that depends on the receiver design (especially the tracking loop); 0.5 times the duration of code chips (approx. 0.489 µs) is a representative value. Tracking is the process of continuously adjusting the estimated frequency and phase to match the received signal as close as possible and therefore is a phase locked loop. Note that acquisition is performed to start using a particular satellite, but tracking is performed as long as that satellite is in use. In this section, one possible procedure is described for L1 C/A acquisition and tracking, but the process is very similar for the other signals. The described procedure is based on computing the correlation of the received signal with a locally generated replica of the ranging code and detecting the highest peak or lowest valley. The offset of the highest peak or lowest valley contains information about the code phase relative to receiver time. The duration of the local replica is set by receiver design and is typically shorter than the duration of navigation data bits, which is 20 ms. Acquisition Acquisition of a given PRN number can be conceptualized as searching for a signal in a bidimensional search space where the dimensions are (1) code phase, (2) frequency. In addition, a receiver may not know which PRN number to search for, and in that case a third dimension is added to the search space: (3) PRN number. Frequency space The frequency range of the search space is the band where the signal may be located given the receiver knowledge. The carrier frequency varies by roughly 5 kHz due to the Doppler effect when the receiver is stationary; if the receiver moves, the variation is higher. The code frequency deviation is 1/1,540 times the carrier frequency deviation for L1 because the code frequency is 1/1,540 of the carrier frequency (see § Frequencies used by GPS). The down conversion does not affect the frequency deviation; it only shifts all the signal frequency components down. 
Since the frequency is referenced to the receiver time, the uncertainty in the receiver oscillator frequency adds to the frequency range of the search space. Code phase space The ranging code has a period of 1,023 chips each of which lasts roughly 0.977 µs (see § Coarse/acquisition code). The code gives strong autocorrelation only at offsets less than 1 in magnitude. The extent of the search space in the code phase dimension depends on the granularity of the offsets at which correlation is computed. It is typical to search for the code phase within a granularity of 0.5 chips or finer; that means 2,046 offsets. There may be more factors increasing the size of the search space of code phase. For example, a receiver may be designed so as to examine 2 consecutive windows of the digitalized signal, so that at least one of them does not contain a navigation bit transition (which worsens the correlation peak); this requires the signal windows to be at most 10 ms long. PRN number space The lower PRN numbers range from 1 to 32 and therefore there are 32 PRN numbers to search for when the receiver does not have information to narrow the search in this dimension. The higher PRN numbers range from 33 to 66. See § Navigation message. If the almanac information has previously been acquired, the receiver picks which satellites to listen for by their PRNs. If the almanac information is not in memory, the receiver enters a search mode and cycles through the PRN numbers until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then decode the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. Simple correlation The simplest way to acquire the signal (not necessarily the most effective or least computationally expensive) is to compute the dot product of a window of the digitalized signal with a set of locally generated replicas. The locally generated replicas vary in carrier frequency and code phase to cover all the already mentioned search space which is the Cartesian product of the frequency search space and the code phase search space. The carrier is a complex number where real and imaginary components are both sinusoids as described by Euler's formula. The replica that generates the highest magnitude of dot product is likely the one that best matches the code phase and frequency of the signal; therefore, if that magnitude is above a threshold, the receiver proceeds to track the signal or further refine the estimated parameters before tracking. The threshold is used to minimize false positives (apparently detecting a signal when there is in fact no signal), but some may still occur occasionally. Using a complex carrier allows the replicas to match the digitalized signal regardless of the signal's carrier phase and to detect that phase (the principle is the same used by the Fourier transform). The dot product is a complex number; its magnitude represents the level of similarity between the replica and the signal, as with an ordinary correlation of real-valued time series. The argument of the dot product is an approximation of the corresponding carrier in the digitalized signal. 
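As a rough illustration of the brute-force search just described, the sketch below tries every (frequency bin, code offset) pair, wipes off a trial complex carrier, and takes the magnitude of the dot product with a shifted code replica. The sample rate, frequency grid, stand-in code and noise level are all assumptions of the example, not values required by the text.

```python
# Serial-search acquisition sketch: one dot product per (frequency, offset).
import numpy as np

def acquire(signal, code_replica, fs, freqs, code_step):
    """Return (doppler, code offset in samples, peak metric) over the grid."""
    n = len(signal)
    t = np.arange(n) / fs
    best = (None, None, 0.0)
    for f in freqs:                                   # frequency bins
        wiped = signal * np.exp(-2j * np.pi * f * t)  # carrier wipe-off
        for off in range(0, n, code_step):            # code-phase bins
            metric = np.abs(np.dot(wiped, np.roll(code_replica, off)))
            if metric > best[2]:
                best = (f, off, metric)
    return best

# Toy usage: a 1 ms window at 5 MHz sampling, +/-5 kHz Doppler in 500 Hz steps.
fs = 5_000_000
n = fs // 1000
rng = np.random.default_rng(0)
code = np.where(rng.integers(0, 2, n) == 1, 1.0, -1.0)  # stand-in for a resampled PRN code
t = np.arange(n) / fs
signal = code * np.exp(2j * np.pi * 1500.0 * t) + 0.5 * rng.standard_normal(n)
print(acquire(signal, code, fs, np.arange(-5000, 5001, 500), code_step=5))
```

The numerical example in the next paragraph (2,046 code phases × 20 frequency bins) is exactly this kind of grid, expressed in chips rather than samples.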
As an example, assume that the granularity for the search in code phase is 0.5 chips and in frequency is 500 Hz, then there are 1,023/0.5=2,046 code phases and 10,000 Hz/500 Hz=20 frequencies to try for a total of 20×2,046=40,920 local replicas. Note that each frequency bin is centered on its interval and therefore covers 250 Hz in each direction; for example, the first bin has a carrier at −4.750 Hz and covers the interval −5,000 Hz to −4,500 Hz. Code phases are equivalent modulo 1,023 because the ranging code is periodic; for example, phase −0.5 is equivalent to phase 1,022.5. The following table depicts the local replicas that would be compared against the digitalized signal in this example. "•" means a single local replica while "..." is used for elided local replicas: Fourier transform As an improvement over the simple correlation method, it is possible to implement the computation of dot products more efficiently with a Fourier transform. Instead of performing one dot product for each element in the Cartesian product of code and frequency, a single operation involving FFT and covering all frequencies is performed for each code phase; each such operation is more computationally expensive, but it may still be faster overall than the previous method due to the efficiency of FFT algorithms, and it recovers carrier frequency with a higher accuracy, because the frequency bins are much closely spaced in a DFT. Specifically, for all code phases in the search space, the digitalized signal window is multiplied element by element with a local replica of the code (with no carrier), then processed with a discrete Fourier transform. Given the previous example to be processed with this method, assume real-valued data (as opposed to complex data, which would have in-phase and quadrature components), a sampling rate of 5 MHz, a signal window of 10 ms, and an intermediate frequency of 2.5 MHz. There will be 5 MHz×10 ms=50,000 samples in the digital signal, and therefore 25,001 frequency components ranging from 0 Hz to 2.5 MHz in steps of 100 Hz (note that the 0 Hz component is real because it is the average of a real-valued signal and the 2.5 MHz component is real as well because it is the critical frequency). Only the components (or bins) within 5 kHz of the central frequency are examined, which is the range from 2.495 MHz to 2.505 MHz, and it is covered by 51 frequency components. There are 2,046 code phases as in the previous case, thus in total 51×2,046=104,346 complex frequency components will be examined. Circular correlation with Fourier transform Likewise, as an improvement over the simple correlation method, it is possible to perform a single operation covering all code phases for each frequency bin. The operation performed for each code phase bin involves forward FFT, element-wise multiplication in the frequency domain. inverse FFT, and extra processing so that overall, it computes circular correlation instead of circular convolution. This yields more accurate code phase determination than the simple correlation method in contrast with the previous method, which yields more accurate carrier frequency determination'' than the previous method. Tracking and navigation message decoding Since the carrier frequency received can vary due to Doppler shift, the points where received PRN sequences begin may not differ from O by an exact integral number of milliseconds. 
Because of this, carrier frequency tracking along with PRN code tracking are used to determine when the received satellite's PRN code begins. Unlike the earlier computation of offset in which trials of all 1,023 offsets could potentially be required, the tracking to maintain lock usually requires shifting of half a pulse width or less. To perform this tracking, the receiver observes two quantities, phase error and received frequency offset. The correlation of the received PRN code with respect to the receiver generated PRN code is computed to determine if the bits of the two signals are misaligned. Comparisons of the received PRN code with receiver generated PRN code shifted half a pulse width early and half a pulse width late are used to estimate adjustment required. The amount of adjustment required for maximum correlation is used in estimating phase error. Received frequency offset from the frequency generated by the receiver provides an estimate of phase rate error. The command for the frequency generator and any further PRN code shifting required are computed as a function of the phase error and the phase rate error in accordance with the control law used. The Doppler velocity is computed as a function of the frequency offset from the carrier nominal frequency. The Doppler velocity is the velocity component along the line of sight of the receiver relative to the satellite. As the receiver continues to read successive PRN sequences, it will encounter a sudden change in the phase of the 1,023-bit received PRN signal. This indicates the beginning of a data bit of the navigation message. This enables the receiver to begin reading the 20 millisecond bits of the navigation message. The TLM word at the beginning of each subframe of a navigation frame enables the receiver to detect the beginning of a subframe and determine the receiver clock time at which the navigation subframe begins. The HOW word then enables the receiver to determine which specific subframe is being transmitted. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data before computing the intersections of sphere surfaces. After a subframe has been read and interpreted, the time the next subframe was sent can be calculated through the use of the clock correction data and the HOW. The receiver knows the receiver clock time of when the beginning of the next subframe was received from detection of the Telemetry Word thereby enabling computation of the transit time and thus the pseudorange. The receiver is potentially capable of getting a new pseudorange measurement at the beginning of each subframe or every 6 seconds. Then the orbital position data, or ephemeris, from the navigation message is used to calculate precisely where the satellite was at the start of the message. A more sensitive receiver will potentially acquire the ephemeris data more quickly than a less sensitive receiver, especially in a noisy environment. See also In-phase and quadrature components Sources and references Bibliography GPS Interface Specification (describes L1, L2C and P). (describes L5). (describes L1C). Notes Global Positioning System Navigation
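Returning to the code tracking described above, the early/late comparison can be sketched in a few lines. This toy works directly on ±1 code samples with no carrier, noise or navigation data, and the stand-in code, sampling, loop gain and iteration count are all assumptions of the example; a real delay-locked loop operates on correlator outputs after carrier wipe-off and uses properly designed loop filters.

```python
# Toy early/late code-tracking loop (DLL) sketch.
import numpy as np

rng = np.random.default_rng(1)
CODE = np.where(rng.integers(0, 2, 1023) == 1, 1.0, -1.0)  # stand-in PRN code

def replica(phase_chips, spc=4, n_chips=1023):
    """±1 code replica at `spc` samples per chip, delayed by `phase_chips`."""
    n = n_chips * spc
    idx = (np.floor(np.arange(n) / spc - phase_chips) % n_chips).astype(int)
    return CODE[idx]

received = replica(3.25)   # incoming samples, true code phase 3.25 chips
phase_est = 2.8            # initial estimate from acquisition (within ~0.5 chip)

for _ in range(40):        # one adjustment per code epoch
    early  = np.abs(np.dot(received, replica(phase_est - 0.5)))
    late   = np.abs(np.dot(received, replica(phase_est + 0.5)))
    prompt = np.abs(np.dot(received, replica(phase_est)))
    err = (late - early) / (2.0 * prompt)  # >0 when the local estimate is early
    phase_est += 0.3 * err                 # first-order "control law"

print(round(phase_est, 2))  # settles near the true 3.25 chips
```

The early-minus-late imbalance plays the role of the phase error described above; in a full receiver it is combined with the carrier-frequency error in the loop's control law.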
67308953
https://en.wikipedia.org/wiki/Accelerate%20%28book%29
Accelerate (book)
Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations is a software engineering book co-authored by Nicole Forsgren, Jez Humble and Gene Kim. The book explores how software development teams can measure their performance and how the performance of software engineering teams affects the overall performance of an organization. The book is based on the authors' four years of experience working with Puppet on the State of DevOps Report (SODR). The authors found that the "highest performers are twice as likely to meet or exceed their organizational performance goals." "Four Key Metrics" Because case studies alone were not sufficient for their research, the authors analysed 23,000 data points from companies of various sizes (from start-ups to enterprises), both for-profit and not-for-profit, and both those with legacy systems and those born digital. The DevOps research conducted by the authors and summarised in Accelerate identifies four key metrics that are indicators of software delivery performance, which in turn is associated with higher rates of profitability, market share and customer satisfaction. The four metrics are as follows: Change Lead Time - the time to implement, test, and deliver code for a feature (measured from first commit to deployment) Deployment Frequency - the number of deployments in a given duration of time Change Failure Rate - the percentage of changes that fail, out of all changes deployed Mean Time to Recovery (MTTR) - the time it takes to restore service after a production failure The authors further measure how various technical practices (such as outsourcing) and risk factors affect these metrics for an engineering team. The metrics can be approximated with surveys (psychometrics) or measured using commercial services such as Haystack Analytics. References External links Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (IT Revolution) Q&A on the Book Accelerate: Building and Scaling High Performance Technology Organizations Beginner's Guide to Software Delivery Metrics Computer programming books Computer books
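The four measures described above lend themselves to straightforward computation from a team's own delivery records. The sketch below is a hypothetical example: the record layout and field names are invented for illustration and are not defined by the book or by any particular tool.

```python
# Hypothetical sketch of the four key metrics computed from change records.
from datetime import datetime

changes = [
    # commit time, deploy time, did the change fail, when service was restored
    {"commit": datetime(2024, 1, 1, 9),  "deploy": datetime(2024, 1, 2, 15),
     "failed": False, "restored": None},
    {"commit": datetime(2024, 1, 3, 10), "deploy": datetime(2024, 1, 3, 18),
     "failed": True,  "restored": datetime(2024, 1, 3, 19, 30)},
    {"commit": datetime(2024, 1, 5, 11), "deploy": datetime(2024, 1, 6, 9),
     "failed": False, "restored": None},
]
window_days = 7

def hours(td):
    return td.total_seconds() / 3600

lead_time_h = sum(hours(c["deploy"] - c["commit"]) for c in changes) / len(changes)
deploy_freq = len(changes) / window_days                 # deployments per day
failures = [c for c in changes if c["failed"]]
change_failure_rate = len(failures) / len(changes)       # fraction of all changes
mttr_h = sum(hours(c["restored"] - c["deploy"]) for c in failures) / len(failures)

print(f"change lead time : {lead_time_h:.1f} h (mean)")
print(f"deploy frequency : {deploy_freq:.2f} per day")
print(f"change failure % : {change_failure_rate:.0%}")
print(f"MTTR             : {mttr_h:.1f} h")
```

In practice lead time is often reported as a median rather than a mean, and the failure flag requires an agreed definition of what counts as a failed change.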
2413659
https://en.wikipedia.org/wiki/Keith%20B.%20Alexander
Keith B. Alexander
Keith Brian Alexander (born December 2, 1951) is a retired four-star general of the United States Army, who served as director of the National Security Agency, chief of the Central Security Service, and commander of the United States Cyber Command. He previously served as Deputy Chief of Staff, G-2 (Intelligence), United States Army from 2003 to 2005. He assumed the positions of Director of the National Security Agency and Chief of the Central Security Service on August 1, 2005, and the additional duties as Commander United States Cyber Command on May 21, 2010. Alexander announced his retirement on October 16, 2013. His retirement date was March 28, 2014. In May 2014, Alexander founded IronNet Cybersecurity, a private-sector cybersecurity firm based in Fulton, Maryland. Early life and education Alexander was born on December 2, 1951 in Syracuse, New York, the son of Charlotte L. (Colvin) and Donald Henry Alexander. He was raised in Onondaga Hill, New York, a suburb of Syracuse. He was a paperboy for The Post-Standard and attended Westhill Senior High School, where he ran track. Alexander attended the United States Military Academy at West Point, and in his class were three other future four-star generals: David Petraeus, Martin Dempsey and Walter L. Sharp. Just before graduation in April 1974, Alexander married Deborah Lynn Douglas, who was a classmate in high school and who grew up near his family in Onondaga Hill. They had four daughters. Alexander entered active duty at West Point, intending to serve for only five years. Alexander's military education includes the Armor Officer Basic Course, the Military Intelligence Officer Advanced Course, the United States Army Command and General Staff College, and the National War College. Alexander worked on signals intelligence at a number of secret National Security Agency bases in the United States and Germany. He earned a Master of Science in business administration in 1978 from Boston University, a Master of Science in systems technology (electronic warfare) and a Master in Science in physics in 1983 from the Naval Postgraduate School, and a Master of Science in national security strategy from the National Defense University. He rose quickly up the military ranks, due to his expertise in advanced technology and his competency at administration. Military career Alexander's assignments include the Deputy Chief of Staff (DCS, G-2), Headquarters, Department of the Army, Washington, D.C. from 2003 to 2005; Commanding General of the United States Army Intelligence and Security Command at Fort Belvoir, Virginia from 2001 to 2003; Director of Intelligence (J-2), United States Central Command, MacDill Air Force Base, Florida from 1998 to 2001; and Deputy Director for Intelligence (J-2) for the Joint Chiefs of Staff from 1997 to 1998. Alexander served in a variety of command assignments in Germany and the United States. These include tours as Commander of Border Field Office, 511th MI Battalion, 66th MI Group; 336th Army Security Agency Company, 525th MI Group; 204th MI Battalion; and 525th Military Intelligence Brigade. Additionally, Alexander held key staff assignments as Deputy Director and Operations Officer, Executive Officer, 522nd MI Battalion, 2nd Armored Division; G-2 for the 1st Armored Division both in Germany and during the Gulf War, in Operation Desert Shield and Operation Desert Storm, in Saudi Arabia. He also served in Afghanistan on a peace keeping mission for the Army Deputy Chief of Staff for Intelligence. 
Alexander headed the Army Intelligence and Security Command, where in 2001 he was in charge of 10,700 spies and eavesdroppers worldwide. In the words of James Bamford, who wrote his biography for Wired, "Alexander and the rest of the American intelligence community suffered a devastating defeat when they were surprised by the attacks on 9/11." Alexander's reaction was to order his intercept operators to begin to monitor the email and phone calls of American citizens who were unrelated to terrorist threats, including the personal calls of journalists. In 2003, Alexander was named deputy chief of staff for intelligence for the United States Army. The 205th MI Brigade involved in the Abu Ghraib torture and prisoner abuse in Baghdad, Iraq was part of V Corps (US) and not under Alexander's command. Testifying to the Senate Armed Services Committee, Alexander called the abuse "totally reprehensible" and described the perpetrators as a "group of undisciplined MP soldiers". Mary Louise Kelly, who interviewed him later for NPR, said that because he was "outside the chain of command that oversaw interrogations in Iraq", Alexander was able to survive with his "reputation intact". In 2004, along with Alberto Gonzales and others in the George W. Bush administration, Alexander presented a memorandum that sought to justify the treatment of those who were deemed "unlawful enemy combatants". In June 2013, the National Security Agency was revealed by whistle-blower Edward Snowden to be secretly spying on the American people with FISA-approved surveillance programs, such as PRISM and XKeyscore. On October 16, 2013, it was publicly announced that Alexander and his deputy, Chris Inglis were leaving the NSA. On April 13, 2016, President Obama announced Alexander as a member of his Commission on Enhancing National Cybersecurity. NSA appointment In 2005, secretary of defense Donald Rumsfeld named Alexander, then a three-star general, as Director of the National Security Agency. There, according to Bamford, Alexander deceived the House Intelligence Committee when his agency was involved in warrantless wiretapping. Also during this period, Alexander oversaw the implementation of the Real Time Regional Gateway in Iraq, an NSA data collection program that consisted of gathering all electronic communication, storing it, and then searching and otherwise analyzing it. A former senior U.S. intelligence agent described Alexander's program: "Rather than look for a single needle in the haystack, his approach was, 'Let's collect the whole haystack. Collect it all, tag it, store it ... And whatever it is you want, you go searching for it." By 2008, the Regional Gateway was effective in providing information about Iraqi insurgents who had eluded less comprehensive techniques. This "collect it all" strategy introduced by Keith Alexander is believed by Glenn Greenwald of The Guardian to be the model for the comprehensive world-wide mass archiving of communications which NSA had become engaged in by 2013. According to Siobhan Gorman of The Wall Street Journal, a government official stated that Alexander offered to resign after the 2013 global surveillance disclosures first broke out in June 2013, but that the Obama Administration asked him not to. Cyber command Alexander was confirmed by the United States Senate for appointment to the rank of general on May 7, 2010, and was officially promoted to that rank in a ceremony on May 21, 2010. 
Alexander assumed command of United States Cyber Command in the same ceremony that made him a four-star general. Alexander delivered the keynote address at Black Hat USA in July 2013. The organizers describe Alexander as an advocate of "battlefield visualization and 'data fusion' for more useful intelligence". He provided them with this quote: Statements to the public regarding NSA operations Alexander gave the most comprehensive interview of his career, which spanned some 17,000 words, on 8 May 2014 to the Australian Financial Review journalist Christopher Joye, which was subsequently cited by Edward Snowden. The full transcript, which covers NSA operations, Snowden, the metadata debates, encryption controversies, and Chinese and Russian spying, has been published online. On Snowden, Alexander told Joye: "I suspect Russian intelligence are driving what he does. Understand as well that they're only going to let him do those things that benefit Russia, or stand to help improve Snowden's credibility". Wired magazine said the AFR interview with Alexander showed he was defending the stock-piling of zero-days while The Wall Street Journal and other media focused on Alexander's claims about Snowden working for Russian intelligence. In July 2012, in response to a question from Jeff Moss, founder of the DEF CON hacker convention, "... does the NSA really keep a file on everyone?," Alexander replied, "No, we don't. Absolutely no. And anybody who would tell you that we're keeping files or dossiers on the American people knows that's not true." In March 2012, in response to questions from Representative Hank Johnson during a United States Congress hearing about allegations made by former NSA officials that the NSA engages in collection of voice and digital information of U.S. citizens, Alexander said that, despite the allegations of "James Bashford" in Wired magazine, the NSA does not collect that data. On July 9, 2012, when asked by a member of the press if a large data center in Utah was used to store data on American citizens, Alexander stated, "No. While I can't go into all the details on the Utah Data Center, we don't hold data on U.S. citizens." At DEF CON 2012, Alexander was the keynote speaker; during the question and answers session, in response to the question "Does the NSA really keep a file on everyone, and if so, how can I see mine?" Alexander replied "Our job is foreign intelligence" and that "Those who would want to weave the story that we have millions or hundreds of millions of dossiers on people, is absolutely false ... From my perspective, this is absolute nonsense." On June 6, 2013, the day after Snowden's revelations, then-Director of National Intelligence James Clapper released a statement admitting the NSA collects telephony metadata on millions of Americans telephone calls. This metadata information included originating and terminating telephone number, telephone calling card number, IMEI number, time and duration of phone calls. Andy Greenberg of Forbes said that NSA officials, including Alexander, in the years 2012 and 2013 "publicly denied—often with carefully hedged words—participating in the kind of snooping on Americans that has since become nearly undeniable." In September 2013, Alexander was asked by Senator Mark Udall if it is the goal of the NSA to "collect the phone records of all Americans", to which Alexander replied: Retirement Alexander announced his retirement on October 16, 2013. His retirement date was March 28, 2014, and his replacement was U.S. 
Navy Vice Admiral Michael S. Rogers. Founder and CEO of IronNet In May 2014, after his retirement from NSA, Alexander founded IronNet Cybersecurity. IronNet provides cybersecurity coverage for private-sector companies using its IronDefense program and a team of cybersecurity analysts and experts. The company is headquartered in Fulton, Maryland with offices in Frederick, Maryland, McLean, Virginia, and New York City. In October 2015, IronNet received $32.5 million in funding from Trident Capital Cybersecurity (now ForgePoint Capital) and Kleiner Perkins Caufield & Byers in a Series A investment. In May 2018, IronNet raised an additional $78 million in a round led by C5 Capital alongside existing investors ForgePoint Capital and Kleiner Perkins Caufield & Byers. Alexander maintains his role as CEO of IronNet today. Amazon appointment Alexander joined Amazon's board of directors, as revealed in an SEC filing on September 9, 2020. Awards and decorations Medals and ribbons Alexander was inducted into the NPS Hall of Fame in 2013. Tax identity theft In the fall of 2014 Alexander told a public forum that someone else had claimed a $9,000 IRS refund in his name, and that the thieves used his identity to apply for about 20 credit cards. References External links NSA biography Press Release, NSA/CSS Welcomes LTG Keith B. Alexander, USA Public Intelligence profile Senate Confirmation of promotion to rank of General IronNet website 1951 births Boston University School of Management alumni Directors of the National Security Agency Living people Naval Postgraduate School alumni Recipients of the Air Medal Recipients of the Defense Superior Service Medal Recipients of the Distinguished Service Medal (US Army) Recipients of the Legion of Merit United States Army generals United States Military Academy alumni Recipients of the Defense Distinguished Service Medal Mass surveillance Identity theft victims Articles containing video clips Recipients of the National Intelligence Distinguished Service Medal Recipients of the Humanitarian Service Medal
2814347
https://en.wikipedia.org/wiki/Programming%20complexity
Programming complexity
Programming complexity (or software complexity) is a term that includes many properties of a piece of software, all of which affect internal interactions. According to several commentators, there is a distinction between the terms complex and complicated. Complicated implies being difficult to understand but with time and effort, ultimately knowable. Complex, on the other hand, describes the interactions between a number of entities. As the number of entities increases, the number of interactions between them would increase exponentially, and it would get to a point where it would be impossible to know and understand all of them. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions and so increases the chance of introducing defects when making changes. In more extreme cases, it can make modifying the software virtually impossible. The idea of linking software complexity to the maintainability of the software has been explored extensively by Professor Manny Lehman, who developed his Laws of Software Evolution from his research. He and his co-Author Les Belady explored numerous possible Software Metrics in their oft-cited book, that could be used to measure the state of the software, eventually reaching the conclusion that the only practical solution would be to use one that uses deterministic complexity models. Measures Many measures of software complexity have been proposed. Many of these, although yielding a good representation of complexity, do not lend themselves to easy measurement. Some of the more commonly used metrics are McCabe's cyclomatic complexity metric Halsteads software science metrics Henry and Kafura introduced Software Structure Metrics Based on Information Flow in 1981 which measures complexity as a function of fan in and fan out. They define fan-in of a procedure as the number of local flows into that procedure plus the number of data structures from which that procedure retrieves information. Fan-out is defined as the number of local flows out of that procedure plus the number of data structures that the procedure updates. Local flows relate to data passed to and from procedures that call or are called by, the procedure in question. Henry and Kafura's complexity value is defined as "the procedure length multiplied by the square of fan-in multiplied by fan-out" (Length ×(fan-in × fan-out)²). A Metrics Suite for Object-Oriented Design was introduced by Chidamber and Kemerer in 1994 focusing, as the title suggests, on metrics specifically for object-oriented code. They introduce six OO complexity metrics; weighted methods per class, coupling between object classes, response for a class, number of children, depth of inheritance tree and lack of cohesion of methods There are several other metrics that can be used to measure programming complexity: Branching complexity (Sneed Metric) Data access complexity (Card Metric) Data complexity (Chapin Metric) Data flow complexity (Elshof Metric) Decisional complexity (McClure Metric) Path Complexity (Bang Metric) Tesler's Law is an adage in human–computer interaction stating that every application has an inherent amount of complexity that cannot be removed or hidden. Types Associated with, and dependent on the complexity of an existing program, is the complexity associated with changing the program. The complexity of a problem can be divided into two parts: Accidental complexity: Relates to difficulties a programmer faces due to the chosen software engineering tools. 
A better-fitting set of tools or a higher-level programming language may reduce it. Accidental complexity is often also a consequence of failing to use the domain to frame the form of the solution, i.e. the code. One practice that can help in avoiding accidental complexity is domain-driven design. Essential complexity: Is caused by the characteristics of the problem to be solved and cannot be reduced. Chidamber and Kemerer Metrics Chidamber and Kemerer proposed a set of programming complexity metrics widely used in measurements and academic articles. They are WMC, CBO, RFC, NOC, DIT, and LCOM, described below: WMC - weighted methods per class: WMC = c_1 + c_2 + ... + c_n, where n is the number of methods in the class and c_i is the complexity of method i. CBO - coupling between object classes: the number of other classes to which the class is coupled (using or being used). RFC - response for a class: RFC = |RS|, where the response set RS = M ∪ R_1 ∪ R_2 ∪ ... ∪ R_n, R_i is the set of methods called by method i, and M is the set of methods in the class. NOC - number of children: the number of classes that inherit this class or a descendant of it. DIT - depth of inheritance tree: the maximum depth of the inheritance tree for this class. LCOM - lack of cohesion of methods: measures how little the class methods share attributes. LCOM = |P| - |Q| if |P| > |Q|, and 0 otherwise, where P is the set of method pairs whose sets of accessed attributes are disjoint, Q is the set of method pairs that share at least one attribute, and I_i is the set of attributes (instance variables) accessed (read from or written to) by the i-th method of the class. See also Software crisis (and subsequent programming paradigm solutions) Software metrics - quantitative measure of some property of a program. References Software metrics Complex systems theory
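To make the definitions of WMC and LCOM above concrete, here is a small sketch over a hand-written class description. The data layout (a method name mapped to a cyclomatic complexity and the set of attributes it touches) is an assumption of the example; real tools extract this information from source code.

```python
# Toy sketch of two Chidamber-Kemerer metrics (WMC and LCOM) computed over a
# hand-written class description.
from itertools import combinations

# method name -> (cyclomatic complexity, set of attributes it reads/writes)
toy_class = {
    "deposit":   (2, {"balance", "history"}),
    "withdraw":  (4, {"balance", "history"}),
    "statement": (3, {"history"}),
    "set_owner": (1, {"owner"}),
}

# WMC: sum of the complexities c_i of the n methods of the class.
wmc = sum(c for c, _ in toy_class.values())

# LCOM (original 1994 form): P = method pairs sharing no attributes,
# Q = pairs sharing at least one; LCOM = |P| - |Q| if positive, else 0.
p = q = 0
for (_, a1), (_, a2) in combinations(toy_class.values(), 2):
    if a1 & a2:
        q += 1
    else:
        p += 1
lcom = max(p - q, 0)

print("WMC  =", wmc)   # 2 + 4 + 3 + 1 = 10
print("LCOM =", lcom)  # 6 pairs: 3 disjoint, 3 sharing -> max(3 - 3, 0) = 0
```

A low LCOM, as in this toy class, indicates that the methods mostly work on the same state; a high value suggests the class may be doing several unrelated jobs.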
640080
https://en.wikipedia.org/wiki/Ghost%20%28disk%20utility%29
Ghost (disk utility)
Ghost (an acronym for general hardware-oriented system transfer) is a disk cloning and backup tool originally developed by Murray Haszard in 1995 for Binary Research. The technology was acquired in 1998 by Symantec. The backup and recovery functionality has been replaced by Symantec System Recovery (SSR), although the Ghost imaging technology is still actively developed and is available as part of Symantec Ghost Solution Suite. History Binary Research developed Ghost in Auckland, New Zealand. After the Symantec acquisition, a few functions (such as translation into other languages) were moved elsewhere, but the main development remained in Auckland until October 2009 at which time much was moved to India. Technologies developed by 20/20 Software were integrated into Ghost after their acquisition by Symantec in April 2000. Ghost 3.1 The first versions of Ghost supported only the cloning of entire disks. However, version 3.1, released in 1997 supports cloning individual partitions. Ghost could clone a disk or partition to another disk or partition or to an image file. Ghost allows for writing a clone or image to a second disk in the same machine, another machine linked by a parallel or network cable, a network drive, or to a tape drive. Ghost 4.0 and 4.1 Version 4.0 of Ghost added multicast technology, following the lead of a competitor, ImageCast. Multicasting supports sending a single backup image simultaneously to other machines without putting greater stress on the network than by sending an image to a single machine. This version also introduced Ghost Explorer, a Windows program which supports browsing the contents of an image file and extract individual files from it. Explorer was subsequently enhanced to support adding and deleting files in an image with FAT, and later with ext2, ext3 and NTFS file systems. Until 2007, Ghost Explorer could not edit NTFS images. Ghost Explorer could work with images from older versions but only slowly; version 4 images contain indexes to find files rapidly. Version 4.0 also moved from real-mode DOS to 286 protected mode. The additional memory available allows Ghost to provide several levels of compression for images, and to provide the file browser. In 1998, Ghost 4.1 supports password-protected images. Ghost 5.0 (Ghost 2000) Version 5.0 moved to 386 protected mode. Unlike the text-based user interface of earlier versions, 5.0 uses a graphical user interface (GUI). The Binary Research logo, two stars revolving around each other, plays on the main screen when the program is idle. In 1998, Gdisk, a script-based partition manager, was integrated in Ghost. Gdisk serves a role similar to Fdisk, but has greater capabilities. Ghost for NetWare A Norton Ghost version for Novell NetWare (called 2.0), released around 1999, supports NSS partitions (although it runs in DOS, like the others). Ghost 6.0 (Ghost 2001) Ghost 6.0, released in 2000, includes a management console for managing large numbers of machines. The console communicates with client software on managed computers and allows a system administrator to refresh the disk of a machine remotely. As a DOS-based program, Ghost requires machines running Windows to reboot to DOS to run it. Ghost 6.0 requires a separate DOS partition when used with the console. Ghost 7.0 / Ghost 2002 Released March 31, 2001, Norton Ghost version 7.0 (retail) was marketed as Norton Ghost 2002 Personal Edition. 
Ghost 7.5 Released December 14, 2001, Ghost 7.5 creates a virtual partition, a DOS partition which actually exists as a file within a normal Windows file system. This significantly eased systems management because the user no longer had to set up their own partition tables. Ghost 7.5 can write images to CD-R discs. Later versions can write DVDs. Symantec Ghost 8.0 Ghost 8.0 can run directly from Windows. It is well-suited for placement on bootable media, such as BartPE′s bootable CD. The corporate edition supports unicast, multicast and peer-to-peer transfers via TCP/IP. Ghost 8.0 supports NTFS file system, although NTFS is not accessible from a DOS program. Transition from DOS The off-line version of Ghost, which runs from bootable media in place of the installed operating system, originally faced a number of driver support difficulties due to limitations of the increasingly obsolete 16-bit DOS environment. Driver selection and configuration within DOS was non-trivial from the beginning, and the limited space available on floppy disks made disk cloning of several different disk controllers a difficult task, where different SCSI, USB, and CD-ROM drives were involved. Mouse support was possible but often left out due to the limited space for drivers on a floppy disk. Some devices such as USB often did not work using newer features such as USB 2.0, instead only operating at 1.0 speeds and taking hours to do what should have taken only a few minutes. As widespread support for DOS went into decline, it became increasingly difficult to get hardware drivers for DOS for the newer hardware. Disk imaging competitors to Ghost have dealt with the decline of DOS by moving to other recovery environments such as FreeBSD, Linux or Windows PE, where they can draw on current driver development to be able to image newer models of disk controllers. Nevertheless, the DOS version of Ghost on compatible hardware configurations works much faster than most of the *nix based image and backup tools. Ghost 8 and later are Windows programs; as such, they can run on Windows PE, BartPE or Hiren's BootCD and use the same plug and play hardware drivers as a standard desktop computer, making hardware support for Ghost much simpler. Norton Ghost 2003 Norton Ghost 2003, a consumer edition of Ghost, was released on September 6, 2002. Available as an independent product, Norton Ghost 2003 was also included as a component of Norton SystemWorks 2003 Professional. A simpler, non-corporate version of Ghost, Norton Ghost 2003 does not include the console but has a Windows front-end to script Ghost operations and create a bootable Ghost diskette. The machine still needs to reboot to the virtual partition, but the user does not need to interact with DOS. Symantec deprecated LiveUpdate support for Norton Ghost 2003 in early 2006. Symantec Ghost Solution Suite 1.0 (Ghost 8.2) Released November 15, 2004, Symantec renamed the Enterprise version of Ghost to Symantec Ghost Solution Suite 1.0. This helped clarify the difference between the consumer and business editions of the product. According to Symantec, Symantec Ghost and Norton Ghost are two separate product lines based around different technologies developed by different teams. This was further defined in February 2006, with the release of Norton Save And Restore (also known as Norton Backup And Restore), a standalone backup application based on Ghost 10.0. 
Symantec Ghost Solution Suite 1.1 (Ghost 8.3) Ghost Solution Suite 1.1 is a bundle of an updated version of Ghost, Symantec Client Migration (a user data and settings migration tool) and the former PowerQuest equivalent, DeployCenter (using PQI images). Ghost Solution Suite 1.1 was released on December 2005. It can create an image file that is larger than 2 GB. (In Ghost 8.2 or earlier, such image files are automatically split into two or more segments, so that each segment has a maximum size of 2 GB.) Other new features include more comprehensive manufacturing tools, and the ability to create a "universal boot disk". Acquisition of PowerQuest At the end of 2003, Symantec acquired its largest competitor in the disk-cloning field, PowerQuest. On August 2, 2004, Norton Ghost 9.0 was released as a new consumer version of Ghost, which is based on PowerQuest's Drive Image version 7, and provides Live imaging of a Windows system. Ghost 9 continues to leverage the PowerQuest file format, meaning it is not backward compatible with previous versions of Ghost. However, a version of Ghost 8.0 is included on the Ghost 9 recovery disk to support existing Ghost customers. Norton Ghost 9.0 (includes Ghost 2003) Ghost 9.0 was released August 2, 2004. It represents a significant shift in the consumer product line from Ghost 2003, in several ways: It uses a totally different code base, based on the DriveImage/V2i Protector product via Symantec's acquisition of PowerQuest. It is a Windows program that must be installed on the target system. Images can be made while Windows is running, rather than only when booted directly into DOS mode. Incremental images (containing only changes since the last image) are supported. Requires Product Activation in order to function fully. The bootable environment on the Ghost 9 CD is only useful for recovery of existing backups. It cannot be used to create new images. Since Ghost 9 does not support the older .gho format disk images, a separate CD containing Ghost 2003 is included in the retail packaging for users needing to access those older images. The limitations of Ghost 9 compared to Ghost 2003 were not well-communicated by Symantec, and resulted in many dissatisfied customers who purchased Ghost 9 expecting the previous version's features (like making images from the bootable Ghost environment, no installation required, and no product activation). Norton Ghost 10.0 Supports creating images on CDs, DVDs, Iomega Zip and Jaz disks as well as IEEE 1394 (FireWire) and USB mass storage devices. Supports encrypting images and Maxtor external hard disk drives with Maxtor OneTouch buttons. Ghost 10.0 is compatible with previous versions, but not with future versions. Norton Save And Restore 1.0 (Ghost 10.0) Norton Save And Restore 1.0, released in February 2006, was the renamed consumer version of Ghost. It used Ghost 10.0's engine, with the addition of features to allow backup and restoration of individual files. Symantec Ghost Solution Suite 2.0 (Ghost 11.0) Ghost Solution Suite 2.0 was released in November 2006. This version provides significant improvements in performance, as well as the ability to edit NTFS images. This version also adds support for Windows Vista, x64 versions of Windows, and GUID Partition Table (GPT) disks. However, the software does not fully support systems with Extensible Firmware Interface (EFI) firmware. Ghost 11.0 supports saving and restoring from native Ghost image format (.gho, .ghs) and raw images (.img, .raw). 
Norton Ghost 12.0 Ghost 12.0 includes Windows Vista support with an updated and more thorough user interface. It supports both full system backup and individual files or folders backup. This version provides a "LightsOut Restore" feature, which restores a system from an on-disk software recovery environment similar to Windows RE, thereby allowing recovery without a bootable CD. Upon system startup, a menu asks whether start the operating system or the LightsOut recovery environment. LightsOut restore would augment the ISO image, which comes with Ghost. The latter contains a recovery environment that can recover a system without a working operating system. Norton Save & Restore 2.0 (Ghost 13.0) NSR 2.0 has fewer features in comparison to Norton Ghost 12. NSR 2.0 offers one-time backups, file and folder backup, simplified schedule editor, Maxtor OneTouch integration and modifiable Symantec recovery disc. This version supports 32-bit and 64-bit versions of Windows XP and Vista. Norton Ghost 14.0 Version 14.0 uses Volume Snapshot Service (VSS) to make backups and can store backups to an FTP site. Ghost can connect to ThreatCon, a Symantec service that monitors malware activity around the world, and performs incremental backups when a specific threat level is reached. Other features include the ability to back up to network-attached storage devices and support for NTFS partitions up to 16TB. Ghost can manage other installations of version 12.0 or later across a network. This version no longer supports opening .gho image files. It stores images in .v2i format. Incremental backup images created with Norton Ghost are saved with .iv2i filename extensions alone the original full backup (with .v2i filename extension) on a regular basis. Older .gho image files can be restored using Ghost Explorer, a separate utility. Symantec Ghost Solution Suite 2.5 (Ghost 11.5) The ghost software for enterprise, including Ghost 11.5, was released in May 2008. New features include: As of January 6, 2010, the latest build from Live Update is 11.5.1.2266 (Live Update 5 (LU5)). This updates Ghost Solution Suite to 2.5.1 and provides support for Windows 7 and Windows Server 2008 R2. Furthermore, Ghost 11.5 is compatible with BartPE's bootable CD using a PE Builder plug-in for Symantec Ghost 11. Norton Ghost 15.0 According to the Norton community on Symantec's site, the following features are available in Norton Ghost 15: Symantec Ghost Solution Suite 3.0 (Ghost 12.0) The ghost software for enterprise, including Ghost 12.0 and Deployment Solution 6.9, was announced in March 2015. Symantec Ghost Solution Suite 3.1 (Ghost 12.0) The ghost software for enterprise, including Ghost 12.0 and Deployment Solution 6.9, was released in 7 March 2016. Symantec Ghost Solution Suite 3.2 (Ghost 12.0) The ghost software for enterprise, including Ghost 12.0 and Deployment Solution 6.9, was released in 18 May 2017. Release Update 3, which was released 22 September 2017, added support for the ext4 filesystem. Symantec Ghost Solution Suite 3.3 (Ghost 12.0) The ghost software for enterprise, including Ghost 12.0 and Deployment Solution 6.9, was released in 31 October 2018. This release added support for Ghost Solution Suite Web Console, iPXE, Windows Server 2016, Smart raw imaging, 4K native drive support. Features Ghost is marketed as an OS deployment solution. Its capture and deployment environment requires booting to a Windows PE environment. 
This can be accomplished by creating an ISO (to burn to a DVD) or a USB bootable disk, installed to a client as an automation folder or delivered by a pxe server. This provides an environment to perform offline system recovery or image creation. Ghost can mount a backup volume to recover individual files. Ghost can copy the contents of one volume to another or convert a volume contents to a virtual disk in VMDK or VHD format. Initially, Ghost supported only FAT file system, although it could copy (but not resize) other file systems by performing a sector-by-sector transfer. Ghost added support for NTFS later in 1996, and also provided a program, Ghostwalker, to change the Security ID (SID) that made Windows NT systems distinguishable from each other. Ghostwalker is capable of modifying the name of the Windows NT computer from its own interface. Ghost added support for the ext2 file system in 1999 and for ext3 subsequently. Support for ext4 was added in September 2017. Discontinuation Norton Ghost was discontinued on April 30, 2013. Support via chat and knowledge base was available until June 30, 2014. Until it was removed, the Symantec Ghost Web page invited Ghost customers to try Symantec System Recovery, described as software for backup and disaster recovery. See also Comparison of disk cloning software References External links Symantec Ghost – Solution suite (previously Norton Ghost Enterprise Edition) product page Symantec News NortonLifeLock software Proprietary software Backup software Storage software Disk cloning
90541
https://en.wikipedia.org/wiki/ICON%20%28microcomputer%29
ICON (microcomputer)
The ICON (also the CEMCorp ICON, Burroughs ICON, and Unisys ICON, and nicknamed the bionic beaver) was a networked personal computer built specifically for use in schools, to fill a standard created by the Ontario Ministry of Education. It was based on the Intel 80186 CPU and ran an early version of QNX, a Unix-like operating system. The system was packaged as an all-in-one machine similar to the Commodore PET, and included a trackball for mouse-like control. Over time a number of GUI-like systems appeared for the platform, based on the system's NAPLPS-based graphics system. The ICON was widely used, mostly in high schools in the mid to late 1980s, but disappeared after that time with the widespread introduction of PCs and Apple Macintoshes. History Development Origin In 1981, four years after the first microcomputers for mainstream consumers appeared, the Ontario Ministry of Education sensed that microcomputers could be an important component of education. In June the Minister of Education, Bette Stephenson, announced the need for computer literacy for all students and formed the Advisory Committee on Computers in Education to guide their efforts. She stated that: It is now clear that one of the major goals that education must add to its list of purposes, is computer literacy. The world of the very near future requires that all of us have some understanding of the processes and uses of computers. According to several contemporary sources, Stephenson was the driving force behind the project; "whenever there was a problem she appears to have 'moved heaven and earth' to get it back on the tracks." The Ministry recognized that a small proportion of teachers and other school personnel were already quite involved with microcomputers and that some schools were acquiring first-generation machines. These acquisitions were uneven, varying in brand and model not just between school boards, but among schools within boards and even classroom to classroom. Among the most popular were the Commodore PET which had a strong following in the new computer programming classes due to its tough all-in-one construction and built-in support for Microsoft BASIC, and the Apple II which had a wide variety of educational software, mostly aimed at early education. The Ministry wanted to encourage uses of microcomputers that supported its curriculum guidelines and was willing to underwrite the development of software for that purpose. However, the wide variety of machines being used meant that development costs had to be spread over several platforms. Additionally, many of the curriculum topics they wanted to cover required more storage or graphics capability than at least some of the machines then in use, if not all of them. Educational software was in its infancy, and many hardware acquisitions were made without a clear provision for educational software or a plan for use. A series of Policy Memos followed outlining the Committee's views. Policy Memo 47 stated that computers are to be used creatively, and for information retrieval; at the time most systems were used solely for programming. They also announced funding for the development of educational software on an estimated 6000 machines. The Ministry decided that standardizing the computers would reduce maintenance costs, and allow for the development of consistent educational software. The Ministry contracted the Canadian Advanced Technology Alliance (CATA) to help develop specifications for the new system. 
Design selection Policy Memos 68–73 followed in early 1983, stating that none of the existing platforms had all the qualities needed to be truly universal. The idea of a new machine quickly gained currency, with the added bonus that it would help develop a local microcomputer industry. In order to make the new machine attractive, the Ministry agreed to fund up to 75% of the purchase price from their own budget. When the plan was first announced there was widespread concern among educators. Their main complaint is that the Ministry would select a standard that was not powerful enough for their needs. A secondary concern was that the time delay between announcing and introducing the computer would be lengthy, a period in which existing purchases could be funded instead. The first set of concerns were rendered moot when the specifications were introduced in March 1983 in the "Functional Requirements for Microcomputers for Educational Use in Ontario Schools—Stage I." The physical design required a PET-like all-in-one case, headphones output for voice and sound effects, and a trackball for mouse-like pointing support. Inside the case, the specification called for a processor and support systems to allow a multitasking operating system to be used, selecting the Intel 80186 as the CPU. Color graphics were specified, at least as an option, along with monochrome and color monitors on top. Voice synthesis was built in, and the keyboard provided for accented characters. Additionally, the systems would include no local storage at all, and would instead rely on a networked file server containing a hard drive. The specification was considerably in advance of the state of the art of the time, and when it was delivered commentators immediately reversed their earlier concerns and suggested the machine was too powerful, and would therefore be available in too small numbers. CEMCORP To deliver such a machine, Robert Arn, a member of the CATA team, set up CEMCORP, the Canadian Educational Microprocessor Corporation. When the specification was announced in 1983, CEMCORP was announced as the winner of a $10 million contract to develop and supply the initial machines. An additional $5 million in funding was announced to cover development of new software applications, while the Ontario Institute for Studies in Education (OISE) was asked to convert 30 existing programs to the new machine. In order to be able to afford what was expected to be an expensive machine, the Ministry announced a special "Recognized Extraordinary Expenditure" (REE) grant that would provide for up to 75% of the purchase costs of machines meeting the "Grant Eligible Microcomputer Systems" or "G.E.M.S." specifications. At the time, only the ICON met the GEMS requirements, which cut its purchase price from around CAD$2500 to a mere $495 (USD$2700 and $696) – less expensive than most existing microcomputers. The entire program was politically explosive throughout its gestation as a result, causing a continual stream of news stories. Critics complained that other machines could be bought for half the cost, but supporters pushed back that no other machine at that price point supported the GEMS specifications. The release of the IBM Personal Computer/AT in 1984 reopened the debate and made nightly news, as it used a newer and more advanced CPU than the ICON: the 80286. Around this time other platforms, such as the Waterloo PORT networking system, gained approval for the government support that had originally been the province of the ICON. 
Production The basic ICON design had reached "beta quality" after just over a year, using off the shelf parts, the hardware manufactured by Microtel and operating system from Quantum Software Systems. The original Microtel machines were first introduced to Ontario schools in 1984 in small numbers, packaged in a short-lived dark brown case. At this point Burroughs Canada was brought in to sell and support the machine. Soon, Sperry and Burroughs merged to form Unisys in 1986. Several generations of ICON machines were produced, evolving steadily to become more PC-like. They were built into the early 1990s, but by this point were used almost entirely for running DOS and Windows programs. Cancellation Throughout the project's lifetime it was subject to continual debate and much political rhetoric. A 1992 article on the topic complained: Bette Stephenson favoured top-down decision making and as a result got trapped by her tunnel vision. Her ICON computer fiasco drained millions from the provincial treasury and created a white elephant scorned by boards and shunned by teachers.... Computer resources were forced upon the school system as a result of a top-down government decision that was taken precipitously and without research. The Ministry ceased all support for the ICON in 1994, making it orphaned technology, and the Archives of Ontario declined to take ICON hardware and copies of the ICON software, which were destroyed. This was controversial in its own right, as others maintained that it could be sent to other schools that lacked extensive Information Technology. Despite the development of the ICON program, equality among schools was not assured because each school community could afford different capital outlays depending on the parents' affluence. Design The ICON system was based on a workstation/file server model, with no storage local to the workstations. The workstations and servers were internally similar, based on Intel 80186 microprocessors, and connected to each other using ARCNET. Several upgrades were introduced into the ICON line over time. The ICON2 sported a redesigned case, a detached keyboard with integrated trackball, expanded RAM, and facilities for an internal hard disk. The CPU was upgraded to the 386 in the Series III, while an "ICON-on-a-card" for PCs also appeared. The original ICON workstations were housed in a large wedge-shaped steel case, with a full-sized keyboard mounted slightly left-of-center and a trackball mounted to the right. A rubber bumper-strip ran along the front edge, a precaution against a particular type of cut users sometimes got from the PET's sharp case. The EGA monitor was mounted on top of a tilt-and-swivel mount, a welcome improvement on the PET. It also included TI's TMS5220 speech chip, originally designed for the TI-99, and would speak the vaguely obscene word "dhtick" when starting up. Early Microtel machines were dark brown, but the vast majority of examples in the classroom were a more nondescript beige. The fileserver, sometimes referred to as the LexICON, was a simple rectangular box with an internal 10MB hard drive and a 5.25" floppy drive opening to the front, and parallel port for a shared printer. Later Lexicons included a 64MB hard disk, divided into two partitions. Unlike the PET's floppy system, however, users of the ICON used Unix commands to copy data to their personal floppy disks from its "natural" location in the user's home directory on the hard drive. 
Both the client and server ran the Unix-like QNX as their operating system with the addition of network file-sharing, the basic portions of it embedded in ROM. To this they added a NAPLPS/Telidon-based graphics system, which was intended to be used with the trackball to make interactive programs. The system included a Paint programme that used the trackball, but did not include a usable GUI, although there were several attempts to produce one. QNX 2.0.1 included a modest one called "House", and another was built at least to the prototype stage by Helicon Systems in Toronto and appeared in one form as Ambience, though its capabilities were limited. A later upgrade called ICONLook improved upon this greatly, but it was apparently too slow to use realistically. Helicon Systems also produced a MIDI interface for the original ICON. The biggest problem for the machine was a lack of software. The ICON was originally designed to let teachers create and share their own lessonware, using a simple hypertext-based system where pages could either link to other pages or run programs written in C. The "anyone can create lessonware" model was rejected by the Ministry of Education before the ICON shipped (in favour of a model under which the Ministry funded and controlled all lessonware), leaving the ICON with only the QNX command line interface and the Cemcorp-developed text editor application. The various Watcom programming languages were quickly ported to the system, but beyond that, the educational software teachers could expect was few and far between. The Ministry contracted for a number of applications, but the small target market and the sometimes difficult process required to secure such contracts were significant obstacles for realistic commercial development. Software The Bartlett Saga, a four-part game about the History of Canada; consisting of Part I: Refugees in the Wilderness: United Empire Loyalists, 1784-1793; Part II: The Rebels: Rebellion in Upper Canada, 1830-1844; Part III: United We Stand: Confederation, 1864-1873; Part IV: The Golden West: Settling the Plains, 1897-1911 Build-A-Bird [Ergonomics Lab, University of Toronto] Cargo Sailor (1987), a game about delivering goods to different ports around the world, given the latitude and longitude. Cross Country Canada, a game of travelling across Canada in a truck, picking up and delivering cargo. Ernie's Big Splash, a video game including Sesame Street characters. Logo, an implementation of the Logo programming language. Northwest Fur Trader, educational software simulating the fur trade in Canada. Lemonade Stand, an educational game of setting lemonade prices based on the weather forecast. A Day in the Life Of, a strange game following the life of a student. There was an arcade game inside it where you could catch rabbits. Spectricon, the drawing software. It used a particularly beautiful noise generator to create dithering patterns. Offshore Fishing, the fishing game where you try to catch fish and sell for money but avoid the shark at all costs as he will swim through your fishing net. Watfor, the WATCOM FORTRAN programming language. Chat, the OS included facilities for sending system-wide messages, which students abused often. References Bibliography Ivor Goodson and John Marshal Mangan, "Computer Studies as Symbolic and Ideological Action: The Genealogy of the ICON", Taylor & Francis, 1998, (originally published in Curriculum Journal, Volume 3 Issue 3 (Autumn 1992), pg. 
261 – 276 John Marshall Mangan, "The Politics of Educational Computing in Ontario", Sociology of Education in Canada, (ed Lorna Erwin and David MacLennan), Copp Clark Longman, 1994, pg. 263–277 Robert J. D. Jones, "Shaping Educational Technology: Ontario's Educational Computing Initiative", Innovations in Education and Teaching International, Volume 28 Issue 2 (May 1991), pg. 129–134 Robert McLean, "An Educational Infrastructure for Microcomputers in Ontario", Technological Horizons In Education, Volume 16 Number 5 (December 1988), pg. 79–83 Barbara Wierzbicki, "Icon: Canada's system for schools", InfoWorld, Volume 5 Number 45 (7 November 1983), pg. 33–34 External links The Burroughs ICON Computer by Anthony William Anjo (Archive.org Backup) OLD-COMPUTERS.COM Museum – Unisys ICON Icon All-in-one desktop computers Education in Ontario Canadian inventions Information technology in Canada Educational hardware Orphaned technology 8086-based home computers Thin clients Computer-related introductions in 1984 1984 establishments in Canada 1994 disestablishments in Canada 16-bit computers
28063636
https://en.wikipedia.org/wiki/Clip%20font
Clip font
Clip fonts or split fonts are non-Unicode fonts that assign glyphs of Brahmic scripts, such as Devanagari, at code positions intended for glyphs of the Latin script, or that produce glyphs not found in Unicode by using its Private Use Area (PUA). Comparison Brahmic scripts have an inherent vowel without attached diacritics. Vowels (excluding the inherent vowel) that immediately follow a consonant are written as a diacritic. For example, a Devanagari consonant in ‘base form’ in Unicode is ‘घ’ /ɡʱə/, where the inherent vowel is ‘अ’ /ə/. If the vowel ‘आ’ /aː/ were to follow this Devanagari consonant, then the ‘ा’ diacritic is attached, resulting in ‘घा’. Consonants that are part of conjunct clusters may assume a conjunct form such as ‘घ्‍ ’ in Devanagari. Consonant–consonant clusters Devanagari consonants that are part of conjunct clusters (except for the final consonant in a conjunct cluster, which is in its ‘base form’) are followed by the halant and zero-width joiner characters. For example, ‘घ्य’ /ɡʱjə/ is formed by ‘घ’, followed by the halant diacritic, and finally by ‘य’. Clip fonts Consonant–vowel clusters In clip fonts the ‘base form’ of a character is the conjunct form, such as ‘घ्‍ ’ in Devanagari, and diacritics are added to indicate that the consonant is immediately followed by a vowel (including the inherent vowel). For example, a Devanagari consonant in ‘base form’ in a clip font is ‘घ्‍ ’ /ɡʱ/. If the inherent vowel ‘अ’ /ə/ were to follow this Devanagari consonant, then the ‘ा’ diacritic would be attached to it, resulting in ‘घ’. If a vowel other than the inherent ‘अ’ /ə/, such as ‘आ’ /aː/, were to follow this Devanagari consonant, then the ‘ा’ diacritic is attached twice, resulting in ‘घा’, with a Latin script representation of ‘Gaa’. Consonant–consonant clusters Devanagari consonants that are part of conjunct clusters are written consecutively in their ‘base forms’ (except for the last consonant in a conjunct cluster, which is in its ‘inherent vowel form’). For example, ‘घ्य’ /ɡʱjə/ is formed by ‘घ्‍ ’, followed by ‘य्‍ ’, and followed by the ‘ा’ diacritic, with a Latin script representation of ‘Gya’. Tables comparing Unicode and clip fonts The ‘घा’ ligature The ‘घ्य’ ligature Latin script characters A computer assumes that text written with a clip font is in the Latin script. Thus, when the font is changed to another Latin script font that is not a clip font, the Latin script characters on the keys that were used to type the text are displayed instead of text in the original Brahmic script. As a result, the clip font has to be available wherever text in the Brahmic script is desired, and clip fonts may not be uniformly compatible across computers and the Internet. This weakness is sometimes exploited as a crude form of encryption. Purpose and availability Clip fonts arose as a result of the perceived complexity of keyboard layout switching in common operating system setups, as well as defective internationalization capabilities in older software. English computer keyboards are common in India. Clip font users can easily write Hindi and other Indic languages using those keyboards. In India, people switch quickly among multiple languages and scripts. At least 40 commercial clip fonts are available. Together with ASCII, they are used by custom keyboard drivers for Indic scripts that are intended to limit keystrokes. Such helper software often broke following operating system updates. One of the popular clip fonts for Devanagari is Kiran Fonts' KF-Kiran, because it does not require special software and can be used in older applications. 
Many users have successfully ported this TrueType font to operating systems such as Mac OS, Linux, some flavours of Unix, and Android. Clip fonts are sometimes used for scripts that are not yet encoded in Unicode. The "correct" way to handle these is to encode them temporarily in Unicode's Private Use Area (PUA). Users in India often find that only English-language keyboards are available. List of clip fonts See also Indic computing ISCII References External links Marathi and Hindi Calligraphy Fonts Free are available under the section titled ‘2. Marathi Font, Hindi calligraphy fonts free for personal use’ 10000+ Marathi Fonts Download Free are available under the section titled ‘1. Legacy Hindi Font’ Hindi Devanagari clip fonts are available under the section titled ‘2. Marathi Typing Font’ Devanagari clip fonts are available under the section titled ‘1. Legacy Hindi Font’ Devanagari typography Indic computing
28319
https://en.wikipedia.org/wiki/Smalltalk
Smalltalk
Smalltalk is an object-oriented, dynamically typed reflective programming language. Smalltalk was created as the language underpinning the "new world" of computing exemplified by "human–computer symbiosis". It was designed and created in part for educational use, specifically for constructionist learning, at the Learning Research Group (LRG) of Xerox PARC by Alan Kay, Dan Ingalls, Adele Goldberg, Ted Kaehler, Diana Merry, Scott Wallace, and others during the 1970s. The language was first generally released as Smalltalk-80. Smalltalk-like languages are in active development and have gathered loyal communities of users around them. ANSI Smalltalk was ratified in 1998 and represents the standard version of Smalltalk. Smalltalk took second place for "most loved programming language" in the Stack Overflow Developer Survey in 2017, but it was not among the 26 most loved programming languages of the 2018 survey. History There are a large number of Smalltalk variants. The unqualified word Smalltalk is often used to indicate the Smalltalk-80 language, the first version to be made publicly available and created in 1980. The first hardware environments to run Smalltalk VMs were Xerox Alto computers. Smalltalk was the product of research led by Alan Kay at Xerox Palo Alto Research Center (PARC); Alan Kay designed most of the early Smalltalk versions, Adele Goldberg wrote most of the documentation, and Dan Ingalls implemented most of the early versions. The first version, termed Smalltalk-71, was created by Kay in a few mornings on a bet that a programming language based on the idea of message passing inspired by Simula could be implemented in "a page of code". A later variant used for research work is now termed Smalltalk-72 and influenced the development of the Actor model. Its syntax and execution model were very different from modern Smalltalk variants. After significant revisions which froze some aspects of execution semantics to gain performance (by adopting a Simula-like class inheritance model of execution), Smalltalk-76 was created. This system had a development environment featuring most of the now familiar tools, including a class library code browser/editor. Smalltalk-80 added metaclasses, to help maintain the "everything is an object" (except private instance variables) paradigm by associating properties and behavior with individual classes, and even primitives such as integer and boolean values (for example, to support different ways to create instances). Smalltalk-80 was the first language variant made available outside of PARC, first as Smalltalk-80 Version 1, given to a small number of firms (Hewlett-Packard, Apple Computer, Tektronix, and Digital Equipment Corporation (DEC)) and universities (UC Berkeley) for peer review and implementation on their platforms. Later (in 1983) a general availability implementation, named Smalltalk-80 Version 2, was released as an image (platform-independent file with object definitions) and a virtual machine specification. ANSI Smalltalk has been the standard language reference since 1998. Two of the currently popular Smalltalk implementation variants are descendants of those original Smalltalk-80 images. Squeak is an open source implementation derived from Smalltalk-80 Version 1 by way of Apple Smalltalk. VisualWorks is derived from Smalltalk-80 version 2 by way of Smalltalk-80 2.5 and ObjectWorks (both products of ParcPlace Systems, a Xerox PARC spin-off company formed to bring Smalltalk to the market). 
As an interesting link between generations, in 2001 Vassili Bykov implemented Hobbes, a virtual machine running Smalltalk-80 inside VisualWorks. (Dan Ingalls later ported Hobbes to Squeak.) From the late 1980s to the mid-1990s, Smalltalk environments—including support, training and add-ons—were sold by two competing organizations: ParcPlace Systems and Digitalk, both California-based. ParcPlace Systems tended to focus on the Unix/Sun Microsystems market, while Digitalk focused on Intel-based PCs running Microsoft Windows or IBM's OS/2. Both firms struggled to take Smalltalk mainstream due to Smalltalk's substantial memory needs, limited run-time performance, and initial lack of supported connectivity to SQL-based relational database servers. While the high price of ParcPlace Smalltalk limited its market penetration to mid-sized and large commercial organizations, the Digitalk products initially tried to reach a wider audience with a lower price. IBM initially supported the Digitalk product, but then entered the market with a Smalltalk product in 1995 called VisualAge/Smalltalk. Easel introduced Enfin at this time on Windows and OS/2. Enfin became far more popular in Europe, as IBM introduced it into IT shops before its development of IBM Smalltalk (later VisualAge). Enfin was later acquired by Cincom Systems, and is now sold under the name ObjectStudio, and is part of the Cincom Smalltalk product suite. In 1995, ParcPlace and Digitalk merged into ParcPlace-Digitalk and then rebranded in 1997 as ObjectShare, located in Irvine, CA. ObjectShare (NASDAQ: OBJS) was traded publicly until 1999, when it was delisted and dissolved. The merged firm never managed to find an effective response to Java in terms of market positioning, and by 1997 its owners were looking to sell the business. In 1999, Seagull Software acquired the ObjectShare Java development lab (including the original Smalltalk/V and Visual Smalltalk development team), and still owns VisualSmalltalk, although worldwide distribution rights for the Smalltalk product remained with ObjectShare, which then sold them to Cincom. VisualWorks was sold to Cincom and is now part of Cincom Smalltalk. Cincom has backed Smalltalk strongly, releasing multiple new versions of VisualWorks and ObjectStudio each year since 1999. Cincom, GemTalk, and Instantiations continue to sell Smalltalk environments. IBM has ended support for ("end of life'd") VisualAge Smalltalk, having decided in the late 1990s to back Java instead; it is supported by Instantiations, Inc., which renamed the product VA Smalltalk (VAST Platform) and continues to release new versions yearly. The open Squeak implementation has an active community of developers, including many of the original Smalltalk community, and has recently been used to provide the Etoys environment on the OLPC project, the Croquet Project toolkit for developing collaborative applications, and the Open Cobalt virtual world application. GNU Smalltalk is a free software implementation of a derivative of Smalltalk-80 from the GNU project. Pharo Smalltalk is a fork of Squeak oriented toward research and use in commercial environments. A significant development that has spread across all Smalltalk environments as of 2016 is the increasing use of two web frameworks, Seaside and AIDA/Web, to simplify the building of complex web applications. Seaside has seen considerable market interest with Cincom, Gemstone, and Instantiations incorporating and extending it. 
Influences Smalltalk was one of many object-oriented programming languages based on Simula. Smalltalk is also one of the most influential programming languages. Virtually all of the object-oriented languages that came after—Flavors, CLOS, Objective-C, Java, Python, Ruby, and many others—were influenced by Smalltalk. Smalltalk was also one of the most popular languages for agile software development methods, rapid application development (RAD) or prototyping, and software design patterns. The highly productive environment provided by Smalltalk platforms made them ideal for rapid, iterative development. Smalltalk emerged from a larger program of Advanced Research Projects Agency (ARPA) funded research that in many ways defined the modern world of computing. In addition to Smalltalk, working prototypes of things such as hypertext, GUIs, multimedia, the mouse, telepresence, and the Internet were developed by ARPA researchers in the 1960s. Alan Kay (one of the inventors of Smalltalk) also described a tablet computer he called the Dynabook which resembles modern tablet computers like the iPad. Smalltalk environments were often the first to develop what are now common object-oriented software design patterns. One of the most popular is the model–view–controller (MVC) pattern for user interface design. The MVC pattern enables developers to have multiple consistent views of the same underlying data. It's ideal for software development environments, where there are various views (e.g., entity-relation, dataflow, object model, etc.) of the same underlying specification. Also, for simulations or games where the underlying model may be viewed from various angles and levels of abstraction. In addition to the MVC pattern, the Smalltalk language and environment were highly influential in the history of the graphical user interface (GUI) and the what you see is what you get (WYSIWYG) user interface, font editors, and desktop metaphors for UI design. The powerful built-in debugging and object inspection tools that came with Smalltalk environments set the standard for all the integrated development environments, starting with Lisp Machine environments, that came after. Object-oriented programming As in other object-oriented languages, the central concept in Smalltalk-80 (but not in Smalltalk-72) is that of an object. An object is always an instance of a class. Classes are "blueprints" that describe the properties and behavior of their instances. For example, a GUI's window class might declare that windows have properties such as the label, the position and whether the window is visible or not. The class might also declare that instances support operations such as opening, closing, moving and hiding. Each particular window object would have its own values of those properties, and each of them would be able to perform operations defined by its class. A Smalltalk object can do exactly three things: Hold state (references to other objects). Receive a message from itself or another object. In the course of processing a message, send messages to itself or another object. The state an object holds is always private to that object. Other objects can query or change that state only by sending requests (messages) to the object to do so. Any message can be sent to any object: when a message is received, the receiver determines whether that message is appropriate. 
Alan Kay has commented that despite the attention given to objects, messaging is the most important concept in Smalltalk: "The big idea is 'messaging'—that is what the kernel of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase)." Unlike most other languages, Smalltalk objects can be modified while the system is running. Live coding and applying fixes ‘on-the-fly’ is a dominant programming methodology for Smalltalk and is one of the main reasons for its efficiency. Smalltalk is a "pure" object-oriented programming language, meaning that, unlike C++ and Java, there is no difference between values which are objects and values which are primitive types. In Smalltalk, primitive values such as integers, booleans and characters are also objects, in the sense that they are instances of corresponding classes, and operations on them are invoked by sending messages. A programmer can change or extend (through subclassing) the classes that implement primitive values, so that new behavior can be defined for their instances—for example, to implement new control structures—or even so that their existing behavior will be changed. This fact is summarized in the commonly heard phrase "In Smalltalk everything is an object", which may be more accurately expressed as "all values are objects", as variables are not. Since all values are objects, classes are also objects. Each class is an instance of the metaclass of that class. Metaclasses in turn are also objects, and are all instances of a class called Metaclass. Code blocks—Smalltalk's way of expressing anonymous functions—are also objects. Reflection Reflection is a term that computer scientists apply to software programs that have the ability to inspect their own structure, for example their parse tree or data types of input and output parameters. Reflection is a feature of dynamic, interactive languages such as Smalltalk and Lisp. Interactive programs with reflection (either interpreted or compiled) maintain the state of all in-memory objects, including the code object itself, which are generated during parsing/compilation and are programmatically accessible and modifiable. Reflection is also a feature of having a meta-model as Smalltalk does. The meta-model is the model that describes the language, and developers can use the meta-model to do things like walk through, examine, and modify the parse tree of an object, or find all the instances of a certain kind of structure (e.g., all instances of the Method class in the meta-model). Smalltalk-80 is a totally reflective system, implemented in Smalltalk-80. Smalltalk-80 provides both structural and computational reflection. Smalltalk is a structurally reflective system which structure is defined by Smalltalk-80 objects. The classes and methods that define the system are also objects and fully part of the system that they help define. The Smalltalk compiler compiles textual source code into method objects, typically instances of CompiledMethod. These get added to classes by storing them in a class's method dictionary. The part of the class hierarchy that defines classes can add new classes to the system. The system is extended by running Smalltalk-80 code that creates or defines classes and methods. In this way a Smalltalk-80 system is a "living" system, carrying around the ability to extend itself at run time. Since the classes are objects, they can be asked questions such as "what methods do you implement?" 
or "what fields/slots/instance variables do you define?". So objects can easily be inspected, copied, (de)serialized and so on with generic code that applies to any object in the system. Smalltalk-80 also provides computational reflection, the ability to observe the computational state of the system. In languages derived from the original Smalltalk-80 the current activation of a method is accessible as an object named via a pseudo-variable (one of the six reserved words), thisContext. By sending messages to thisContext a method activation can ask questions like "who sent this message to me". These facilities make it possible to implement co-routines or Prolog-like back-tracking without modifying the virtual machine. The exception system is implemented using this facility. One of the more interesting uses of this is in the Seaside web framework which relieves the programmer of dealing with the complexity of a Web Browser's back button by storing continuations for each edited page and switching between them as the user navigates a web site. Programming the web server using Seaside can then be done using a more conventional programming style. An example of how Smalltalk can use reflection is the mechanism for handling errors. When an object is sent a message that it does not implement, the virtual machine sends the object the doesNotUnderstand: message with a reification of the message as an argument. The message (another object, an instance of Message) contains the selector of the message and an Array of its arguments. In an interactive Smalltalk system the default implementation of doesNotUnderstand: is one that opens an error window (a Notifier) reporting the error to the user. Through this and the reflective facilities the user can examine the context in which the error occurred, redefine the offending code, and continue, all within the system, using Smalltalk-80's reflective facilities. By creating a class that understands (implements) only doesNotUnderstand:, one can create an instance that can intercept any message sent to it via its doesNotUnderstand: method. Such instances are called transparent proxies. Such proxies can then be used to implement a number of facilities such as distributed Smalltalk where messages are exchanged between multiple Smalltalk systems, database interfaces where objects are transparently faulted out of a database, promises, etc. The design of distributed Smalltalk influenced such systems as CORBA. Syntax Smalltalk-80 syntax is rather minimalist, based on only a handful of declarations and reserved words. In fact, only six "keywords" are reserved in Smalltalk: true, false, nil, self, super, and thisContext. These are properly termed pseudo-variables, identifiers that follow the rules for variable identifiers but denote bindings that a programmer cannot change. The true, false, and nil pseudo-variables are singleton instances. self and super refer to the receiver of a message within a method activated in response to that message, but sends to super are looked up in the superclass of the method's defining class rather than the class of the receiver, which allows methods in subclasses to invoke methods of the same name in superclasses. thisContext refers to the current activation record. The only built-in language constructs are message sends, assignment, method return and literal syntax for some objects. 
From its origins as a language for children of all ages, standard Smalltalk syntax uses punctuation in a manner more like English than mainstream coding languages. The remainder of the language, including control structures for conditional evaluation and iteration, is implemented on top of the built-in constructs by the standard Smalltalk class library. (For performance reasons, implementations may recognize and treat as special some of those messages; however, this is only an optimization and is not hardwired into the language syntax.) The adage that "Smalltalk syntax fits on a postcard" refers to a code snippet by Ralph Johnson, demonstrating all the basic standard syntactic elements of methods: exampleWithNumber: x | y | true & false not & (nil isNil) ifFalse: [self halt]. y := self size + super size. #($a #a 'a' 1 1.0) do: [ :each | Transcript show: (each class name); show: ' ']. ^x < y Literals The following examples illustrate the most common objects which can be written as literal values in Smalltalk-80 methods. Numbers. The following list illustrates some of the possibilities. 42 -42 123.45 1.2345e2 2r10010010 16rA000 The last two entries are a binary and a hexadecimal number, respectively. The number before the 'r' is the radix or base. The base does not have to be a power of two; for example 36rSMALLTALK is a valid number equal to 80738163270632 decimal. Characters are written by preceding them with a dollar sign: $A Strings are sequences of characters enclosed in single quotes: 'Hello, world!' To include a quote in a string, escape it using a second quote: 'I said, ''Hello, world!'' to them.' Double quotes do not need escaping, since single quotes delimit a string: 'I said, "Hello, world!" to them.' Two equal strings (strings are equal if they contain all the same characters) can be different objects residing in different places in memory. In addition to strings, Smalltalk has a class of character sequence objects called Symbol. Symbols are guaranteed to be unique—there can be no two equal symbols which are different objects. Because of that, symbols are very cheap to compare and are often used for language artifacts such as message selectors (see below). Symbols are written as # followed by a string literal. For example: #'foo' If the sequence does not include whitespace or punctuation characters, this can also be written as: #foo Arrays: #(1 2 3 4) defines an array of four integers. Many implementations support the following literal syntax for ByteArrays: #[1 2 3 4] defines a ByteArray of four integers. And last but not least, blocks (anonymous function literals) [... Some smalltalk code...] Blocks are explained in detail further in the text. Many Smalltalk dialects implement additional syntaxes for other objects, but the ones above are the essentials supported by all. Variable declarations The two kinds of variables commonly used in Smalltalk are instance variables and temporary variables. Other variables and related terminology depend on the particular implementation. For example, VisualWorks has class shared variables and namespace shared variables, while Squeak and many other implementations have class variables, pool variables and global variables. Temporary variable declarations in Smalltalk are variables declared inside a method (see below). They are declared at the top of the method as names separated by spaces and enclosed by vertical bars. For example: | index | declares a temporary variable named index which contains initially the value nil. 
Multiple variables may be declared within one set of bars: | index vowels | declares two variables: index and vowels. All variables are initialized. Variables are initialized to nil except the indexed variables of Strings, which are initialized to the null character or ByteArrays which are initialized to 0. Assignment A variable is assigned a value via the ':=' syntax. So: vowels := 'aeiou' Assigns the string 'aeiou' to the formerly declared vowels variable. The string is an object (a sequence of characters between single quotes is the syntax for literal strings), created by the compiler at compile time. In the original Parc Place image, the glyph of the underscore character ⟨_⟩ appeared as a left-facing arrow ⟨←⟩ (like in the 1963 version of the ASCII code). Smalltalk originally accepted this left-arrow as the only assignment operator. Some modern code still contains what appear to be underscores acting as assignments, hearkening back to this original usage. Most modern Smalltalk implementations accept either the underscore or the colon-equals syntax. Messages The message is the most fundamental language construct in Smalltalk. Even control structures are implemented as message sends. Smalltalk adopts by default a dynamic dispatch and single dispatch strategy (as opposed to multiple dispatch, used by some other object-oriented languages). The following example sends the message 'factorial' to number 42: 42 factorial In this situation 42 is called the message receiver, while 'factorial' is the message selector. The receiver responds to the message by returning a value (presumably in this case the factorial of 42). Among other things, the result of the message can be assigned to a variable: aRatherBigNumber := 42 factorial "factorial" above is what is called a unary message because only one object, the receiver, is involved. Messages can carry additional objects as arguments, as follows: 2 raisedTo: 4 In this expression two objects are involved: 2 as the receiver and 4 as the message argument. The message result, or in Smalltalk parlance, the answer is supposed to be 16. Such messages are called keyword messages. A message can have more arguments, using the following syntax: 'hello world' indexOf: $o startingAt: 6 which answers the index of character 'o' in the receiver string, starting the search from index 6. The selector of this message is "indexOf:startingAt:", consisting of two pieces, or keywords. Such interleaving of keywords and arguments is meant to improve readability of code, since arguments are explained by their preceding keywords. For example, an expression to create a rectangle using a C++ or Java-like syntax might be written as: new Rectangle(100, 200); It's unclear which argument is which. By contrast, in Smalltalk, this code would be written as: Rectangle width: 100 height: 200 The receiver in this case is "Rectangle", a class, and the answer will be a new instance of the class with the specified width and height. Finally, most of the special (non-alphabetic) characters can be used as what are called binary messages. These allow mathematical and logical operators to be written in their traditional form: 3 + 4 which sends the message "+" to the receiver 3 with 4 passed as the argument (the answer of which will be 7). Similarly, 3 > 4 is the message ">" sent to 3 with argument 4 (the answer of which will be false). Notice, that the Smalltalk-80 language itself does not imply the meaning of those operators. 
The outcome of the above is only defined by how the receiver of the message (in this case a Number instance) responds to messages "+" and ">". A side effect of this mechanism is operator overloading. A message ">" can also be understood by other objects, allowing the use of expressions of the form "a > b" to compare them. Expressions An expression can include multiple message sends. In this case expressions are parsed according to a simple order of precedence. Unary messages have the highest precedence, followed by binary messages, followed by keyword messages. For example: 3 factorial + 4 factorial between: 10 and: 100 is evaluated as follows: 3 receives the message "factorial" and answers 6 4 receives the message "factorial" and answers 24 6 receives the message "+" with 24 as the argument and answers 30 30 receives the message "between:and:" with 10 and 100 as arguments and answers true The answer of the last message sent is the result of the entire expression. Parentheses can alter the order of evaluation when needed. For example, (3 factorial + 4) factorial between: 10 and: 100 will change the meaning so that the expression first computes "3 factorial + 4" yielding 10. That 10 then receives the second "factorial" message, yielding 3628800. 3628800 then receives "between:and:", answering false. Note that because the meaning of binary messages is not hardwired into Smalltalk-80 syntax, all of them are considered to have equal precedence and are evaluated simply from left to right. Because of this, the meaning of Smalltalk expressions using binary messages can be different from their "traditional" interpretation: 3 + 4 * 5 is evaluated as "(3 + 4) * 5", producing 35. To obtain the expected answer of 23, parentheses must be used to explicitly define the order of operations: 3 + (4 * 5) Unary messages can be chained by writing them one after another: 3 factorial factorial log which sends "factorial" to 3, then "factorial" to the result (6), then "log" to the result (720), producing the result 2.85733. A series of expressions can be written as in the following (hypothetical) example, each separated by a period. This example first creates a new instance of class Window, stores it in a variable, and then sends two messages to it. | window | window := Window new. window label: 'Hello'. window open If a series of messages are sent to the same receiver as in the example above, they can also be written as a cascade with individual messages separated by semicolons: Window new label: 'Hello'; open This rewrite of the earlier example as a single expression avoids the need to store the new window in a temporary variable. According to the usual precedence rules, the unary message "new" is sent first, and then "label:" and "open" are sent to the answer of "new". Code blocks A block of code (an anonymous function) can be expressed as a literal value (which is an object, since all values are objects). This is achieved with square brackets: [ :params | <message-expressions> ] Where :params is the list of parameters the code can take. This means that the Smalltalk code: [:x | x + 1] can be understood as: or expressed in lambda terms as: and [:x | x + 1] value: 3 can be evaluated as Or in lambda terms as: The resulting block object can form a closure: it can access the variables of its enclosing lexical scopes at any time. Blocks are first-class objects. Blocks can be executed by sending them the value message (compound variations exist in order to provide parameters to the block e.g. 
'value:value:' and 'valueWithArguments:'). The literal representation of blocks was an innovation that allowed certain code to be significantly more readable; it allowed algorithms involving iteration to be coded in a clear and concise way. Code that would typically be written with loops in some languages can be written concisely in Smalltalk using blocks, sometimes in a single line. More importantly, blocks allow control structures to be expressed using messages and polymorphism, since blocks defer computation and polymorphism can be used to select alternatives. So if-then-else in Smalltalk is written and implemented as expr ifTrue: [statements to evaluate if expr] ifFalse: [statements to evaluate if not expr] (the classes True and False each supply the corresponding methods for evaluation). Selection over a collection is expressed the same way: positiveAmounts := allAmounts select: [:anAmount | anAmount isPositive] Note that this is related to functional programming, wherein patterns of computation (here selection) are abstracted into higher-order functions. For example, the message select: on a Collection is equivalent to the higher-order function filter on an appropriate functor. Control structures Control structures do not have special syntax in Smalltalk. They are instead implemented as messages sent to objects. For example, conditional execution is implemented by sending the message ifTrue: to a Boolean object, passing as an argument the block of code to be executed if and only if the Boolean receiver is true. The two subclasses of Boolean both implement ifTrue:, where the implementation in subclass True always evaluates the block and the implementation in subclass False never evaluates the block. The following code demonstrates this: result := a > b ifTrue:[ 'greater' ] ifFalse:[ 'less or equal' ] Blocks are also used to implement user-defined control structures, enumerators, visitors, exception handling, pluggable behavior and many other patterns. For example: | aString vowels | aString := 'This is a string'. vowels := aString select: [:aCharacter | aCharacter isVowel]. In the last line, the string is sent the message select: with an argument that is a code block literal. The code block literal will be used as a predicate function that should answer true if and only if an element of the String should be included in the Collection of characters that satisfy the test represented by the code block that is the argument to the "select:" message. A String object responds to the "select:" message by iterating through its members (by sending itself the message "do:"), evaluating the selection block ("aBlock") once with each character it contains as the argument. When evaluated (by being sent the message "value: each"), the selection block (referenced by the parameter "aBlock", and defined by the block literal "[:aCharacter | aCharacter isVowel]"), answers a boolean, which is then sent "ifTrue:". If the boolean is the object true, the character is added to a string to be returned. Because the "select:" method is defined in the abstract class Collection, it can also be used like this: | rectangles aPoint collisions | rectangles := OrderedCollection with: (Rectangle left: 0 right: 10 top: 100 bottom: 200) with: (Rectangle left: 10 right: 10 top: 110 bottom: 210). aPoint := Point x: 20 y: 20. collisions := rectangles select: [:aRect | aRect containsPoint: aPoint]. 
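Iteration is expressed through messages to blocks in the same way. In the following short sketch, the whileTrue: message is sent to a condition block, which repeatedly evaluates the argument block for as long as the condition block answers true:
| count |
count := 0.
"Send whileTrue: to the condition block; the argument block runs while it answers true."
[count < 5] whileTrue: [count := count + 1]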
The exception handling mechanism uses blocks as handlers (similar to CLOS-style exception handling): [ some operation ] on:Error do:[:ex | handler-code ex return ] The exception handler's "ex" argument provides access to the state of the suspended operation (stack frame, line-number, receiver and arguments etc.) and is also used to control how the computation is to proceed (by sending one of "ex proceed", "ex reject", "ex restart" or "ex return"). Classes This is a stock class definition: Object subclass: #MessagePublisher instanceVariableNames: '' classVariableNames: '' poolDictionaries: '' category: 'Smalltalk Examples' Often, most of this definition will be filled in by the environment. Notice that this is a message to the Object class to create a subclass called MessagePublisher. In other words: classes are first-class objects in Smalltalk which can receive messages just like any other object and can be created dynamically at execution time. Methods When an object receives a message, a method matching the message name is invoked. The following code defines a method publish, and so defines what will happen when this object receives the 'publish' message. publish Transcript show: 'Hello World!' The following method demonstrates receiving multiple arguments and returning a value: quadMultiply: i1 and: i2 "This method multiplies the given numbers by each other and the result by 4." | mul | mul := i1 * i2. ^mul * 4 The method's name is #quadMultiply:and:. The return value is specified with the ^ operator. Note that objects are responsible for determining dynamically at runtime which method to execute in response to a message—while in many languages this may be (sometimes, or even always) determined statically at compile time. Instantiating classes The following code: MessagePublisher new creates (and returns) a new instance of the MessagePublisher class. This is typically assigned to a variable: publisher := MessagePublisher new However, it is also possible to send a message to a temporary, anonymous object: MessagePublisher new publish Hello World example The Hello world program is used by virtually all texts to new programming languages as the first program learned to show the most basic syntax and environment of the language. For Smalltalk, the program is extremely simple to write. The following code, the message "show:" is sent to the object "Transcript" with the String literal 'Hello, world!' as its argument. Invocation of the "show:" method causes the characters of its argument (the String literal 'Hello, world!') to be displayed in the transcript ("terminal") window. Transcript show: 'Hello, world!'. Note that a Transcript window would need to be open in order to see the results of this example. Image-based persistence Most popular programming systems separate static program code (in the form of class definitions, functions or procedures) from dynamic, or run time, program state (such as objects or other forms of program data). They load program code when a program starts, and any prior program state must be recreated explicitly from configuration files or other data sources. Any settings the program (and programmer) does not explicitly save must be set up again for each restart. A traditional program also loses much useful document information each time a program saves a file, quits, and reloads. This loses details such as undo history or cursor position. Image based systems don't force losing all that just because a computer is turned off, or an OS updates. 
Many Smalltalk systems, however, do not differentiate between program data (objects) and code (classes). In fact, classes are objects. Thus, most Smalltalk systems store the entire program state (including both Class and non-Class objects) in an image file. The image can then be loaded by the Smalltalk virtual machine to restore a Smalltalk-like system to a prior state. This was inspired by FLEX, a language created by Alan Kay and described in his M.Sc. thesis. Smalltalk images are similar to (restartable) core dumps and can provide the same functionality as core dumps, such as delayed or remote debugging with full access to the program state at the time of error. Other languages that model application code as a form of data, such as Lisp, often use image-based persistence as well. This method of persistence is powerful for rapid development because all the development information (e.g. parse trees of the program) is saved which facilitates debugging. However, it also has serious drawbacks as a true persistence mechanism. For one thing, developers may often want to hide implementation details and not make them available in a run time environment. For reasons of legality and maintenance, allowing anyone to modify a program at run time inevitably introduces complexity and potential errors that would not be possible with a compiled system that exposes no source code in the run time environment. Also, while the persistence mechanism is easy to use, it lacks the true persistence abilities needed for most multi-user systems. The most obvious is the ability to do transactions with multiple users accessing the same database in parallel. Level of access Everything in Smalltalk-80 is available for modification from within a running program. This means that, for example, the IDE can be changed in a running system without restarting it. In some implementations, the syntax of the language or the garbage collection implementation can also be changed on the fly. Even the statement true become: false is valid in Smalltalk, although executing it is not recommended. Just-in-time compilation Smalltalk programs are usually compiled to bytecode, which is then interpreted by a virtual machine or dynamically translated into machine-native code. List of implementations OpenSmalltalk OpenSmalltalk VM (OS VM), is a notable implementation of the Smalltalk Runtime runner on which many modern Smalltalk implementations are based or derived from. OS VM itself is transpiled from a set of Smalltalk source code files (using a subset of Smalltalk called Slang) to native C language source code (by using a transpiler called VMMaker), which is in turn compiled against specific platform and architecture of the hardware practically enabling cross-platform execution of the Smalltalk images. The source code is available on GitHub and distributed under MIT License. The known Smalltalk implementations based on the OS VM are: Squeak, the original open source Smalltalk that the OpenSmalltalk VM was built for Pharo Smalltalk, an open-source cross-platform language Cuis-Smalltalk, an open-source small, clean and Smalltalk-80 compatible fork of Squeak Haver-Smalltalk an extension of Cuis with a complete Module-System Croquet VM, a Squeak-related Smalltalk VM for Croquet Project Others Amber Smalltalk, runs on JavaScript via transpilation Cincom has the following Smalltalk products: ObjectStudio, VisualWorks and WebVelocity. 
Visual Smalltalk Enterprise, and family, including Smalltalk/V Smalltalk/X, developed by Claus Gittinger F-Script, macOS-only implementation written in 2009 GemTalk Systems, GemStone/S GNU Smalltalk, headless (lacks GUI) implementation of Smalltalk StepTalk, GNUstep scripting framework uses Smalltalk language on an Objective-C runtime VisualAge Smalltalk Rosetta Smalltalk, developed by Scott Warren in 1979 and announced as a cartridge for the Exidy Sorcerer computer but never released VAST Platform (VA Smalltalk), developed by Instantiations, Inc Little Smalltalk Object Arts, Dolphin Smalltalk Object Connect, Smalltalk MT Smalltalk for Windows Pocket Smalltalk, runs on Palm Pilot SmallJ, an open source Smalltalk based on Java, derived from SmallWorld Etoys, a visual programming system for learning Scratch a visual programming system (only versions before 2.0 are Smalltalk-based) Strongtalk, an open-source (since 2006) Windows-only version, offers optional strong typing; initially created at Sun Microsystem Labs. TruffleSqueak, a Squeak/Smalltalk VM and Polyglot Programming Environment for the GraalVM (more GraalVM-based Smalltalk implementations can be found here) JavaScript VM PharoJS an open-source transpiler from Smalltalk to Javascript, extending the Pharo environment SqueakJS an OpenSmalltalk-compatible VM for the web, also runs older Squeak apps like Etoys or Scratch See also Objective-C GLASS (software bundle) Distributed Data Management Architecture References Further reading External links Free Online Smalltalk Books Cuis Smalltalk Pharo Smalltalk Squeak Smalltalk Cincom Smalltalk VisualWorks Dolphin Smalltalk GNU Smalltalk Smalltalk/X StrongTalk Amber Smalltalk Redline Smalltalk Scarlet Smalltalk VA Smalltalk GemStone GLASS (GemStone, Linux, Apache, Seaside, and Smalltalk) Smalltalk MT Smalltalk-78 online emulator OpenSmalltalk cross-platform virtual machine for Squeak, Pharo, Cuis, and Newspeak Smalltalk-80 Bluebook implementations in C++: by dbanay and rochus-keller on github Programming languages Class-based programming languages Dynamically typed programming languages Free educational software Object-oriented programming languages Programming languages created by women Programming languages created in 1972 1972 software Smalltalk programming language family Cross-platform free software Free compilers and interpreters
29414838
https://en.wikipedia.org/wiki/Rust%20%28programming%20language%29
Rust (programming language)
Rust is a multi-paradigm, general-purpose programming language designed for performance and safety, especially safe concurrency. Rust is syntactically similar to C++, but can guarantee memory safety by using a borrow checker to validate references. Rust achieves memory safety without garbage collection, and reference counting is optional. Rust has been called a systems programming language, and in addition to high-level features such as functional programming it also offers mechanisms for low-level memory management. First appearing in 2010, Rust was designed by Graydon Hoare at Mozilla Research, with contributions from Dave Herman, Brendan Eich, and others. The designers refined the language while writing the Servo experimental browser engine and the Rust compiler. Rust's major influences include C++, OCaml, Haskell, and Erlang. It has gained increasing use and investment in industry, by companies including Amazon, Discord, Dropbox, Facebook (Meta), Google (Alphabet), and Microsoft. Rust has been voted the "most loved programming language" in the Stack Overflow Developer Survey every year since 2016, and was used by 7% of the respondents in 2021. History The language grew out of a personal project begun in 2006 by Mozilla employee Graydon Hoare. Hoare has stated that the project was possibly named after rust fungi and that the name is also a subsequence of "robust". Mozilla began sponsoring the project in 2009 and announced it in 2010. The same year, work shifted from the initial compiler (written in OCaml) to an LLVM-based self-hosting compiler written in Rust. Named , it successfully compiled itself in 2011. The first numbered pre-alpha release of the Rust compiler occurred in January 2012. Rust 1.0, the first stable release, was released on May 15, 2015. Following 1.0, stable point releases are delivered every six weeks, while features are developed in nightly Rust with daily releases, then tested with beta releases that last six weeks. Every two to three years, a new Rust "edition" is produced. This is to provide an easy reference point for changes due to the frequent nature of Rust's train release schedule, as well as to provide a window to make limited breaking changes. Editions are largely compatible. Along with conventional static typing, before version 0.4, Rust also supported typestates. The typestate system modeled assertions before and after program statements, through use of a special check statement. Discrepancies could be discovered at compile time, rather than at runtime, as might be the case with assertions in C or C++ code. The typestate concept was not unique to Rust, as it was first introduced in the language NIL. Typestates were removed because in practice they were little used, though the same functionality can be achieved by leveraging Rust's move semantics. The object system style changed considerably within versions 0.2, 0.3, and 0.4 of Rust. Version 0.2 introduced classes for the first time, and version 0.3 added several features, including destructors and polymorphism through the use of interfaces. In Rust 0.4, traits were added as a means to provide inheritance; interfaces were unified with traits and removed as a separate feature. Classes were also removed and replaced by a combination of implementations and structured types. Starting in Rust 0.9 and ending in Rust 0.11, Rust had two built-in pointer types: ~ and @, simplifying the core memory model. It reimplemented those pointer types in the standard library as Box and (the now removed) Gc. 
In January 2014, before the first stable release, Rust 1.0, the editor-in-chief of Dr. Dobb's, Andrew Binstock, commented on Rust's chances of becoming a competitor to C++ and to the other up-and-coming languages D, Go, and Nim (then Nimrod). According to Binstock, while Rust was "widely viewed as a remarkably elegant language", adoption slowed because it repeatedly changed between versions. In August 2020, Mozilla laid off 250 of its 1,000 employees worldwide as part of a corporate restructuring caused by the long-term impact of the COVID-19 pandemic. Among those laid off were most of the Rust team, while the Servo team was completely disbanded. The event raised concerns about the future of Rust. In the following week, the Rust Core Team acknowledged the severe impact of the layoffs and announced that plans for a Rust foundation were underway. The first goal of the foundation would be to take ownership of all trademarks and domain names, and also to take financial responsibility for their costs. On February 8, 2021, the formation of the Rust Foundation was officially announced by its five founding companies (AWS, Huawei, Google, Microsoft, and Mozilla). On April 6, 2021, Google announced support for Rust within Android Open Source Project as an alternative to C/C++. Syntax Here is a "Hello, World!" program written in Rust. The println! macro prints the message to standard output. fn main() { println!("Hello, World!"); } The syntax of Rust is similar to C and C++, with blocks of code delimited by curly brackets, and control flow keywords such as if, else, while, and for, although the specific syntax for defining functions is more similar to Pascal. Despite the syntactic resemblance to C and C++, the semantics of Rust are closer to those of the ML family of languages and the Haskell language. Nearly every part of a function body is an expression, even control flow operators. For example, the ordinary if expression also takes the place of C's ternary conditional, an idiom used by ALGOL 60. As in Lisp, a function need not end with a return expression: in this case if the semicolon is omitted, the last expression in the function creates the return value, as seen in the following recursive implementation of the factorial function: fn factorial(i: u64) -> u64 { match i { 0 => 1, n => n * factorial(n-1) } } The following iterative implementation uses the ..= operator to create an inclusive range: fn factorial(i: u64) -> u64 { (2..=i).product() } More advanced features in Rust include the use of generic functions to achieve type polymorphism. The following is a Rust program to calculate the sum of two things, for which addition is implemented, using a generic function: use std::ops::Add; fn sum<T: Add<Output = T>>(num1: T, num2: T) -> T { num1 + num2 } fn main() { let result1 = sum(10, 20); println!("Sum is: {}", result1); let result2 = sum(10.23, 20.45); println!("Sum is: {}", result2); } Rust's safe references cannot be null; raw pointers may be null, but dereferencing one is only permitted inside an unsafe block. Rust instead uses a Haskell-like Option type, which has two variants, Some<T> and None. The inner value must be accessed by handling both cases, for example with the if let syntactic sugar; in this case the inner type is a string: fn main() { let name: Option<String> = None; // If name was not None, it would print here. 
if let Some(name) = name { println!("{}", name); } } Features Rust is intended to be a language for highly concurrent and highly safe systems, and programming in the large, that is, creating and maintaining boundaries that preserve large-system integrity. This has led to a feature set with an emphasis on safety, control of memory layout, and concurrency. Memory safety Rust is designed to be memory safe. It does not permit null pointers, dangling pointers, or data races. Data values can be initialized only through a fixed set of forms, all of which require their inputs to be already initialized. To replicate pointers being either valid or NULL, such as in linked list or binary tree data structures, the Rust core library provides an option type, which can be used to test whether a pointer has Some value or None. Rust has added syntax to manage lifetimes, which are checked at compile time by the borrow checker. Unsafe code can subvert some of these restrictions using the unsafe keyword. Memory management Rust does not use automated garbage collection. Memory and other resources are managed through the resource acquisition is initialization convention, with optional reference counting. Rust provides deterministic management of resources, with very low overhead. Rust favors stack allocation of values and does not perform implicit boxing. There is the concept of references (using the & symbol), which does not involve run-time reference counting. The safety of such pointers is verified at compile time, preventing dangling pointers and other forms of undefined behavior. Rust's type system separates shared, immutable pointers of the form &T from unique, mutable pointers of the form &mut T. A mutable pointer can be coerced to an immutable pointer, but not vice versa. Ownership Rust has an ownership system where all values have a unique owner, and the scope of the value is the same as the scope of the owner. Values can be passed by immutable reference, using &T, by mutable reference, using &mut T, or by value, using T. At all times, there can either be multiple immutable references or one mutable reference (an implicit readers–writer lock). The Rust compiler enforces these rules at compile time and also checks that all references are valid. Types and polymorphism Rust's type system supports a mechanism called traits, inspired by type classes in the Haskell language. Traits annotate types and are used to define shared behavior between different types. For example, floats and integers both implement the Add trait because they can both be added; and any type that can be printed out as a string implements the Display or Debug traits. This facility is known as ad hoc polymorphism. Rust uses type inference for variables declared with the keyword let. Such variables do not require a value to be initially assigned to determine their type. A compile time error results if any branch of code leaves the variable without an assignment. Variables assigned multiple times must be marked with the keyword mut (short for mutable). A function can be given generic parameters, which allows the same function to be applied to different types. Generic functions can constrain the generic type to implement a particular trait or traits; for example, an "add one" function might require the type to implement "Add". This means that a generic function can be type-checked as soon as it is defined. 
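To make the trait and generic-function machinery described above concrete, here is a minimal illustrative sketch (the trait, struct, and function names are invented for this example and are not taken from the Rust documentation):

// A trait describing shared behavior: anything that has an area.
trait Shape {
    fn area(&self) -> f64;
}

// A user-defined type; data and behavior are declared separately.
struct Rectangle {
    width: f64,
    height: f64,
}

// Implementing the trait for the struct.
impl Shape for Rectangle {
    fn area(&self) -> f64 {
        self.width * self.height
    }
}

// A generic function constrained by the trait: it accepts any type T
// that implements Shape, so it can be type-checked once, on its own.
fn print_area<T: Shape>(shape: &T) {
    println!("area = {}", shape.area());
}

fn main() {
    let r = Rectangle { width: 3.0, height: 4.0 };
    print_area(&r); // prints "area = 12"
}

When this is compiled, a specialized copy of print_area is produced for each concrete type it is called with, which is the monomorphization discussed next.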
The implementation of Rust generics is similar to the typical implementation of C++ templates: a separate copy of the code is generated for each instantiation. This is called monomorphization and contrasts with the type erasure scheme typically used in Java and Haskell. Rust's type erasure is also available by using the keyword dyn. The benefit of monomorphization is optimized code for each specific use case; the drawback is increased compile time and size of the resulting binaries. In Rust, user-defined types are created with the struct keyword. These types usually contain fields of data, like objects or classes in other languages. The impl keyword can define methods for the struct (data and functions are defined separately) or implement a trait for the struct. A trait is a contract guaranteeing that a structure implements a certain set of required methods. Traits are used to restrict generic parameters, and they can also provide a struct with more methods than the user defined. For example, the trait Iterator requires that the next method be defined for the type. Once the next method is defined, the trait provides common functional helper methods over the iterator, such as map or filter. The object system within Rust is based around implementations, traits and structured types. Implementations fulfill a role similar to that of classes within other languages and are defined with the keyword impl. Traits provide inheritance and polymorphism; they allow methods to be defined and mixed into implementations. Structured types are used to define fields. Implementations and traits cannot define fields themselves, and only traits can provide inheritance. Among other benefits, this prevents the diamond problem of multiple inheritance, as in C++. In other words, Rust supports interface inheritance but replaces implementation inheritance with composition; see composition over inheritance. Macros for language extension It is possible to extend the Rust language using the procedural macro mechanism. Procedural macros use Rust functions that run at compile time to modify the compiler's token stream. This complements the declarative macro mechanism (also known as macros by example), which uses pattern matching to achieve similar goals. Procedural macros come in three flavors: Function-like macros custom!(...) Derive macros #[derive(CustomDerive)] Attribute macros #[custom_attribute] The println! macro is an example of a function-like macro, and serde_derive is a commonly used library for generating code for reading and writing data in many formats such as JSON. Attribute macros are commonly used for language bindings, such as the extendr library for Rust bindings to R. The following code shows the use of the Serialize, Deserialize and Debug derive procedural macros to implement JSON reading and writing as well as the ability to format a structure for debugging; the derive macros themselves are provided by the serde crate, while the JSON conversion functions come from serde_json: use serde::{Serialize, Deserialize}; #[derive(Serialize, Deserialize, Debug)] struct Point { x: i32, y: i32, } fn main() { let point = Point { x: 1, y: 2 }; let serialized = serde_json::to_string(&point).unwrap(); println!("serialized = {}", serialized); let deserialized: Point = serde_json::from_str(&serialized).unwrap(); println!("deserialized = {:?}", deserialized); } Interface with C and C++ Rust has a foreign function interface (FFI) that can be used both to call code written in languages such as C from Rust and to call Rust code from those languages.
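As a brief sketch of what the C side of this interface looks like (a minimal example for illustration, declaring the C standard library function abs, which is assumed to be linked in as usual):

// Declare an external C function; "C" selects the C ABI and calling convention.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // Calling into C is unsafe because the compiler cannot verify that
    // the foreign code upholds Rust's safety guarantees.
    let value = unsafe { abs(-7) };
    println!("abs(-7) = {}", value);
}

In the other direction, a Rust function can be exposed to C by declaring it pub extern "C" and marking it #[no_mangle] so that C code can find it by name.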
While calling C++ has historically been challenging (from any language), Rust has a library, CXX, to allow calling to or from C++, and "CXX has zero or negligible overhead." Components Besides the compiler and standard library, the Rust ecosystem includes additional components for software development. Component installation is typically managed by rustup, a Rust toolchain installer developed by the Rust project. Cargo Cargo is Rust's build system and package manager. Cargo downloads, compiles, distributes, and uploads packages, called crates, maintained in the official registry. Cargo also wraps Clippy and other Rust components. Cargo requires projects to follow a certain directory structure, with some flexibility. Projects using Cargo may be either a single crate or a workspace composed of multiple crates that may depend on each other. The dependencies for a crate are specified in a Cargo.toml file along with SemVer version requirements, telling Cargo which versions of the dependency are compatible with the crate using them. By default, Cargo sources its dependencies from the user-contributed registry crates.io, but Git repositories and crates in the local filesystem can be specified as dependencies, too. Rustfmt Rustfmt is a code formatter for Rust. It takes Rust source code as input and changes the whitespace and indentation to produce code formatted in accordance to the Rust style guide or rules specified in a rustfmt.toml file. Rustfmt can be invoked as a standalone program or on a Rust project through Cargo. Clippy Clippy is Rust's built-in linting tool to improve the correctness, performance, and readability of Rust code. It was created in 2014 and named after the eponymous Microsoft Office feature. As of 2021, Clippy has more than 450 rules, which can be browsed online and filtered by category. Some rules are disabled by default. IDE support The most popular language servers for Rust are rust-analyzer and RLS. These projects provide IDEs and text editors with more information about a Rust project. Basic features include linting checks via Clippy and formatting via Rustfmt, among other functions. RLS also provides automatic code completion via Racer, though development of Racer was slowed down in favor of rust-analyzer. Performance Rust aims "to be as efficient and portable as idiomatic C++, without sacrificing safety". Since Rust utilizes LLVM, any performance improvements in LLVM also carry over to Rust. Adoption Rust has been adopted by major software engineering companies. For example, Dropbox is now written in Rust, as are some components at Amazon, Microsoft, Facebook, Discord, and the Mozilla Foundation. Rust was the third-most-loved programming language in the 2015 Stack Overflow annual survey and took first place for 2016–2021. Web browsers and services Firefox has two projects written in Rust: the Servo parallel browser engine developed by Mozilla in collaboration with Samsung; and Quantum, which is composed of several sub-projects for improving Mozilla's Gecko browser engine. OpenDNS uses Rust in two of its components. Deno, a secure runtime for JavaScript and TypeScript, is built with V8, Rust, and Tokio. Figma, a web-based vector graphics editor, is written in Rust. Operating systems Redox is a "full-blown Unix-like operating system" including a microkernel written in Rust. Theseus, an experimental OS with "intralingual design", is written in Rust. The Google Fuchsia capability-based operating system has some tools written in Rust. 
Stratis is a file system manager written in Rust for Fedora and RHEL 8. exa is a Unix/Linux command line alternative to ls written in Rust. Since 2021, the Linux kernel contains Rust code. Other notable projects and platforms Discord uses Rust for portions of its backend, as well as client-side video encoding, to augment the core infrastructure written in Elixir. Microsoft Azure IoT Edge, a platform used to run Azure services and artificial intelligence on IoT devices, has components implemented in Rust. Polkadot (cryptocurrency) is a blockchain platform written in Rust. Ruffle is an open-source SWF emulator written in Rust. TerminusDB, an open source graph database designed for collaboratively building and curating knowledge graphs, is written in Prolog and Rust. Amazon Web Services has multiple projects written in Rust, including Firecracker, a virtualization solution, and Bottlerocket, a Linux distribution and containerization solution. Community Rust's official website lists online forums, messaging platforms, and in-person meetups for the Rust community. Conferences dedicated to Rust development include: RustConf: an annual conference in Portland, Oregon. Held annually since 2016 (except in 2020 and 2021 because of the COVID-19 pandemic). Rust Belt Rust: a #rustlang conference in the Rust Belt RustFest: Europe's @rustlang conference RustCon Asia Rust LATAM Oxidize Global Governance The Rust Foundation is a non-profit membership organization incorporated in Delaware, United States, with the primary purposes of supporting the maintenance and development of the language, cultivating the Rust project team members and user communities, managing the technical infrastructure underlying the development of Rust, and managing and stewarding the Rust trademark. It was established on February 8, 2021, with five founding corporate members (Amazon Web Services, Huawei, Google, Microsoft, and Mozilla). The foundation's board is chaired by Shane Miller. Starting in late 2021, its Executive Director and CEO is Rebecca Rumbul. Prior to this, Ashley Williams was interim executive director. See also List of programming languages History of programming languages Comparison of programming languages Explanatory notes References Further reading (online version) External links 2010 software Articles with example code Concurrent programming languages Free compilers and interpreters Free software projects Functional languages High-level programming languages Mozilla Multi-paradigm programming languages Pattern matching programming languages Procedural programming languages Programming languages created in 2010 Software using the Apache license Software using the MIT license Statically typed programming languages Systems programming languages
824431
https://en.wikipedia.org/wiki/Connectionless-mode%20Network%20Service
Connectionless-mode Network Service
Connectionless-mode Network Service (CLNS) or simply Connectionless Network Service is an OSI network layer datagram service that does not require a circuit to be established before data is transmitted, and routes messages to their destinations independently of any other messages. As such it is a "best-effort" rather than a "reliable" delivery service. CLNS is not an Internet service, but provides capabilities in an OSI network environment similar to those provided by the Internet Protocol (IP). The service is specified in ISO 8348, the OSI Network Service Definition (which also defines the connection-oriented service, CONS.) Connectionless-mode Network Protocol Connectionless-mode Network Protocol (CLNP) is an OSI protocol deployment. CLNS is the service provided by the Connectionless-mode Network Protocol (CLNP). CLNP is widely used in many telecommunications networks around the world because IS-IS (an OSI routing protocol) is mandated by the ITU-T as the protocol for management of Synchronous Digital Hierarchy (SDH) elements. From August 1990 to April 1995 the NSFNET backbone supported CLNP in addition to TCP/IP. However, CLNP usage remained low compared to TCP/IP. Transport Protocol Class 4 (TP4) in conjunction with CLNS CLNS is used by ISO Transport Protocol Class 4 (TP4), one of the five transport layer protocols in the OSI suite. TP4 offers error recovery, performs segmentation and reassembly, and supplies multiplexing and demultiplexing of data streams over a single virtual circuit. TP4 sequences PDUs and retransmits them or re-initiates the connection if an excessive number are unacknowledged. TP4 provides reliable transport service and functions with either connection-oriented or connectionless network service. TP4 is the most commonly used of all the OSI transport protocols and is similar to the Transmission Control Protocol (TCP) in the Internet protocol suite. Protocols providing CLNS Several protocols provide the CLNS service: Connectionless-mode Network Protocol (CLNP), as specified in ITU-T Recommendation X.233. End System-to-Intermediate System (ES-IS), a routing exchange protocol for use in conjunction with the protocol for providing the CLNS (ISO 9542). Intermediate System-to-Intermediate System (IS-IS), an intradomain routing exchange protocol used in both the OSI and Internet environments (ISO 10589 and RFC 1142). Interdomain Routing Protocol (IDRP), the OSI equivalent of BGP. Signalling Connection Control Part (SCCP), as specified in ITU-T Recommendation Q.711 is a Signaling System 7 protocol. See also OSI model TCP/IP model X.25 protocol suite, an OSI Connection Oriented Network Service (CONS) References External links What is CLNS? - a brief introduction by Ivan Pepelnjak ITU-T recommendations OSI protocols Network layer protocols
14568189
https://en.wikipedia.org/wiki/Wrike
Wrike
Wrike, Inc. is an American project management application service provider based in San Jose, California. Wrike also has offices in St.Petersburg, Russia, Kyiv, Ukraine and Prague in the Czech Republic. History Wrike was founded in 2006 by Andrew Filev. Filev initially self-funded the company before later obtaining investor funding. Wrike released the beta version of its software (also called Wrike) in December 2006. The company then launched a new "Enterprise" platform in December 2013. In June 2015, Wrike announced the opening of an office in Dublin, Ireland and in 2016, Wrike launched a datacenter there to host data in compliance with local privacy regulations. In July of 2016, Wrike announced the launch of Wrike for Marketers. That same year, Wrike's headquarters moved from Mountain View to San Jose, California. In January 2021, Citrix Systems announced its intention to acquire Wrike for $2.25 billion. The acquisition closed in March 2021. Investments Wrike received $1 million in Angel funding in 2012 from TMT Investments. In October, 2013, Wrike secured $10 million in investment funding from Bain Capital. In May 2015, the company secured $15 million in a new round of funding. Investors included Scale Venture Partners, DCM Ventures, and Bain Capital. At that time, Wrike had 8,000 customers, 200 employees, and 30,000 new users each month. On November 29, 2018, Wrike signed a definitive agreement to receive a majority investment by Vista Equity Partners (“Vista”), a firm focused on software, data and technology-enabled businesses. Software The Wrike project management software is a Software-as-a-Service (SaaS) product that enables its users to manage and track projects, deadlines, schedules, and other workflow processes. It also allows users to collaborate with one another. The application is available in English, French, Spanish, German, Portuguese, Italian, Japanese and Russian. The software streamlines workflow and allows companies to focus on core tasks. As of 2016, it was used by over 12,000 companies Features Wrike is designed around a minimalist multi-pane UI and consists of features in two categories: project management, and team collaboration. Project Management features are those which help teams track dates and dependencies associated with projects, manage assignments and resources, and track time. These include an interactive Gantt chart, a workload view, and a sortable table that can be customized to store project data. Collaboration features are those designed to aid in conversations, asset creation, and decision-making by teams. These include Wrike's Live co-editor, discussion threads on tasks, and tools for attaching documents, editing them, and tracking their changes. Wrike uses an "inbox" feature and browser notifications to alert users of updates from their colleagues and dashboards for quick overviews of pending tasks. These updates are also available in Wrike's mobile apps on iOS and Android. Wrike has an optional feature set called "Wrike for Marketers" which has several tools for managing marketing workflows. In May 2012, Wrike announced the launch of a freemium version of its software for teams of up to 5 users. That year also saw the integration of a live text coeditor into its workspace to unify collaboration and task management. In late 2013 Wrike released a new feature set called Wrike Enterprise which included advanced analytics and other tools targeted at large business customers. 
Since then it's released several major updates to Wrike Enterprise, including a customizable spreadsheet called "Dynamic Platform" in late 2014 and custom workflows for teams in 2015. In July 2016, Wrike was updated with a set of add-on features under the name "Wrike for Marketers," which includes integrations with Adobe Photoshop, a tool for submitting requests, and proofing and approval tools for creative assets like videos and images. Mobile Wrike is available as native Android and iOS apps. Mobile apps include an interactive Gantt chart that syncs across devices. The apps are available offline, and sync when connection is restored. Integrations Wrike integrates with a number of other enterprise systems. These include: Adobe Box.com Google Drive Microsoft Teams DropBox Microsoft Word Microsoft Office 365 and Azure AD Gmail Salesforce.com It also has an API so that developers can build their own integrations. Company recognition and awards 2015 - Listed in by Deloitte's 2015 Technology Fast 500TM Ranking 2015 - Named one of the "Best Places to Work" by San Francisco Business Times/Silicon Valley Business Journal 2016 - GetApp Rank Wrike as the Best Project Management Software 2016 - Named Leader in Enterprise Collaborative Work Management by Forrester Wave See also Comparison of project management software Comparison of time-tracking software List of collaborative software List of project management software List of applications with iCalendar support References Software companies based in the San Francisco Bay Area Companies based in San Jose, California American companies established in 2006 Software companies established in 2006 Online companies of the United States Software companies of the United States Android (operating system) software As a service Business software Collaborative software Groupware IOS software Project management software Web applications 2021 mergers and acquisitions Citrix Systems
562113
https://en.wikipedia.org/wiki/NECTEC
NECTEC
Thailand's National Electronics and Computer Technology Center (NECTEC) is a statutory government organization under the National Science and Technology Development Agency (NSTDA), Ministry of Higher Education, Science, Research and Innovation. Its main responsibilities are to undertake, support, and promote the development of electronic, computing, telecommunication, and information technologies through research and development activities. NECTEC also disseminates and transfers such technologies to contribute to economic growth and social development in the country, following the National Economic and Social Development Plan. History NECTEC was founded by the Thailand Ministry of Science, Technology and Energy on 16 September 1986. It was later converted into a national centre specializing in electronics hardware and software under the National Science and Technology Development Agency, and was designated a new agency following the enactment of the Science and Technology Development Act of 1991. NECTEC's executive director is Dr Sarun Sumriddetchkajorn. Mission NECTEC contributes to the development of Thailand's capability in electronics and computer technologies through: Research, development, design and engineering Technology transfer to industries and communities Human resource development Policy research and industrial intelligence and knowledge infrastructure Departments Optical and Quantum Communication Lab Intelligent Devices and Systems Research Unit Green Testing Development Lab Nano-Electronics and MEMS Lab Photonics Technology Lab Information Technology Management Division Human Language Technology Open Source Software Lab Organization and Strategic Planning and Evaluation Division Human Resource and Organization Development Section Policy Research Division Public Relations Section R&D Services and Support Section Platform Technology Program Management Division Rehabilitative Engineering and Assistive Technology Institute Information Security Infrastructure and Services Embedded Systems Technology Lab Industrial Control and Automation Lab Software Engineering Lab Integrated Circuits Design Section Products TVIS: an automatic system that reports traffic information in Bangkok. Ya&You: an application for searching and providing medicine and health information, promoting the proper use of drugs and healthcare. Traffy bSafe for Android: a free application that users can download to file complaints and report dangerous driving behaviour of public transportation such as vans and buses. tangmoChecker: an application for checking the ripeness of watermelons. NVIS: an automatic system that reads online news aloud to the user. FFC (Family Folder Collector): an application that lets healthcare personnel collect household information during village visits. Smart Sensor: a prototyping platform in which an Android application and a Smart Sensor device communicate via Bluetooth. Drift: an application that monitors a user's activity during the day and classifies it as sleeping (phone is not with the user), resting (phone is with the user during no or minimal activity), walking, or driving (user is taking transportation). FloodSign: a tool used to report flood stain levels during Thailand's 2011 floods. Impact NECTEC has used green technology in the field of printing.
This has led to the foundation of Thailand Organic and Printed Electronics Innovation Centre (TOPIC). NECTEC along with public and private sectors have researched the technical feasibility of using organic electronics in printing ink. It has successfully developed graphene-based conductive ink in 2011. The ink has five times more conductivity than a typical ink. It is also cheap, contains no contamination, and is suitable for various applications. It has also developed a software called "Size-Thai" that uses 3-D body scan to measure the anatomical dimensions of Thai people. This makes Thailand the second nation in Asia to use such a software after Japan. It is expected to reduce wastage and help garment retailers to reduce losses. It also has business applications like "virtual-try on" and "made to measure". References External links National Electronics and Computer Technology Center Research institutes in Thailand Information technology research institutes Research institutes established in 1986 1986 establishments in Thailand National Science and Technology Development Agency Information technology in Thailand
1301906
https://en.wikipedia.org/wiki/Software%20quality
Software quality
In the context of software engineering, software quality refers to two related but distinct notions: Software functional quality reflects how well it complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software or how it compares to competitors in the marketplace as a worthwhile product. It is the degree to which the correct software was produced. Software structural quality refers to how it meets non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability. It has a lot more to do with the degree to which the software works as needed. Many aspects of structural quality can be evaluated only statically through the analysis of the software inner structure, its source code (see Software metrics), at the unit level, system level (sometimes referred to as end-to-end testing), which is in effect how its architecture adheres to sound principles of software architecture outlined in a paper on the topic by Object Management Group (OMG). However some structural qualities, such as usability, can be assessed only dynamically (users or others acting in their behalf interact with the software or, at least, some prototype or partial implementation; even the interaction with a mock version made in cardboard represents a dynamic test because such version can be considered a prototype). Other aspects, such as reliability, might involve not only the software but also the underlying hardware, therefore, it can be assessed both statically and dynamically (stress test). Functional quality is typically assessed dynamically but it is also possible to use static tests (such as software reviews). Historically, the structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126 and the subsequent ISO/IEC 25000 standard. Based on these models (see Models), the Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics needed for a piece of software to provide business value: Reliability, Efficiency, Security, Maintainability and (adequate) Size. Software quality measurement quantifies to what extent a software program or system rates along each of these five dimensions. An aggregated measure of software quality can be computed through a qualitative or a quantitative scoring scheme or a mix of both and then a weighting system reflecting the priorities. This view of software quality being positioned on a linear continuum is supplemented by the analysis of "critical programming errors" that under specific circumstances can lead to catastrophic outages or performance degradations that make a given system unsuitable for use regardless of rating based on aggregated measurements. Such programming errors found at the system level represent up to 90 percent of production issues, whilst at the unit-level, even if far more numerous, programming errors account for less than 10 percent of production issues (see also Ninety-ninety rule). As a consequence, code quality without the context of the whole system, as W. Edwards Deming described it, has limited value. 
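As a purely illustrative sketch of such a weighted scoring scheme (written in Rust for concreteness; the characteristic names, scores, and weights below are invented and are not taken from CISQ or ISO), an aggregate measure can be computed as a weighted average of per-characteristic scores:

// Toy aggregation of software quality characteristics into one score.
// Each entry is (characteristic, normalized score in [0, 1], weight).
// All values here are invented for illustration only.
fn aggregate_score(scores: &[(&str, f64, f64)]) -> f64 {
    let weighted_sum: f64 = scores.iter().map(|&(_, score, weight)| score * weight).sum();
    let total_weight: f64 = scores.iter().map(|&(_, _, weight)| weight).sum();
    weighted_sum / total_weight
}

fn main() {
    let measurements = [
        ("reliability", 0.82, 3.0),
        ("efficiency", 0.74, 2.0),
        ("security", 0.91, 3.0),
        ("maintainability", 0.65, 1.5),
        ("size", 0.70, 0.5),
    ];
    println!("aggregate quality score: {:.2}", aggregate_score(&measurements));
}

A real scheme would also flag critical programming errors separately, since, as noted above, a single critical defect can make a system unsuitable for use regardless of its aggregate score.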
To view, explore, analyze, and communicate software quality measurements, concepts and techniques of information visualization provide visual, interactive means useful, in particular, if several software quality measures have to be related to each other or to components of a software or system. For example, software maps represent a specialized approach that "can express and combine information about software development, software quality, and system dynamics". Software quality also plays a role in the release phase of a software project. Specifically, the quality and establishment of the release processes (also patch processes), configuration management are important parts of a overall software engineering process. Motivation Software quality is motivated by at least two main perspectives: Risk management: Software failure has caused more than inconvenience. Software errors can cause human fatalities (see for example: List of software bugs). The causes have ranged from poorly designed user interfaces to direct programming errors, see for example Boeing 737 case or Unintended acceleration cases or Therac-25 cases. This resulted in requirements for the development of some types of software, particularly and historically for software embedded in medical and other devices that regulate critical infrastructures: "[Engineers who write embedded software] see Java programs stalling for one third of a second to perform garbage collection and update the user interface, and they envision airplanes falling out of the sky.". In the United States, within the Federal Aviation Administration (FAA), the FAA Aircraft Certification Service provides software programs, policy, guidance and training, focus on software and Complex Electronic Hardware that has an effect on the airborne product (a "product" is an aircraft, an engine, or a propeller). Certification standards such as DO-178C, ISO 26262, IEC 62304, etc. provide guidance. Cost management: As in any other fields of engineering, a software product or service governed by good software quality costs less to maintain, is easier to understand and can change more cost-effective in response to pressing business needs. Industry data demonstrate that poor application structural quality in core business applications (such as enterprise resource planning (ERP), customer relationship management (CRM) or large transaction processing systems in financial services) results in cost, schedule overruns and creates waste in the form of rework (see Muda (Japanese term)). Moreover, poor structural quality is strongly correlated with high-impact business disruptions due to corrupted data, application outages, security breaches, and performance problems. CISQ reports on the cost of poor quality estimates an impact of: $2.08 trillion in 2020 $2.84 trillion in 2018 IBM's Cost of a Data Breach Report 2020 estimates that the average global costs of a data breach: $3.86 million Definitions ISO Software quality is "capability of a software product to conform to requirements." while for others it can be synonymous with customer- or value-creation or even defect level. ASQ ASQ uses the following definition: Software quality describes the desirable attributes of software products. There are two main approaches exist: defect management and quality attributes. 
NIST Software Assurance (SA) covers both the property and the process to achieve it: [Justifiable] confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle and that the software functions in the intended manner The planned and systematic set of activities that ensure that software life cycle processes and products conform to requirements, standards, and procedures PMI The Project Management Institute's PMBOK Guide "Software Extension" defines not "Software quality" itself, but Software Quality Assurance (SQA) as "a continuous process that audits other software processes to ensure that those processes are being followed (includes for example a software quality management plan)." whereas Software Quality Control (SCQ) means "taking care of applying methods, tools, techniques to ensure satisfaction of the work products towards quality requirements for a software under development or modification." Other general and historic The first definition of quality history remembers is from Shewhart in the beginning of 20th century: "There are two common aspects of quality: one of them has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality." Kitchenham and Pfleeger, further reporting the teachings of David Garvin, identify five different perspectives on quality: The transcendental perspective deals with the metaphysical aspect of quality. In this view of quality, it is "something toward which we strive as an ideal, but may never implement completely". It can hardly be defined, but is similar to what a federal judge once commented about obscenity: "I know it when I see it". The user perspective is concerned with the appropriateness of the product for a given context of use. Whereas the transcendental view is ethereal, the user view is more concrete, grounded in the product characteristics that meet user's needs. The manufacturing perspective represents quality as conformance to requirements. This aspect of quality is stressed by standards such as ISO 9001, which defines quality as "the degree to which a set of inherent characteristics fulfills requirements" (ISO/IEC 9001). The product perspective implies that quality can be appreciated by measuring the inherent characteristics of the product. The final perspective of quality is value-based. This perspective recognizes that the different perspectives of quality may have different importance, or value, to various stakeholders. Tom DeMarco has proposed that "a product's quality is a function of how much it changes the world for the better." This can be interpreted as meaning that functional quality and user satisfaction are more important than structural quality in determining software quality. Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." This definition stresses that quality is inherently subjective—different people will experience the quality of the same software differently. One strength of this definition is the questions it invites software teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?". 
Other meanings and controversies One of the challenges in defining quality is that "everyone feels they understand it", and other definitions of software quality could be based on extending the various descriptions of the concept of quality used in business. Software quality also often gets mixed up with Quality Assurance, Problem Resolution Management, Quality Control, or DevOps. It overlaps with the aforementioned areas (see also the PMI definitions), but is distinct in that it does not focus solely on testing but also on processes, management, improvements, assessments, etc. Measurement Although the concepts presented in this section are applicable to both structural and functional software quality, measurement of the latter is essentially performed through testing [see main article: Software testing]. However, testing isn't enough: according to one study, individual programmers are less than 50% efficient at finding bugs in their own software, and most forms of testing are only 35% efficient. This makes it difficult to determine [software] quality. Introduction Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means or a mix of both. In both cases, for each desirable characteristic, there is a set of measurable attributes whose presence in a piece of software or system tends to be correlated and associated with this characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the Software Quality definition above. The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions. The dependence tree between software quality characteristics and their measurable attributes is represented in the diagram on the right, where each of the 5 characteristics that matter for the user (right) or owner of the business system depends on measurable attributes (left): Application Architecture Practices Coding Practices Application Complexity Documentation Portability Technical and Functional Volume Correlations between programming errors and production defects reveal that basic code errors account for 92 percent of the total errors in the source code. These numerous code-level issues eventually account for only 10 percent of the defects in production. Bad software engineering practices at the architecture level account for only 8 percent of total defects, but consume over half the effort spent on fixing problems, and lead to 90 percent of the serious reliability, security, and efficiency issues in production. Code-based analysis Many of the existing software measures count structural elements of the application that result from parsing the source code, such as individual instructions, tokens, control structures (complexity), and objects. Software quality measurement is about quantifying to what extent a system or software rates along these dimensions.
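As a deliberately crude sketch of this kind of counting (written in Rust for concreteness; real analyzers parse the code into a syntax tree rather than scanning tokens, and the keyword list below is only illustrative), the number of decision points in a source file can be approximated as follows:

// Tally branching constructs in a source string as a rough proxy for
// structural complexity. This token scan is a simplification: production
// tools work on a full parse of the code rather than on raw text.
fn count_decision_points(source: &str) -> usize {
    let keywords = ["if", "for", "while", "match", "&&", "||"];
    source
        .split(|c: char| !(c.is_alphanumeric() || c == '&' || c == '|'))
        .filter(|token| keywords.contains(token))
        .count()
}

fn main() {
    let source = r#"
        fn classify(n: i32) -> &'static str {
            if n < 0 { "negative" } else if n == 0 { "zero" } else { "positive" }
        }
    "#;
    println!("decision points: {}", count_decision_points(source));
}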
The analysis can be performed using a qualitative or quantitative approach or a mix of both to provide an aggregate view [using for example weighted average(s) that reflect relative importance between the factors being measured]. This view of software quality on a linear continuum has to be supplemented by the identification of discrete Critical Programming Errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and myriad other problems that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known example of vulnerability is the Common Weakness Enumeration, a repository of vulnerabilities in the source code that make applications exposed to security breaches. The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation, as displayed in the picture above. Thus, each characteristic is affected by attributes at numerous levels of abstraction in the application and all of which must be included calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. The layered approach to calculating characteristic measures displayed in the figure above was first proposed by Boehm and his colleagues at TRW (Boehm, 1978) and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application. Structural quality analysis and measurement is performed through the analysis of the source code, the architecture, software framework, database schema in relationship to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed by development tools which are mostly concerned with implementation considerations and are crucial during debugging and testing activities. Reliability The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application's reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation. 
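One of the technical attributes in the checklist that follows is error and exception handling. As a minimal illustration (written in Rust for concreteness; the function names and file name are invented), the difference is between silently swallowing a failure and propagating it to a caller that can react to it:

use std::fs;
use std::io;

// Fragile variant: a failure to read the configuration is silently
// replaced by an empty string, hiding the defect until much later.
fn read_config_fragile(path: &str) -> String {
    fs::read_to_string(path).unwrap_or_default()
}

// Robust variant: the `?` operator propagates the error, so the caller
// must decide how to handle a missing or unreadable file.
fn read_config(path: &str) -> Result<String, io::Error> {
    let text = fs::read_to_string(path)?;
    Ok(text)
}

fn main() {
    match read_config("app.conf") {
        Ok(text) => println!("loaded {} bytes of configuration", text.len()),
        Err(err) => eprintln!("could not load configuration: {}", err),
    }
    let _ = read_config_fragile("app.conf"); // compiles, but masks failures
}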
Assessing reliability requires checks of at least the following software engineering best practices and technical attributes: Application Architecture Practices Coding Practices Complexity of algorithms Complexity of programming practices Compliance with Object-Oriented and Structured Programming best practices (when applicable) Component or pattern re-use ratio Dirty programming Error & Exception handling (for all layers - GUI, Logic & Data) Multi-layer design compliance Resource bounds management Software avoids patterns that will lead to unexpected behaviors Software manages data integrity and consistency Transaction complexity level Depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by the above list of best practices to ensure a better assessment of the reliability of the delivered software. Efficiency As with Reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practice which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data. Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes: Application Architecture Practices Appropriate interactions with expensive and/or remote resources Data access performance and data management Memory, network and disk space management Compliance with Coding Practices (Best coding practices) Security Software quality includes software security. Many security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting. These are well documented in lists maintained by CWE, and the SEI/Computer Emergency Center (CERT) at Carnegie Mellon University. Assessing security requires at least checking the following software engineering best practices and technical attributes: Implementation, Management of a security-aware and hardening development process, e.g. Security Development Lifecycle (Microsoft) or IBM's Secure Engineering Framework. Secure Application Architecture Practices Multi-layer design compliance Security best practices (Input Validation, SQL Injection, Cross-Site Scripting, Access control etc.) Secure and good Programming Practices Error & Exception handling Maintainability Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations with best practices in documentation, complexity avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code vs. unorganized and difficult-to-read code. 
Assessing maintainability requires checking the following software engineering best practices and technical attributes: Application Architecture Practices Architecture, Programs and Code documentation embedded in source code Code readability Code smells Complexity level of transactions Complexity of algorithms Complexity of programming practices Compliance with Object-Oriented and Structured Programming best practices (when applicable) Component or pattern re-use ratio Controlled level of dynamic coding Coupling ratio Dirty programming Documentation Hardware, OS, middleware, software components and database independence Multi-layer design compliance Portability Programming Practices (code level) Reduced duplicate code and functions Source code file organization cleanliness Maintainability is closely related to Ward Cunningham's concept of technical debt, which is an expression of the costs resulting of a lack of maintainability. Reasons for why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent, and often have their origin in developers' inability, lack of time and goals, their carelessness and discrepancies in the creation cost of and benefits from documentation and, in particular, maintainable source code. Size Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files etc. There are essentially two types of software sizes to be measured, the technical size (footprint) and the functional size: There are several software technical sizing methods that have been widely described. The most common technical sizing method is number of Lines of Code (#LOC) per technology, number of files, functions, classes, tables, etc., from which backfiring Function Points can be computed; The most common for measuring functional size is function point analysis. Function point analysis measures the size of the software deliverable from a user's perspective. Function point sizing is done based on user requirements and provides an accurate representation of both size for the developer/estimator and value (functionality to be delivered) and reflects the business functionality being delivered to the customer. The method includes the identification and weighting of user recognizable inputs, outputs and data stores. The size value is then available for use in conjunction with numerous measures to quantify and to evaluate software delivery and performance (development cost per function point; delivered defects per function point; function points per staff month.). The function point analysis sizing standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life-cycle and it is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology agnostic and can be used for comparative analysis across organizations and across industries. Since the inception of Function Point Analysis, several variations have evolved and the family of functional sizing techniques has broadened to include such sizing measures as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story Points. 
However, Function Points has a history of statistical accuracy, and has been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the "currency" by which services are delivered and performance is measured. One common limitation to the Function Point methodology is that it is a manual process and therefore it can be labor-intensive and costly in large scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality focused on introducing a computable metrics standard for automating the measuring of software size while the IFPUG keep promoting a manual approach as most of its activity rely on FP counters certifications. CISQ defines Sizing as to estimate the size of software to support cost estimating, progress tracking or other related software project management activities. Two standards are used: Automated Function Points to measure the functional size of software and Automated Enhancement Points to measure the size of both functional and non-functional code in one measure. Identifying critical programming errors Critical Programming Errors are specific architectural and/or coding bad practices that result in the highest, immediate or long term, business disruption risk. These are quite often technology-related and depend heavily on the context, business objectives and risks. Some may consider respect for naming conventions while others – those preparing the ground for a knowledge transfer for example – will consider it as absolutely critical. Critical Programming Errors can also be classified per CISQ Characteristics. Basic example below: Reliability Avoid software patterns that will lead to unexpected behavior (Uninitialized variable, null pointers, etc.) Methods, procedures and functions doing Insert, Update, Delete, Create Table or Select must include error management Multi-thread functions should be made thread safe, for instance servlets or struts action classes must not have instance/non-final static fields Efficiency Ensure centralization of client requests (incoming and data) to reduce network traffic Avoid SQL queries that don't use an index against large tables in a loop Security Avoid fields in servlet classes that are not final static Avoid data access without including error management Check control return codes and implement error handling mechanisms Ensure input validation to avoid cross-site scripting flaws or SQL injections flaws Maintainability Deep inheritance trees and nesting should be avoided to improve comprehensibility Modules should be loosely coupled (fanout, intermediaries) to avoid propagation of modifications Enforce homogeneous naming conventions Operationalized quality models Newer proposals for quality models such as Squale and Quamoco propagate a direct integration of the definition of quality attributes and measurement. By breaking down quality attributes or even defining additional layers, the complex, abstract quality attributes (such as reliability or maintainability) become more manageable and measurable. Those quality models have been applied in industrial contexts but have not received widespread adoption. Trivia "A science is as mature as its measurement tools." "I know it when I see it." "You cannot control what you cannot measure." (Tom DeMarco) "You cannot inspect quality into a product." (W. 
Edwards Deming) "The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten." (Anonymous) "If you don't start with a spec, every piece of code you write is a patch." (Leslie Lamport) See also Anomaly in software Accessibility Availability Best coding practices Cohesion and Coupling Cyclomatic complexity Coding conventions Computer bug Dependability GQM ISO/IEC 9126 Software Process Improvement and Capability Determination - ISO/IEC 15504 Programming style Quality: quality control, total quality management. Requirements management Scope (project management) Security Security engineering Software quality assurance Software architecture Software quality control Software metrics Software reusability Software standard Software testing Testability Static program analysis Further reading Android OS Quality Guidelines including checklists for UI, Security, etc. July 2021 Association of Maritime Managers in Information Technology & Communications (AMMITEC). Maritime Software Quality Guidelines. September 2017 Capers Jones and Olivier Bonsignour, "The Economics of Software Quality", Addison-Wesley Professional, 1st edition, December 31, 2011, CAT Lab - CNES Code Analysis Tools Laboratory (on GitHub) Girish Suryanarayana, Software Process versus Design Quality: Tug of War? Ho-Won Jung, Seung-Gweon Kim, and Chang-Sin Chung. Measuring software product quality: A survey of ISO/IEC 9126. IEEE Software, 21(5):10–13, September/October 2004. International Organization for Standardization. Software Engineering—Product Quality—Part 1: Quality Model. ISO, Geneva, Switzerland, 2001. ISO/IEC 9126-1:2001(E). Measuring Software Product Quality: the ISO 25000 Series and CMMI (SEI site) MSQF - A measurement based software quality framework Cornell University Library Omar Alshathry, Helge Janicke, "Optimizing Software Quality Assurance," compsacw, pp. 87–92, 2010 IEEE 34th Annual Computer Software and Applications Conference Workshops, 2010. Robert L. Glass. Building Quality Software. Prentice Hall, Upper Saddle River, NJ, 1992. Roland Petrasch, "The Definition of 'Software Quality': A Practical Approach", ISSRE, 1999 Software Quality Professional, American Society for Quality (ASQ) Software Quality Journal by Springer Nature Stephen H. Kan. Metrics and Models in Software Quality Engineering. Addison-Wesley, Boston, MA, second edition, 2002. Stefan Wagner. Software Product Quality Control. Springer, 2013. References Notes Bibliography External links When code is king: Mastering automotive software excellence (McKinsey, 2021) Embedded System Software Quality: Why is it so often terrible? What can we do about it? (by Philip Koopman) Code Quality Standards by CISQ™ CISQ Blog: https://blog.it-cisq.org Guide to software quality assurance (ESA) Guide to applying the ESA software engineering standards to small software projects (ESA) An Overview of ESA Software Product Assurance Services (NASA/ESA) Our approach to quality in Volkswagen Software Dev Center Lisbon Google Style Guides Ensuring Product Quality at Google (2011) NASA Software Assurance NIST Software Quality Group OMG/CISQ Automated Function Points (ISO/IEC 19515) OMG Automated Technical Debt Standard Automated Quality Assurance (articled in IREB by Harry Sneed) Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric (1996) Analyzing Application Quality by Using Code Analysis Tools (Microsoft, Documentation, Visual Studio, 2016) Systems thinking Quality Source code
1407354
https://en.wikipedia.org/wiki/Microsoft%20Money
Microsoft Money
Microsoft Money was a personal finance management software program by Microsoft. It had capabilities for viewing bank account balances, creating budgets, and tracking expenses, among other features. It was designed for computers using the Microsoft Windows operating system, and versions for Windows Mobile were also available (available for Money 2000-2006 on select versions of Windows Mobile, up to, but not including, Windows Mobile 5.0). Microsoft developed Money to compete with Quicken, another personal finance management software. Money is no longer being actively developed as a retail program. From its inception in 1991 until its discontinuation in 2009, Microsoft Money was commercial software. Microsoft discontinued sales of the software on June 30, 2009 and removed access to online services for existing Money installations in January 2011. In 2010, Microsoft released a replacement version, called Microsoft Money Plus Sunset, which allows users to open and edit Money data files, but lacks any online features or support. It was available in two editions: Deluxe, and Home & Business. In 2012, Money returned as a Windows Store app; however, this version of Money was designed as a news aggregator for personal finance, investing, and real estate. Other features include stock tracking across the world markets, a mortgage calculator, and a currency converter. It does not have any of the personal accounting and bookkeeping/money management features of the legacy desktop program. Microsoft 365 Family and Personal subscribers in the US have access to a premium Money in Excel template from Microsoft. Localization There were localized editions of Microsoft Money for the United Kingdom, France, Japan, Canada and an International English edition for other English speaking countries. However, Microsoft had not updated the U.K., French and international editions since Money 2005. The last Canadian edition was Money 2006. There were also localized editions for other countries, such as Russia, Brazil, Germany and Italy. However, these editions were discontinued due to what was believed to be an insufficient user-base to justify the expense of localization for more recent editions or the expense to integrate support for the national online-banking standard like HBCI in Germany. Microsoft offered a free downloadable time-limited trial version of Microsoft Money Plus. This trial version can import data files from the Canadian edition of Money, but not from other non-US editions. Users upgrading from other non-US editions must manually export and reimport their accounts, and may have to re-enter certain information by hand. History The first version of Microsoft Money dates back to 1991 and was originally part of the Microsoft Home series. Due to Microsoft's propensity to market product versions using the year number rather than the actual version number, the version number reported in the About dialogue box may not actually reflect that of the packaging of the distribution media. Note that a version 13.x was never created. Release history Discontinuation of Money In August 2008, Microsoft announced that it would stop releasing a new version of Money each year and had no version planned for 2009. The company also announced that it would no longer ship boxed versions of Microsoft Money to retail stores and would instead sell the product only as online downloads. 
On June 10, 2009, Microsoft announced that it would stop developing Money, would stop selling it by March 18 of the following year, and would continue supporting it until January 31, 2011. The company cited the changing needs of the marketplace as the reason for Money's demise, stating that "demand for a comprehensive personal finance toolset has declined." Product-activation servers used for Money 2007 and beyond were also to be deactivated after January 31, 2011, preventing these versions from being reinstalled after that date. Money Plus Sunset On June 17, 2010, Microsoft announced the release of Money Plus Sunset, a downloadable version of Money Plus Deluxe and Money Plus Home & Business. Money Plus Sunset does not require online activation or the installation of any previous version of Money on the user's computer, and it should not be installed over the original 2008 version if online services are still required. Money Plus Sunset comes with most of the functionality that was available in the retail versions of Money Plus. Two groups of features are missing: Money Plus Sunset cannot import data files from non-US editions of Money, and it lacks all the online services of earlier versions of Money, such as automatic statement downloads initiated by Money (though users may import downloaded OFX and QIF statements from their financial institution into their Money file), online bill payments, and online investment quotes (though one can "go to the Portfolio Manager and Update Prices – Update Prices Manually"). A few third-party add-ons have been made to overcome the online limitations of the sunset edition: MSMoneyQuotes is a for-pay tool to update quotes. The add-on was written by an ex-Microsoft employee who coded the Portfolio Manager in Money. PocketSense is a free tool to download bank account statements (via OFX) and quotes. Money in Excel Money in Excel is a Microsoft premium template for Excel available for Microsoft 365 Family and Personal subscribers in the US only. References External links Microsoft - Download Money Plus Sunset Deluxe Money Plus Sunset Page (Internet Archive) What Is Microsoft Money Plus Sunset? (Internet Archive) Microsoft Office Templates - Money in Excel Microsoft Money Home Page (Internet Archive) Microsoft Money 1.0 Screenshots Accounting software Money Pocket PC software Money
1592225
https://en.wikipedia.org/wiki/SecuROM
SecuROM
SecuROM was a CD/DVD copy protection and digital rights management (DRM) product developed by Sony DADC. It aims to prevent unauthorised copying and reverse engineering of software, primarily commercial computer games running on Microsoft Windows. The method of disc protection in later versions is data position measurement, which may be used in conjunction with online activation DRM. SecuROM gained prominence in the late 2000s but generated controversy because of its requirement for frequent online authentication and strict key activation limits. A 2008 class-action lawsuit was filed against Electronic Arts for its use of SecuROM in the video game Spore. Opponents, including the Electronic Frontier Foundation, believe that fair-use rights are restricted by DRM applications such as SecuROM. Software SecuROM limits the number of PCs activated at the same time from the same key and is not uninstalled upon removal of the game. SecuROM 7.x was the first version to include the SecuROM Removal Tool, which is intended to help users remove SecuROM after the software with which it was installed has been removed. Most titles now also include a revoke tool to deactivate the license; revoking all licenses would restore the original activation limit. As with Windows activation, a hardware change may appear as a change of computer, and force another activation of the software. Reformatting the computer may not consume an activation, if the Product Activation servers successfully detect it as a re-installation on the same set of hardware. The activation limit may be increased, on a case-by-case basis, if the user is shown to have reached this limit due to several hardware-triggered re-activations on the same PC. Known problems SecuROM may not detect that the original game disc is in the drive. This can occur on virtually any configuration, and reinserting the disc or rebooting the computer usually resolves the problem. Under Windows Vista, SecuROM will prevent a game from running if explicit congestion notification is enabled in Vista's networking configuration. Software that can be used to bypass copy protection, such as disk drive emulators and debugging software, will block the launch of the game and generate a security module error. Disabling such software usually fixes the issue, but in some cases uninstallation is required. SecuROM conflicts with other software, the best-known being SysInternals' Process Explorer (prior to version 11). Use of Process Explorer before an attempt to run the protected software would produce an error caused by a driver that was kept in memory after Process Explorer was closed. This is solved by either ensuring that Process Explorer is not running in the background when the game is launched, or updating Process Explorer. SecuROM has a hardware-level incompatibility with certain brands of optical drives. Workarounds exist. Controversies BioShock Purchasers of BioShock were required to activate the game online, and users who exceeded their permitted two activations would have to call to get their limit raised. The limit was raised to five activations because an incorrect phone number had been printed on the manual, and because there were no call centers outside of the United States. Separate activations were required for each user on the same machine. 2K Games removed the activation limit in 2008, although online activation was still required. The game is now available completely DRM-free. 
Mass Effect EA announced in May 2008 that Mass Effect for the PC would use SecuROM 7.x and require that the software be reactivated every 10 days. Customer complaints led EA to remove the 10-day activation, but SecuROM remained tied to the installation, with its product activation facility used to impose a limit of three activations. A call to customer support is required to reset the activation limit. Unlike BioShock, uninstalling the game does not refund a previously used activation. A de-authorization tool was released for the main game, but EA's customer support must still be contacted to deactivate the downloadable expansions. Spore Spore, released by EA on September 7, 2008, uses SecuROM. Spore has seen relatively substantial rates of unauthorized distribution among peer-to-peer groups, and with a reported 1.7 million downloads over BitTorrent networks, was the most user-redistributed game of 2008, according to TorrentFreak's "Top 10 most pirated games of 2008" list. Journalists note that this was a reaction from users unhappy with the copy protection. EA requires the player to authenticate the game online upon installation. This system was announced after EA's originally planned system, which would have required authentication every 10 days, met opposition from the public. Each individual product key of the game would be limited to use on three computers. This limit was raised to five computers, in response to customer complaints, but only one online user (required to access user-generated content) can be created per copy. A class-action lawsuit was filed by Maryland resident Melissa Thomas within the U.S. District Court against Electronic Arts over SecuROM's inclusion with Spore. Several other lawsuits have followed. Command & Conquer: Red Alert 3 Red Alert 3 included SecuROM until February 19, 2009, when it was removed from the Steam version. Non-Steam editions still include SecuROM. Despite this, every serial key can only be activated up to 5 times, and activations could be revoked for individual systems through the game's auto-run feature as of patch 1.05. Dragon Age II Reports emerged in March 2011 that EA's Dragon Age II included SecuROM, despite assertions from EA to the contrary. On March 12, 2011, a BioWare representative stated on the official Dragon Age II message boards that the game does not use SecuROM, but instead "a release control product which is made by the same team, but is a completely different product" which was later revealed to be Sony Release Control. The consumer advocacy group Reclaim Your Game has challenged this claim, based on their analysis of the files in question. Final Fantasy VII PC re-release In early August 2012 an updated version of Final Fantasy VII was re-released for PC. The updated version included SecuROM software, which was discovered when an early purchase link was included in the Square Enix store. Users who purchased and downloaded the game were unable to activate the game due to the activation servers not recognizing the activation key for their purchased games. The Sims 2 Ultimate Collection EA released The Sims 2 Ultimate Collection as a free download until July 31, 2014, but did not mention that the download also came with SecuROM included, which was later revealed by the site Reclaim Your Game. SecuROM was removed on November 1, 2017, more than three years after it was last offered on Origin. 
Tron: Evolution In 2019, due to Disney's decision to end its SecuROM license, Tron: Evolution, which relies on it to authenticate its installation and startup, was rendered unplayable and pulled from the Steam store nearly a decade after its release. Disney claims that efforts are being made to re-release the game without SecuROM, but there has so far been no further assurance of or timetable for such an action. See also CD-Cops Denuvo Digital rights management Don't Copy That Floppy Extended Copy Protection SafeDisc Sony BMG CD copy prevention scandal Tagès References External links Sony DADC Tweakguide's analysis of SecuROM controversy Compact Disc and DVD copy protection DRM for MacOS DRM for Windows Sony software
25989
https://en.wikipedia.org/wiki/RGB%20color%20model
RGB color model
The RGB color model is an additive color model in which the red, green, and blue primary colors of light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue. The main purpose of the RGB color model is for the sensing, representation, and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors. RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual red, green, and blue levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management. Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, OLED, quantum dots, etc.), computer and mobile phone displays, video projectors, multicolor LED displays and large screens such as the Jumbotron. Color printers, on the other hand are not RGB devices, but subtractive color devices typically using the CMYK color model. Additive colors To form a color with RGB, three light beams (one red, one green, and one blue) must be superimposed (for example by emission from a black screen or by reflection from a white screen). Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture. The RGB color model is additive in the sense that the three light beams are added together, and their light spectra add, wavelength for wavelength, to make the final color's spectrum. This is essentially opposite to the subtractive color model, particularly the CMY color model, that applies to paints, inks, dyes, and other substances whose color depends on reflecting the light under which we see them. Because of properties, these three colors create white, this is in stark contrast to physical colors, such as dyes which create black when mixed. Zero intensity for each component gives the darkest color (no light, considered the black), and full intensity of each gives a white; the quality of this white depends on the nature of the primary light sources, but if they are properly balanced, the result is a neutral white matching the system's white point. When the intensities for all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference of the strongest and weakest of the intensities of the primary colors employed. When one of the components has the strongest intensity, the color is a hue near this primary color (red-ish, green-ish, or blue-ish), and when two components have the same strongest intensity, then the color is a hue of a secondary color (a shade of cyan, magenta or yellow). A secondary color is formed by the sum of two primary colors of equal intensity: cyan is green+blue, magenta is blue+red, and yellow is red+green. 
Every secondary color is the complement of one primary color: cyan complements red, magenta complements green, and yellow complements blue. When all the primary colors are mixed in equal intensities, the result is white. The RGB color model itself does not define what is meant by red, green, and blue colorimetrically, and so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green, and blue primaries are defined, the color model then becomes an absolute color space, such as sRGB or Adobe RGB; see RGB color space for more details. Physical principles for the choice of red, green, and blue The choice of primary colors is related to the physiology of the human eye; good primaries are stimuli that maximize the difference between the responses of the cone cells of the human retina to light of different wavelengths, and that thereby make a large color triangle. The normal three kinds of light-sensitive photoreceptor cells in the human eye (cone cells) respond most to yellow (long wavelength or L), green (medium or M), and violet (short or S) light (peak wavelengths near 570 nm, 540 nm and 440 nm, respectively). The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive (overall) to yellowish-green light and to differences between hues in the green-to-orange region. As an example, suppose that light in the orange range of wavelengths (approximately 577 nm to 597 nm) enters the eye and strikes the retina. Light of these wavelengths would activate both the medium and long wavelength cones of the retina, but not equally—the long-wavelength cells will respond more. The difference in the response can be detected by the brain, and this difference is the basis of our perception of orange. Thus, the orange appearance of an object results from light from the object entering our eye and stimulating the different cones simultaneously but to different degrees. Use of the three primary colors is not sufficient to reproduce all colors; only colors within the color triangle defined by the chromaticities of the primaries can be reproduced by additive mixing of non-negative amounts of those colors of light. History of RGB color model theory and usage The RGB color model is based on the Young–Helmholtz theory of trichromatic color vision, developed by Thomas Young and Hermann von Helmholtz in the early to mid-nineteenth century, and on James Clerk Maxwell's color triangle that elaborated that theory (circa 1860). Photography The first experiments with RGB in early color photography were made in 1861 by Maxwell himself, and involved the process of combining three color-filtered separate takes. To reproduce the color photograph, three matching projections over a screen in a dark room were necessary. The additive RGB model and variants such as orange–green–violet were also used in the Autochrome Lumière color plates and other screen-plate technologies such as the Joly color screen and the Paget process in the early twentieth century. Color photography by taking three separate plates was used by other pioneers, such as the Russian Sergey Prokudin-Gorsky in the period 1909 through 1915. Such methods lasted until about 1960 using the expensive and extremely complex tri-color carbro Autotype process. 
When employed, the reproduction of prints from three-plate photos was done by dyes or pigments using the complementary CMY model, by simply using the negative plates of the filtered takes: reverse red gives the cyan plate, and so on. Television Before the development of practical electronic TV, there were patents on mechanically scanned color systems as early as 1889 in Russia. The color TV pioneer John Logie Baird demonstrated the world's first RGB color transmission in 1928, and also the world's first color broadcast in 1938, in London. In his experiments, scanning and display were done mechanically by spinning colorized wheels. The Columbia Broadcasting System (CBS) began an experimental RGB field-sequential color system in 1940. Images were scanned electrically, but the system still used a moving part: the transparent RGB color wheel rotating at above 1,200 rpm in synchronism with the vertical scan. The camera and the cathode-ray tube (CRT) were both monochromatic. Color was provided by color wheels in the camera and the receiver. More recently, color wheels have been used in field-sequential projection TV receivers based on the Texas Instruments monochrome DLP imager. The modern RGB shadow mask technology for color CRT displays was patented by Werner Flechsig in Germany in 1938. Personal computers Early personal computers of the late 1970s and early 1980s, such as those from Apple, and Commodore's Commodore VIC-20, used composite video whereas the Commodore 64 and the Atari family used S-Video derivatives. IBM introduced a 16-color scheme (four bits—one bit each for red, green, blue, and intensity) with the Color Graphics Adapter (CGA) for its first IBM PC (1981), later improved with the Enhanced Graphics Adapter (EGA) in 1984. The first manufacturer of a truecolor graphics card for PCs (the TARGA) was Truevision in 1987, but it was not until the arrival of the Video Graphics Array (VGA) in 1987 that RGB became popular, mainly due to the analog signals in the connection between the adapter and the monitor which allowed a very wide range of RGB colors. Actually, it had to wait a few more years because the original VGA cards were palette-driven just like EGA, although with more freedom than VGA, but because the VGA connectors were analog, later variants of VGA (made by various manufacturers under the informal name Super VGA) eventually added true-color. In 1992, magazines heavily advertised true-color Super VGA hardware. RGB devices RGB and displays One common application of the RGB color model is the display of colors on a cathode ray tube (CRT), liquid-crystal display (LCD), plasma display, or organic light emitting diode (OLED) display such as a television, a computer's monitor, or a large scale screen. Each pixel on the screen is built by driving three small and very close but still separated RGB light sources. At common viewing distance, the separate sources are indistinguishable, which tricks the eye to see a given solid color. All the pixels together arranged in the rectangular screen surface conforms the color image. During digital image processing each pixel can be represented in the computer memory or interface hardware (for example, a graphics card) as binary values for the red, green, and blue color components. When properly managed, these values are converted into intensities or voltages via gamma correction to correct the inherent nonlinearity of some devices, such that the intended intensities are reproduced on the display. 
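As a rough illustration of how a pixel's red, green, and blue components can be stored as binary values, the following Python sketch packs three 8-bit components into a single 24-bit integer and recovers them again; the byte layout chosen here is an assumption for illustration only, since real framebuffer and file formats differ (BGR ordering, alpha channels, padding, and so on).

    # Illustrative sketch: packing 8-bit R, G, B components into one
    # 24-bit value, the way a framebuffer or image format might store
    # a pixel. The ordering (R in the high byte) is an assumed choice.
    def pack_rgb(r: int, g: int, b: int) -> int:
        """Combine three 0-255 components into a single 24-bit value."""
        return (r << 16) | (g << 8) | b

    def unpack_rgb(pixel: int) -> tuple[int, int, int]:
        """Recover the three 0-255 components from a 24-bit value."""
        return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

    assert pack_rgb(255, 0, 0) == 0xFF0000          # brightest saturated red
    assert unpack_rgb(0x808080) == (128, 128, 128)  # a mid grey
    assert unpack_rgb(pack_rgb(0, 255, 255)) == (0, 255, 255)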
The Quattron released by Sharp uses RGB color and adds yellow as a sub-pixel, supposedly allowing an increase in the number of available colors. Video electronics RGB is also the term referring to a type of component video signal used in the video electronics industry. It consists of three signals—red, green, and blue—carried on three separate cables/pins. RGB signal formats are often based on modified versions of the RS-170 and RS-343 standards for monochrome video. This type of video signal is widely used in Europe since it is the best quality signal that can be carried on the standard SCART connector. This signal is known as RGBS (4 BNC/RCA terminated cables exist as well), but it is directly compatible with RGBHV used for computer monitors (usually carried on 15-pin cables terminated with 15-pin D-sub or 5 BNC connectors), which carries separate horizontal and vertical sync signals. Outside Europe, RGB is not very popular as a video signal format; S-Video takes that spot in most non-European regions. However, almost all computer monitors around the world use RGB. Video framebuffer A framebuffer is a digital device for computers which stores data in the so-called video memory (comprising an array of Video RAM or similar chips). This data goes either to three digital-to-analog converters (DACs), one per primary color (for analog monitors), or directly to digital monitors. Driven by software, the CPU (or other specialized chips) writes the appropriate bytes into the video memory to define the image. Modern systems encode pixel color values by devoting eight bits to each of the R, G, and B components. RGB information can be either carried directly by the pixel bits themselves or provided by a separate color look-up table (CLUT) if indexed color graphic modes are used. A CLUT is a specialized RAM that stores R, G, and B values that define specific colors. Each color has its own address (index)—consider it as a descriptive reference number that provides that specific color when the image needs it. The content of the CLUT is much like a palette of colors. Image data that uses indexed color specifies addresses within the CLUT to provide the required R, G, and B values for each specific pixel, one pixel at a time. Of course, before displaying, the CLUT has to be loaded with R, G, and B values that define the palette of colors required for each image to be rendered. Some video applications store such palettes in PAL files (Age of Empires game, for example, uses over half-a-dozen) and can combine CLUTs on screen. RGB24 and RGB32 This indirect scheme restricts the number of available colors in an image CLUT—typically 256-cubed (8 bits in three color channels with values of 0–255)—although each color in the RGB24 CLUT table has only 8 bits representing 256 codes for each of the R, G, and B primaries, making 16,777,216 possible colors. However, the advantage is that an indexed-color image file can be significantly smaller than it would be with only 8 bits per pixel for each primary. Modern storage, however, is far less costly, greatly reducing the need to minimize image file size. By using an appropriate combination of red, green, and blue intensities, many colors can be displayed. Current typical display adapters use up to 24 bits of information for each pixel: 8 bits per component multiplied by three components (see the Numeric representations section below); 24 bits give 2²⁴ = 256³ combinations, with each primary value being 8 bits with values of 0–255.
With this system, 16,777,216 (256³ or 2²⁴) discrete combinations of R, G, and B values are allowed, providing millions of different (though not necessarily distinguishable) hue, saturation and lightness shades. Increased shading has been implemented in various ways, some formats such as .png and .tga files among others using a fourth greyscale color channel as a masking layer, often called RGB32. For images with a modest range of brightnesses from the darkest to the lightest, eight bits per primary color provides good-quality images, but extreme images require more bits per primary color as well as the advanced display technology. For more information see High Dynamic Range (HDR) imaging. Nonlinearity In classic CRT devices, the brightness of a given point over the fluorescent screen due to the impact of accelerated electrons is not proportional to the voltages applied to the electron gun control grids, but to an expansive function of that voltage. The amount of this deviation is known as its gamma value (γ), the argument for a power law function, which closely describes this behavior. A linear response is given by a gamma value of 1.0, but actual CRT nonlinearities have a gamma value around 2.0 to 2.5. Similarly, the intensity of the output on TV and computer display devices is not directly proportional to the R, G, and B applied electric signals (or file data values which drive them through digital-to-analog converters). On a typical standard 2.2-gamma CRT display, an input intensity RGB value of (0.5, 0.5, 0.5) only outputs about 22% of full brightness (1.0, 1.0, 1.0), instead of 50%. To obtain the correct response, a gamma correction is used in encoding the image data, and possibly further corrections as part of the color calibration process of the device. Gamma affects black-and-white TV as well as color. In standard color TV, broadcast signals are gamma corrected. RGB and cameras In color television and video cameras manufactured before the 1990s, the incoming light was separated by prisms and filters into the three RGB primary colors feeding each color into a separate video camera tube (or pickup tube). These tubes are a type of cathode ray tube, not to be confused with that of CRT displays. With the arrival of commercially viable charge-coupled device (CCD) technology in the 1980s, first, the pickup tubes were replaced with this kind of sensor. Later, higher scale integration electronics was applied (mainly by Sony), simplifying and even removing the intermediate optics, thereby reducing the size of home video cameras and eventually leading to the development of full camcorders. Current webcams and mobile phones with cameras are the most miniaturized commercial forms of such technology. Photographic digital cameras that use a CMOS or CCD image sensor often operate with some variation of the RGB model. In a Bayer filter arrangement, green is given twice as many detectors as red and blue (ratio 1:2:1) in order to achieve higher luminance resolution than chrominance resolution. The sensor has a grid of red, green, and blue detectors arranged so that the first row is RGRGRGRG, the next is GBGBGBGB, and that sequence is repeated in subsequent rows. For every channel, missing pixels are obtained by interpolation in the demosaicing process to build up the complete image. Also, other processes used to be applied in order to map the camera RGB measurements into a standard RGB color space as sRGB.
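As a numerical companion to the Nonlinearity discussion above, the following is a minimal Python sketch of the pure power-law model, using the 2.2 exponent quoted there; it is an illustration only, since real transfer functions such as sRGB are piecewise and only approximately a power law.

    # Sketch of the simple power-law (gamma) model described above.
    # A display with gamma 2.2 turns a stored signal of 0.5 into roughly
    # 0.5 ** 2.2 ≈ 0.22 of full brightness; encoding with the inverse
    # exponent compensates, so the intended intensity is reproduced.
    GAMMA = 2.2  # assumed display gamma, for illustration only

    def display_intensity(signal: float, gamma: float = GAMMA) -> float:
        """Light actually emitted for a normalized 0..1 signal value."""
        return signal ** gamma

    def gamma_encode(linear: float, gamma: float = GAMMA) -> float:
        """Value to store or transmit so the display reproduces `linear`."""
        return linear ** (1.0 / gamma)

    print(round(display_intensity(0.5), 2))                # ~0.22, not 0.5
    print(round(display_intensity(gamma_encode(0.5)), 2))  # ~0.5 after correction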
RGB and scanners In computing, an image scanner is a device that optically scans images (printed text, handwriting, or an object) and converts them to a digital image which is transferred to a computer. Among other formats, flat, drum and film scanners exist, and most of them support RGB color. They can be considered the successors of early telephotography input devices, which were able to send consecutive scan lines as analog amplitude modulation signals through standard telephonic lines to appropriate receivers; such systems were in use in the press from the 1920s to the mid-1990s. Color telephotographs were sent as three separated RGB filtered images consecutively. Currently available scanners typically use CCD or contact image sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. Early color film scanners used a halogen lamp and a three-color filter wheel, so three exposures were needed to scan a single color image. Due to heating problems, the worst of them being the potential destruction of the scanned film, this technology was later replaced by non-heating light sources such as color LEDs. Numeric representations A color in the RGB color model is described by indicating how much of each of the red, green, and blue is included. The color is expressed as an RGB triplet (r,g,b), each component of which can vary from zero to a defined maximum value. If all the components are at zero the result is black; if all are at maximum, the result is the brightest representable white. These ranges may be quantified in several different ways: From 0 to 1, with any fractional value in between. This representation is used in theoretical analyses, and in systems that use floating point representations. Each color component value can also be written as a percentage, from 0% to 100%. In computers, the component values are often stored as unsigned integer numbers in the range 0 to 255, the range that a single 8-bit byte can offer. These are often represented as either decimal or hexadecimal numbers. High-end digital image equipment is often able to deal with larger integer ranges for each primary color, such as 0..1023 (10 bits), 0..65535 (16 bits) or even larger, by extending the 24-bits (three 8-bit values) to 32-bit, 48-bit, or 64-bit units (more or less independent from the particular computer's word size). For example, brightest saturated red is written in the different RGB notations as:
{| class="wikitable"
! Notation
! RGB triplet
|-
| Arithmetic
| (1.0, 0.0, 0.0)
|-
| Percentage
| (100%, 0%, 0%)
|-
| Digital 8-bit per channel
| (255, 0, 0) or sometimes #FF0000 (hexadecimal)
|-
| Digital 12-bit per channel
| (4095, 0, 0)
|-
| Digital 16-bit per channel
| (65535, 0, 0)
|-
| Digital 24-bit per channel
| (16777215, 0, 0)
|-
| Digital 32-bit per channel
| (4294967295, 0, 0)
|}
In many environments, the component values within the ranges are not managed as linear (that is, the numbers are nonlinearly related to the intensities that they represent), as in digital cameras and TV broadcasting and receiving due to gamma correction, for example. Linear and nonlinear transformations are often dealt with via digital image processing. Representations with only 8 bits per component are considered sufficient if gamma encoding is used. Following is the mathematical relationship between RGB space and HSI space (hue, saturation, and intensity: HSI color space), with R, G, and B normalized to the range [0, 1]:
I = (R + G + B) / 3
S = 1 − 3·min(R, G, B) / (R + G + B)
H = cos⁻¹( ((R − G) + (R − B)) / (2·√((R − G)² + (R − B)·(G − B))) )
If B > G, then H = 360° − H.
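The relationship above can be sketched directly in code. The following Python function is a minimal illustration that assumes inputs already normalized to the 0..1 range (divide 8-bit values by 255 first); it is not a production color-conversion routine.

    # Minimal sketch of the RGB-to-HSI relationship given above, assuming
    # r, g, b are floats in 0..1. Hue is returned in degrees; for pure
    # greys (saturation 0) hue is undefined and returned as 0.0 here.
    import math

    def rgb_to_hsi(r: float, g: float, b: float) -> tuple[float, float, float]:
        total = r + g + b
        i = total / 3.0                                               # intensity
        s = 0.0 if total == 0 else 1.0 - 3.0 * min(r, g, b) / total   # saturation
        if s == 0.0:
            return 0.0, 0.0, i                                        # grey: hue undefined
        num = 0.5 * ((r - g) + (r - b))
        den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:                                                     # the "If B > G" case above
            h = 360.0 - h
        return h, s, i

    print(rgb_to_hsi(1.0, 0.0, 0.0))  # brightest saturated red -> (0.0, 1.0, 0.333...)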
Color depth The RGB color model is one of the most common ways to encode color in computing, and several different digital representations are in use. The main characteristic of all of them is the quantization of the possible values per component (technically a sample) by using only integer numbers within some range, usually from 0 to some power of two minus one (2ⁿ − 1) to fit them into some bit groupings. Encodings of 1, 2, 4, 5, 8 and 16 bits per color are commonly found; the total number of bits used for an RGB color is typically called the color depth. Geometric representation Since colors are usually defined by three components, not only in the RGB model, but also in other color models such as CIELAB and Y'UV, among others, then a three-dimensional volume is described by treating the component values as ordinary Cartesian coordinates in a Euclidean space. For the RGB model, this is represented by a cube using non-negative values within a 0–1 range, assigning black to the origin at the vertex (0, 0, 0), and with increasing intensity values running along the three axes up to white at the vertex (1, 1, 1), diagonally opposite black. An RGB triplet (r,g,b) represents the three-dimensional coordinate of the point of the given color within the cube or its faces or along its edges. This approach allows computations of the color similarity of two given RGB colors by simply calculating the distance between them: the shorter the distance, the higher the similarity. Out-of-gamut computations can also be performed this way. Colors in web-page design The RGB color model for HTML was formally adopted as an Internet standard in HTML 3.2, though it had been in use for some time before that. Initially, the limited color depth of most video hardware led to a limited color palette of 216 RGB colors, defined by the Netscape Color Cube. With the predominance of 24-bit displays, the use of the full 16.7 million colors of the HTML RGB color code no longer poses problems for most viewers. The web-safe color palette consists of the 216 (6³) combinations of red, green, and blue where each color can take one of six values (in hexadecimal): #00, #33, #66, #99, #CC or #FF (based on the 0 to 255 range for each value discussed above). These hexadecimal values = 0, 51, 102, 153, 204, 255 in decimal, which = 0%, 20%, 40%, 60%, 80%, 100% in terms of intensity. This seems fine for splitting up 216 colors into a cube of dimension 6. However, lacking gamma correction, the perceived intensity on a standard 2.5 gamma CRT / LCD is only: 0%, 2%, 10%, 28%, 57%, 100%. See the actual web safe color palette for a visual confirmation that the majority of the colors produced are very dark. The syntax in CSS is: rgb(#,#,#) where # equals the proportion of red, green, and blue respectively. This syntax can be used after such selectors as "background-color:" or (for text) "color:". Color management Proper reproduction of colors, especially in professional environments, requires color management of all the devices involved in the production process, many of them using RGB. Color management results in several transparent conversions between device-independent and device-dependent color spaces (RGB and others, as CMYK for color printing) during a typical production cycle, in order to ensure color consistency throughout the process. Along with the creative processing, such interventions on digital images can damage the color accuracy and image detail, especially where the gamut is reduced.
Professional digital devices and software tools allow for 48 bpp (bits per pixel) images to be manipulated (16 bits per channel), to minimize any such damage. ICC-compliant applications, such as Adobe Photoshop, use either the Lab color space or the CIE 1931 color space as a Profile Connection Space when translating between color spaces. RGB model and luminance–chrominance formats relationship All luminance–chrominance formats used in the different TV and video standards such as YIQ for NTSC, YUV for PAL, YDBDR for SECAM, and YPBPR for component video use color difference signals, by which RGB color images can be encoded for broadcasting/recording and later decoded into RGB again to display them. These intermediate formats were needed for compatibility with pre-existent black-and-white TV formats. Also, those color difference signals need lower data bandwidth compared to full RGB signals. Similarly, current high-efficiency digital color image data compression schemes such as JPEG and MPEG store RGB color internally in YCBCR format, a digital luminance–chrominance format based on YPBPR. The use of YCBCR also allows computers to perform lossy subsampling with the chrominance channels (typically to 4:2:2 or 4:1:1 ratios), which reduces the resultant file size. See also CMY color model CMYK color model Color theory Colour banding Complementary colors DCI-P3 – a common RGB color space List of color palettes ProPhoto RGB color space RG color space RGBA color model scRGB TSL color space References External links RGB mixer Demonstrative color conversion applet Color space 1861 introductions
60490
https://en.wikipedia.org/wiki/Abstract%20interpretation
Abstract interpretation
In computer science, abstract interpretation is a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices. It can be viewed as a partial execution of a computer program which gains information about its semantics (e.g., control-flow, data-flow) without performing all the calculations. Its main concrete application is formal static analysis, the automatic extraction of information about the possible executions of computer programs; such analyses have two main usages: inside compilers, to analyse programs to decide whether certain optimizations or transformations are applicable; for debugging or even the certification of programs against classes of bugs. Abstract interpretation was formalized by the French computer scientist working couple Patrick Cousot and Radhia Cousot in the late 1970s. Intuition This section illustrates abstract interpretation by means of real-world, non-computing examples. Consider the people in a conference room. Assume a unique identifier for each person in the room, like a social security number in the United States. To prove that someone is not present, all one needs to do is see if their social security number is not on the list. Since two different people cannot have the same number, it is possible to prove or disprove the presence of a participant simply by looking up his or her number. However it is possible that only the names of attendees were registered. If the name of a person is not found in the list, we may safely conclude that that person was not present; but if it is, we cannot conclude definitely without further inquiries, due to the possibility of homonyms (for example, two people named John Smith). Note that this imprecise information will still be adequate for most purposes, because homonyms are rare in practice. However, in all rigor, we cannot say for sure that somebody was present in the room; all we can say is that he or she was possibly here. If the person we are looking up is a criminal, we will issue an alarm; but there is of course the possibility of issuing a false alarm. Similar phenomena will occur in the analysis of programs. If we are only interested in some specific information, say, "was there a person of age n in the room?", keeping a list of all names and dates of births is unnecessary. We may safely and without loss of precision restrict ourselves to keeping a list of the participants' ages. If this is already too much to handle, we might keep only the age of the youngest, m and oldest person, M. If the question is about an age strictly lower than m or strictly higher than M, then we may safely respond that no such participant was present. Otherwise, we may only be able to say that we do not know. In the case of computing, concrete, precise information is in general not computable within finite time and memory (see Rice's theorem and the halting problem). Abstraction is used to allow for generalized answers to questions (for example, answering "maybe" to a yes/no question, meaning "yes or no", when we (an algorithm of abstract interpretation) cannot compute the precise answer with certainty); this simplifies the problems, making them amenable to automatic solutions. One crucial requirement is to add enough vagueness so as to make problems manageable while still retaining enough precision for answering the important questions (such as "might the program crash?"). 
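The age-range summary described above can be phrased as a tiny program. The sketch below is only an illustration of trading precision for a smaller summary, with hypothetical function names; it is not taken from any abstract-interpretation tool.

    # Instead of the exact multiset of ages, the abstract summary keeps
    # only the youngest and oldest age (an interval). Queries outside
    # the interval get a definite "no"; anything inside is only "maybe".
    def abstract_ages(ages: list[int]) -> tuple[int, int]:
        """Abstraction: forget everything except the interval [m, M]."""
        return min(ages), max(ages)

    def was_someone_aged(summary: tuple[int, int], n: int) -> str:
        m, big_m = summary
        return "no" if n < m or n > big_m else "maybe"

    summary = abstract_ages([23, 35, 35, 47, 61])
    print(was_someone_aged(summary, 18))  # "no"    - safe, definite answer
    print(was_someone_aged(summary, 40))  # "maybe" - sound but imprecise:
                                          #           nobody is actually 40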
Abstract interpretation of computer programs Given a programming or specification language, abstract interpretation consists of giving several semantics linked by relations of abstraction. A semantics is a mathematical characterization of a possible behavior of the program. The most precise semantics, describing very closely the actual execution of the program, are called the concrete semantics. For instance, the concrete semantics of an imperative programming language may associate to each program the set of execution traces it may produce – an execution trace being a sequence of possible consecutive states of the execution of the program; a state typically consists of the value of the program counter and the memory locations (globals, stack and heap). More abstract semantics are then derived; for instance, one may consider only the set of reachable states in the executions (which amounts to considering the last states in finite traces). The goal of static analysis is to derive a computable semantic interpretation at some point. For instance, one may choose to represent the state of a program manipulating integer variables by forgetting the actual values of the variables and only keeping their signs (+, − or 0). For some elementary operations, such as multiplication, such an abstraction does not lose any precision: to get the sign of a product, it is sufficient to know the sign of the operands. For some other operations, the abstraction may lose precision: for instance, it is impossible to know the sign of a sum whose operands are respectively positive and negative. Sometimes a loss of precision is necessary to make the semantics decidable (see Rice's theorem and the halting problem). In general, there is a compromise to be made between the precision of the analysis and its decidability (computability), or tractability (computational cost). In practice the abstractions that are defined are tailored to both the program properties one desires to analyze, and to the set of target programs. The first large scale automated analysis of computer programs with abstract interpretation was motivated by the accident that resulted in the destruction of the first flight of the Ariane 5 rocket in 1996. Formalization Let L be an ordered set, called a concrete set and let L′ be another ordered set, called an abstract set. These two sets are related to each other by defining total functions that map elements from one to the other. A function α is called an abstraction function if it maps an element x in the concrete set L to an element α(x) in the abstract set L′. That is, element α(x) in L′ is the abstraction of x in L. A function γ is called a concretization function if it maps an element x′ in the abstract set L′ to an element γ(x′) in the concrete set L. That is, element γ(x′) in L is a concretization of x′ in L′. Let L1, L2, L′1 and L′2 be ordered sets. The concrete semantics f is a monotonic function from L1 to L2. A function f′ from L′1 to L′2 is said to be a valid abstraction of f if for all x′ in L′1, (f ∘ γ)(x′) ≤ (γ ∘ f′)(x′). Program semantics are generally described using fixed points in the presence of loops or recursive procedures. Let us suppose that L is a complete lattice and let f be a monotonic function from L into L. Then, any x′ such that f(x′) ≤ x′ is an abstraction of the least fixed-point of f, which exists, according to the Knaster–Tarski theorem. The difficulty is now to obtain such an x′. 
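Before the text turns to how such an x′ is actually computed, the sign abstraction mentioned earlier can be made concrete with a small sketch. The following Python fragment is illustrative only (the names and encoding are assumptions, not standard notation): abstract multiplication of signs is exact, while abstract addition of a positive and a negative operand must conservatively answer "unknown".

    # Sign domain {NEG, ZERO, POS, TOP}, TOP meaning "sign unknown".
    NEG, ZERO, POS, TOP = "-", "0", "+", "?"

    def alpha(n: int) -> str:
        """Abstraction of a single concrete integer to its sign."""
        return ZERO if n == 0 else (POS if n > 0 else NEG)

    def abs_mul(a: str, b: str) -> str:
        if ZERO in (a, b):
            return ZERO                 # zero times anything is zero
        if TOP in (a, b):
            return TOP
        return POS if a == b else NEG   # signs multiply like +1/-1

    def abs_add(a: str, b: str) -> str:
        if a == ZERO:
            return b
        if b == ZERO:
            return a
        return a if a == b else TOP     # pos + neg: sign cannot be known

    assert abs_mul(alpha(-3), alpha(7)) == alpha(-3 * 7)  # exact
    assert abs_add(alpha(-3), alpha(7)) == TOP            # imprecise but sound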
If L′ is of finite height, or at least verifies the ascending chain condition (all ascending sequences are ultimately stationary), then such an x′ may be obtained as the stationary limit of the ascending sequence x′n defined by induction as follows: x′0=⊥ (the least element of L′) and x′n+1=f′(x′n). In other cases, it is still possible to obtain such an x′ through a widening operator ∇: for all x and y, x ∇ y should be greater than or equal to both x and y, and for any sequence y′n, the sequence defined by x′0=⊥ and x′n+1=x′n ∇ y′n is ultimately stationary. We can then take y′n=f′(x′n). In some cases, it is possible to define abstractions using Galois connections (α, γ) where α is from L to L′ and γ is from L′ to L. This supposes the existence of best abstractions, which is not necessarily the case. For instance, if we abstract sets of couples (x, y) of real numbers by enclosing convex polyhedra, there is no optimal abstraction to the disc defined by x² + y² ≤ 1. Examples of abstract domains One can assign to each variable x available at a given program point an interval [Lx, Hx]. A state assigning the value v(x) to variable x will be a concretization of these intervals if for all x, v(x) is in [Lx, Hx]. From the intervals [Lx, Hx] and [Ly, Hy] for variables x and y, one can easily obtain intervals for x+y ([Lx+Ly, Hx+Hy]) and for x−y ([Lx−Hy, Hx−Ly]); note that these are exact abstractions, since the set of possible outcomes for, say, x+y, is precisely the interval ([Lx+Ly, Hx+Hy]). More complex formulas can be derived for multiplication, division, etc., yielding so-called interval arithmetic. Let us now consider the following very simple program: y = x; z = x - y; With reasonable arithmetic types, the result for z should be zero. But if we do interval arithmetic starting from x in [0, 1], one gets z in [−1, +1]. While each of the operations taken individually was exactly abstracted, their composition isn't. The problem is evident: we did not keep track of the equality relationship between x and y; actually, this domain of intervals does not take into account any relationships between variables, and is thus a non-relational domain. Non-relational domains tend to be fast and simple to implement, but imprecise. Some examples of relational numerical abstract domains are: congruence relations on integers; convex polyhedra, with some high computational cost; difference-bound matrices; "octagons"; linear equalities; and combinations thereof (such as the reduced product). When one chooses an abstract domain, one typically has to strike a balance between keeping fine-grained relationships, and high computational costs. See also Model checking Symbolic simulation Symbolic execution List of tools for static code analysis — contains both abstract-interpretation based (sound) and ad hoc (unsound) tools Static program analysis — overview of analysis methods, including, but not restricted to, abstract interpretation Interpreter (computing) References External links A web-page on Abstract Interpretation maintained by Patrick Cousot Roberto Bagnara's paper showing how it is possible to implement an abstract-interpretation based static analyzer for a C-like programming language The Static Analysis Symposia, proceedings appearing in the Springer LNCS series Conference on Verification, Model-Checking, and Abstract Interpretation (VMCAI), affiliated at the POPL conference, proceedings appearing in the Springer LNCS series Lecture notes Abstract Interpretation. Patrick Cousot. MIT.
David Schmidt's lecture notes on abstract interpretation Møller and Schwarzbach's lecture notes on Static Program Analysis Agostino Cortesi's lecture notes on Program Analysis and Verification Slides by Grégoire Sutre going through every step of Abstract Interpretation with many examples - also introducing Galois connections Abstract interpretation
26841983
https://en.wikipedia.org/wiki/Open-source%20bounty
Open-source bounty
An open-source bounty is a monetary reward for completing a task in an open-source software project. Description Bounties are usually offered as an incentive for fixing software bugs or implementing minor features. Bounty driven development is one of the Business models for open-source software. The compensation offered for an open-source bounty is usually small. Alternatives When open-source projects require bigger funds they usually apply for grants or, most recently, launch crowdsourcing or crowdfunding campaigns, typically organized over platforms like Kickstarter or BountyC (since 2004 also crowdfunding). Examples of bounties 2018: Mozilla Firefox's WebRTC (Web Real-Time Communications) bug was submitted by Education First to CanYa platform Bountysource for $50,000 Sun MicroSystems (now owned by Oracle Corporation) has offered $1 million in bounties for OpenSolaris, NetBeans, OpenSPARC, Project GlassFish, OpenOffice.org, and OpenJDK. 2004: Mozilla introduced a Security Bug Bounty Program, offering $500 to anyone who finds a "critical" security bug in Mozilla. 2015: Artifex Software offers up to $1000 to anyone who fixes some of the issues posted on Ghostscript Bugzilla. Two software bounties were completed for the classic Commodore Amiga Motorola 680x0 version of the AROS operating system, producing a free Kickstart ROM replacement for use with the UAE emulator and FPGA Amiga reimplementations, as well as original Amiga hardware. RISC OS Open bounty scheme to encourage development of RISC OS AmiZilla was an over $11,000 bounty to port the Firefox web-browser to AmigaOS, MorphOS & AROS. While the bounty produced little results it inspired many bounty systems in the Amiga community including Timberwolf, Power2people, AROS Bounties, Amigabounty.net and many more. Examples of websites listing bounties for multiple projects huntr pays its users for finding and fixing vulnerabilities in most open source projects on GitHub. Bountysource Gitcoin is an open-source bounty marketplace which has awarded more than $735,000 through its platform since its launch in November 2017, as of January 2019. It has GitHub integration and allows OSS maintainers to add bounties to specific issues on their GitHub repositories, and award contributors for pull requests that solve the issue. Devcash is a decentralized bounty platform where users utilize DEV to crowdsource developer talent or perform developer tasks and earn DEV. IssueHunt is an issue-based bounty platform for open source projects. Anyone can fund specific issues of GitHub repo, and these bounties will be distributed to contributors and maintainers. Gitpay.me An issue bounty platform for Git-powered projects with an integrated payment system. Rysolv is an open source issue-bounty platform. Users can crowdsource bounties, and get paid for fixing issues. See also Business models for open-source software Crowdfunding References Free software Grants (money) Philanthropy
3064903
https://en.wikipedia.org/wiki/Alsamixer
Alsamixer
alsamixer is a graphical mixer program for the Advanced Linux Sound Architecture (ALSA) that is used to configure sound settings and adjust the volume. It has an ncurses user interface and does not require the X Window System. It supports multiple sound cards with multiple devices. See also Aplay Softvol References External links Alsamixer - ALSA wiki Advanced Linux Sound Architecture Linux audio video-related software Software that uses ncurses
298840
https://en.wikipedia.org/wiki/Bacolod
Bacolod
Bacolod, officially the City of Bacolod, is a highly urbanized city in the Western Visayas region of the Philippines. It is the capital of the province of Negros Occidental, where it is geographically situated but from which it is administratively independent. It is the most populous city in Western Visayas and the second most populous city in the entire Visayas after Cebu City. It is the center of the Bacolod metropolitan area, which also includes the cities of Silay and Talisay, with a total population of 791,019 inhabitants. It is notable for its MassKara Festival held during the third week of October and is known for being a relatively friendly city, as it bears the nickname "The City of Smiles". The city is also famous for its local delicacies piaya and chicken inasal. Etymology Bacólod is derived from bakólod (Old Spelling: bacólod), the Old Hiligaynon (Old Ilonggo) (Old Spelling: Ylongo and Ilongo) word for a "hill, mound, rise, hillock, down, any small eminence or elevation", since the resettlement was founded on a stony, hilly area, now the barangay of Granada. It was officially called Ciudad de Bacólod (City of Bacolod) when Municipalidad de Bacólod (Municipality of Bacolod) was converted into a city in 1938. History Spanish colonial period Historical church accounts provide a glimpse of the early years of Bacolod as a small settlement by the riverbank known as Magsungay (translated as "horn-shaped" in English). When the neighboring settlement of Bago was elevated into the status of a small town in 1575, it had several religious dependencies, one of which was the village of Magsungay. The early missionaries placed the village under the care and protection of Saint Sebastian sometime in the middle of the 18th century. A corregidor by the name of Luis Fernando de Luna donated a relic of the saint for the growing mission, and since then, the village came to be known as San Sebastián de Magsung̃ay. Bacolod was not established as a town until 1755 or 1756, after the inhabitants of the coastal settlement of San Sebastián de Magsung̃ay were attacked by forces under Datu Bantílan of Sulu on July 14, 1755, and the villagers transferred from the coast to a hilly area called Bacólod (which is now the barangay of Granada). Bernardino de los Santos became the first gobernadorcillo. The town of Bacolod was constituted as a parroquia in 1788 under the secular clergy, but did not have a resident priest until 1802, as the town was served by the priest from Bago, and later Binalbagan. By 1790, slave raids on Bacolod by Moro pirates had ceased. On 11 February 1802, Fr. Eusebio Laurencio became acting parish priest of Bacolod. In September 1806, Fr. León Pedro was appointed interim parish priest and the following year became the first regular parish priest. In September 1817, Fray Julián Gonzaga from Barcelona was appointed as the parish priest. He encouraged the people to settle once again near the sea. He also encouraged migration to Bacolod and the opening of lands to agriculture and industry. In 1846, upon the request of Romualdo Jimeno, bishop of Cebu and Negros at that time, Governor-General Narciso Clavería y Zaldúa sent to Negros a team of Recollect missionaries headed by priest Fernando Cuenca. A decree of 20 June 1848 by Gobernador General Clavería ordered the restructuring of Negros politically and religiously.
The following year (1849), Negros Island Gobernadorcillo Manuel Valdevieso y Morquecho transferred the capital of the Province of Negros from Himamaylan to Bacolod and the Augustinian Recollects were asked to assume spiritual administration of Negros, which they did that same year. Transfer of Bacolod to the Recollects, however, took place only in 1871. Fray Mauricio Ferrero became the first Augustinian Recollect parish priest of Bacolod and successor to the secular priest, Fr. Mariano Ávila. In 1863, a compulsory primary public school system was set up. In 1889, Bacolod became the capital of Occidental Negros when the Province of Negros was politically divided into the separate provinces of Occidental Negros (Spanish: Negros Occidental) and Oriental Negros (Spanish: Negros Oriental). Revolution and Republic of Negros The success of the uprising in Bacolod and environs was attributed to the low morale of the local imperial Spanish detachment, due to its defeat in Panay and Luzon and to the psychological warfare waged by Generals Aniceto Lacson and Juan Araneta. In 1897, a battle in Bacolod was fought at Matab-ang River. A year later, on November 5, 1898, the Negrense Revolucionarios (), armed with knives, bolos, spears, and rifle-like nipa palm stems, and pieces of sawali or amakan mounted on carts, captured the convent, presently Palacio Episcopal (), where Colonel Isidro de Castro y Cisneros, well-armed cazadores () and platoons of Guardias Civiles (), surrendered. On 7 November 1898, most of the revolutionary army gathered together to establish a provisional junta and to confirm the elections of Aniceto Lacson as president, Juan Araneta as war-delegate, as well as the other officials. For a brief moment, the provinces of Occidental Negros and Oriental Negros were reunited under the cantonal government of the Negrense Revolucionarios, from 6 November 1898 to the end of February 1899, making Bacolod the capital. In March 1899, the American forces led by Colonel James G. Smith occupied Bacolod, the revolutionary capital of República Cantonal de Negros (). They occupied Bacolod after the invitation of the Republic of Negros which sought protectorate status for their nation under the United States. American colonial period The Cantonal Republic of Negros became a U.S. territory on April 30, 1901. This separated Negros Island once again, reverting Bacolod to its status as the capital of Occidental Negros. The public school of Instituto Rizal () opened its doors to students on 1 July 1902. Colegio de Nuestra Señora de la Consolación (), the first private institution in the province of Negros Occidental, was established in Bacolod by the Augustinian sisters on March 11, 1919, and opened in July 1919. A historic event took place in 1938 when Municipality of Bacolod was elevated into a city through Commonwealth Act No. 326 passed by the 1st National Assembly of the Philippines creating the City of Bacolod. Assemblyman Pedro C. Hernáez of the second district of Negros Occidental sponsored the bill. The law was passed on June 18, 1938. Bacolod was formally inaugurated as a chartered city on October 19, 1938, by virtue of Commonwealth Act No. 404, highlighted by the visit of Commonwealth President Manuel L. Quezon. President Quezon appointed Alfredo Montelíbano, Sr. as the first city mayor of Bacolod. Japanese occupation and allied liberation In World War II, Bacolod was occupied by the Japanese forces on May 21, 1942. 
Lieutenant General Kawano "Kono" Takeshi, the Japanese commanding officer of the 77th Infantry Brigade, 102nd Division, seized the homes of Don Generoso Villanueva, a prominent sugar planter, and of his brother-in-law, Don Mariano Ramos, the first appointed Municipal President of Bacolod. Villanueva's home, the Daku Balay, served as the "seat of power", the occupational headquarters for the Japanese forces in Negros and the entire Central Visayan region of the Philippines, and, being the tallest building in Bacolod, it also served as the city's watchtower. Lt. General Takeshi lived in Don Generoso's home for the duration of the war and also used it as his office, while Don Mariano's home was occupied by a Japanese colonel serving under his command. The city was liberated by joint Philippine and American forces on May 29, 1945. It took time to rebuild the city after liberation; however, upon the orders of Lt. General Takeshi, the homes of Villanueva and Ramos were both spared from destruction by the retreating Japanese forces. In March 1945, following the invasion by American and Philippine Commonwealth forces, the withdrawal of the Japanese army into the mountains and the temporary occupation of Bacolod by the combined U.S. and Philippine Commonwealth armed forces, the house of Villanueva was occupied for approximately five months by Major General Rapp Brush, commander of the 40th Infantry Division, known as the "Sun Burst" Division. The local Philippine military established the general headquarters and camp bases of the Philippine Commonwealth Army, which was active from January 3, 1942, to June 30, 1946. The 7th Constabulary Regiment of the Philippine Constabulary was also active from October 28, 1944, to June 30, 1946, and was stationed in Bacolod during and after World War II. Independent Philippines When the country finally gained independence from the United States, the city's public markets and slaughterhouses were rebuilt during the administration of then Mayor Vicente Remitió from 1947 to 1949. In 1948, a fire razed a portion of the records section of the old city hall, consuming the rear end of the building and, with it, numerous priceless documents of the city. Bacolod was classified as a highly urbanized city on September 27, 1984, by the provisions of Sections 166 and 168 of the Local Government Code and DILG Memo Circular No. 83-49. In January 1985, the original hardwood and coral structure of the Palacio Episcopal was almost entirely destroyed by a fire. Among the losses were items of significant historical value. The reconstruction of the Palacio, which took more than two years, was completed in 1990. In 2008, Bacolod topped a survey by MoneySense Magazine as the "Best Place to Live in the Philippines". The city has also been declared by the Department of Science and Technology as a "center of excellence" for information technology and business process management operations. In 2017 and 2019, Bacolod was named the "Top Philippine Model City", recognizing it as the most livable urban center in the country, by The Manila Times. In 2021, Bacolod received the "2021 Most Business-Friendly Local Government Unit (LGU) Award" under the category of highly urbanized cities outside the National Capital Region (NCR) in the search organized by the Philippine Chamber of Commerce and Industry (PCCI). This was the second time Bacolod received such an award, having won the same title in 2007.
Geography Bacolod is located on the northwestern coast of the large island of Negros. Within the island, it is bounded on the north by the city of Talisay, on the east by the town of Murcia and on the south by the city of Bago. As a coastal city, it is bounded on the west by the Guimaras Strait, which serves as a natural border between the northwestern Negros Island Region and the neighboring Western Visayas. The city lies at 10 degrees 40 minutes 40 seconds north and 122 degrees 54 minutes 25 seconds east, with the Bacolod Public Plaza as the benchmark. The city's territory, which includes straits, bodies of water and the reclamation area, is composed of 61 barangays (villages) and 639 purok (smaller units composing a barangay). It is accessible by sea through the ports of Banago, the BREDCO Port in the Reclamation Area, and the port of Pulupandan. By air, it is accessible through the Bacolod–Silay International Airport, which is approximately 13 kilometers from the center of the city. Bacolod lies on a level area that slopes gently down as it extends toward the sea, with an average slope of 0.9 percent for the city proper and between 3 and 5 percent for the suburbs. The city sits only slightly above sea level, with the Bacolod City Public Plaza as the benchmark. Bacolod has two pronounced seasons, wet and dry. The rainy (wet) season lasts from May to January of the following year, with heavy rains occurring during August and September. The dry season runs from February until the last week of April. Barangays Bacolod is politically subdivided into 61 barangays. Barangay 1 (Poblacion) Barangay 2 (Población) Barangay 3 (Población) Barangay 4 (Población) Barangay 5 (Población) Barangay 6 (Población) Barangay 7 (Población) Barangay 8 (Población) Barangay 9 (Población) Barangay 10 (Población) Barangay 11 (Población) Barangay 12 (Población) Barangay 13 (Población) Barangay 14 (Población) Barangay 15 (Población) Barangay 16 (Población) Barangay 17 (Población) Barangay 18 (Población) Barangay 19 (Población) Barangay 20 (Población) Barangay 21 (Población) Barangay 22 (Población) Barangay 23 (Población) Barangay 24 (Población) Barangay 25 (Población) Barangay 26 (Población) Barangay 27 (Población) Barangay 28 (Población) Barangay 29 (Población) Barangay 30 (Población) Barangay 31 (Población) Barangay 32 (Población) Barangay 33 (Población) Barangay 34 (Población) Barangay 35 (Población) Barangay 36 (Población) Barangay 37 (Población) Barangay 38 (Población) Barangay 39 (Población) Barangay 40 (Población) Barangay 41 (Población) Alangilan Alijis Banago Bata Cabug Estefanía Felisa Granada Handumanan Mandalagan Mansilingan Montevista Pahanocoy Punta Taytay Singcang-Airport Sum-ag Taculing Tangub Villamonte Vista Alegre Demographics Bacolod is the most populous city in Western Visayas and the second most populous city in the entire Visayas, after Cebu City. Economy Bacolod is the Philippines' third fastest-growing economy in terms of information technology (IT) and business process outsourcing (BPO) activities. The city has been recommended by the Information and Communication Technology Office of the Department of Science and Technology (DOST) and the Business Processing Association of the Philippines (BPAP) as the best location in the Visayas for BPO activities.
Bacolod ranked 3rd among the top ten "Next Wave Cities" of the Philippines for the best location for BPO and offshoring according to a 2010 report of the Commission on Information and Communications Technology. In 2013, the city was declared a "center of excellence" for IT-business process management operations by the DOST, joining the ranks of Metro Manila, Metro Cebu and Clark Freeport Zone. Among the notable BPO companies operating in the city are Concentrix, Teleperformance, TTEC, iQor, Transcom, Ubiquity Global Services, Panasiatic Solutions, Focus Direct Inc. – Bacolod, Pierre and Paul Solutions, Inc., TELESYNERGY Corp. – Bacolod, Hit Rate Solutions/Next Level IT Teleservices, Focusinc Group Corporation (FGC Plus), Pathcutters Philippines Inc., TeleQuest Voice Services (TQVS), ARB Call Facilities, Inc., and Global Strategic. In 2012, a portion of the Paglaum Sports Complex was partitioned for the construction of the provincial government-owned Negros First CyberCentre (NFCC) as an IT-BPO Outsourcing Hub with a budget of P674-million. It is located at Lacson corner Hernaez Streets and offers up to 22,000 square meters of mixed IT-BPO and commercial spaces. Its facilities are divided into three sections — Information Technology, Commercial Support Facilities, and Common IT Facilities. It was inaugurated in April 2015 in rites led by President Benigno S. Aquino III. The area was initially a residential zone and has been reclassified as a commercial zone as approved by the Comprehensive Zoning Ordinance. Along its highways, sugarcane plantations are a typical scene. As of 2003, of the city's of agricultural land were still planted with sugarcane. Meanwhile, were devoted to rice, to assorted vegetables, to coconut, to banana and to corn. According to the "Philippine Cities Competitiveness Ranking Project 2005" of Asian Institute of Management (AIM), Bacolod tops the list in terms of infrastructure, ahead of such other mid-size cities like Iligan, Calamba and General Santos. The city also tops the list in terms of quality of life, ahead of such other mid-size cities like San Fernando, Baguio, Iloilo and Lipa. AIM also recognized Bacolod as one of the Top Five most competitive mid-size cities together with Batangas, Iligan, Iloilo, and San Fernando. Sports Football Bacolod hosted the 2005 Southeast Asian Games Football tournament, the 2007 ASEAN Football Championship qualification, the 2010 AFC U-16 Championship qualification and the 2012 AFC Challenge Cup qualification play-off first leg was held at the Panaad Stadium where the Philippines won 2–0 over Mongolia. Likewise the city has the home football stadium of the Philippines national football team (Azkals). The Philippines Football League side Ceres–Negros F.C. is based in the city, playing their home games at the newly renovated Panaad Stadium. Since Bacolod is also being tagged as a "Football City" in the country, an ordinance was approved by the City Council in June 2015, setting the third week of the month of April every year as the "Bacolod City Football Festival Week". Ceres-Negros FC is the Philippines Football League 2018 Champion. Basketball 2008 PBA All-Star Weekend was held in the city and since then has been a regular venue of Philippine Basketball Association out-of-town games. Also, the Sandugo Unigames 2012 was hosted by the city participated by various universities around the country notably those who compete in the UAAP. 
The city was also the home of the Negros Slashers of the now-defunct Metropolitan Basketball Association, playing their home games at the USLS Coliseum. Karate The 1996 Philippine Karatedo Federation (PKF) National Championships and the 20th PKF National Open 2007 were held in the city. Both events were hosted by La Salle Coliseum of the University of St. La Salle. The tournaments were contested by hundreds of karatekas all over the country. Golf There are two major golf courses in the city; the Bacolod Golf and Country Club and the Negros Occidental Golf and Country Club. The city hosted the 61st Philippine Airlines Inter-club Golf Tournament and the 2008 Philippine Amateur Golf Championship. A Golf tournament sponsored by the City Mayor is also held every Masskara. Mixed martial arts Bacolod is home to many mixed martial arts competitions including quarterly fights hosted by the Universal Reality Combat Championship. Parkour The first Parkour team in Negros, known as "Parkour Bacolod", started in late 2007. Festivals Masskara Festival The MassKara Festival (Hiligaynon: Pista sang Maskara, Filipino: Fiesta ng Maskara) is an annual festival held on the fourth Sunday of October in Bacolod. Dancers wear masks, which is where the festival gets its name. Panaad sa Negros Festival The Panaad sa Negros Festival, or just the Panaad Festival (sometimes spelled as Pana-ad), is a festival held annually during the month of April. Panaad is the Hiligaynon word for "vow" or "promise"; the festival is a form of thanksgiving to Divine Providence and commemoration of a vow in exchange for a good life. The celebration is held at the Panaad Park, which also houses the Panaad Stadium, and is participated in by the 13 cities and 19 towns of the province. For this reason, the province dubs it the "mother" of all its festivals. Bacolaodiat Festival Bacolod's Chinese New year Festival. It comes from the word "Bacolod" and "Lao Diat" which means celebration. Infrastructure Panaad Park and Sports Complex The Panaad Park and Sports Complex is a multi-purpose park in the city owned by the Provincial Government of Negros Occidental. Situated in the complex is the Panaad Stadium which is currently used mostly for football matches. It is the home stadium of Philippines Football League team Ceres–Negros F.C. It was used for the 2005 South East Asian Games and was the venue of the pre-qualifiers of the 2007 ASEAN Football Championship or ASEAN Cup. The stadium has a seating capacity of 15,500, but holds around 20,000 people with standing areas. It is unofficially designated as the home stadium of the Philippines national football team. Aside from the football field, it also has a rubberized track oval, an Olympic-size swimming pool and other sports facilities. The stadium is also the home of Panaad sa Negros Festival, a week-long celebration participated in by all cities and municipalities in the province held annually every summer. The festival is highlighted by merry-making, field demonstrations, pageant and concert at the stadium. The stadium itself features replicas of the landmarks of the 13 cities and 19 municipalities of Negros Occidental. Bacolod Public Plaza The Bacolod Public Plaza is one of the notable landmarks in Bacolod, the capital of Negros Occidental, which is found right in the heart of downtown area, very near to the city hall and right across the San Sebastian Cathedral. The plaza is the celebrated place of MassKara Festival. 
It is a week-long festival held each year in Bacolod City every third weekend of October nearest October 19, the city's Charter Anniversary. Bacolod public plaza is the final destination of Masskara street dancing competitions which is the highlights of the celebration. Capitol Park & Lagoon The Capitol Park and Lagoon is a provincial park located right in the heart of Bacolod City, Negros Occidental, in the Philippines. One of the landmarks of the park is the carabao (water buffalo) being reared by a woman. This carabao is located at the northern end of the lagoon. On the southern end, there is also another carabao sculpture being pulled by a man. Locals are known to feed pop corns, pop rice, and other edible delicacies sold within the park to the fishes in the lagoon. Negros Museum Negros Museum is a privately owned provincial museum situated in the Negros Occidental Provincial Capitol Complex in Bacolod City, Philippines. The structure was built in 1925 as the Provincial Agriculture Building. Negros Museum Cafe serves the needs of museum goers and walk-in guests, situated in the West Annex of the museum. It includes a separate entrance, which includes an open-air and an in-house station occasionally used for small theater plays and art exhibitions. The cafe and the resident chef, Guido Nijssen, serves as the official caterer of the Office of the Governor and the Provincial Government of Negros Occidental for official dignitary functions Paglaum Sports Complex The Paglaum Sports Complex is a provincial-owned sports venue adjacent to the Negros Occidental High School established during the 1970s that hosted various football events, such as the 1991 Philippines International Cup and the football event of the 2005 Southeast Asian Games. It also hosted three editions of the Palarong Pambansa (1971, 1974, 1979). However, the stadium became unfit to host football matches following the erection of business establishments around the area. In 2012, a two-hectare portion of the four-hectare complex was partitioned for the construction of the Capitol-owned Negros First CyberCentre (NFCC) as an IT-BPO Outsourcing Hub. As of 2013, the provincial government has been proposing for a renovation of the stadium to serve as alternative venue to Panaad Park and Sports Complex, particularly for football competition. Recently, the Paglaum Sports Complex also serves as an alternative venue to the Bacolod Public Plaza for the MassKara Festival celebration. Negros Occidental Multi-Purpose Activity Center The Negros Occidental Multi-Purpose Activity Center (NOMPAC) is a provincial-owned multi-use gym adjacent to the Capitol Park and Lagoon. It is currently used mostly for basketball, karatedo and boxing matches. Aside from the gym, it also serves as evacuation site of the province during calamities likewise also serves as cultural facilities in many events. BAYS Center The Bacolod Arts & Youth Sports Center (BAYS Center) is a multi-use gym fronting the Bacolod Public Plaza. It is used mostly for basketball, karatedo and boxing matches, and was previously used in events in the city like the MassKara Festival activities and other government related activities like seminars, business and political gatherings. The gym has a seating capacity of more than a thousand. It is officially designated as the COMELEC tally headquarters for both local and national election in the Philippines. Art District Art District located along Lacson Street is known for its street art mural and graffiti, restaurants and nightlife. 
Education Bacolod currently has 3 large universities and more than a dozen other schools specializing in various courses. Currently, as sanctioned by the Department of Education, all primary and secondary institutions in the city use the K-12 educational system. The city alone currently hosts three of well-known educational institutions in the nation. These are: University of St. La Salle (1952), a LaSallian district school and the second oldest campus founded by the De La Salle Philippines congregation in the country. University of Negros Occidental – Recoletos (1941), administered by the Order of Augustinian Recollects and the first university in the province of Negros Occidental and the city of Bacolod. STI West Negros University (1948), founded by Baptist Protestants and later acquired by the STI Education Systems Holdings, Inc. Other noteworthy educational institutions include: Colegio San Agustin – Bacolod (1962) La Consolacion College Bacolod (1919) Riverside College, Inc. (1961) Carlos Hilado Memorial State College (1954) John B. Lacson Colleges Foundation – Bacolod (1974) VMA Global College (1974) Bacolod Christian College of Negros (1954) Bacolod City College (1997) Our Lady of Mercy College – Bacolod (2008) College of Arts & Sciences of Asia & the Pacific – Bacolod Campus (2016) AMA Computer College – Bacolod Campus (2012) ABE International Business College – Bacolod Campus (1999) Asian College of Aeronautics – Bacolod Branch (Main Campus) (2003) St. Benilde School (1987) St. Joseph School–La Salle (1960) St. Scholastica's Academy – Bacolod (1958) St. John's Institute (Hua Ming) (1953) Bacolod Tay Tung High School (1934) Jack and Jill School/Castleson High (1963/1995) Bacolod Trinity Christian School, Inc. (1976) Transportation Airports The Bacolod–Silay Airport, located in nearby City of Silay, is 15 kilometers north-east from Bacolod. Bacolod is 1 hour by air from Manila, 30 minutes by air from Cebu, 1 hour by air from Cagayan de Oro and 1 hour and 10 minutes by air from Davao City. Bacolod City Domestic Airport was the former airport serving the general area of Bacolod. It was one of the busiest airports in the Western Visayas region, when Bacolod and Negros Occidental were both still part of it. This airport was later replaced by the new Bacolod–Silay International Airport, located in Silay. It was classified as such by the Air Transportation Office, a body of the Department of Transportation that is responsible for the operations of all other airports in the Philippines except the major international airports. The Bacolod City Domestic Airport ceased operations on January 17, 2008, prior to the opening of the Bacolod–Silay International Airport which began operations the day after. Ports Banago Wharf and BREDCO Port are the vessels entry point in Bacolod. It has daily access to Iloilo, with different shipping lines such as 2GO Travel (as relaunched in 2012), Weesam Express, OceanJet, Montenegro Lines, Supercat, FastCat, and Tri-Star Mega Link. There were also access routes previously to Puerto Princesa via Iloilo City, Cagayan de Oro, General Santos, Zamboanga City, Cotabato, Butuan via Cagayan de Oro route, Dipolog, Iligan, Ozamiz, and Surigao City via Cagayan de Oro route. As of 2012 to present, SuperFerry and Negros Navigation was relaunched into 2GO Travel with routes from Bacolod going Manila, Iloilo and Cagayan de Oro. Bacolod is 18–23 hours from the Port of Manila, 12–15 hours from the Port of Cagayan de Oro, 2-3hrs from Dumangas Port and 1hr from the Port of Iloilo. 
Land routes Bacolod has two main roads, Lacson Street to the north and Araneta Street to the south. The streets in the downtown area are one way, making Bacolod free from traffic congestion. Recently, Bacolod City is experiencing an increase in traffic congestion due to an increase in number of vehicles. By land-ferry, Bacolod is approximately an hour directly from Iloilo City while by land-RORO-land, Bacolod is approximately 3 hours from Iloilo City via Dumangas route. By land-ferry-land, Bacolod City is approximately 4 hours and 30 minutes from Cebu City via Toledo-San Carlos/Salvador Benedicto route while it takes approximately 6 hours by land-RORO-land via same route. By land-RORO-land, Bacolod is approximately 7 hours and 30 minutes from Cebu City via Tabuelan-Escalante, Toledo-San Carlos/Escalante and Toledo-San Carlos/Canlaon routes. Bacolod to Dumaguete via Mabinay route is approximately 6 hours while via Cadiz-San Carlos route takes approximately 8 hours, both routes going Negros Oriental. Notable people Sister cities Bacolod has the following sister cities: Local Dumaguete City, Negros Oriental Iloilo City, Iloilo Legazpi, Albay Naga, Camarines Sur Makati City, Marikina City, Parañaque City and Taguig City, Metro Manila International Singaraja in Bali, Indonesia Andong in North Gyeongsang and Seo District in Daegu, Republic of Korea Keelung, Taiwan Long Beach in California, United States - However, Sister Cities International designates Bacolod and Long Beach as Friendship Cities. Kamloops in British Columbia, Canada See also Negros Occidental Hiligaynon language Metro Bacolod (which also includes Silay and Talisay) Ceres–Negros F.C. References External links [ Philippine Standard Geographic Code] Metro Bacolod Cities in Negros Occidental Highly urbanized cities in the Philippines Provincial capitals of the Philippines Port cities and towns in the Philippines Populated places established in 1575 1575 establishments in the Philippines
28871462
https://en.wikipedia.org/wiki/TeleType%20Co.
TeleType Co.
TeleType Co., Inc. is a privately held company in the United States that specializes in developing software for GPS devices. It was founded in 1981 under the name TeleTypesetting Company and is based in Boston, Massachusetts. The company's product line includes automotive and commercial GPS navigation systems and other products, including GPS receivers and tracking units. It develops and sells the WorldNav software for PC and Windows CE, tools for converting third-party maps into WorldNav maps, and an SDK and API that allow customization of the WorldNav application. TeleType Co. also offers consultancy services for those interested in acquiring and adapting the source code of its software products. History The early years The company was founded in 1981, in Ann Arbor, Michigan, under the name TeleTypesetting Company Inc., by Edward Friedman, a Licensed Professional Engineer, and Marleen Winer, a graduate of the University of Maryland holding a B.S. in Geography, with a master's degree in Urban Planning from the University of Michigan. Ed Friedman was born in Odessa, Ukraine, and graduated from the Moscow Institute of Steel and Alloys (currently the National University of Science and Technology) with a B.S. in Transportation Engineering. He completed his studies in the U.S. with a Master of Science in Engineering at Michigan State University and an M.S. in Financial Management at the Polytechnic Institute (currently New York University), and completed the requirements for a Ph.D. in Civil Engineering at the University of Michigan. Sensing that the electronics industry was in its infancy and poised to expand, the two founders established the company in 1981, initially operating in the phototypesetting industry. TeleTypesetting was one of the first companies to produce a hardware and software interface between personal computers such as the Apple II and IBM PS/2 and numerous models of phototypesetting machines such as the Compugraphic Compuwriter and CompEdit, and the Varityper EPICS, Comp/Set and Comp/Edit. TeleTypesetting Co. created a package called MicroSetter, composed of a desk accessory, cables, conversion software, and connectivity software, which gave computers equipped with it the capability to connect to phototypesetting machines. MicroSetter was compatible with numerous desktop publishing applications such as Ready,Set,Go, PageMaker, Microsoft Word, MacWrite, and MacDraw. Using personal computers took advantage of functions such as spell checking and printing on plain paper prior to printing on expensive phototypesetting paper, which could not easily be corrected after the photographic process was completed, thereby saving time and expense while providing more accurate results. From the MicroSetter product line evolved the T-Script PostScript interpreter (also referred to in the industry as a raster image processor), which converted output from popular PostScript-based WYSIWYG (What You See Is What You Get) programs such as PageMaker, Microsoft Word and MacWrite for non-PostScript printers, which were much more economical at the time. This gave the user the ability to see the document on the screen as it would appear on the printer. Furthering innovation Expanding on its expertise in system integration, the company applied its knowledge to a new industry, producing "Books that Remember".
Taking advantage of the emerging personal digital assistant (PDA) capabilities the company pushed the envelope of development to produce a series of digital books that could be used in interactive ways based on the computing power of the devices. It published applications designed for Apple's series of PDAs, the Newton. This included Digital Gourmet, a cooking book application aimed at professional chefs, built on the HyperCard programming environment. The application made use of HyperCard's capabilities in an innovative way, calculating the nutritional values of the prepared foods based on the used ingredients. Since 2003, the application has been released to the community as a freeware. Global Positioning System integration The company recognized the simplicity and convenience of using PDAs for navigation purposes and used its experience in this domain to create its first GPS navigation software, which later evolved into a line of portable GPS devices. Seeking to combine the computing power of PDAs such as the Newton with the advanced capabilities of the GPS technology, TeleType created an application which served as an aid to navigation for aircraft pilots. This software gave the pilots the ability to create flight plans and avoid restricted areas, using the large touch screen Newton rather than the complicated button oriented small screen handheld systems offered at that time. Later, the company expanded its aviation product to include street maps so that pilots could use the same device for flying and driving by easily switching from aviation mode to street mode. In 1998, TeleType Co. began marketing its GPS navigation software, targeting handheld devices running on the Windows CE operating system. The software was compatible with many types of devices, including PDAs and Pocket PCs. The software could also be installed on PCs running Windows 95/NT. In 1999 a software product addressed to airplane pilots was launched that added specific functionalities such as runway details and radio frequencies. In the late 1990' and early 2000 the company partnered with Geographic Data Technologies (GDT) which was later acquired by TeleAtlas, one of the market leaders for digitized maps, an agreement which made street level maps of U.S., Canada and 14 European countries available to TeletType Co. The company implemented these maps in its navigation solutions for the Apple Newton and Windows and Windows CE based systems. Currently the company partners with Navteq for U.S., Canada, and Mexico mapping. TeleType continued its development efforts offering the first solution to combine land, air, and water navigation in one integrated program. The company expanded to create a vehicle tracking solution called PocketTracker, combining GPS tracking and navigation in a Windows CE based PDA. The solution was launched in 2001. The software took advantage of the CDPD technology, one of the first wireless data technologies, allowing users to track in real time the positions of devices running TeleType's software and hardware. From 2000 to 2007 the company introduced a series of portable navigation devices for street navigation. In 2008, TeleType launches one of the first GPS solutions aimed specifically at commercial drivers. The WorldNav Truck GPS product line began with a 3.5" touch screen GPS with the unique feature of interactively showing restricted roads in a bright pink color based on the size and weight of the vehicle in use. The company then expanded the line to include 5" and 7" screens. 
This product line takes into account commercial restrictions and low bridge heights among other features when calculating routes. Product line GPS navigation systems and Mobile Apps for commercial vehicles TeleType manufactures a series of GPS devices aimed at professional drivers, such as truckers and bus drivers. The devices operate on the Windows CE operating system and are powered by a SiRFstarIII GPS chipset. When calculating routes, the WorldNav software installed on the devices takes into account commercial truck restrictions such bridge heights, load limits, one-way roads and Hazmat restrictions. The systems also allow custom routing based on the vehicle's dimensions. TeleType Co. holds a pending patent (US 2010/0057358) for this custom routing technology The WorldNav software is preloaded with over 12 million points of interest, which TeleType claims to be the highest number in the industry. The selection of the points of interest is oriented towards professional drivers, and it includes gas stations, truck stops and weigh stations. The WorldNav offers the unique feature of searching for points of interest by the business telephone number. In 2012 TeleType developed and introduced the SmartTruckRoute app to provide navigation for commercial drivers seeking up-to-date mapping and truck specific routing for Android and iOS smartphones. SmartTruckRoute GPS devices for land, air and water navigation TeleType also offers a line of GPS navigation devices aimed at the consumer market. The non-commercial line of portable GPS devices include over 12 million points of interest with the unique feature of searching for points of interest by the business telephone number. Restaurants, hotels, airports, and other basic points of interest are included. These devices have received generally favorable reviews, being praised for the GPS reception, text to speech functions and large database of points of interest. A specialized device targeting motorcyclists and bicyclists is outfitted with a custom version of the navigation software which allows users to create their own off-road routes and has a specialized electronic keyboard making it easy to use while wearing gloves. In addition to GPS navigation devices for land vehicles, TeleType offers versions of its WorldNav software customized for air and marine navigation. Special GPS systems Other specialized GPS devices offered by TeleType include GPS receivers, vehicle-mountable GPS trackers and other specialty items. Developer tools TeleType offers software development kits for its WorldNav navigation software allowing developers full control over the integration of their specialized software solutions with navigation. The tools support Windows and Windows CE embedded devices. Developers have access to functions of the WorldNav navigation software allowing routes to be created, analyzed, and pushed directly to the device of choice. TeleType's solutions have been used for projects such as creating an automated taxi dispatch service. The company also offers map development tools allowing clients to convert their own digital mapping information into the TeleType proprietary map format (TTM) for use in TeleType navigation software. Awards and recognitions TeleType Co.'s product WorldNavigator has received CNET's Editor's Choice in 2003 and 2004. 
External links References Software companies established in 1981 Companies based in Boston Navigation system companies Software companies based in Massachusetts Typesetting software Privately held companies based in Massachusetts Software companies of the United States 1981 establishments in Massachusetts 1981 establishments in the United States Companies established in 1981
30756232
https://en.wikipedia.org/wiki/Thacker%20Shield
Thacker Shield
The Thacker Shield is a rugby league football trophy awarded annually to the winner of a match between the champion clubs of the Canterbury Rugby League and the West Coast Rugby League. History The shield was donated in 1913 by Dr Henry Thacker, who had set up the Canterbury Rugby Football League in 1912. The shield was originally competed for on a national basis by the various provincial club champions. Sydenham defeated North Shore 13–8 on 6 September 1913 to win the first title. Ponsonby United won the title in 1918. Ponsonby held the trophy until 1921, when it accepted a challenge from Auckland's City club and lost the trophy to City. Sydenham had also challenged for the trophy but had been told that there was no suitable date. The Canterbury Rugby League and its president, Henry Thacker, challenged this decision, and the New Zealand Rugby League stepped in, returning the trophy to Canterbury. The rules were subsequently amended to make the shield contestable only between South Island clubs. Runanga became the first West Coast Rugby League team to win the trophy when they defeated Addington 16–6 at Monica Park in 1931. See also NRL State Championship References Rugby league competitions in New Zealand Rugby league trophies and awards New Zealand sports trophies and awards
43239050
https://en.wikipedia.org/wiki/Apple%20Watch
Apple Watch
Apple Watch is a line of smartwatches produced by Apple Inc. It incorporates fitness tracking, health-oriented capabilities, and wireless telecommunication, and integrates with iOS and other Apple products and services. The Apple Watch was released in April 2015 and quickly became the best-selling wearable device: 4.2 million were sold in the second quarter of fiscal 2015, and more than 100 million people were estimated to use an Apple Watch as of December 2020. Apple has introduced new generations of the Apple Watch with improved internal components each September—each labeled by Apple a 'Series', with certain exceptions. Each Series has been initially sold in multiple variants defined by the watch casing's material, color, and size (except for the budget watches Series 1 and SE, available only in aluminum), and beginning with Series 3, by the option in the aluminum variants for LTE cellular connectivity, which comes standard with the other materials. The band included with the watch can be selected from multiple options from Apple, and watch variants in aluminum co-branded with Nike and in stainless steel co-branded with Hermès are also offered, which include exclusive bands, colors, and digital watch faces carrying those companies' brandings. The Apple Watch operates primarily in conjunction with the user's iPhone for functions such as configuring the watch and syncing data with iPhone apps, but can separately connect to a Wi-Fi network for some data-reliant purposes, including basic communications and audio streaming. LTE-equipped models can connect to a mobile network, including for calling, texting, and installed mobile app data use, substantially reducing the need for an iPhone after initial setup. Although the paired iPhone need not be near the watch, to make a call with the watch, the paired iPhone must still be powered on and connected to a cellular network. The oldest iPhone model that is compatible with any given Apple Watch depends on the version of system software installed on each device. , new Apple Watches come with watchOS 8 preinstalled and require an iPhone running iOS 15, which is available for the iPhone 6S and later. Development The goal of the Apple Watch was to complement an iPhone and add new functions, and to free people from their phones. Kevin Lynch was hired by Apple to make wearable technology for the wrist. He said: "People are carrying their phones with them and looking at the screen so much. People want that level of engagement. But how do we provide it in a way that's a little more human, a little more in the moment when you're with somebody?" Apple's development process was held under wraps until a Wired article revealed how some internal design decisions were made. Rumors as far back as 2011 speculated that Apple was developing a wearable variation of the iPod that would curve around the user's wrist, and feature Siri integration. In February 2013, The New York Times and The Wall Street Journal reported that Apple was beginning to develop an iOS-based smartwatch with a curved display. That same month, Bloomberg reported that Apple's smartwatch project was "beyond the experimentation phase" with a team of about 100 designers. In July 2013, Financial Times reported that Apple had begun hiring more employees to work on the smartwatch, and that it was targeting a retail release in late 2014. 
Unveiling and release In April 2014, Apple CEO Tim Cook told The Wall Street Journal that the company was planning to launch new products that year, but revealed no specifics. In June 2014, Reuters reported that production was expected to begin in July for an October release. During a September 2014 press event where the iPhone 6 was also presented, the new watch product was introduced by Tim Cook. After a video focusing on the design process, Cook reappeared on stage wearing an Apple Watch. In comparison to other Apple products and competing smartwatches, marketing of the Apple Watch promoted the device as a fashion accessory. Apple later focused on its health and fitness-oriented features, in an effort to compete with dedicated activity trackers. The watchOS 3 added fitness tracking for wheelchair users, social sharing in the Activity app, and a Breathe app to facilitate mindfulness. The device was not branded as "iWatch", which would have put it in line with its product lines such as iPod, iPhone, and iPad. In the United States, the "iWatch" trademark is owned by OMG Electronics – who was crowdfunding a device under the same name; it is owned in the European Union by Irish firm Probendi. In July 2015, Probendi sued Apple Inc. for trademark infringement, arguing that through keyword advertising on the Google search engine, it caused advertising for the Apple Watch to appear on search results pages when users searched for the trademarked term "iWatch". Release Pre-orders for the Apple Watch began on April 10, 2015, with the official release on April 24. Initially, it was not available at the Apple Store; customers could make appointments for demonstrations and fitting, but the device was not in-stock for walk-in purchases and had to be reserved and ordered online. CNET felt that this distribution model was designed to prevent Apple Store locations from having long line-ups due to the high demand. Selected models were available in limited quantities at luxury boutiques and authorized resellers. On June 4, 2015, Apple announced that it planned to stock Apple Watch models at its retail locations. On August 24, 2015, Best Buy announced that it would begin stocking Apple Watch at its retail stores by the end of September. Both T-Mobile US and Sprint also announced plans to offer Apple Watch through their retail stores. In September 2015, Apple launched a new subset of Apple Watch, with a stainless steel body and leather band, in collaboration with Hermès. The following year, Apple launched another subset of Apple Watches in collaboration with Nike dubbed "Apple Watch Nike+". Both subsets featured cosmetic customization, but otherwise functioned like standard Apple Watches. Apple Watch went on sale in India in November 2015. The device also launched in Chile, the Philippines, Indonesia, and South Africa. Specifications Design and materials Each series of Apple Watch is offered in multiple variants, distinguished by the casing's material, color, and size, with special bands and digital watch faces available for certain variants co-branded with Nike and Hermès, which are also sometimes accompanied by other unique extras, like stainless steel charging pucks, premium packaging, and exclusive color basic bands. Originally at launch, the Apple Watch was marketed as one of three "collections", designating the case material. 
In order of increasing cost, the collections were: Apple Watch Sport (Aluminium case) Apple Watch (Stainless steel case) Apple Watch Edition (Originally released with an 18kt gold casing; newer materials were used in later models). Starting with Series 1/Series 2, Apple dropped the "Sport" moniker from the branding (apart from the sport bands), and the Apple Watch was available with either an aluminum (lowest cost) or stainless steel case. "Apple Watch Edition" branding still exists, but now refers to watch casings made from ceramic or titanium. Apple did not explicitly market the first-generation Apple Watch as being waterproof, stating that it could withstand splashes of water (such as rain and hand washing) but that submersion was not recommended (IPX7). Apple introduced a higher level of water resistance with the release of the Apple Watch Series 2, and the device was explicitly advertised as being suitable for swimming and surfing. The Series 7 also includes an IP6X certification for dust resistance. Size Since the introduction of the Apple Watch, it has been available in two sizes, primarily affecting screen resolution and area. The smaller size at launch was 38 mm, referring to the approximate height of the watch case; the larger size was 42 mm. Starting with Series 4, the two nominal sizes changed to 40 mm (smaller) and 44 mm (larger). The nominal sizes changed again with the introduction of Series 7: 41 mm (smaller) and 45 mm (larger). The overall shape and width of the watch have not changed significantly since its release, so customizable bands and accessories are typically compatible with any Apple Watch of the same size class. Bands that fit the smaller size class (38 mm, 40 mm, and 41 mm watches) and the larger size class (42 mm, 44 mm, and 45 mm watches) are generally interchangeable within each class. The casing of the watch includes a mechanism to allow the user to change the straps without special tools. Input and sensors For input, the Watch features a "digital crown" on one side which can be turned to scroll or zoom content on screen, and pressed to return to the home screen. Next to the crown (on the same side of the watch) is the Side Button, which can be used to display recently used apps and access Apple Pay, which is used for contactless payment. The Watch also prominently features a touchscreen; prior to Series 6/SE, the screen included Force Touch technology, which enabled the display to become pressure-sensitive and therefore capable of distinguishing between a tap and a press for displaying contextual menus. Force Touch has since been physically removed in Watch Series 6 and Watch SE, and has been disabled via software on Watch Series 5 and earlier on models supporting watchOS 5. Additional sensors integrated into the Watch include an accelerometer, gyroscope, and barometer, which are used to determine device orientation, user movement, and altitude. The back of all Apple Watches is equipped with a heart rate monitor, which projects infrared and green light from light-emitting diodes (LEDs) onto the user's skin while photodiodes measure the varying amount of light reflected. Because blood absorbs green light and reflects red light, the amounts of each type of reflected light are compared to determine heart rate. The Watch adjusts the sampling rate and LED brightness as needed. Starting with the Series 4, Apple added electrical sensors to the Digital Crown and back, allowing the Watch to take electrocardiogram (ECG) readings; the device won FDA clearance in October 2018, becoming the first consumer device capable of taking an ECG.
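The heart-rate readings produced by the optical sensor described above reach third-party software through Apple's public HealthKit framework rather than through direct sensor access. The Swift sketch below is only an illustration of that query API, assuming an app that has been granted the HealthKit entitlement; the sample limit, variable names, and printed output are illustrative choices, not Apple sample code.

import Foundation
import HealthKit

let healthStore = HKHealthStore()

// The heart-rate quantity type populated by the watch's optical sensor.
guard let heartRateType = HKObjectType.quantityType(forIdentifier: .heartRate) else {
    fatalError("Heart-rate type unavailable on this platform")
}

// Ask the user for read-only access to heart-rate data.
healthStore.requestAuthorization(toShare: nil, read: [heartRateType]) { granted, _ in
    guard granted else { return }

    // Fetch the ten most recent samples, newest first.
    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
    let query = HKSampleQuery(sampleType: heartRateType,
                              predicate: nil,
                              limit: 10,
                              sortDescriptors: [newestFirst]) { _, samples, _ in
        let beatsPerMinute = HKUnit.count().unitDivided(by: .minute())
        for case let sample as HKQuantitySample in samples ?? [] {
            print("\(sample.quantity.doubleValue(for: beatsPerMinute)) bpm at \(sample.startDate)")
        }
    }
    healthStore.execute(query)
}

In practice an app would also check HKHealthStore.isHealthDataAvailable() and handle authorization errors, but the flow above is the core of how heart-rate samples recorded by the sensor reach an application.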
A blood oxygen monitor was added with the Series 6 in 2020, albeit as a "wellness" device not capable of diagnosing a medical condition. The blood oxygen monitor added red LEDs to the back, allowing the watch to determine oxygen levels by measuring blood color. The Watch SE reverted to the capabilities of the Series 3, dropping the electrical sensors and blood oxygen monitor. Battery Apple rates the device's battery for 18 hours of mixed usage. Apple Watch is charged by means of inductive charging. If the watch's battery depletes to less than 10 percent, the user is alerted and offered to enable a "power reserve" mode, which allows the user to continue to read the time for an additional 72 hours, while other features are disabled. The watch then reverts to its original mode when recharged or after holding down the side button. Bands Apple Watch comes with an included band (strap) to attach it to the user's wrist. The band can be easily changed to other types by holding down the connectors on the bottom of the watch and sliding the band pieces out. Third party bands are compatible with Apple Watch, however Apple produces bands in a variety of materials and colours. Bands designed for the original Series 1-3 38 mm and 42 mm case sizes are compatible with the Series 4-6 40 mm and 44 mm cases, as well as the 41 mm and 45 mm cases, respectively. Starting with Apple Watch Series 5, Apple introduced the online Apple Watch Studio which allows customers to mix and match bands on purchase, eliminating the need to purchase a specific combination of case and band design, and allows for a simplification of packaging (since Apple Watch Series 4 in 2018). Hardware First generation The 1st generation Apple Watch (colloquially referred to as Series 0) uses the single-core S1 system-on-chip. It does not have a built-in GPS chip, instead relying on a paired iPhone for location services. It features a new linear actuator hardware from Apple called the "Taptic Engine", providing realistic haptic feedback when an alert or a notification is received, and is used for other purposes by certain apps. The watch is equipped with a built-in heart rate sensor, which uses both infrared and visible-light LEDs and photodiodes. All versions of the first-generation Apple Watch have 8 GB of storage; the operating system allows the user to store up to 2 GB of music and 75 MB of photos. When the Apple Watch is paired with an iPhone, all music on that iPhone is also available to be controlled and accessed from the Apple Watch. Software support for the first Apple Watch ended with watchOS 4.3.2. Second generation (Series 1 and 2) The second generation Apple Watch has two models; the Apple Watch Series 1 and Apple Watch Series 2. The Series 1 has a variant of the dual-core Apple S2 processor with GPS removed, known as the Apple S1P. It has a lower starting price than first generation. The Series 1 was sold only in Aluminium casings. The Series 2 has the dual-core Apple S2 processor, water resistance to 50 meters, a display twice as bright, a GPS receiver, and a brighter 1000 nits display. The Series 2 was sold in casings of anodized Aluminium, Stainless Steel and Ceramic. Series 1 & 2 have an advertised 18 hours of battery life. Software support for the Series 1 and Series 2 Apple Watch ended with watchOS 6.3. 
Third generation (Series 3) The Apple Watch Series 3 features a faster dual-core S3 processor, Bluetooth 4.2 (compared with 4.0 on older models), a built-in altimeter for measuring flights of stairs climbed, and increased RAM, and is available in a variant with LTE cellular connectivity. Siri is able to speak through the onboard speaker on Apple Watch Series 3 due to the increased processing speed of the Watch. Series 3 features LTE cellular connectivity for the first time in an Apple Watch, enabling users to make phone calls, send iMessages, and stream Apple Music and Podcasts directly on the watch, independent of an iPhone. The LTE model contains an eSIM and shares the same mobile number as the user's iPhone. Fourth generation (Series 4) The Apple Watch Series 4 is the first major redesign of the Apple Watch, featuring larger displays with thinner bezels and rounded corners, and a slightly rounder, thinner chassis with a redesigned ceramic back. Internally there is a new 64-bit dual-core S4 processor capable of up to double the S3's performance, storage upgraded to 16 GB, and a new electrical heart sensor. The microphone was moved to the opposite side, between the side button and the digital crown, to improve call quality. Other changes include a digital crown that incorporates haptic feedback via the Apple Haptic Engine, and the new Apple-designed W3 wireless chip. The ECG system received clearance from the United States Food and Drug Administration, a first for a consumer device, and is supported by the American Heart Association. The Series 4 can also detect falls, and can automatically contact emergency services unless the user cancels the outgoing call. The watch received mostly positive reviews from critics. TechRadar gave it a score of 4.5/5, calling it one of the top smartwatches, while criticizing the short battery life. Digital Trends gave it a score of 5/5, calling it Apple's best product and praising the design, build quality, and software, among other aspects, while criticizing the battery life. CNET gave it a score of 8.2/10, calling it the "best overall smartwatch around", while criticizing the battery life and lack of watch face options. T3 gave it a score of 5/5, calling it a "truly next-gen smartwatch" due to its thinner body, bigger screen compared with the Series 3, and health features. Fifth generation (Series 5) The Apple Watch Series 5 was announced on September 10, 2019. Its principal improvements over its predecessor were the addition of a compass and an always-on display with a low-power display driver capable of refresh rates as low as once per second. Additional new features include International Emergency Calling, enabling emergency calls in over 150 countries, a more energy-efficient S5 processor, an improved ambient light sensor, and storage doubled to 32 GB. The release of the Series 5 also brought back the "Edition" tier, including a ceramic model that had been absent from the previous generation. A new titanium model was also introduced in two colors: natural and Space Black. The Series 5 and above (including the SE model introduced in 2020) also incorporate enhanced hardware- and software-based battery and performance management functionality. Critics generally gave it positive reviews. CNET gave it a score of 4/5, concluding, "The Apple Watch continues to be one of the best smartwatches, but it remains limited by being an iPhone accessory for now." Digital Trends gave it a score of 4.5/5. The Verge gave it a score of 9/10.
Sixth generation (Series 6 and SE) The Apple Watch Series 6 was announced on September 15, 2020, during an Apple Special Event and began shipping on September 18. Its principal improvement over its predecessor is the inclusion of a sensor to monitor blood oxygen saturation. Additional features include a new S6 processor that is up to 20% faster than the S4 and S5, a 2.5× brighter always-on display, and an always-on altimeter. The S6 incorporates an updated, third generation optical heart rate sensor and also enhanced telecommunication technology, including support for ultra-wideband (UWB) via Apple's U1 chip, and the ability to connect to 5 GHz Wi-Fi networks. The Series 6 watch was updated with faster charging hardware such that it completes charging in ~1.5 hours. Force Touch hardware was removed, consistent with the removal of all Force Touch functionality from watchOS 7. The Series 6 watch added Product Red and Navy Blue case color options. At its September 2020 product introduction event, Apple also announced the Apple Watch SE, a lower-cost model. The SE incorporates the same always-on altimeter as the Series 6, but uses the previous-generation S5 processor and previous- (i.e. second) generation optical heart rate sensor; does not include ECG and blood oximiter sensors or an always-on display; and does not include UWB or 5 GHz Wi-Fi communication capabilities. Seventh generation (Series 7) The Apple Watch Series 7 was announced on September 14, 2021, during an Apple Special Event. Pre-orders opened on October 8, with earliest shipping dates starting on October 15. Enhancements relative to the prior-generation Series 6 watch include a more rounded design with a case 1mm larger than the Series 6; a display that is 70% brighter indoors and approximately 20% larger; improved durability via a crack-resistant front crystal; IP6X certification for resistance to dust; 33% faster charging via improved internal electronics and an enhanced, USB-C based fast-charging cable; support for BeiDou (China's satellite navigation system); and the availability of an on-screen keyboard that can be tapped or swiped. The Series 7 is also equipped with new hardware that enables ultra-rapid, short-range wireless data transfer at 60.5 GHz, though Apple has not fully explained this new functionality. Following Apple's announcement of the Series 7, an independent software development company filed a lawsuit against Apple alleging inappropriate copying of the software keyboard functionality from an app that Apple had previously rejected from its App Store. Software Apple Watch runs watchOS, whose interface is based around a home screen with circular app icons, which can be changed to a list view in the devices settings. The OS can be navigated using the touchscreen or the crown on the side of the watch. During its debut, the first generation Apple Watch needed to be paired with an iPhone 5 or later running iOS 8.2 or later; this version of iOS introduced the Apple Watch app, which is used to pair the watch with an iPhone, customize settings and loaded apps, and highlight compatible apps from the App Store. The Apple Watch is capable of receiving notifications, messages, and phone calls via a paired iPhone. "Glances" allowed users to swipe between pages containing widget-like displays of information; however, this feature was replaced by a new Control Center. 
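To make concrete what a watchOS app is at the code level: on recent versions of watchOS, a third-party app can be written entirely in SwiftUI using the standard App life cycle. The sketch below is a hypothetical, minimal example (the type names are invented for illustration and this is not Apple sample code) of an app whose single screen sits behind one of the circular home-screen icons described above.

import SwiftUI

// The single screen shown when the app's icon is tapped on the home screen.
struct GreetingView: View {
    var body: some View {
        VStack(spacing: 8) {
            Image(systemName: "applewatch")
            Text("Hello from the wrist")
                .font(.headline)
        }
    }
}

// The app's entry point; WindowGroup hosts the root view on the watch display.
@main
struct SampleWatchApp: App {
    var body: some Scene {
        WindowGroup {
            GreetingView()
        }
    }
}

Notifications, complications, and the other system integrations discussed in this section are layered on top of this same basic structure through additional scenes and frameworks.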
watchOS also supports Handoff to send content from Apple Watch to an iOS or macOS device, and act as a viewfinder for an iPhone camera, Siri is also available for voice commands, and is capable of responding with voice prompts on the Series 3 watches. Apple Watch also supports Apple Pay, and enables its use with older iPhone models that do not contain near-field communication (NFC) support. Apple Watch's default apps are designed to interact with their iOS counterparts, such as Mail, Phone, Calendar, Messages, Maps, Music, Photos, Reminders, Remote (which can control iTunes and Apple TV), Stocks, and Wallet. Using the Activity and Workout apps, a user can track their physical activity and send data back to the iPhone for use in its Health app and other HealthKit-enabled software. With watchOS 3, Reminders, Home, Find My Friends, Heart Rate, and Breathe were added to the stock apps. With the release of watchOS 4 and the Series 3 Apple Watch, iPhone 5 and iPhone 5c support was dropped, requiring users to use an iPhone 5s or later with iOS 11 or later to use watchOS 4. Apple Watches still running watchOS 3 or below remain compatible with the iPhone 5 and iPhone 5c. Further, watchOS 5 dropped support for the original (Series 0) Apple Watch. watchOS 6 requires iOS 13, and was the final version to support the Series 1 and Series 2 Apple Watch. watchOS 7 requires iOS 14. watchOS 8 requires iOS 15. Version history At WWDC 2015, Tim Cook announced watchOS 2.0; described by CNET as a "significant revamp", it included a new software development kit that allows more direct access to the device's hardware, new watch faces, the ability to reply to an e-mail, and other features. WatchOS 2.0 was released in September 2015. Following the software update, some users experienced issues with lag. watchOS 3.0 was announced at WWDC 2016, with a priority on performance. Users are able to keep apps running in memory as well as receive background updates and refreshed information. Other updates include a new Dock invoked with the side button to replace the performance-laden Glances, an updated Control Center, and new reply options on Messages. Several new watch faces have also been added, including Minnie Mouse, along with the ability to switch watch faces from the lock screen simply by swiping. A new feature called SOS allows users to hold the dock button to make a call to the local emergency line and pull up the user's Medical ID. Another feature is Activity Sharing, which allows sharing of workouts with friends and even sending their heartbeats to one another. A new app called Breathe guides users through breathing exercises throughout the day, with visuals and haptic feedback. It was made available to the public in September 2016. watchOS 3.1 was released to the public in October 2016, and watchOS 3.2 was released in March 2017. Both updates added minor improvements and bug fixes. WatchOS 4.0 was announced at WWDC 2017 and released to the public in September 2017. WatchOS 4 features a proactive Siri watch face, personalized activity coaching, and an entirely redesigned music app. It also introduces Apple GymKit, a technology platform to connect workouts with cardio equipment. WatchOS 4.3 was released in March 2018. It introduced support for Nightstand mode in portrait orientation. It brought back the ability for music playing on the iPhone to be controlled using the Music app on the Apple Watch and also enabled control of playback and volume on Apple's HomePod. 
Other new features included a new charging animation and a new app loading animation. Activity data was added to the Siri watch face, and the battery complication more accurately reports battery life. watchOS 5.0 was first shown to the public at the San Jose WWDC developer conference held by Apple. It introduced an instant watch-to-watch walkie-talkie mode, an all-new Podcasts app, raise-wrist-to-speak Siri, a customizable Control Center, and the ability to access the notification center and control center from apps. Other features included support for WebKit to view web pages, six new watch faces, and new workout running features. It was released to the public in September 2018. In a later watchOS beta release, a built-in sleep-tracking feature was shown on screen, which would eliminate the need to use third-party apps. watchOS 6.0 was released to the public in September 2019. It introduced more native iOS apps such as Voice Memos, Calculator, and a native watchOS app store. watchOS 6.0 also introduced new features such as the Noise app, which allows users to measure ambient sound levels in decibels, menstrual tracking, and new watch faces. Other features include Siri being able to tell users what music they are listening to, activity trends, and a new UI framework for developers. watchOS 7.0 was announced on June 22, 2020, at WWDC, and released on September 16, 2020; new functionalities include sleep tracking, additional watch faces, handwashing detection and new workouts such as dancing. watchOS 8.0 was announced on June 7, 2021, at WWDC, and released on September 20, 2021. It replaces the Breathe app with a new Mindfulness app, and adds a Focus mode, a Portrait watch face, updates to the Messages, Home, Contacts, and Find My apps, and a redesigned Photos app. Third-party apps In watchOS 1, third-party WatchKit applications run in the background on the iPhone as an application extension while a set of native user interface resources are installed on Apple Watch. Thus, watchOS apps must be bundled within their respective iOS app, and are synced to the watch either manually or automatically upon installation of the phone app. With the release of watchOS 2, Apple made it mandatory for new watch apps to be developed with the watchOS 2 SDK from June 1, 2016, onwards; no third-party languages or SDKs can be used to develop apps. This allowed developers to create native apps that run on the watch itself, thus improving the responsiveness of third-party apps. In watchOS 5 and earlier, all watchOS apps are dependent apps – the watchOS app relies on an iOS companion app in order to function properly. However, in watchOS 6 or later, developers are able to create completely independent watchOS apps, which no longer require an app to be installed on the paired iPhone. This was assisted by the introduction of a separate App Store on the Apple Watch itself. Models As of September 2021, eight generations and eight series of Apple Watch have been released. Apple Watch models have been divided into five "collections": Apple Watch (1st generation-present), Apple Watch Sport (1st generation), Apple Watch Nike+ (Series 2-present), Apple Watch Hermès (1st generation-Series 5, Series 6-present), and Apple Watch Edition (1st generation-Series 3, Series 5, Series 6-present). 
They are differentiated by combinations of cases, bands, and exclusive watch faces; Apple Watch comes with either aluminum or stainless steel cases, and various watch bands (only stainless steel was offered for Apple Watch 1st generation); Apple Watch Sport came with aluminum cases and sport bands or woven nylon bands; Apple Watch Nike+ comes with aluminum cases and Nike sport bands or sport loops; Apple Watch Hermès uses stainless steel cases and Hermès leather watch bands (also included is an exclusive Hermès orange sport band); and Apple Watch Edition came with ceramic cases and various bands (the 1st generation Apple Watch Edition used 18 karat yellow or rose gold). With the Series 5, the Edition tier was expanded with a new titanium case. Apple Watch Series 1 models were previously only available with aluminum cases and sport bands. As of Series 3, each Apple Watch model in aluminum, the least expensive casing, is available either with or without LTE cellular connectivity, while the models with the other casing materials available (stainless steel and sometimes ceramic and titanium) always include it. Each model through Series 3 comes in a 38- or 42-millimeter body, with the larger size having a slightly larger screen and battery. The Series 4 was updated to 40- and 44-millimeter models, respectively. The Series 7 has been updated to 41- and 45-millimeter models. Each model has various color and band options. Featured Apple-made bands include colored sport bands, sport loop, woven nylon band, classic buckle, modern buckle, leather loop, Milanese loop, and a link bracelet. Comparison of models Support Specifications Collections and materials 1st generation only: Apple Watch was sold as "Apple Watch Sport" (Aluminum body) and "Apple Watch" (Stainless steel body). Later generations sold both body materials as "Apple Watch". Reception Following the announcement, initial impressions from technology and watch industry observers were varied; the watch was praised by some for its "design, potential capabilities and eventual usefulness", while others offered criticism of these same aspects. Venture capitalist Marc Andreessen said he "can't wait" to try it, and Steve Jobs' biographer Walter Isaacson described it as "extremely cool" and an example of future technology that is "much more embedded into our lives". However, Evan Dashevsky of PC Magazine said it offered nothing new in terms of functionality compared to the Moto 360, except the customizable vibration notifications. In November 2014, Apple Watch was listed by Time as one of the 25 Best Inventions of 2014. Initial reviews for the device have been generally positive with some caveats. Reviewers praised the watch's potential ability to integrate into everyday life and the overall design of the product, but noted issues of speed and price. Many reviewers described the watch as functional and convenient, while also noting failure to offer as much potential functionality as preceding smartphones. Farhad Manjoo of The New York Times mentioned the device's steep learning curve, stating it took him "three long, often confusing and frustrating days" to become accustomed to watchOS 1, but loved it thereafter. Some reviewers also compared it to competing products, such as Android Wear devices, and claimed "The Smartwatch Finally Makes Sense". 
Reviewers had mixed opinions on battery life, however: Geoffrey Fowler of The Wall Street Journal said "the battery lives up to its all-day billing, but sometimes just barely," while others compared it to the Samsung Gear 2, which "strolls through three days of moderate usage." Tim Bradshaw of the Financial Times used several applications over a period of days. He concluded that there is no "killer application" so far besides telling the time, which is the basic function of a wristwatch anyway. Some users have reported issues using the Apple Watch's heart monitoring feature due to permanent skin modifications such as tattoos. The Watch uses photoplethysmography (PPG) technology, which uses green LED lights to measure heart rate. To gauge a user's heart rate, the watch flashes green light from the LEDs at the skin and records the amount of this light that is absorbed by the red pigment of the blood. However, under certain circumstances the skin may not allow the light absorption to be read properly, producing inaccurate results. Some users have complained that the logo and text on the back of the Apple Watch Sport model, primarily the space gray version, can be easily worn off. Sales Financial analysts offered early sales estimates ranging from a few million to as many as 5 million units in the first year. Time's Tim Bajarin summarized the breadth of reactions, writing that "there is not enough information yet to determine how this product will fare when it finally reaches the market next year". Owing to supply constraints, delivery of the Apple Watch was delayed beyond its initial pre-order date of April 10, 2015. As a result, only 22 percent of the pre-ordered Apple Watches were dispatched in the United States during the weekend after the release date. It is estimated Apple received almost one million Apple Watch pre-orders in the United States during the initial six hours of the pre-order period on April 10, 2015, after which it sold out and further orders would not begin delivering until June. A later analyst report stated that the Apple Watch was already a $10 billion business in its first year. Apple has not disclosed any sales figures for the Apple Watch. An estimate by IDC states that Apple shipped over 12 million units in 2015. In late 2016, a veteran of the Swiss watch industry said Apple sold about 20 million watches and had a market share of about 50 percent. Analysts estimate Apple sold 18 million watches in 2017, 31 million in 2019, and 34 million in 2020. In 2021, analysts estimated there were 100 million units in use. Controversies In December 2019, Dr. Joseph Wiesel, a New York University cardiologist, sued Apple over allegations that the Apple Watch violates a patented method for detecting atrial fibrillation. Wiesel claimed he had shared details of the patent with Apple in September 2017, but the company refused to negotiate. See also List of iOS devices Notes References External links – official site Apple Inc. hardware Apple Watch Wearable devices Computer-related introductions in 2015
42446523
https://en.wikipedia.org/wiki/Borderlands%3A%20The%20Pre-Sequel
Borderlands: The Pre-Sequel
Borderlands: The Pre-Sequel (stylized as Borderlands: The Pre-Sequel!) is an action role-playing first-person shooter video game developed by 2K Australia, with assistance from Gearbox Software, and published by 2K Games. It is the third game in the Borderlands series, and is set after 2009's Borderlands and before 2012's Borderlands 2. It was released for Microsoft Windows, OS X, Linux, PlayStation 3 and Xbox 360 on 14 October 2014. PlayStation 4 and Xbox One ports were released as part of Borderlands: The Handsome Collection on 24 March 2015. The storyline of The Pre-Sequel focuses on Jack, an employee of the Hyperion corporation; after the company's Helios space station is captured by a military unit known as the Lost Legion, he leads a group of four Vault Hunters—all of whom were non-playable characters and bosses in previous Borderlands games—on an expedition to regain control of Helios, defeat the Lost Legion, and find the hidden vault on Pandora's moon Elpis. The game expands upon the engine and gameplay of Borderlands 2 and introduces new gameplay mechanics, including low-gravity environments, freeze weapons, and oxygen tanks, which are used to navigate and perform ground slamming attacks. The Pre-Sequel received positive reviews, being praised for its new gameplay features and character classes, but was criticized for its confusing level design and for not deviating significantly from the core mechanics and gameplay of Borderlands 2. Gameplay Gameplay in The Pre-Sequel is similar to Borderlands 2, but with the addition of new mechanics. Two new varieties of items have been added: laser guns, and items possessing a cryogenic elemental effect, which can be used to slow down and freeze enemies. Enemies that are frozen take increased damage from explosive, melee or critical attacks and are smashed into pieces when killed. The game features low-gravity environments, causing players to jump higher but slower, and items such as loot and dead bodies to float away. O2 kits are added to supply air while in space; oxygen supplies can be replenished using generators and vents, and through oxygen tank items dropped by enemies. The kits can be used like a jetpack to perform double jumps, hovering, and ground slamming attacks; as with other items, different types of O2 kits can provide stat bonuses and affect how ground slams deal damage. A new "Grinder" machine allows players to deposit combinations of existing weapons to receive one of higher rarity. New vehicles were introduced, including a moon buggy and the "Stingray", a type of hoverbike. As with Borderlands 2, completing the main campaign with a character unlocks "True Vault Hunter Mode", a second playthrough that is higher in difficulty, while beating the mode and reaching level 50 unlocks the third playthrough, "Ultimate Vault Hunter Mode". Plot Characters The Pre-Sequel features four playable characters, each with a different class and abilities. All four of The Pre-Sequel's protagonists were non-player characters (NPCs) or bosses in previous Borderlands games. Athena, "the Gladiator", is a rogue assassin from the Atlas Corporation introduced in the Borderlands DLC campaign The Secret Armory of General Knoxx. As her primary skill, Athena can temporarily use a shield to absorb damage; her skill trees revolve around upgrading the shield, allowing it to be thrown at enemies or to absorb and reflect elemental damage, or around melee attacks and elemental damage. 
Nisha, "the Lawbringer", first appeared in Borderlands 2 as Handsome Jack's girlfriend and the sheriff of the town of Lynchwood. Her primary skill, "Showdown", allows her to automatically aim at enemies for a period of time, increasing gun performance for the duration. Her skill trees revolve around increasing her survivability, Showdown performance, or gun damage. Claptrap, "the Fragtrap", is the last remaining robot of its kind as of Borderlands 2; his skill "VaultHunter.exe" generates random effects depending on the current situation. These effects can have a positive or negative impact on the player and their party members; among these effects are versions of skills used by the previous playable characters in the franchise. Wilhelm, "the Enforcer", is a mercenary who becomes increasingly augmented with technology and weaponry over the course of the game, transforming him into Jack's cybernetic minion who is fought in Borderlands 2. He can summon a pair of drones, Wolf and Saint; Wolf serves an offensive role by attacking other enemies, while Saint defends Wilhelm by providing shields and health regeneration. Two additional playable characters have been released as downloadable content. The first character, Jack, "the Doppelganger", is a man called Timothy Lawrence working as a body double of Jack who can summon digital copies of himself to help in battle. The second, Aurelia, "the Baroness", is the sister of Sir Hammerlock who uses an experimental "Frost Diadem Shard" to deal ice elemental damage to enemies. Multiple characters from past Borderlands titles and DLC are featured or appear in cameos. Handsome Jack, the main antagonist of Borderlands 2, appears as a key non-playable character, with the game's story mainly centered around his descent into villainy and rise to power. Additional returning characters include Moxxi, Tiny Tina, Sir Hammerlock, Professor Nakayama, Crazy Earl, Angel, General Knoxx, and Mr. Torgue. The four playable Vault Hunters from the first game, Lilith, Roland, Brick, and Mordecai, also appear in supporting roles. The Pre-Sequels DLCs include appearances from Borderlands 2s playable Vault Hunters Gaige and Axton, Patricia Tannis, Dr. Zed, Mr. Blake, and T.K. Baha. Story The Pre-Sequel begins on Sanctuary after the events of Borderlands 2 and soon after Episode 3 of Tales from the Borderlands, where Lilith, Brick, and Mordecai interrogate the captured Athena. Athena recounts her story via flashback, starting after the death of General Knoxx, when she received an offer to find a Vault on Pandora's moon, Elpis, from a Hyperion programmer named Jack. She joins fellow Vault Hunters Claptrap, Nisha, Wilhelm, Timothy, and Aurelia on a spaceship headed for the Hyperion moon base Helios. On the way they are ambushed by the Lost Legion, an army of former Dahl marines led by Tungsteena Zarpedon, and crash-land onto Helios. After meeting up with Jack, they try to use Helios' defense system against the Lost Legion, but there is a jamming signal coming from Elpis. They attempt to escape but are stopped by Zarpedon and a mysterious alien, so Jack stays behind and sends the Vault Hunters to Elpis via a moonshot rocket. On Elpis, the Vault Hunters are guided by the junk dealer Janey Springs to the spaceport town Concordia. There, they request help from Jack's ex-girlfriend Moxxi to disable the jamming signal. They discover that the signal was put up by the Meriff, a former subordinate of Jack who is in charge of Concordia. 
Meanwhile, Zarpedon uses Helios' primary weapon, the Eye of Helios, to fire upon Elpis, intending to destroy it to stop Jack from opening the Vault. Jack kills the Meriff, then decides to build a robot army to retake Helios. The team infiltrates a Lost Legion base run by two Dahl officers, the Bosun and the Skipper, in search of a military artificial intelligence. After defeating the Bosun, the Skipper, who renames herself Felicity, is revealed to be the A.I. they seek. The Vault Hunters travel to a robot production facility, where Jack enlists Gladstone, a Hyperion scientist, to build his army. Gladstone suggests using his prototype robot, the Constructor, which can build an infinite number of robots. Felicity agrees to become the A.I. for the Constructor, but hesitates upon witnessing the violence she has to go through. She is forced into the Constructor, but takes control of it and battles the Vault Hunters. Felicity is defeated and her personality is deleted from the Constructor. With his robot army, Jack and the Vault Hunters travel to Helios with the aid of Moxxi and former Vault Hunters Roland and Lilith. On Helios, Jack kills Gladstone and his team of scientists, suspecting one of them to be a Lost Legion spy. The Vault Hunters defeat Zarpedon and proceed to reboot the Eye of Helios, which is revealed to be the eye of the Destroyer from the first game, turned into a weapon by Jack. Moxxi, Roland, and Lilith betray Jack and destroy the Eye to prevent him from gaining its power. Seeking revenge, Jack and the Vault Hunters travel back to Elpis, where they find its Vault already opened. They battle the Vault's alien forces and defeat its guardian, the Empyrean Sentinel. Jack enters the Vault but finds nothing of value, other than a mysterious floating symbol. As he interacts with it, the symbol shows Jack a vision of the Warrior. However, he is interrupted by Lilith who destroys the Vault symbol, burning it onto Jack's face and disfiguring him. She teleports away, leaving a scarred and insane Jack behind, who swears vengeance on Lilith and all the "bandits" on Pandora. Seeing how low Jack has fallen, Athena leaves his employ. After listening to Athena's story, Lilith orders the Crimson Raiders to execute her against Brick and Mordecai's protests. However, as they open fire on her, Athena is saved by the alien previously seen on Helios, revealed to be an Eridian. The Eridian warns the Vault Hunters of an imminent war, and that they will need "all the Vault Hunters they can get". During the credits, several scenes reveal what became of the Vault Hunters afterwards. Wilhelm and Nisha join Jack; Wilhelm is transformed further into a machine and destroys the settlement of New Haven while Nisha is made Lynchwood's sheriff and hooks up with Jack; Athena discards the money given to her by Jack and leaves Elpis; Claptrap is dismantled and left for dead by Jack. In a post-credit scene, Jack, now calling himself "Handsome Jack" and wearing a synthetic mask, murders his CEO Harold Tassiter and replaces him as the new head of Hyperion. Claptastic Voyage The Claptastic Voyage story add-on continues shortly after Handsome Jack's takeover of Hyperion, as he discovers a secret program called the H-Source, containing all of Hyperion's secrets. However, it was hidden inside the "Fragtrap" unit by Tassiter. Jack employs his Vault Hunters once more to be digitally scanned and sent into Claptrap's mind in order to retrieve the H-Source. 
In the process, the Vault Hunters are tricked into releasing 5H4D0W-TP, a subroutine representing Claptrap's inner evil side, who attempts to use the H-Source for his own gain. As the group pursues 5H4D0W-TP, they delve deeper into Claptrap's mind, learning of his origin and the reasons for his quirky behavior. Eventually, the group defeats 5H4D0W-TP and retrieves the H-Source for Jack. Jack reveals his plan to use the H-Source to wipe out all existing CL4P-TP units, including Claptrap himself. All CL4P-TP units are disabled and dumped in Windshear Waste; however, 5H4D0W-TP, who still remains alive within Claptrap, sacrifices himself to revive Claptrap, allowing him to be found and saved by Sir Hammerlock. Development Borderlands 2, developed by Gearbox Software and released in late 2012, was one of the most successful video games in 2K's history. Speaking in February 2013, Gearbox CEO Randy Pitchford stated that there were no plans for a third installment in the franchise, as the company believed that a sequel to Borderlands 2 would have to be "massive", but that "when you think of what Borderlands 3 should be... No, we don't know what that is yet. We can imagine what it must achieve, but we don't know what it is yet". The company also cited a desire to focus its attention on new games for next-generation consoles, such as Brothers in Arms: Furious Four, Homeworld: Shipbreakers (a new game in the Homeworld franchise, which Gearbox had recently acquired in THQ's bankruptcy auction), and new properties such as Battleborn. Despite this, the company still believed that it had not yet met the demands of fans, or even its own staff, in regard to the franchise (even with the overall success of Borderlands 2 and the large amount of downloadable content that had been released), prompting the creation of spin-offs such as Tales from the Borderlands, an episodic adventure game then being developed by Telltale Games, and a port of Borderlands 2 for PlayStation Vita. A few months after the release of Borderlands 2 (and shortly after 2K Australia had concluded its contributions to BioShock Infinite), Gearbox began working with 2K Australia to develop a prequel to the game which would take place directly after the events of the original. The decision to make the game a prequel to Borderlands 2 was centered on a desire to use the Hyperion moonbase (a location alluded to and visible in Borderlands 2) as a playable location; the development team felt that going to the moonbase in a sequel to Borderlands 2 would be too "boring" for players since the relevant conflict was already resolved, and because "if we're going to go to the moonbase anyway, what if we try something completely different that people aren't expecting[?]". Pitchford noted that this setting would allow the game to address plot elements and events alluded to in the first two games that were not yet completely addressed—on the possibility that the game could introduce holes in the continuity of the franchise, he joked that the franchise already contained many plot holes to begin with. He suggested that working on The Pre-Sequel could be a breakout role for 2K Australia, similar to Gearbox's own Half-Life: Opposing Force. As for the size of the game, Pitchford stated that The Pre-Sequel's playable world would be between the sizes of the original and Borderlands 2. 2K Australia performed the majority of development on The Pre-Sequel, but worked in collaboration with Gearbox on certain aspects of the game. 
The studio also provided its writing staff—including Anthony Burch, lead writer of Borderlands 2—as a complement to 2K Australia's own writers. The engine of Borderlands 2 was used as a starting point, allowing the 2K Australia team to quickly prototype and implement features on top of the existing functionality already provided by Borderlands 2. Most of the new mechanics in the game, such as ice weaponry, were conceived by the 2K Australia team; Gearbox's developers had shown concerns that freezing weapons were illogical in comparison to the other elemental weapon types, such as incendiary and acid, but Pitchford excused their inclusion in The Pre-Sequel because cryogenic technology was more "natural" in the space-oriented setting of the game. The four playable characters have an increased amount of dialogue in comparison to their equivalents in previous instalments; NPC dialogue can change depending on the characters present. Developers also felt that The Pre-Sequel would have more diverse humour than previous installments due to the makeup of its writing staff, and a decision to portray the Moon's inhabitants as being Australians themselves, allowing for references to Australian comedy and culture, including missions referencing cricket, the folk song "Waltzing Matilda", and a talking shotgun based upon the bogan stereotype. Bruce Spence, a New Zealand actor known for his role as the Gyro Captain in Mad Max 2 (the second film in the Mad Max franchise that was cited as an influence on the setting of Borderlands as a whole), is among the game's voice actors–voicing a gyrocopter pilot in reference to his role from the film. The Pre-Sequel would be the final video game developed by 2K Australia, as the studio was shut down on 16 April 2015. Release Borderlands: The Pre-Sequel was released in North America on 14 October 2014. Initially, the game was not released on eighth-generation consoles such as PlayStation 4 or Xbox One. As porting The Pre-Sequel to next-generation consoles would require rebuilding the engine (and thus defeating the purpose of retaining the engine used by Borderlands 2), developers instead targeted the game to the same console platforms that previous installments in the Borderlands franchise were released for. In July 2014, 2K Australia's head Tony Lawrence stated that there was a possibility that The Pre-Sequel could be ported to next-generation consoles, gauged by fan demand and sales. In August 2014, financial statements by Take-Two Interactive disclosed that a Linux port of the game was also in development; these details were confirmed by 2K in a statement to gaming news site IGN.com. The port, which was accompanied by a port of Borderlands 2 released in late-September 2014, was released for Linux through Steam. As part of pre-release promotional efforts for the game, Gearbox began releasing Pre-Sequel-inspired character skins for Borderlands 2 in July 2014, and at San Diego Comic-Con, Gearbox partnered with The Nerdist to set up a Borderlands-themed laser tag field at Petco Park during the convention. On 18 September 2014, an extended 10-minute trailer featuring Sir Hammerlock and Mr. Torgue was released. On 30 September 2014, Pitchford confirmed that the game had gone gold. On 20 January 2015, 2K announced that it would release a compilation of Borderlands 2 and The Pre-Sequel, Borderlands: The Handsome Collection, for PS4 and Xbox One on 24 March 2015. It includes both games and all of their respective DLC. 
On 26 March 2020, 2K announced that both games, as well as the original Borderlands game, would be released for Nintendo Switch as part of Borderlands Legendary Collection on 29 May 2020. Downloadable content As with Borderlands 2, downloadable content (DLC), including new characters and story campaigns, was made for The Pre-Sequel, which can be purchased separately or together as a "Season Pass". The Shock Drop Slaughter Pit was released at launch as a pre-order exclusive. The first DLC character, released on 11 November 2014, is a body double of Handsome Jack, "the Doppelganger"; he can summon clones of himself known as "digi-Jacks" to fight alongside him. Jack's skill trees mainly focus on granting bonuses to himself, as well as his digi-Jacks. The first DLC campaign, The Holodome Onslaught, was released on 14 December 2014; it includes missions in the titular challenge arena, which features Athena re-telling a shortened version of the game's story to Borderlands 2's Axton and Gaige. The Holodome Onslaught DLC was poorly received by players on release. The third playthrough, "Ultimate Vault Hunter Mode", raises the character level cap to 60, and includes an additional mission that ties into Handsome Jack's presence in Tales from the Borderlands. The second DLC character, Lady Aurelia Hammerlock, "the Baroness", was released on 27 January 2015; she is the elder sister of supporting character Sir Hammerlock. Her action skill is a homing ice shard which can cycle between enemies as they are killed by it; her skill trees provide enhancements to the shard, increased cryo damage, and the ability to assign a teammate as her "servant", allowing both players to benefit from bonuses granted by each other's kills. The second DLC campaign, Claptastic Voyage and Ultimate Vault Hunter Upgrade Pack 2, was released on 24 March 2015, coinciding with the release of The Handsome Collection. The DLC's title is a reference to the film Fantastic Voyage, which has a similar plotline involving miniaturisation and travel through a body. It features the player characters being sent into the mind of Claptrap by Handsome Jack to retrieve a mysterious piece of software known as the "H-Source", hidden within it by Hyperion's former CEO Harold Tassiter, resulting in the release of Shadowtrap, the digital manifestation of Claptrap's FR4G-TP program. The story also features the CL4P-TP genocide and a deeper look into Claptrap's depression. 2K Australia's creative director Jonathan Pelling cited Fantastic Voyage, Tron, and the holodeck of Star Trek as influences on the campaign, explaining that "We thought the best way to get to know Claptrap a little bit more was to actually go inside his mind and see what he thinks. To get those perspectives, recover those memories, and dig through his dirty laundry." The DLC also raises the character level cap to 70, and features a customisable challenge arena. On 28 March 2019, Gearbox announced that 4K support for PC and the Handsome Collection ports (on PlayStation 4 Pro and Xbox One X) would be released on 3 April 2019. Reception Borderlands: The Pre-Sequel received positive reviews from critics. Review aggregator Metacritic gave the PlayStation 3 version 77/100 based on 24 reviews, the Microsoft Windows version 75/100 based on 55 reviews and the Xbox 360 version 74/100 based on 16 reviews. Daniel Bloodworth from GameTrailers gave the game an 8.4/10. He praised the characters and the new gameplay mechanics introduced in The Pre-Sequel. 
He ended the review by saying that "new playable characters are worth exploring and the tweaks to the formula have an impact across the entire breadth of the game." David Roberts from GamesRadar gave the game an 8/10, praising its diverse character classes, hilarious writing and the core combat which he stated, "has maintained the series' weird, satisfying mix of anarchic, tactical gunplay and compulsive RPG overtones". However, he criticised the weak story, as well as non-drastic changes when compared with Borderlands 2. He described the general experience as "a hilarious, fan-focused continuation of the series' core values, but lacking any true evolution, which made it a fun diversion rather than a meaningful new chapter." Vince Ingenito from IGN gave the game an 8/10. He praised the gearing options and the low-gravity mechanics, which made the game "a fresh experience". He also praised the entertaining Jack-focused story, but criticised its poor pacing. Jessica Conditt from Joystiq gave the game a 7/10, praising its new gameplay mechanics, well-defined classes, as well as the interesting and comedic bosses encountered and unique environments, but criticising the confusing level design, frustrating death and predictable missions which lack variety. Evan Lahti from PCGamer gave the game a 77/100, praising its new gameplay features, which he stated had brought novelty and a gracefulness to Borderlands' combat, but criticising the mission design, which seldom made use of the gameplay mechanics introduced in Pre-Sequel. He stated that "The Pre-Sequel feels like a super-sized Borderlands 2 DLC. While the new setting, classes, and weapon types reinvigorate the experience, The Pre-Sequel doesn't deviate much from the feel and format of Borderlands 2." Darren Nakamura from Destructoid gave the game a 6/10, praising its fast yet tactical combat, but criticising the disappointing ending, number of bugs, as well as boring and uninteresting environmental art direction, but he still summarised the game as a "solid entry to the series." Jim Sterling from The Escapist gave the game an 8/10. He praised the combination of weapons with the use of the Grinder, a new machine introduced in The Pre-Sequel, as well as the new vehicles available, but criticised the map design, frustrating encounters with enemies, as well as being too similar to the previous installments. Adam Beck from Hardcore Gamer gave the game a 2.5/5, criticising its bugs, loot system, script, campaign, world design and performance of characters. He summarised the game as "an unpolished, uninspired adventure where fun can be had with friends, but that time could be better spent elsewhere." IGN gave the Claptastic Voyage campaign an 8.4 out of 10, praising it for its "whimsical" setting, new mechanics, making better use of the anti-gravity mechanics that were introduced by The Pre-Sequel, and for not containing the "excessive backtracking and pacing problems" faced by the game's main storyline. Ingenito concluded that it "[still] doesn't quite match the towering success of Tiny Tina’s Assault on Dragon Keep for Borderlands 2, but it still handily sets a high watermark for The Pre-Sequel. It's lean and focused in a way the main game it belongs to sometimes wasn't, and yet it still feels substantial and complete." 
References External links Official website 2014 video games Action role-playing video games Android (operating system) games Borderlands (series) games First-person shooters Interquel video games LGBT-related video games Linux games Loot shooters MacOS games Malware in fiction Multiplayer and single-player video games Multiplayer online games Nintendo Switch games Open-world video games PlayStation 3 games PlayStation 4 games Unreal Engine games Video games developed in Australia Video games developed in the United States Video games featuring protagonists of selectable gender Video games scored by Jesper Kyd Video games set on fictional planets Video games using PhysX Video games with downloadable content Windows games Xbox 360 games Xbox One games
675426
https://en.wikipedia.org/wiki/Juniper%20Networks
Juniper Networks
Juniper Networks, Inc. is an American multinational corporation headquartered in Sunnyvale, California. The company develops and markets networking products, including routers, switches, network management software, network security products, and software-defined networking technology. The company was founded in 1996 by Pradeep Sindhu, with Scott Kriens as the first CEO, who remained until September 2008. Kriens has been credited with much of Juniper's early market success. It received several rounds of funding from venture capitalists and telecommunications companies before going public in 1999. Juniper grew to $673 million in annual revenues by 2000. By 2001 it had a 37% share of the core routers market, challenging Cisco's once-dominant market share. It grew to $4 billion in revenues by 2004 and $4.63 billion in 2014. Juniper appointed Kevin Johnson as CEO in 2008, Shaygan Kheradpir in 2013 and Rami Rahim in 2014. Juniper Networks originally focused on core routers, which are used by internet service providers (ISPs) to perform IP address lookups and direct internet traffic. Through the acquisition of Unisphere in 2002, the company entered the market for edge routers, which are used by ISPs to route internet traffic to individual consumers. In 2003, Juniper entered the IT security market with its own JProtect security toolkit before acquiring security company NetScreen Technologies the following year. In the early 2000s, Juniper entered the enterprise segment, which accounted for one-third of its revenues by 2005. More recently, Juniper has focused on developing new software-defined networking products. History Origins and funding Pradeep Sindhu, a scientist with Xerox's Palo Alto Research Center (PARC), conceived the idea for Juniper Networks while on vacation in 1995 and founded the company in February 1996. At the time, most of the network equipment carrying Internet traffic had been designed for telephone calls, with a dedicated circuit for each caller (circuit switching). Sindhu wanted to create data packet-based routers that were optimized for Internet traffic (packet switching), whereby the routing and transferring of data occurs "by means of addressed packets so that a channel is occupied during the transmission of the packet only, and upon completion of the transmission the channel is made available for the transfer of other traffic." He was joined by engineers Bjorn Liencres from Sun Microsystems and Dennis Ferguson from MCI Communications. Sindhu started Juniper Networks with $2 million in seed funding, which was followed by $12 million in funding in the company's first year of operations. About seven months after the company's founding, Scott Kriens was appointed CEO to manage the business, while founder Sindhu became the Chief Technology Officer. By February 1997, Juniper had raised $8 million in venture funding. Later that year, Juniper Networks raised an additional $40 million in investments from a round that included four of the five largest telecommunications equipment manufacturers: Siemens, Ericsson, Nortel and 3Com. Juniper also received $2.5 million from Qwest and other investments from AT&T. Growth and IPO Juniper Networks had $3.8 million in annual revenue in 1998. By the following year, its only product, the M40 router, was being used by 50 telecommunications companies. Juniper Networks signed agreements with Alcatel and Ericsson to distribute the M40 internationally. A European headquarters was established in the United Kingdom and an Asia-Pacific headquarters in Hong Kong. 
A subsidiary was created in Japan and offices were established in Korea in 1999. Juniper Networks's market share for core routers grew from 6% in 1998 to 17.5% one year later, and 20% by April 2000. Juniper Networks filed for an initial public offering in April 1999 and its first day on the NASDAQ was that June. The stock set a record in first-day trading in the technology sector by increasing 191% to a market capitalization of $4.9 billion. According to Telephony, Juniper Networks became the "latest darling of Wall Street", reaching a $7 billion valuation by late July. Within a year, the company's stock grew five-fold. Juniper Networks's revenues grew 600% in 2000 to $673 million. That same year, Juniper Networks moved its headquarters from Mountain View to Sunnyvale, California. Competition By 2001, Juniper controlled one-third of the market for high-end core routers, mostly at the expense of Cisco Systems sales. According to BusinessWeek, "analysts unanimously agree[d] that Juniper's boxes [were] technically superior to Cisco's because the hardware does most of the data processing. Cisco routers still relied on software, which often results in slower speeds." However, Cisco provided a broader range of services and support and had an entrenched market position. The press often depicted Juniper and Cisco as a "David versus Goliath" story. Cisco had grown through acquisitions to be a large generalist vendor for routing equipment in homes, businesses and for ISPs, whereas Juniper was thought of as the "anti-Cisco" for being a small company with a narrow focus. In January 2001, Cisco introduced a suite of router products that BusinessWeek said was intended to challenge Juniper's increasing market-share. According to BusinessWeek, Juniper's top-end router was four times as fast at only twice the cost of comparable Cisco products. Cisco's routers were not expected to erode Juniper's growing share of the market, but other companies such as Lucent, Alcatel, and startups Avici Systems and Pluris had announced plans to release products that would out-pace Juniper's routers. Juniper introduced a suite of routers for the network edge that allowed it to compete with Cisco. Juniper's edge routers had a 9% market share two months after release. Both companies made exaggerated marketing claims; Juniper promoted its products as stable enough to make IT staff bored and Cisco announced lab tests from Light Reading proved its products were superior to Juniper, whereas the publication itself reached the opposite conclusion. By 2002, both companies were repeatedly announcing products with faster specifications than the other in what Network World called a "'speeds-and-feeds' public relations contest". By 2004, Juniper controlled 38% of the core router market. By 2007, it had a 5%, 18% and 30% share of the market for enterprise, edge and core routers respectively. Alcatel-Lucent was unsuccessful in challenging Juniper in the core router market but continued competing with Juniper in edge routers along with Cisco. Further development In late 2000, Juniper formed a joint venture with Ericsson to develop and market network switches for internet traffic on mobile devices, and with Nortel for fiber optic technology. In 2001, Juniper introduced a technical certification program and was involved in the first optical internet network in China. Juniper's growth slowed in 2001 as the telecommunications sector experienced a slowdown and revenues fell by two-thirds during the dot-com bust. 
9 to 10% of its workforce was laid off. Juniper had rebounded by 2004, surpassing $1 billion in revenues for the first time that year and reaching $2 billion in revenue in 2005. Beginning in 2004, with the acquisition of NetScreen, Juniper Networks began developing and marketing products for the enterprise segment. Juniper had a reputation for serving ISPs, not enterprises, which it was trying to change. By 2005, enterprise customers accounted for one-third of the company's revenues, but it had spent $5 billion in acquisitions and R&D for the enterprise market. In 2006, more than 200 US companies restated their financial results due to a series of investigations into stock backdating practices. Juniper stockholders alleged the company engaged in deceptive backdating practices that benefited its top executives unfairly. In December 2006, Juniper restated its financials, charging $900 million in expenses to correct backdated stock options from 1999 to 2003. This was followed by a $169 million settlement with stockholders in February 2010. 2008–present In July 2008, Juniper's first CEO, Scott Kriens, became chairman and former Microsoft executive Kevin Johnson was appointed CEO. Johnson focused the company more on software, creating a software solutions division headed by a former Microsoft colleague, Bob Muglia. Juniper also hired other former Microsoft executives to focus on the company's software strategy and encourage developers to create software products that run on the Junos operating system. Juniper established partnerships with IBM, Microsoft and Oracle for software compatibility efforts. The SSL/VPN Pulse product family was launched in 2010, then later spun off to a private equity firm in 2014 for $250 million. In 2012, Juniper laid off 5% of its staff and four of its high-ranking executives departed. The following year, CEO Kevin Johnson announced he was retiring once a replacement was found. In November 2013, Juniper Networks announced that Shaygan Kheradpir would be appointed as the new CEO. He started the position in January 2014. In January 2014, hedge fund, activist investor and Juniper shareholder Elliott Associates advocated that Juniper reduce its cash reserves and cut costs, before Kheradpir was officially appointed. That February, Juniper reached an agreement with Elliott and other stakeholders for an Integrated Operating Plan (IOP) that involved repurchasing $2 billion in shares, reducing operating expenses by $160 million and appointing two new directors to its board. That April, 6% of the company's staff were laid off to cut expenses. In November 2014, Kheradpir unexpectedly resigned following a review by Juniper's board of directors regarding his conduct in a negotiation with an unnamed Juniper customer. An internal Juniper executive, Rami Rahim, took his place as CEO. In May 2014, Palo Alto Networks agreed to pay a $175 million settlement for allegedly infringing on Juniper's patents for application firewalls. In 2015, Wired Magazine reported that the company announced it had found unauthorized code that enabled backdoors into its ScreenOS products. The code was patched with updates from the company. Acquisitions and investments By 2001, Juniper had made only a few acquisitions of smaller companies, due to the leadership's preference for organic growth. The pace of acquisition picked up in 2001 and 2002 with the purchases of Pacific Broadband and Unisphere Networks. In 2004 Juniper made a $4 billion acquisition of network security company NetScreen Technologies. 
Juniper revised NetScreen's channel program that year and used its reseller network to bring other products to market. Juniper made five acquisitions in 2005, mostly of startups, with deal values ranging from $8.7 million to $337 million. It acquired application-acceleration vendor Redline Networks, VOIP company Kagoor Networks, and wide area network (WAN) company Peribit Networks. Peribit and Redline were incorporated into a new application products group and their technology was integrated into Juniper's infranet framework. Afterwards, Juniper did not make any additional acquisitions until 2010. From 2010 to September 2011, Juniper made six acquisitions and invested in eight companies. Juniper often acquired early-stage startups, developed their technology, and then sold it to pre-existing Juniper clients. Juniper acquired two digital video companies, Ankeena Networks and Blackwave Inc., as well as wireless LAN software company Trapeze Networks. In 2012, Juniper acquired Mykonos Software, which develops security software intended to deceive hackers already within the network perimeter, and Contrail Systems, a developer of software-defined network controllers. In 2014, Juniper acquired the software-defined networking (SDN) company WANDL. In April 2016, Juniper closed its acquisition of BTI, a provider of cloud and metro network technology, in an effort to strengthen its data center interconnect and metro packet optical transport technology and services. Juniper acquired cloud operations management and optimization startup AppFormix in December 2016. In 2017, Juniper bought Cyphort, a Silicon Valley startup that makes security analytics software. Juniper acquired cloud storage company HTBASE in November 2018. In April 2019, Juniper acquired wireless LAN (WLAN) startup Mist Systems to bolster its software-defined enterprise portfolio and multicloud offerings. In February 2022, it was announced that Juniper had acquired WiteSand, a specialist in cloud-native zero-trust network access control (NAC) solutions. Products Juniper Networks designs and markets IT networking products, such as routers, switches and IT security products. It started out selling core routers for ISPs, and expanded into edge routers, data centers, wireless networking, networking for branch offices and other access and aggregation devices. Juniper is the third largest market-share holder overall for routers and switches used by ISPs. According to analyst firm Dell'Oro Group, it is the fourth largest for edge routers and second for core routers with 25% of the core market. It is also the second largest market share holder for firewall products with a 24.8% share of the firewall market. In data center security appliances, Juniper is the second-place market-share holder behind Cisco. In WLAN, where Juniper previously held a more marginal market share, it is now expanding through its acquisition of Mist Systems, which Gartner has named a "visionary" in the WLAN market. Juniper provides technical support and services through the J-Care program. As of February 2020, Juniper's product families include the following: Routers and switches Juniper Networks' first product was the Junos router operating system, which was released on July 1, 1998. The first Juniper router was made available that September and was a core router for internet service providers called the M40. It incorporated specialized application-specific integrated circuits (ASICs) for routing internet traffic that were developed in partnership with IBM. 
It had ten times the throughput of comparable contemporary Cisco products. The M40 was followed by the smaller M20 router in December 1999 and the M160 in March 2000. By 2000, Juniper had developed five hardware systems and made seven new releases of its Junos operating system. That April, Juniper released the second generation of the internet processors embedded in its core routers. In April 2002, Juniper released the first of the T-series family (originally known under the code-name Gibson), which could perform four times as many route lookups per second as the M160. The first products of the TX Matrix family, which could be used to combine up to four T-series routers, were released in December 2004. By 2003, Juniper had diversified into three major router applications: core routers, edge routers and routers for mobile traffic. Juniper's first major diversification from core routers came when it entered the market for edge routers by acquiring the e-series product family (originally known as ERX) through the purchase of Unisphere in 2002. By 2002, both Cisco and Juniper had increased their focus on edge routers, because many ISPs had built up abundant bandwidth at the core. Several improvements to Juniper's software and its broadband aggregation features were released in late 2003. At this time, Juniper had the largest market share (52%) of the broadband aggregation market. In 2003, Juniper entered the market for cable-modem termination systems with the G-series product family after the acquisition of Pacific Broadband. The product family was discontinued later that year. Juniper's first enterprise switch product was the EX4200, which was released in 2008. In a comparative technical test, Network World said the EX4200 was the top performer in latency and throughput among the network switches it tested, but its multicast features were "newer and less robust" than other aspects of the product. Juniper Networks announced the T1600 1.6 terabit-per-second core router in 2007 and the newer T4000 4 terabit router in 2010. In 2012, it released the ACX family of universal access routers. In 2013, the company made several new releases in the MX family of edge routers, introduced a smaller version of its core routers called the PTX3000, and released several new enterprise routers. Seven months later, Juniper acquired WANDL, and its technology was integrated into the NorthStar WAN controller Juniper announced in February 2014. In February 2011, Juniper introduced QFabric, a proprietary protocol methodology for transferring data over a network using a single network layer. Several individual products for the QFabric methodology were released throughout the year. In October 2013, Juniper introduced another network architecture called MetaFabric and a new set of switches, the QFX5100 family, as one of the foundations of the new architecture. In February 2014, several software and hardware improvements were introduced for Juniper routers, including a series of software applications ISPs could use to provide internet-based services to consumers. In December 2014, Juniper introduced a network switch, the OCX1100, that could run on either the Junos operating system or the Open Compute Project open-source software. Security Juniper Networks introduced the JProtect security toolkit in May 2003. It included firewalls, flow monitoring, filtering and Network Address Translation (NAT). 
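As a rough illustration of the NAT concept mentioned above (and not of JProtect or any Juniper implementation), the following Python sketch shows the core idea: a device rewrites the source address of outbound packets to a single public IP and keeps a translation table so that replies can be mapped back to the originating internal host. The class name, port range, and the omission of protocol state, timeouts, and checksum handling are all simplifying assumptions.

```python
# Minimal sketch of the Network Address Translation (NAT) idea, not a real
# implementation: real NAT devices also track protocol state, timeouts,
# and rewrite packet checksums.

class SimpleNAT:
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Rewrite an outgoing packet's source to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            public_port = self.next_port
            self.next_port += 1
            self.outbound[key] = public_port
            self.inbound[public_port] = key
        return self.public_ip, self.outbound[key]

    def translate_in(self, public_port):
        """Map a reply arriving at the public address back to the internal host."""
        return self.inbound.get(public_port)


nat = SimpleNAT("203.0.113.7")
print(nat.translate_out("192.168.1.10", 51515))  # ('203.0.113.7', 40000)
print(nat.translate_in(40000))                   # ('192.168.1.10', 51515)
```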
Through the 2004 acquisition of NetScreen Technologies, Juniper acquired the Juniper Secure Meeting product line, as well as remote desktop access software. The NetScreen-5GT ADSL security appliance was the first new NetScreen product Juniper introduced after the acquisition and its first wireless product. The first Juniper product intended for small businesses was a remote access appliance that was released in August 2004. An open interface for the development of third-party tools for the appliance was made available that September. In September 2004, Juniper entered the market for enterprise access routers with three routers that were the first of the J-series product family. It used the channel partners acquired with NetScreen to take the routers to market. Juniper released its first dedicated Network Access Control (NAC) product in late 2005, which was followed by the acquisition of Funk Software for its NAC capabilities for switches. According to a 2006 review in Network World, Juniper's SSG 520 firewall and routing product was "the first serious threat" to competing products from Cisco. Juniper released the SRX family of gateway products in 2008. The gateways sold well, but customers and resellers reported a wide range of technical issues starting in 2010, which Juniper did not acknowledge until 2012, when it began providing updates to the product software. In August 2011, Juniper and AT&T announced they would jointly develop the AT&T Mobile Security application based on Juniper's Pulse security software. In May 2012, Juniper released a series of new features for the web security software it acquired from Mykonos Software that February. Mykonos' software is focused on deceiving hackers by presenting fake vulnerabilities and tracking their activity. In January 2014, Juniper announced the Firefly Suite of security and switching products for virtual machines. The following month Juniper Networks released several products for "intrusion deception", which create fake files, store incorrect passwords and change network maps in order to confuse hackers that have already penetrated the network perimeter. An analysis of Juniper's ScreenOS firmware code in December 2015 discovered a backdoor in its use of Dual_EC_DRBG, allowing attackers to passively decrypt traffic encrypted by ScreenOS. The Dual_EC_DRBG changes were inserted around 2008 into ScreenOS versions 6.2.0r15 to 6.2.0r18 and 6.3.0r12 to 6.3.0r20; a separate piece of unauthorized code gave any user administrative access when a special master password was used. Some analysts claim that this backdoor still exists in ScreenOS. Stephen Checkoway was quoted in Wired as saying, "If this backdoor was not intentional, then, in my opinion, it's an amazing coincidence." In December 2015, Juniper Networks announced that it had discovered "unauthorized code" in the ScreenOS software that underlies its NetScreen devices, present from 2012 onwards. There were two vulnerabilities: one was a simple root password backdoor, and the other changed a point used by Dual_EC_DRBG so that the attackers presumably held the key to the preexisting (intentional or unintentional) kleptographic backdoor in ScreenOS and could passively decrypt traffic. Software defined networking According to a 2014 SWOT analysis by MarketLine, in recent history Juniper has been focusing on software-defined networking (SDN). It acquired SDN company Contrail Systems in December 2012. 
The following month Juniper announced its SDN strategy, which included a new licensing model based on usage and new features for the Junos operating system. In February 2013, Juniper released several SDN products, including the application provisioning software Services Activation Director and the Mobile Control Gateway appliance. In May 2013, Juniper announced an SDN controller called JunosV Contrail, using technology it acquired through Contrail Systems. A series of SDN products was released in February 2014, such as a network management software product, Junos Fusion, and an SDN controller called NorthStar. NorthStar helps find the optimal path for data to travel through a network. Every year since 2009, Juniper has held the SDN Throwdown competition to encourage university students from across the world to access the NorthStar Controller and build solutions around it that optimize network throughput. In 2019, a team from Rutgers University led by PhD student Sumit Maheshwari won the competition. Recent updates In March 2015, Juniper announced a series of updates to the PTX family of core routers and the QFX family of switches, as well as updates to its security portfolio. According to a report published by technology consulting firm LexInnova, as of June 2015 Juniper Networks was the third largest recipient of network security-related patents, with a portfolio of 2,926 security-related patents. In October 2018, Juniper announced a new offering called EngNet, a set of developer tools and information meant to help companies move toward automation and replace the typical command-line interface. Operations Juniper Networks has operations in more than 100 countries. Around 50% of its revenue is from the United States, 30% is from EMEA and 20% is from Asia. Juniper sells directly to businesses, as well as through resale and distribution partners such as Ericsson, IBM, Nokia, Ingram Micro and NEC. About 50% of Juniper's revenues are derived from routers, 13% from switches, 12% from IT security and 25% from services. According to a 2013 report by Glassdoor, Juniper Networks has the highest paid software engineers in the technology sector by a margin of about $24,000 per year. It operates the Juniper Networks Academic Alliance (JNAA) program, which scouts fresh college graduates. According to a SWOT analysis by MarketLine, Juniper has "a strong focus" on research and development. R&D expenses were between 22 and 25% of revenue from 2011 to 2013. Most of the company's manufacturing is outsourced to three manufacturing companies: Celestica, Flextronics and Accton Technology. Juniper operates the Junos Innovation Fund, which was started with $50 million in 2010 and invests in early-stage technology companies developing applications for the Junos operating system. As of 2011, Juniper Networks had invested in 20 companies. This is estimated to be 1 to 2% of the companies it has evaluated for a potential investment. ScreenOS Backdoor In December 2015, Juniper issued an emergency security patch for a backdoor in its security equipment. Together with another vulnerability, it allowed attackers to bypass authentication and decrypt VPN traffic on ScreenOS. Analysis suggested that the mechanism of the backdoor had been created by the NSA but might later have been taken over by an unnamed national government. 
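The passive-decryption risk described in the Security section above stems from the structure of Dual_EC_DRBG: each output is derived from one public constant and the next internal state from another, so anyone who knows the secret relationship between the two constants can recover the generator's state from observed output and predict what it produces next. The following Python sketch is a deliberately toy analogue, substituting modular exponentiation with tiny numbers for elliptic-curve point multiplication and omitting output truncation; it is not the real algorithm or any ScreenOS code, only an illustration of why such a design leaks its state to whoever chose the constants.

```python
# Toy analogue (NOT the real Dual_EC_DRBG, and not ScreenOS code): modular
# exponentiation stands in for elliptic-curve point multiplication, and the
# output is not truncated. The structural point: whoever knows the secret
# exponent `e` linking the two public constants P and Q can recover the
# generator's internal state from a single output and predict what follows.

p = 2_147_483_647            # a small prime (2**31 - 1), toy-sized on purpose
Q = 5                        # public constant used to produce output
e = 123_456_789              # the constant-chooser's secret relationship...
P = pow(Q, e, p)             # ...P = Q^e mod p, also published as a "constant"

def drbg_step(state):
    """One step of the toy generator: emit an output, derive the next state."""
    output = pow(Q, state, p)       # analogue of x(s * Q)
    next_state = pow(P, state, p)   # analogue of x(s * P)
    return output, next_state

# Honest user seeds the generator and produces two outputs.
state = 987_654_321
out1, state = drbg_step(state)
out2, _ = drbg_step(state)

# An observer who sees only out1 but knows e can recover the next state:
recovered_state = pow(out1, e, p)   # (Q^s)^e = (Q^e)^s = P^s = next state
predicted_out2, _ = drbg_step(recovered_state)

print(out2 == predicted_out2)       # True: the next "random" output is predicted
```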
See also List of networking hardware vendors References External links 1996 establishments in California American companies established in 1996 Companies based in Sunnyvale, California Companies formerly listed on the Nasdaq Companies listed on the New York Stock Exchange Computer companies established in 1996 Multinational companies headquartered in the United States Networking companies of the United States Networking hardware companies Software companies established in 1996 Software companies based in the San Francisco Bay Area Technology companies based in the San Francisco Bay Area 1999 initial public offerings Software companies of the United States
43274224
https://en.wikipedia.org/wiki/Cybersecurity%20Information%20Sharing%20Act
Cybersecurity Information Sharing Act
The Cybersecurity Information Sharing Act (CISA) is a United States federal law designed to "improve cybersecurity in the United States through enhanced sharing of information about cybersecurity threats, and for other purposes". The law allows the sharing of Internet traffic information between the U.S. government and technology and manufacturing companies. The bill was introduced in the U.S. Senate on July 10, 2014, and passed in the Senate on October 27, 2015. Opponents question CISA's value, believing it will move responsibility from private businesses to the government, thereby increasing the vulnerability of personal private information, as well as dispersing personal private information across seven government agencies, including the NSA and local police. The text of the bill was incorporated by amendment into a consolidated spending bill in the U.S. House on December 15, 2015, which was signed into law by President Barack Obama on December 18, 2015.
History
The Cybersecurity Information Sharing Act was introduced on July 10, 2014 during the 113th Congress, and passed the Senate Intelligence Committee by a vote of 12-3. The bill did not reach a full Senate vote before the end of the congressional session. The bill was reintroduced for the 114th Congress on March 12, 2015, and passed the Senate Intelligence Committee by a vote of 14-1. Senate Majority Leader Mitch McConnell (R-KY) attempted to attach the bill as an amendment to the annual National Defense Authorization Act, but was blocked 56-40, not reaching the necessary 60 votes to include the amendment. McConnell hoped to bring the bill to a Senate-wide vote during the week of August 3–7, but was unable to take up the bill before the summer recess. The Senate tentatively agreed to limit debate to 21 particular amendments and a manager's amendment, but did not set time limits on debate. In October 2015, the US Senate took the bill back up following legislation concerning sanctuary cities.
Provisions
The main provisions of the bill make it easier for companies to share personal information with the government, especially in cases of cyber security threats. Without requiring such information sharing, the bill creates a system for federal agencies to receive threat information from private companies. With respect to privacy, the bill includes provisions for preventing the sharing of personal data that is irrelevant to cyber security. Any personal information that does not get removed during the sharing procedure can be used in a variety of ways. These shared cyber threat indicators can be used to prosecute cyber crimes, but may also be used as evidence for crimes involving physical force.
Positions
Indemnification
Supporters of indemnification argue that sharing national intelligence threat data among public and private partners is a difficult problem that deserves broad attention, and that the National Intelligence Threat Sharing (NITS) project is intended as an innovative solution to it. In this view, however, NITS can only be trustworthy if private partners are indemnified, which literally takes an act of Congress. Indemnification, which would secure industry partners against legal responsibility for their actions, is described as the underlying impediment to fuller cooperation among buyers, sellers, and peers within a supply chain, and congressional refusal to offer it is seen as an obstacle to real collaboration.
At a minimum, proponents argue, qualified immunity should be accorded; this is immunity for individuals performing tasks as part of the government's actions.
Businesses and trade groups
The CISA has received some support from advocacy groups, including the United States Chamber of Commerce, the National Cable & Telecommunications Association, and the Financial Services Roundtable. A number of business groups have also opposed the bill, including the Computer & Communications Industry Association, as well as individual companies such as Twitter, Yelp, Apple, and Reddit. BSA (The Software Alliance) appeared initially supportive of CISA, sending a letter on July 21, 2015 urging the Senate to bring the bill up for debate. On September 14, 2015, the BSA published a letter addressed to Congress expressing support for, among other things, cyber threat information sharing legislation, signed by board members Adobe, Apple Inc., Altium, Autodesk, CA Technologies, DataStax, IBM, Microsoft, Minitab, Oracle, Salesforce.com, Siemens, and Symantec. This prompted the digital rights advocacy group Fight for the Future to organize a protest against CISA. Following this opposition campaign, BSA stated that its letter expressed support for cyber threat sharing legislation in general, but did not endorse CISA or any pending cyber threat sharing bill in particular. BSA later stated that it was opposed to CISA in its current form. The Computer & Communications Industry Association, another major trade group including members such as Google, Amazon.com, Cloudflare, Netflix, Facebook, Red Hat, and Yahoo!, also announced its opposition to the bill.
Government officials
Proponents of CISA include the bill's main cosponsors, senators Dianne Feinstein (D-CA) and Richard Burr (R-NC). Some senators have announced opposition to CISA, including Ron Wyden (D-OR), Rand Paul (R-KY), and Bernie Sanders (I-VT). Senator Ron Wyden (D-OR) has objected to the bill based on a classified legal opinion from the Justice Department written during the early George W. Bush administration. The Obama administration states that it does not rely on the legal justification laid out in the memo. Wyden has made repeated requests to the US Attorney General to declassify the memo, dating at least as far back as when a 2010 Office of Inspector General report cited the memo as a legal justification for the FBI's warrantless wire-tapping program. On August 4, 2015, White House spokesman Eric Schultz endorsed the legislation, calling for the Senate to "take up this bill as soon as possible and pass it". The United States Department of Homeland Security initially supported the bill, with Jeh Johnson, the secretary of the DHS, calling for the bill to move forward on September 15. However, in an August 3 letter to senator Al Franken (D-MN), the deputy secretary of the DHS, Alejandro Mayorkas, expressed a desire to have all connections be brokered by the DHS, given the Department's charter to protect executive branch networks. In the letter, the DHS found issue with the direct sharing of information with all government agencies, advocating instead that the DHS be the sole recipient of cyberthreat information, allowing it to scrub out private information. In addition, the Department of Homeland Security has published a Privacy Impact Assessment detailing its internal review of the proposed system for handling incoming indicators from industry.
Civil liberties groups
Privacy advocates opposed the version of the Cybersecurity Information Sharing Act passed by the Senate in October 2015, saying it left intact portions of the law that made it more amenable to surveillance than to actual security while quietly stripping out several of its remaining privacy protections. CISA has been criticized by advocates of Internet privacy and civil liberties, such as the Electronic Frontier Foundation and the American Civil Liberties Union. It has been compared to the criticized Cyber Intelligence Sharing and Protection Act proposals of 2012 and 2013, which passed the United States House of Representatives but did not pass the Senate.
Similar laws in different countries
United Kingdom government policy: cyber security
The Scottish Government Information Sharing
See also
Anti-Counterfeiting Trade Agreement
Chinese intelligence operations in the United States
Communications Assistance for Law Enforcement Act
Federal Information Security Management Act of 2002
Freedom of information laws by country
Intellectual Property Attache Act
National Security Agency
Vulnerabilities Equities Process
References
External links
S.2588 - Cybersecurity Information Sharing Act of 2014, Congress.gov, Library of Congress.
"Cybersecurity Information Sharing Act will help protect us", Dianne Feinstein, San Jose Mercury News, July 21, 2014.
Forbes: Controversial Cybersecurity Bill Known As CISA Advances Out Of Senate Committee, Gregory S. McNeal, July 9, 2014.
Center for Democracy and Technology: Analysis of Cybersecurity Information Sharing Act, Gregory T. Nojeim and Jake Laperruque, July 8, 2014.
"CISA Security Bill Passes Senate With Privacy Flaws Unfixed", Andy Greenberg and Yael Grauer, October 27, 2015.
2010 to 2015 government policy: cyber security Computer security Copyright enforcement Internet law in the United States Proposed legislation of the 113th United States Congress Internet censorship
146640
https://en.wikipedia.org/wiki/Anti-aircraft%20warfare
Anti-aircraft warfare
Anti-aircraft warfare or counter-air defence is the battlespace response to aerial warfare, defined by NATO as "all measures designed to nullify or reduce the effectiveness of hostile air action". It includes surface-based, subsurface (submarine launched), and air-based weapon systems, associated sensor systems, command and control arrangements, and passive measures (e.g. barrage balloons). It may be used to protect naval, ground, and air forces in any location. However, for most countries the main effort has tended to be homeland defence. NATO refers to airborne air defence as counter-air and to naval air defence as anti-aircraft warfare. Missile defence is an extension of air defence, as are initiatives to adapt air defence to the task of intercepting any projectile in flight. In some countries, such as Britain and Germany during the Second World War, the Soviet Union, and modern NATO and the United States, ground-based air defence and air defence aircraft have been under integrated command and control. However, while overall air defence may be for homeland defence (including military facilities), forces in the field, wherever they are, invariably deploy their own air defence capability if there is an air threat. A surface-based air defence capability can also be deployed offensively to deny the use of airspace to an opponent. Until the 1950s, guns firing ballistic munitions ranging from 7.62 mm (.30 in) to 152.4 mm (6 in) were the standard weapons; guided missiles then became dominant, except at the very shortest ranges (as with close-in weapon systems, which typically use rotary autocannons or, in very modern systems, surface-to-air adaptations of short range air-to-air missiles, often combined in one system with rotary cannons).
Terminology
The term "air defence" was probably first used by Britain when Air Defence of Great Britain (ADGB) was created as a Royal Air Force command in 1925. However, arrangements in the UK were also called 'anti-aircraft', abbreviated as AA, a term that remained in general use into the 1950s. After the First World War it was sometimes prefixed by 'Light' or 'Heavy' (LAA or HAA) to classify a type of gun or unit. Nicknames for anti-aircraft guns include AA, AAA or triple-A, an abbreviation of anti-aircraft artillery; "ack-ack" (from the spelling alphabet used by the British for voice transmission of "AA"); and "archie" (a World War I British term probably coined by Amyas Borton, and believed to derive via the Royal Flying Corps, from the music-hall comedian George Robey's line "Archibald, certainly not!"). NATO defines anti-aircraft warfare (AAW) as "measures taken to defend a maritime force against attacks by airborne weapons launched from aircraft, ships, submarines and land-based sites". In some armies the term All-Arms Air Defence (AAAD) is used for air defence by nonspecialist troops. Other terms from the late 20th century include "ground based air defence" (GBAD) with related terms "Short range air defense" (SHORAD) and Man-portable air-defense system (MANPADS). Anti-aircraft missiles are variously called surface-to-air missiles (abbreviated and pronounced "SAM") or surface-to-air guided weapons (SAGW). Examples include the RIM-66 Standard, the Raytheon Standard Missile 6, and the MBDA Aster missile.
Non-English terms for air defence include the German Flak (Fliegerabwehrkanone, 'aircraft defence cannon', also cited as Flugabwehrkanone), whence English 'flak', and the Russian term Protivovozdushnaya oborona (Cyrillic: Противовозду́шная оборо́на), a literal translation of "anti-air defence", abbreviated as PVO. In Russian, the AA systems are called zenitnye (i.e., 'pointing to zenith') systems (guns, missiles etc.). In French, air defence is called DCA (Défense contre les aéronefs; aéronef is the generic term for all kinds of airborne threats: aeroplane, airship, balloon, missile, rocket).
The maximum distance at which a gun or missile can engage an aircraft is an important figure. However, many different definitions are used, and unless the same definition is applied, the performance of different guns or missiles cannot be compared. For AA guns only the ascending part of the trajectory can be usefully used. One term is "ceiling": the maximum ceiling is the height a projectile would reach if fired vertically. This is not practically useful in itself, as few AA guns are able to fire vertically and maximum fuse duration may be too short, but it is potentially useful as a standard for comparing different weapons. The British adopted "effective ceiling", meaning the altitude at which a gun could deliver a series of shells against a moving target; this could be constrained by maximum fuse running time as well as the gun's capability. By the late 1930s the British definition was "that height at which a directly approaching target can be engaged for 20 seconds before the gun reaches 70 degrees elevation". However, effective ceiling for heavy AA guns was affected by non-ballistic factors:
The maximum running time of the fuse, which set the maximum usable time of flight.
The capability of fire control instruments to determine target height at long range.
The precision of the cyclic rate of fire; the fuse length had to be calculated and set for where the target would be at the end of the time of flight after firing, which meant knowing exactly when the round would fire.
General description
The essence of air defence is to detect hostile aircraft and destroy them. The critical issue is to hit a target moving in three-dimensional space; an attack must not only match these three coordinates, but must do so at the time the target is at that position. This means that projectiles either have to be guided to hit the target, or aimed at the predicted position of the target at the time the projectile reaches it, taking into account speed and direction of both the target and the projectile. Throughout the 20th century, air defence was one of the fastest-evolving areas of military technology, responding to the evolution of aircraft and exploiting various enabling technologies, particularly radar, guided missiles and computing (initially electromechanical analogue computing from the 1930s on, as with equipment described below). Air defence evolution covered the areas of sensors and technical fire control, weapons, and command and control. At the start of the 20th century these were either very primitive or non-existent. Initially sensors were optical and acoustic devices, developed during World War I and continuing in use into the 1930s, but they were quickly superseded by radar, which in turn was supplemented by optronics in the 1980s.
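As a rough illustration of the prediction problem described in this section, the sketch below computes where a gun would have to aim so that an unguided projectile and a constant-velocity target arrive at the same point at the same time. It is a minimal geometric model only: gravity, drag, fuse behaviour and the possibility of the target changing course are all ignored, and every number is made up for the example.

```python
# Minimal sketch of the deflection ("aim-off") problem: find the future position of a
# constant-velocity target such that a constant-speed projectile fired now arrives
# there at the same moment. Gravity, drag and fuse behaviour are ignored; all numbers
# are illustrative and do not describe any real gun.
import math

def intercept_point(target_pos, target_vel, projectile_speed):
    """Return (aim_point, time_of_flight), or None if no intercept is possible."""
    px, py, pz = target_pos
    vx, vy, vz = target_vel
    # Solve |target_pos + target_vel * t| = projectile_speed * t for the smallest t > 0.
    a = vx * vx + vy * vy + vz * vz - projectile_speed ** 2
    b = 2.0 * (px * vx + py * vy + pz * vz)
    c = px * px + py * py + pz * pz
    if abs(a) < 1e-9:                     # target and projectile speeds are equal
        roots = [] if abs(b) < 1e-9 else [-c / b]
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None
        roots = [(-b - math.sqrt(disc)) / (2.0 * a), (-b + math.sqrt(disc)) / (2.0 * a)]
    times = [t for t in roots if t > 0]
    if not times:
        return None
    t = min(times)
    return (px + vx * t, py + vy * t, pz + vz * t), t

# Example: target 4 km away horizontally at 3 km altitude, flying at 100 m/s towards
# the gun; projectile averaging 500 m/s. The fuse would be set near the time of flight.
aim, tof = intercept_point((4000.0, 0.0, 3000.0), (-100.0, 0.0, 0.0), 500.0)
print(f"aim point {aim}, time of flight {tof:.1f} s")
```

A fire control system must solve this continuously as new observations arrive, which is what drove the move to the mechanical predictors and, later, the radar-fed computers described further on.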
Command and control remained primitive until the late 1930s, when Britain created an integrated system for ADGB that linked the ground-based air defence of the British Army's Anti-Aircraft Command, although field-deployed air defence relied on less sophisticated arrangements. NATO later called these arrangements an "air defence ground environment", defined as "the network of ground radar sites and command and control centres within a specific theatre of operations which are used for the tactical control of air defence operations".
Rules of Engagement are critical to prevent air defences engaging friendly or neutral aircraft. Their use is assisted, but not governed, by identification friend or foe (IFF) electronic devices originally introduced during the Second World War. While these rules originate at the highest authority, different rules can apply to different types of air defence covering the same area at the same time. AAAD usually operates under the tightest rules. NATO calls these rules Weapon Control Orders (WCO); they are:
Weapons free: weapons may be fired at any target not positively recognised as friendly.
Weapons tight: weapons may be fired only at targets recognised as hostile.
Weapons hold: weapons may only be fired in self-defence or in response to a formal order.
Until the 1950s, guns firing ballistic munitions were the standard weapon; guided missiles then became dominant, except at the very shortest ranges. However, the type of shell or warhead and its fuzing, and, with missiles, the guidance arrangement, were and are varied. Targets are not always easy to destroy; nonetheless, damaged aircraft may be forced to abort their mission and, even if they manage to return and land in friendly territory, may be out of action for days or permanently. Ignoring small arms and smaller machine-guns, ground-based air defence guns have varied in calibre from 20 mm to at least 152 mm.
Ground-based air defence is deployed in several ways:
Self-defence by ground forces using their organic weapons (AAAD).
Accompanying defence, specialist air defence elements accompanying armoured or infantry units.
Point defence around a key target, such as a bridge, critical government building or ship.
Area air defence, typically 'belts' of air defence to provide a barrier, but sometimes an umbrella covering an area. Areas can vary widely in size. They may extend along a nation's border, e.g. the Cold War MIM-23 Hawk and Nike belts that ran north–south across Germany, across a military formation's manoeuvre area, or above a city or port. In ground operations air defence areas may be used offensively by rapid redeployment across current aircraft transit routes.
Air defence has included other elements, although after the Second World War most fell into disuse:
Tethered barrage balloons to deter and threaten aircraft flying below the height of the balloons, where they are susceptible to damaging collisions with the steel tethers.
Searchlights to illuminate aircraft at night for both gun-layers and optical instrument operators; during World War II searchlights became radar controlled.
Large smoke screens created by large smoke canisters on the ground to screen targets and prevent accurate weapon aiming by aircraft.
Passive air defence is defined by NATO as "Passive measures taken for the physical defence and protection of personnel, essential installations and equipment in order to minimise the effectiveness of air and/or missile attack".
It remains a vital activity by ground forces and includes camouflage and concealment to avoid detection by reconnaissance and attacking aircraft. Measures such as camouflaging important buildings were common in the Second World War. During the Cold War the runways and taxiways of some airfields were painted green. Organization While navies are usually responsible for their own air defence, at least for ships at sea, organisational arrangements for land-based air defence vary between nations and over time. The most extreme case was the Soviet Union, and this model may still be followed in some countries: it was a separate service, on a par with the army, navy, or air force. In the Soviet Union this was called Voyska PVO, and had both fighter aircraft, separate from the air force, and ground-based systems. This was divided into two arms, PVO Strany, the Strategic Air defence Service responsible for Air Defence of the Homeland, created in 1941 and becoming an independent service in 1954, and PVO SV, Air Defence of the Ground Forces. Subsequently, these became part of the air force and ground forces respectively. At the other extreme the United States Army has an Air Defense Artillery Branch that provided ground-based air defence for both homeland and the army in the field, however it is operationally under the Joint Force Air Component Commander. Many other nations also deploy an air-defence branch in the army. Other nations, such as Japan or Israel, choose to integrate their ground based air defence systems into their air force. In Britain and some other armies, the single artillery branch has been responsible for both home and overseas ground-based air defence, although there was divided responsibility with the Royal Navy for air defence of the British Isles in World War I. However, during the Second World War the RAF Regiment was formed to protect airfields everywhere, and this included light air defences. In the later decades of the Cold War this included the United States Air Force's operating bases in UK. However, all ground-based air defence was removed from Royal Air Force (RAF) jurisdiction in 2004. The British Army's Anti-Aircraft Command was disbanded in March 1955, but during the 1960s and 1970s the RAF's Fighter Command operated long-range air-defence missiles to protect key areas in the UK. During World War II the Royal Marines also provided air defence units; formally part of the mobile naval base defence organisation, they were handled as an integral part of the army-commanded ground based air defences. The basic air defence unit is typically a battery with 2 to 12 guns or missile launchers and fire control elements. These batteries, particularly with guns, usually deploy in a small area, although batteries may be split; this is usual for some missile systems. SHORAD missile batteries often deploy across an area with individual launchers several kilometres apart. When MANPADS is operated by specialists, batteries may have several dozen teams deploying separately in small sections; self-propelled air defence guns may deploy in pairs. Batteries are usually grouped into battalions or equivalent. In the field army, a light gun or SHORAD battalion is often assigned to a manoeuvre division. Heavier guns and long-range missiles may be in air-defence brigades and come under corps or higher command. Homeland air defence may have a full military structure. For example, the UK's Anti-Aircraft Command, commanded by a full British Army general was part of ADGB. 
At its peak in 1941–42 it comprised three AA corps with 12 AA divisions between them. History Earliest use The use of balloons by the U.S. Army during the American Civil War compelled the Confederates to develop methods of combating them. These included the use of artillery, small arms, and saboteurs. They were unsuccessful, but internal politics led the United States Army's Balloon Corps to be disbanded mid-war. The Confederates experimented with balloons as well. Turks carried out the first ever anti-airplane operation in history during the Italo-Turkish war. Although lacking anti-aircraft weapons, they were the first to shoot down an aeroplane by rifle fire. The first aircraft to crash in a war was the one of Lieutenant Piero Manzini, shot down on August 25, 1912. The earliest known use of weapons specifically made for the anti-aircraft role occurred during the Franco-Prussian War of 1870. After the disaster at Sedan, Paris was besieged and French troops outside the city started an attempt at communication via balloon. Gustav Krupp mounted a modified 1-pounder (37mm) gun – the Ballonabwehrkanone (Balloon defence cannon) or BaK — on top of a horse-drawn carriage for the purpose of shooting down these balloons. By the early 20th century balloon, or airship, guns, for land and naval use were attracting attention. Various types of ammunition were proposed, high explosive, incendiary, bullet-chains, rod bullets and shrapnel. The need for some form of tracer or smoke trail was articulated. Fuzing options were also examined, both impact and time types. Mountings were generally pedestal type but could be on field platforms. Trials were underway in most countries in Europe but only Krupp, Erhardt, Vickers Maxim, and Schneider had published any information by 1910. Krupp's designs included adaptations of their 65 mm 9-pounder, a 75 mm 12-pounder, and even a 105 mm gun. Erhardt also had a 12-pounder, while Vickers Maxim offered a 3-pounder and Schneider a 47 mm. The French balloon gun appeared in 1910, it was an 11-pounder but mounted on a vehicle, with a total uncrewed weight of 2 tons. However, since balloons were slow moving, sights were simple. But the challenges of faster moving aeroplanes were recognised. By 1913 only France and Germany had developed field guns suitable for engaging balloons and aircraft and addressed issues of military organisation. Britain's Royal Navy would soon introduce the QF 3-inch and QF 4-inch AA guns and also had Vickers 1-pounder quick firing "pom-pom"s that could be used in various mountings. The first US anti-aircraft cannon was a 1-pounder concept design by Admiral Twining in 1911 to meet the perceived threat of airships, that eventually was used as the basis for the US Navy's first operational anti-aircraft cannon: the 3"/23 caliber gun. First World War On 30 September 1915, troops of the Serbian Army observed three enemy aircraft approaching Kragujevac. Soldiers fired at them with shotguns and machine-guns but failed to prevent them from dropping 45 bombs over the city, hitting military installations, the railway station and many other, mostly civilian, targets in the city. During the bombing raid, private Radoje Ljutovac fired his cannon at the enemy aircraft and successfully shot one down. It crashed in the city and both pilots died from their injuries. The cannon Ljutovac used was not designed as an anti-aircraft gun; it was a slightly modified Turkish cannon captured during the First Balkan War in 1912. 
This was the first occasion in military history that a military aircraft was shot down with ground-to-air fire. The British recognised the need for anti-aircraft capability a few weeks before World War I broke out; on 8 July 1914, the New York Times reported that the British government had decided to 'dot the coasts of the British Isles with a series of towers, each armed with two quick-firing guns of special design,' while 'a complete circle of towers' was to be built around 'naval installations' and 'at other especially vulnerable points.' By December 1914 the Royal Naval Volunteer Reserve (RNVR) was manning AA guns and searchlights assembled from various sources at some nine ports. The Royal Garrison Artillery (RGA) was given responsibility for AA defence in the field, using motorised two-gun sections. The first were formally formed in November 1914. Initially they used the QF 1-pounder "pom-pom" (a 37 mm version of the Maxim gun). All armies soon deployed AA guns, often based on their smaller field pieces, notably the French 75 mm and Russian 76.2 mm, typically simply propped up on some sort of embankment to get the muzzle pointed skyward. The British Army adopted the 13-pounder, quickly producing new mountings suitable for AA use; the 13-pdr QF 6 cwt Mk III was issued in 1915. It remained in service throughout the war, but 18-pdr guns were lined down to take the 13-pdr shell with a larger cartridge, producing the 13-pdr QF 9 cwt, and these proved much more satisfactory. However, in general, these ad hoc solutions proved largely useless. With little experience in the role, no means of measuring target range, height or speed, and the difficulty of observing their shell bursts relative to the target, gunners proved unable to get their fuse settings correct and most rounds burst well below their targets. The exception to this rule was the guns protecting spotting balloons, in which case the altitude could be accurately measured from the length of the cable holding the balloon. The first issue was ammunition. Before the war it was recognised that ammunition needed to explode in the air. Both high explosive (HE) and shrapnel were used, mostly the former. Airburst fuses were either igniferous (based on a burning fuse) or mechanical (clockwork). Igniferous fuses were not well suited for anti-aircraft use: the fuse length was determined by time of flight, but the burning rate of the gunpowder was affected by altitude. The British pom-poms had only contact-fused ammunition. Zeppelins, being hydrogen-filled balloons, were targets for incendiary shells, and the British introduced these with airburst fuses, both shrapnel-type (forward projection of incendiary 'pots') and base ejection of an incendiary stream. The British also fitted tracers to their shells for use at night. Smoke shells were also available for some AA guns; these bursts were used as targets during training. German air attacks on the British Isles increased in 1915 and the AA efforts were deemed somewhat ineffective, so a Royal Navy gunnery expert, Admiral Sir Percy Scott, was appointed to make improvements, particularly an integrated AA defence for London. The air defences were expanded with more RNVR AA guns, 75 mm and 3-inch, the pom-poms being ineffective. The naval 3-inch, the QF 3-inch 20 cwt (76 mm), was also adopted by the army, and a new field mounting was introduced in 1916. Since most attacks were at night, searchlights were soon used, and acoustic methods of detection and locating were developed.
By December 1916 there were 183 AA Sections defending Britain (most with the 3-inch), 74 with the BEF in France and 10 in the Middle East. AA gunnery was a difficult business. The problem was of successfully aiming a shell to burst close to its target's future position, with various factors affecting the shells' predicted trajectory. This was called deflection gun-laying, where 'off-set' angles for range and elevation were set on the gunsight and updated as their target moved. In this method, when the sights were on the target, the barrel was pointed at the target's future position. Range and height of the target determined fuse length. The difficulties increased as aircraft performance improved. The British dealt with range measurement first, when it was realised that range was the key to producing a better fuse setting. This led to the Height/Range Finder (HRF), the first model being the Barr & Stroud UB2, a 2-metre optical coincident rangefinder mounted on a tripod. It measured the distance to the target and the elevation angle, which together gave the height of the aircraft. These were complex instruments and various other methods were also used. The HRF was soon joined by the Height/Fuse Indicator (HFI), this was marked with elevation angles and height lines overlaid with fuse length curves, using the height reported by the HRF operator, the necessary fuse length could be read off. However, the problem of deflection settings — 'aim-off' — required knowing the rate of change in the target's position. Both France and the UK introduced tachymetric devices to track targets and produce vertical and horizontal deflection angles. The French Brocq system was electrical; the operator entered the target range and had displays at guns; it was used with their 75 mm. The British Wilson-Dalby gun director used a pair of trackers and mechanical tachymetry; the operator entered the fuse length, and deflection angles were read from the instruments. By the start of World War I, the 77 mm had become the standard German weapon, and came mounted on a large traverse that could be easily transported on a wagon. Krupp 75 mm guns were supplied with an optical sighting system that improved their capabilities. The German Army also adapted a revolving cannon that came to be known to Allied fliers as the "flaming onion" from the shells in flight. This gun had five barrels that quickly launched a series of 37 mm artillery shells. As aircraft started to be used against ground targets on the battlefield, the AA guns could not be traversed quickly enough at close targets and, being relatively few, were not always in the right place (and were often unpopular with other troops), so changed positions frequently. Soon the forces were adding various machine-gun based weapons mounted on poles. These short-range weapons proved more deadly, and the "Red Baron" is believed to have been shot down by an anti-aircraft Vickers machine gun. When the war ended, it was clear that the increasing capabilities of aircraft would require better means of acquiring targets and aiming at them. Nevertheless, a pattern had been set: anti-aircraft warfare would employ heavy weapons to attack high-altitude targets and lighter weapons for use when aircraft came to lower altitudes. Interwar years World War I demonstrated that aircraft could be an important part of the battlefield, but in some nations it was the prospect of strategic air attack that was the main issue, presenting both a threat and an opportunity. 
The experience of four years of air attacks on London by Zeppelins and Gotha G.V bombers had particularly influenced the British and was one of the main drivers, if not the main driver, for forming an independent air force. As the capabilities of aircraft and their engines improved, it was clear that their role in future war would be even more critical as their range and weapon load grew. However, in the years immediately after World War I, the prospect of another major war seemed remote, particularly in Europe, where the most militarily capable nations were, and little financing was available. Four years of war had seen the creation of a new and technically demanding branch of military activity. Air defence had made huge advances, albeit from a very low starting point. However, it was new and often lacked influential 'friends' in the competition for a share of limited defence budgets. Demobilisation meant that most AA guns were taken out of service, leaving only the most modern. However, there were lessons to be learned, particularly by the British, who had had AA guns in action in daylight in most theatres and had used them against night attacks at home. Furthermore, they had also formed an Anti-Aircraft Experimental Section during the war and accumulated large amounts of data that was subjected to extensive analysis. As a result, they published, in 1924–1925, the two-volume Textbook of Anti-Aircraft Gunnery. It included five key recommendations for HAA equipment:
Shells of improved ballistic shape with HE fillings and mechanical time fuses.
Higher rates of fire assisted by automation.
Height finding by long-base optical instruments.
Centralised control of fire on each gun position, directed by tachymetric instruments incorporating the facility to apply corrections of the moment for meteorological and wear factors.
More accurate sound-location for the direction of searchlights and to provide plots for barrage fire.
Two assumptions underpinned the British approach to HAA fire: first, that aimed fire was the primary method, enabled by predicting gun data from visually tracking the target and knowing its height; and second, that the target would maintain a steady course, speed and height. This HAA was to engage targets up to 24,000 feet. Mechanical, as opposed to igniferous, time fuses were required because the speed of powder burning varied with height, so fuse length was not a simple function of time of flight. Automated fire ensured a constant rate of fire that made it easier to predict where each shell should be individually aimed. In 1925 the British adopted a new instrument developed by Vickers. It was a mechanical analogue computer, the Predictor AA No 1. Given the target height, its operators tracked the target and the predictor produced bearing, quadrant elevation and fuse setting. These were passed electrically to the guns, where they were displayed on repeater dials to the layers, who 'matched pointers' (target data and the gun's actual data) to lay the guns. This system of repeater electrical dials built on the arrangements introduced by British coast artillery in the 1880s, and coast artillery was the background of many AA officers. Similar systems were adopted in other countries; for example, the later Sperry device, designated M3A3 in the US, was also used by Britain as the Predictor AA No 2.
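The height/range finder relationship exploited by instruments such as the HRF is simple trigonometry: given the slant range from the instrument to the target and the elevation angle of the line of sight, the target's height and horizontal range follow directly. The sketch below uses illustrative numbers only and ignores earth curvature, refraction and the height of the instrument itself.

```python
# Illustrative only: the trigonometry behind an optical height/range finder.
import math

def height_and_ground_range(slant_range_m, elevation_deg):
    e = math.radians(elevation_deg)
    height = slant_range_m * math.sin(e)        # vertical component of the slant range
    ground_range = slant_range_m * math.cos(e)  # horizontal component
    return height, ground_range

# Example: 8,000 m slant range at 30 degrees elevation gives a height of about
# 4,000 m and a ground range of about 6,930 m.
h, gr = height_and_ground_range(8000.0, 30.0)
print(f"height {h:.0f} m, ground range {gr:.0f} m")
```

This measured height, together with the tracked bearing and elevation, is exactly the input the predictors described above needed in order to compute gun data and a fuse setting.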
Height finders were also increasing in size, in Britain, the World War I Barr & Stroud UB 2 (7-foot optical base) was replaced by the UB 7 (9-foot optical base) and the UB 10 (18-foot optical base, only used on static AA sites). Goertz in Germany and Levallois in France produced 5-metre instruments. However, in most countries the main effort in HAA guns until the mid-1930s was improving existing ones, although various new designs were on drawing boards. From the early 1930s eight countries developed radar; these developments were sufficiently advanced by the late 1930s for development work on sound-locating acoustic devices to be generally halted, although equipment was retained. Furthermore, in Britain the volunteer Observer Corps formed in 1925 provided a network of observation posts to report hostile aircraft flying over Britain. Initially radar was used for airspace surveillance to detect approaching hostile aircraft. However, the German Würzburg radar was capable of providing data suitable for controlling AA guns, and the British AA No 1 Mk 1 GL radar was designed to be used on AA gun positions. The Treaty of Versailles prevented Germany having AA weapons, and for example, the Krupps designers joined Bofors in Sweden. Some World War I guns were retained and some covert AA training started in the late 1920s. Germany introduced the 8.8 cm FlaK 18 in 1933, 36 and 37 models followed with various improvements, but ballistic performance was unchanged. In the late 1930s the 10.5 cm FlaK 38 appeared, soon followed by the 39; this was designed primarily for static sites but had a mobile mounting, and the unit had 220 V 24 kW generators. In 1938 design started on the 12.8 cm FlaK. The USSR introduced a new 76 mm M1931 in the early 1930s and an 85 mm M1938 towards the end of the decade. Britain had successfully tested a new HAA gun, 3.6-inch, in 1918. In 1928 3.7-inch became the preferred solution, but it took 6 years to gain funding. Production of the QF 3.7-inch (94 mm) began in 1937; this gun was used on mobile carriages with the field army and transportable guns on fixed mountings for static positions. At the same time the Royal Navy adopted a new 4.5-inch (114 mm) gun in a twin turret, which the army adopted in simplified single-gun mountings for static positions, mostly around ports where naval ammunition was available. The performance of the new guns was limited by their standard fuse No 199, with a 30-second running time, although a new mechanical time fuse giving 43 seconds was nearing readiness. In 1939 a Machine Fuse Setter was introduced to eliminate manual fuse setting. The US ended World War I with two 3-inch AA guns and improvements were developed throughout the inter-war period. However, in 1924 work started on a new 105 mm static mounting AA gun, but only a few were produced by the mid-1930s because by this time work had started on the 90 mm AA gun, with mobile carriages and static mountings able to engage air, sea and ground targets. The M1 version was approved in 1940. During the 1920s there was some work on a 4.7-inch which lapsed, but revived in 1937, leading to a new gun in 1944. While HAA and its associated target acquisition and fire control was the primary focus of AA efforts, low-level close-range targets remained and by the mid-1930s were becoming an issue. Until this time the British, at RAF insistence, continued their use of World War I machine guns, and introduced twin MG mountings for AAAD. The army was forbidden from considering anything larger than .50-inch. 
However, in 1935 their trials showed that the minimum effective round was an impact-fused 2 lb HE shell. The following year they decided to adopt the Bofors 40 mm and a twin barrel Vickers 2-pdr (40 mm) on a modified naval mount. The air-cooled Bofors was vastly superior for land use, being much lighter than the water-cooled pom-pom, and UK production of the Bofors 40 mm was licensed. The Predictor AA No 3, as the Kerrison Predictor was officially known, was introduced with it. The 40 mm Bofors had become available in 1931. In the late 1920s the Swedish Navy had ordered the development of a 40 mm naval anti-aircraft gun from the Bofors company. It was light, rapid-firing and reliable, and a mobile version on a four-wheel carriage was soon developed. Known simply as the 40 mm, it was adopted by some 17 different nations just before World War II and is still in use today in some applications such as on coastguard frigates. Rheinmetall in Germany developed an automatic 20 mm in the 1920s and Oerlikon in Switzerland had acquired the patent to an automatic 20 mm gun designed in Germany during World War I. Germany introduced the rapid-fire 2 cm FlaK 30 and later in the decade it was redesigned by Mauser-Werke and became the 2 cm FlaK 38. Nevertheless, while 20 mm was better than a machine gun and mounted on a very small trailer made it easy to move, its effectiveness was limited. Germany therefore added a 3.7 cm. The first, the 3.7 cm FlaK 18 developed by Rheinmetall in the early 1930s, was basically an enlarged 2 cm FlaK 30. It was introduced in 1935 and production stopped the following year. A redesigned gun 3.7 cm FlaK 36 entered service in 1938, it too had a two-wheel carriage. However, by the mid-1930s the Luftwaffe realised that there was still a coverage gap between 3.7 cm and 8.8 cm guns. They started development of a 5 cm gun on a four-wheel carriage. After World War I the US Army started developing a dual-role (AA/ground) automatic 37 mm cannon, designed by John M. Browning. It was standardised in 1927 as the T9 AA cannon, but trials quickly revealed that it was worthless in the ground role. However, while the shell was a bit light (well under 2 lbs) it had a good effective ceiling and fired 125 rounds per minute; an AA carriage was developed and it entered service in 1939. The Browning 37 mm proved prone to jamming, and was eventually replaced in AA units by the Bofors 40 mm. The Bofors had attracted attention from the US Navy, but none were acquired before 1939. Also, in 1931 the US Army worked on a mobile anti-aircraft machine mount on the back of a heavy truck having four .30 calibre water-cooled machine guns and an optical director. It proved unsuccessful and was abandoned. The Soviet Union also used a 37 mm, the 37 mm M1939, which appears to have been copied from the Bofors 40 mm. A Bofors 25 mm, essentially a scaled down 40 mm, was also copied as the 25 mm M1939. During the 1930s solid-fuel rockets were under development in the Soviet Union and Britain. In Britain the interest was for anti-aircraft fire, it quickly became clear that guidance would be required for precision. However, rockets, or 'unrotated projectiles' as they were called, could be used for anti-aircraft barrages. A 2-inch rocket using HE or wire obstacle warheads was introduced first to deal with low-level or dive bombing attacks on smaller targets such as airfields. The 3-inch was in development at the end of the inter-war period. 
Naval aspects
World War I had been a war in which air warfare blossomed but had not matured to the point of being a real threat to naval forces. The prevailing assumption was that a few relatively small-calibre naval guns could manage to keep enemy aircraft beyond a range where harm might be expected. In 1939 radio-controlled drones became available to the US Navy in quantity, allowing more realistic testing of existing anti-aircraft suites against actual flying and manoeuvring targets. The results were sobering to an unexpected degree. The United States was still emerging from the effects of the Great Depression, and funds for the military had been so sparse that 50% of the shells used were still powder-fused. The US Navy found that a significant portion of its shells were duds or low-order detonations (incomplete detonation of the explosive contained by the shell). Virtually every major country involved in combat in World War II invested in aircraft development; the cost of aircraft research and development was small and the results could be large. So rapid were the performance leaps of evolving aircraft that the British HACS (High Angle Control System) was obsolete, and designing a successor proved very difficult for the British establishment. Electronics would prove to be an enabler for effective anti-aircraft systems, and both the US and Great Britain had a growing electronics industry. Radio-controlled drones also allowed existing systems in British and American service to be tested against realistic targets, and the results were disappointing by any measure. High-level manoeuvring drones were virtually immune to shipboard AA systems. The US drones could simulate dive bombing, which showed the dire need for autocannons. Japan introduced powered gliders as drones in 1940, but these apparently could not simulate dive bombing. There is no evidence of other powers using drones in this application at all, which may have caused a major underestimation of the threat and an inflated view of their own AA systems.
Second World War
Poland's AA defences were no match for the German attack and the situation was similar in other European countries. Significant AA warfare started with the Battle of Britain in the summer of 1940. QF 3.7-inch AA guns provided the backbone of the ground-based AA defences, although initially significant numbers of QF 3-inch 20 cwt were also used. The Army's Anti-Aircraft Command, which was under the command of the Air Defence UK organisation, grew to 12 AA divisions in 3 AA corps. Bofors 40 mm guns entered service in increasing numbers. In addition, the RAF Regiment was formed in 1941 with responsibility for airfield air defence, eventually with the Bofors 40 mm as its main armament. Fixed AA defences, using HAA and LAA, were established by the Army in key overseas places, notably Malta, the Suez Canal and Singapore. While the 3.7-inch was the main HAA gun in fixed defences and the only mobile HAA gun with the field army, the QF 4.5-inch gun, manned by artillery, was used in the vicinity of naval ports and made use of the naval ammunition supply. The 4.5-inch at Singapore had the first success in shooting down Japanese bombers. Mid-war, QF 5.25-inch naval guns started being emplaced in some permanent sites around London. This gun was also deployed in dual-role coast defence/AA positions. Germany's high-altitude needs were originally going to be filled by a 75 mm gun from Krupp, designed in collaboration with their Swedish counterpart Bofors, but the specifications were later amended to require much higher performance.
In response Krupp's engineers presented a new 88 mm design, the FlaK 36. First used in Spain during the Spanish Civil War, the gun proved to be one of the best anti-aircraft guns in the world, as well as particularly deadly against light, medium, and even early heavy tanks. After the Dambusters raid in 1943 an entirely new system was developed that was required to knock down any low-flying aircraft with a single hit. The first attempt to produce such a system used a 50 mm gun, but this proved inaccurate and a new 55 mm gun replaced it. The system used a centralised control system including both search and targeting radar, which calculated the aim point for the guns after considering windage and ballistics, and then sent electrical commands to the guns, which used hydraulics to point themselves at high speeds. Operators simply fed the guns and selected the targets. This system, modern even by today's standards, was in late development when the war ended. The British had already arranged licence building of the Bofors 40 mm, and introduced these into service. These had the power to knock down aircraft of any size, yet were light enough to be mobile and easily swung. The gun became so important to the British war effort that they even produced a movie, The Gun, that encouraged workers on the assembly line to work harder. The Imperial measurement production drawings the British had developed were supplied to the Americans who produced their own (unlicensed) copy of the 40 mm at the start of the war, moving to licensed production in mid-1941. Service trials demonstrated another problem however: that ranging and tracking the new high-speed targets was almost impossible. At short range, the apparent target area is relatively large, the trajectory is flat and the time of flight is short, allowing to correct lead by watching the tracers. At long range, the aircraft remains in firing range for a long time, so the necessary calculations can, in theory, be done by slide rules—though, because small errors in distance cause large errors in shell fall height and detonation time, exact ranging is crucial. For the ranges and speeds that the Bofors worked at, neither answer was good enough. The solution was automation, in the form of a mechanical computer, the Kerrison Predictor. Operators kept it pointed at the target, and the Predictor then calculated the proper aim point automatically and displayed it as a pointer mounted on the gun. The gun operators simply followed the pointer and loaded the shells. The Kerrison was fairly simple, but it pointed the way to future generations that incorporated radar, first for ranging and later for tracking. Similar predictor systems were introduced by Germany during the war, also adding radar ranging as the war progressed. A plethora of anti-aircraft gun systems of smaller calibre was available to the German Wehrmacht combined forces, and among them the 1940-origin Flakvierling quadruple-20 mm-autocannon-based anti-aircraft weapon system was one of the most often-seen weapons, seeing service on both land and sea. The similar Allied smaller-calibre air-defence weapons of the American forces were also quite capable, although they receive little attention. 
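The ranging difficulty discussed above, that small errors in distance cause large errors in burst height and detonation time, can be illustrated with rough numbers. The sketch below assumes a constant average shell speed and a constant-speed target; every figure is made up and refers to no particular gun.

```python
# Rough illustration of why ranging errors mattered so much for time-fused AA fire.
# A range error shifts the fuse time, and therefore the burst point along the
# trajectory, by roughly the same distance, which easily exceeds the lethal radius
# of the shell. All figures are illustrative; drag, gravity and trajectory curvature
# are ignored.
avg_shell_speed = 500.0      # m/s, assumed average over the engagement
target_speed = 140.0         # m/s, assumed
lethal_radius = 30.0         # m, assumed effective burst radius

range_error = 200.0                                   # m, error in the estimated range
fuse_time_error = range_error / avg_shell_speed       # s, resulting error in the fuse setting
burst_offset = range_error + target_speed * fuse_time_error   # m, rough miss distance

print(f"fuse time error {fuse_time_error:.2f} s, burst about {burst_offset:.0f} m from the target")
print("burst within lethal radius:", burst_offset <= lethal_radius)
```

With numbers of this order the burst lands hundreds of metres from the target, which is why automatic prediction and, later, radar ranging made such a difference.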
American needs could be met with smaller-calibre ordnance beyond the usual singly mounted M2 .50 caliber machine gun atop a tank's turret: four of the ground-used "heavy barrel" (M2HB) guns were mounted together on the American Maxson firm's M45 Quadmount (a direct answer to the Flakvierling), which was often carried on the back of a half-track to form the M16 Gun Motor Carriage anti-aircraft half-track. Although less powerful than Germany's 20 mm systems, the typical four or five combat batteries of an Army AAA battalion were often spread many kilometres apart, rapidly attaching to and detaching from larger ground combat units to provide welcome defence against enemy aircraft. AAA battalions were also used to help suppress ground targets. Their larger 90 mm M3 gun would prove, as did the eighty-eight, to make an excellent anti-tank gun as well, and was widely used late in the war in this role. Also available to the Americans at the start of the war was the 120 mm M1 "stratosphere gun", the most powerful AA gun of the period with an impressive altitude capability; however, no 120 mm M1 was ever fired at an enemy aircraft. The 90 mm and 120 mm guns would continue to be used into the 1950s.
The United States Navy had also put some thought into the problem. When the US Navy began to rearm in 1939, the primary short-range gun in many ships was the M2 .50 caliber machine gun. While effective in fighters at 300 to 400 yards, this is point-blank range in naval anti-aircraft terms. Production of the Swiss Oerlikon 20 mm had already started to provide protection for the British, and this gun was adopted in place of the M2 machine guns. In the December 1941 to January 1942 time frame, production had risen not only to cover all British requirements but also to allow 812 units to be delivered to the US Navy. By the end of 1942 the 20 mm had accounted for 42% of all aircraft destroyed by the US Navy's shipboard AA. However, the King Board had noted that the balance was shifting towards the larger guns used by the fleet. The US Navy had intended to use the British pom-pom; however, the weapon required the use of cordite, which the Bureau of Ordnance (BuOrd) had found objectionable for US service, and further investigation revealed that US powders would not work in the pom-pom. The Bureau of Ordnance was well aware of the Bofors 40 mm gun. The firm York Safe and Lock was negotiating with Bofors to obtain the rights to the air-cooled version of the weapon. At the same time Henry Howard, an engineer and businessman, became aware of it and contacted Rear Admiral W. R. Furlong, Chief of the Bureau of Ordnance, who ordered the Bofors weapon system to be investigated. York Safe and Lock would be used as the contracting agent. The system had to be redesigned for both the English measurement system and mass production, as the original documents recommended hand filing and drilling to shape. As early as 1928 the US Navy had seen the need to replace the .50 caliber machine gun with something heavier, and the 1.1"/75 (28 mm) Mark 1 was designed. Placed in quadruple mounts with a 500 rpm rate of fire, it would have fitted the requirement, but the gun suffered teething issues and was prone to jamming. While this could have been solved, the weight of the system was equal to that of the quadruple-mount Bofors 40 mm while lacking the range and power that the Bofors provided. The gun was relegated to smaller, less vital ships by the end of the war. The 5"/38 naval gun rounded out the US Navy's AA suite.
A dual-purpose mount, it was used in both the surface and AA roles with great success. Mated with the Mark 37 director and the proximity fuse, it could routinely knock drones out of the sky at ranges as far as 13,000 yards. A 3"/50 Mark 22 semiautomatic dual-purpose gun was produced but not employed before the end of the war. However, early marks of the 3"/50 were employed in destroyer escorts and on merchant ships. 3"/50 caliber guns (Marks 10, 17, 18, and 20) first entered service in 1915 as a refit, and were subsequently mounted on many types of ships as the need for anti-aircraft protection was recognized. During World War II, they were the primary gun armament on destroyer escorts, patrol frigates, submarine chasers, minesweepers, some fleet submarines, and other auxiliary vessels, and were used as a secondary dual-purpose battery on some other types of ships, including some older battleships. They also replaced the original low-angle 4"/50 caliber guns (Mark 9) on "flush-deck" destroyers to provide better anti-aircraft protection. The gun was also used on specialist destroyer conversions; the "AVD" seaplane tender conversions received two guns; the "APD" high-speed transports, "DM" minelayers, and "DMS" minesweeper conversions received three guns, and those retaining destroyer classification received six.
The Germans developed massive reinforced-concrete blockhouses, some more than six stories high, which were known as Hochbunker ("high bunkers") or Flaktürme (flak towers), on which they placed anti-aircraft artillery. Those in cities attacked by the Allied land forces became fortresses. Several in Berlin were some of the last buildings to fall to the Soviets during the Battle of Berlin in 1945. The British built structures such as the Maunsell Forts in the North Sea, the Thames Estuary and other tidal areas, upon which they based guns. After the war most were left to rot. Some were outside territorial waters, and had a second life in the 1960s as platforms for pirate radio stations, while another became the base of a micronation, the Principality of Sealand.
Some nations started rocket research before World War II, including for anti-aircraft use, and further research started during the war. The first step was unguided rocket systems like the British 2-inch RP and the 3-inch RP, which was fired in large numbers from Z batteries and was also fitted to warships. The firing of one of these devices during an air raid is suspected to have caused the Bethnal Green disaster in 1943. Facing the threat of Japanese kamikaze attacks, the British and the US developed surface-to-air missiles like the British Stooge and the American Lark as countermeasures, but none of them was ready by the end of the war. German missile research was the most advanced of the war, as Germany put considerable effort into the research and development of rocket systems for all purposes. Among them were several guided and unguided systems. Unguided systems included the Fliegerfaust (literally "aircraft fist"), the first MANPADS. Guided systems included several sophisticated radio-, wire-, or radar-guided missiles like the Wasserfall ("waterfall") rocket. Owing to Germany's deteriorating war situation, all of these systems were produced only in small numbers and most of them were used only by training or trial units.
Another aspect of anti-aircraft defence was the use of barrage balloons to act as a physical obstacle, initially to bomber aircraft over cities and later to ground-attack aircraft over the Normandy invasion fleets. The balloon, a simple blimp tethered to the ground, worked in two ways. Firstly, it and the steel cable were a danger to any aircraft that tried to fly among them. Secondly, to avoid the balloons, bombers had to fly at a higher altitude, which was more favourable for the guns. Barrage balloons were limited in application and had minimal success at bringing down aircraft, being largely immobile and passive defences. The Allies' most advanced technologies were showcased by the anti-aircraft defence against the German V-1 cruise missiles (V stands for Vergeltungswaffe, "retaliation weapon"). The 419th and 601st Anti-aircraft Gun Battalions of the US Army were first allocated to the Folkestone–Dover coast to defend London, and then moved to Belgium to become part of the "Antwerp X" project coordinated from Keerbergen. With the liberation of Antwerp, the port city immediately became the highest-priority target and received the largest number of V-1 and V-2 missiles of any city. The smallest tactical unit of the operation was a gun battery consisting of four 90 mm guns firing shells equipped with a radio proximity fuse. Incoming targets were acquired and automatically tracked by SCR-584 radar, developed at the MIT Rad Lab. Output from the gun-laying radar was fed to the M-9 director, an electronic analogue computer developed at Bell Laboratories to calculate the lead and elevation corrections for the guns. With the help of these three technologies, close to 90% of the V-1 missiles on track for the defence zone around the port were destroyed. Post-war Post-war analysis demonstrated that even with the newest anti-aircraft systems employed by both sides, the vast majority of bombers reached their targets successfully, on the order of 90%. While these figures were undesirable during the war, the advent of the nuclear bomb considerably altered the acceptability of even a single bomber reaching its target. The developments of World War II continued for a short time into the post-war period as well. In particular, the U.S. Army set up a huge air defence network around its larger cities based on radar-guided 90 mm and 120 mm guns. US efforts continued into the 1950s with the 75 mm Skysweeper system, an almost fully automated system comprising radar, computers, power supply, and an auto-loading gun on a single powered platform. The Skysweeper replaced all smaller guns then in use in the Army, notably the 40 mm Bofors. By 1955, the US military deemed the 40 mm Bofors obsolete due to its reduced capability against jet-powered aircraft and turned to SAM development, with the Nike Ajax and the RSD-58. In Europe, NATO's Allied Command Europe developed an integrated air defence system, the NATO Air Defence Ground Environment (NADGE), which later became the NATO Integrated Air Defence System. The introduction of the guided missile resulted in a significant shift in anti-aircraft strategy. Although Germany had been desperate to introduce anti-aircraft missile systems, none became operational during World War II. Following several years of post-war development, however, these systems began to mature into viable weapons. The US started an upgrade of its defences using the Nike Ajax missile, and soon the larger anti-aircraft guns disappeared. 
The same thing occurred in the USSR after the introduction of its SA-2 Guideline systems. As this process continued, the missile found itself being used for more and more of the roles formerly filled by guns. First to go were the large weapons, replaced by equally large missile systems of much higher performance. Smaller missiles soon followed, eventually becoming small enough to be mounted on armoured cars and tank chassis. These started replacing, or at least supplementing, similar gun-based SPAAG systems in the 1960s, and by the 1990s had replaced almost all such systems in modern armies. Man-portable missiles, MANPADS as they are known today, were introduced in the 1960s and have supplemented or replaced even the smallest guns in most advanced armies. In the 1982 Falklands War, the Argentine armed forces deployed the newest Western European weapons, including the Oerlikon GDF-002 35 mm twin cannon and the Roland SAM. The Rapier missile system was the primary GBAD system, used by both the British artillery and the RAF Regiment, while a few brand-new FIM-92 Stingers were used by British special forces. Both sides also used the Blowpipe missile. British naval missiles used included the longer-range Sea Dart and the older Sea Slug, along with the short-range Sea Cat and the new Sea Wolf. Machine guns in AA mountings were used both ashore and afloat. During the 2008 South Ossetia war, air power faced off against powerful SAM systems, such as the 1980s-era Buk-M1. In February 2018, an Israeli F-16 fighter was downed over the occupied Golan Heights after it had attacked an Iranian target in Syria. In 2006, Israel also lost a helicopter over Lebanon, shot down by a Hezbollah rocket. AA warfare systems Although the firearms used by infantry, particularly machine guns, can be used to engage low-altitude air targets, on occasion with notable success, their effectiveness is generally limited and the muzzle flashes reveal infantry positions. The speed and altitude of modern jet aircraft limit target opportunities, and critical systems may be armoured in aircraft designed for the ground-attack role. Adaptations of the standard autocannon, originally intended for air-to-ground use, and heavier artillery systems were commonly used for most anti-aircraft gunnery, starting with standard pieces on new mountings and evolving to specially designed guns with much higher performance prior to World War II. The ammunition and shells fired by these weapons are usually fitted with different types of fuses (barometric, time-delay, or proximity) to explode close to the airborne target, releasing a shower of fast metal fragments. For shorter-range work, a lighter weapon with a higher rate of fire is required to increase the hit probability against a fast airborne target. Weapons between 20 mm and 40 mm calibre have been widely used in this role. Smaller weapons, typically of .50 calibre or even 8 mm rifle calibre, have been used in the smallest mounts. Unlike the heavier guns, these smaller weapons remain in widespread use because of their low cost and their ability to quickly track the target. Classic examples of autocannons and large-calibre guns are the 40 mm autocannon designed by Bofors and the 8.8 cm FlaK 18/36 gun designed by Krupp. Artillery weapons of this sort have for the most part been superseded by the effective surface-to-air missile systems introduced in the 1950s, although they are still retained by many nations. 
The development of surface-to-air missiles began in Nazi Germany late in World War II with missiles such as the Wasserfall, though no working system was deployed before the war's end; these represented new attempts to increase the effectiveness of anti-aircraft systems in the face of the growing threat from bombers. Land-based SAMs can be deployed from fixed installations or mobile launchers, either wheeled or tracked. The tracked vehicles are usually armoured vehicles specifically designed to carry SAMs. Larger SAMs may be deployed in fixed launchers, but can be towed and re-deployed at will. SAMs launched by individuals are known in the United States as Man-Portable Air Defence Systems (MANPADS). MANPADS of the former Soviet Union have been exported around the world and can be found in use by many armed forces. Targets for non-MANPADS SAMs are usually acquired by air-search radar, then tracked before or while a SAM is "locked on" and then fired. Potential targets, if they are military aircraft, are identified as friend or foe before being engaged. The latest, relatively cheap short-range missiles have begun to replace autocannons in this role. The interceptor aircraft (or simply interceptor) is a type of fighter aircraft designed specifically to intercept and destroy enemy aircraft, particularly bombers, usually relying on high speed and altitude capabilities. A number of jet interceptors such as the F-102 Delta Dagger, the F-106 Delta Dart, and the MiG-25 were built in the period starting after the end of World War II and ending in the late 1960s, when they became less important due to the shifting of the strategic bombing role to ICBMs. Invariably the type is differentiated from other fighter aircraft designs by higher speeds and shorter operating ranges, as well as much reduced ordnance payloads. Radar systems use electromagnetic waves to identify the range, altitude, direction, or speed of aircraft and weather formations to provide tactical and operational warning and direction, primarily during defensive operations. In their functional roles they provide target search, threat detection, guidance, reconnaissance, navigation, instrumentation, and weather-reporting support to combat operations. Anti-UAV defences An Anti-UAV Defence System (AUDS) is a system for defence against military unmanned aerial vehicles. A variety of designs have been developed, using lasers, net guns and air-to-air netting, signal jamming, and hijacking by means of in-flight hacking. Anti-UAV defence systems have been deployed against ISIL drones during the Battle of Mosul (2016–2017). Alternative approaches for dealing with UAVs have included using a shotgun at close range and, for smaller drones, training eagles to snatch them from the air. These approaches, however, work only on relatively small UAVs and loitering munitions (also called "suicide drones"); larger UCAVs such as the MQ-1 Predator can be (and frequently are) shot down like manned aircraft of similar sizes and flight profiles. Future developments Guns are being increasingly pushed into specialist roles, such as the Dutch Goalkeeper CIWS, which uses the GAU-8 Avenger 30 mm seven-barrel Gatling gun for last-ditch anti-missile and anti-aircraft defence. Even this formerly front-line weapon is currently being replaced by new missile systems, such as the RIM-116 Rolling Airframe Missile, which is smaller, faster, and allows for mid-flight course correction (guidance) to ensure a hit. 
To bridge the gap between guns and missiles, Russia in particular produces the Kashtan CIWS, which uses both guns and missiles for final defence: two six-barrelled 30 mm GSh-6-30 Gatling guns and eight 9M311 surface-to-air missiles provide its defensive capability. Upsetting this development towards all-missile systems is the current move to stealth aircraft. Long-range missiles depend on long-range detection to provide significant lead. Stealth designs cut detection ranges so much that the aircraft is often never even seen, and when it is, it is often too late for an intercept. Systems for detection and tracking of stealthy aircraft are a major problem for anti-aircraft development. However, as stealth technology grows, so does anti-stealth technology. Multiple-transmitter radars, such as bistatic radars, and low-frequency radars are said to have the capability to detect stealth aircraft. Advanced forms of thermographic camera, such as those incorporating QWIPs, would be able to optically see a stealth aircraft regardless of its radar cross-section (RCS). In addition, side-looking radars, high-powered optical satellites, and sky-scanning, high-aperture, high-sensitivity radars such as radio telescopes would all be able to narrow down the location of a stealth aircraft under certain parameters. The newest SAMs have a claimed ability to detect and engage stealth targets, the most notable being the Russian S-400, which is claimed to be able to detect a target with a 0.05-square-metre RCS from 90 km away. Another potential weapon system for anti-aircraft use is the laser. Although air planners have imagined lasers in combat since the late 1960s, only the most modern laser systems are currently reaching what could be considered "experimental usefulness". In particular, the Tactical High Energy Laser can be used in the anti-aircraft and anti-missile role. The future of projectile-based weapons may be found in the railgun. Tests are currently underway on systems that could create as much damage as a Tomahawk missile, but at a fraction of the cost. In February 2008 the US Navy tested a railgun that fired a shell using 10 megajoules of energy. Its expected performance is a very high muzzle velocity, accurate enough to hit a 5-metre target at long range while shooting 10 shots per minute. It is expected to be ready in 2020 to 2025. These systems, while currently designed for static targets, would only need the ability to be retargeted to become the next generation of AA system. Force structures Most Western and Commonwealth militaries integrate air defence purely within the traditional services of the military (i.e. army, navy and air force), as a separate arm or as part of artillery. In the British Army, for instance, air defence is part of the artillery arm, while in the Pakistan Army it was split off from the artillery to form a separate arm of its own in 1990. This is in contrast to some (largely communist or ex-communist) countries, where not only are there provisions for air defence in the army, navy and air force but there are also specific branches that deal only with the air defence of territory, for example the Soviet PVO Strany. The USSR also had a separate strategic rocket force in charge of nuclear intercontinental ballistic missiles. 
Navy Smaller boats and ships typically carry machine guns or fast cannons, which can often be deadly to low-flying aircraft if linked to a radar-directed fire-control system for point defence. Some vessels, such as Aegis-equipped destroyers and cruisers, are as much a threat to aircraft as any land-based air defence system. In general, naval vessels should be treated with respect by aircraft; however, the reverse is equally true. Carrier battle groups are especially well defended, as not only do they typically consist of many vessels with heavy air defence armament but they are also able to launch fighter jets for combat air patrol overhead to intercept incoming airborne threats. Nations such as Japan use their SAM-equipped vessels to create an outer air defence perimeter and radar picket in the defence of their home islands, and the United States also uses its Aegis-equipped ships as part of its Aegis Ballistic Missile Defense System in the defence of the continental United States. Some modern submarines, such as the Type 212 submarines of the German Navy, are equipped with surface-to-air missile systems, since helicopters and anti-submarine warfare aircraft are significant threats. The subsurface-launched anti-air missile was first proposed by US Navy Rear Admiral Charles B. Momsen in a 1953 article. Layered air defence Air defence in naval tactics, especially within a carrier group, is often built around a system of concentric layers with the aircraft carrier at the centre. The outer layer will usually be provided by the carrier's aircraft, specifically its AEW&C aircraft combined with the CAP. If an attacker is able to penetrate this layer, then the next layers would come from the surface-to-air missiles carried by the carrier's escorts: the area-defence missiles, such as the RIM-67 Standard, with a range of up to 100 nmi, and the point-defence missiles, like the RIM-162 ESSM, with a range of up to 30 nmi. Finally, virtually every modern warship will be fitted with small-calibre guns, including a CIWS, which is usually a radar-controlled Gatling gun of between 20 mm and 30 mm calibre capable of firing several thousand rounds per minute. Army Armies typically have air defence in depth, from integral man-portable air-defence systems (MANPADS) such as the RBS 70, Stinger and Igla at smaller force levels up to army-level missile defence systems such as Angara and Patriot. Often, the high-altitude long-range missile systems force aircraft to fly at low level, where anti-aircraft guns can bring them down. As well as the small and large systems, effective air defence requires intermediate systems. These may be deployed at regimental level and consist of platoons of self-propelled anti-aircraft platforms, whether self-propelled anti-aircraft guns (SPAAGs), integrated air-defence systems like the Tunguska, or all-in-one surface-to-air missile platforms like the Roland or SA-8 Gecko. On a national level, the United States Army was atypical in that it was primarily responsible for the missile air defences of the continental United States, with systems such as Project Nike. Air force Air defence by air forces is typically provided by fighter jets carrying air-to-air missiles. However, most air forces choose to augment airbase defence with surface-to-air missile systems, as airbases are such valuable targets and are subject to attack by enemy aircraft. In addition, some countries choose to put all air defence responsibilities under the air force. 
Area air defence Area air defence, the air defence of a specific area or location (as opposed to point defence), has historically been operated by both armies (Anti-Aircraft Command in the British Army, for instance) and air forces (the United States Air Force's CIM-10 Bomarc). Area defence systems have medium to long range and can be made up of various other systems networked into an area defence system (in which case it may be made up of several short-range systems combined to effectively cover an area). An example of area defence is the defence of Saudi Arabia and Israel by MIM-104 Patriot missile batteries during the first Gulf War, where the objective was to cover populated areas. Tactics Mobility Most modern air defence systems are fairly mobile. Even the larger systems tend to be mounted on trailers and are designed to be fairly quickly broken down or set up. In the past, this was not always the case. Early missile systems were cumbersome and required much infrastructure; many could not be moved at all. With the diversification of air defence there has been much more emphasis on mobility. Most modern systems are usually either self-propelled (i.e. guns or missiles are mounted on a truck or tracked chassis) or towed. Even systems that consist of many components (transporter erector launchers, radars, command posts etc.) benefit from being mounted on a fleet of vehicles. In general, a fixed system can be identified, attacked and destroyed, whereas a mobile system can show up in places where it is not expected. Soviet systems especially concentrate on mobility, after the lessons learnt in the Vietnam War. For more information on this part of the conflict, see SA-2 Guideline. Air defence versus air defence suppression Israel and the US Air Force, in conjunction with the members of NATO, have developed significant tactics for air defence suppression. Dedicated weapons such as anti-radiation missiles and advanced electronics-intelligence and electronic-countermeasures platforms seek to suppress or negate the effectiveness of an opposing air-defence system. It is an arms race: as better jamming, countermeasures and anti-radiation weapons are developed, so are better SAM systems with ECCM capabilities and the ability to shoot down anti-radiation missiles and other munitions aimed at them or at the targets they are defending. Insurgent tactics Rocket-propelled grenades can be, and often are, used against hovering helicopters (e.g., by Somali militiamen during the Battle of Mogadishu in 1993). Firing an RPG at steep angles poses a danger to the user, because the backblast from firing reflects off the ground. In Somalia, militia members sometimes welded a steel plate onto the exhaust end of an RPG's tube to deflect pressure away from the shooter when shooting up at US helicopters. RPGs are used in this role only when more effective weapons are not available. Another example of the use of RPGs against helicopters is Operation Anaconda in March 2002 in Afghanistan, where Taliban insurgents defending the Shah-i-Kot Valley used RPGs in the direct-fire role against landing helicopters. Four Rangers were killed when their helicopter was shot down by an RPG, and SEAL team member Neil C. Roberts fell out of his helicopter when it was hit by two RPGs. In other instances, helicopters have been shot down in Afghanistan, for example during a mission in Wardak Province. One feature that makes RPGs useful in air defence is that their warheads are fused to detonate automatically at 920 m. 
If aimed into the air, this causes the warhead to airburst, which can release a limited but potentially damaging amount of shrapnel against a helicopter landing or taking off. For insurgents, the most effective method of countering aircraft is to attempt to destroy them on the ground, either by penetrating an airbase perimeter and destroying aircraft individually (e.g. the September 2012 Camp Bastion raid) or by finding a position where aircraft can be engaged with indirect fire, such as mortars. A more recent trend, emerging during the Syrian Civil War, is the use of ATGMs against landing helicopters. See also Air supremacy Artillery Gun laying List of anti-aircraft weapons Self-propelled anti-aircraft weapon The bomber will always get through References Citations Sources AAP-6 NATO Glossary of Terms. 2009. Bethel, Colonel HA. 1911. "Modern Artillery in the Field". London: Macmillan and Co Ltd. Checkland, Peter and Holwell, Sue. 1998. "Information, Systems and Information Systems – making sense of the field". Chichester: Wiley. Gander, T. 2014. "The Bofors Gun", 3rd edn. Barnsley, South Yorkshire: Pen & Sword Military. Hogg, Ian V. 1998. "Allied Artillery of World War Two". Marlborough: The Crowood Press. Hogg, Ian V. 1998. "Allied Artillery of World War One". Marlborough: The Crowood Press. Handbook for the Ordnance, Q.F. 3.7-inch Mark II on Mounting, 3.7-inch A.A. Mark II – Land Service. 1940. London: War Office. History of the Ministry of Munitions. 1922. Volume X, The Supply of Munitions, Part VI, Anti-Aircraft Supplies. Reprinted by Naval & Military Press Ltd and Imperial War Museum. Flavia Foradini: "I bunker di Vienna", Abitare 2/2006, Milano. Flavia Foradini, Edoardo Conte: "I templi incompiuti di Hitler", catalogo della mostra omonima, Milano, Spazio Guicciardini, 17.2–13.3.2009. External links 1914 1918 war in Alsace – The Battle of Linge 1915 – The 63rd Anti Aircraft Regiment in 14 18 – The 96th poste semi-fixed in the Vosges Archie to SAM: A Short Operational History of Ground-Based Air Defense by Kenneth P. Werrell (book available for download) Japanese Anti-aircraft land/vessel doctrines in 1943–44 2nd/3rd Australian Light Anti-Aircraft Regiment Military aviation Warfare by type
28752673
https://en.wikipedia.org/wiki/TLA%2B
TLA+
TLA+ is a formal specification language developed by Leslie Lamport. It is used to design, model, document, and verify programs, especially concurrent systems and distributed systems. TLA+ has been described as exhaustively testable pseudocode, and its use likened to drawing blueprints for software systems; TLA is an acronym for Temporal Logic of Actions. For design and documentation, TLA+ fulfills the same purpose as informal technical specifications. However, TLA+ specifications are written in a formal language of logic and mathematics, and the precision of specifications written in this language is intended to uncover design flaws before system implementation is underway. Since TLA+ specifications are written in a formal language, they are amenable to finite model checking. The model checker finds all possible system behaviours up to some number of execution steps, and examines them for violations of desired invariance properties such as safety and liveness. TLA+ specifications use basic set theory to define safety (bad things won't happen) and temporal logic to define liveness (good things eventually happen). TLA+ is also used to write machine-checked proofs of correctness, both for algorithms and for mathematical theorems. The proofs are written in a declarative, hierarchical style independent of any single theorem prover backend. Both formal and informal structured mathematical proofs can be written in TLA+; the language is similar to LaTeX, and tools exist to translate TLA+ specifications to LaTeX documents. TLA+ was introduced in 1999, following several decades of research into a verification method for concurrent systems. A toolchain has since developed, including an IDE and a distributed model checker. The pseudocode-like language PlusCal was created in 2009; it transpiles to TLA+ and is useful for specifying sequential algorithms. TLA+2 was announced in 2014, expanding language support for proof constructs. The current TLA+ reference is The TLA+ Hyperbook by Leslie Lamport. History Modern temporal logic was developed by Arthur Prior in 1957, then called tense logic. Although Amir Pnueli was the first to seriously study the applications of temporal logic to computer science, Prior had speculated on its use a decade earlier, in 1967. Pnueli researched the use of temporal logic in specifying and reasoning about computer programs, introducing linear temporal logic in 1977. LTL became an important tool for analysis of concurrent programs, easily expressing properties such as mutual exclusion and freedom from deadlock. Concurrent with Pnueli's work on LTL, academics were working to generalize Hoare logic for verification of multiprocess programs. Leslie Lamport became interested in the problem after peer review found an error in a paper he submitted on mutual exclusion. Ed Ashcroft introduced invariance in his 1975 paper "Proving Assertions About Parallel Programs", which Lamport used to generalize Floyd's method in his 1977 paper "Proving Correctness of Multiprocess Programs". Lamport's paper also introduced safety and liveness as generalizations of partial correctness and termination, respectively. This method was used to verify the first concurrent garbage collection algorithm in a 1978 paper with Edsger Dijkstra. Lamport first encountered Pnueli's LTL during a 1978 seminar at Stanford organized by Susan Owicki. 
According to Lamport, "I was sure that temporal logic was some kind of abstract nonsense that would never have any practical application, but it seemed like fun, so I attended." In 1980 he published "'Sometime' is Sometimes 'Not Never'", which became one of the most frequently cited papers in the temporal logic literature. Lamport worked on writing temporal logic specifications during his time at SRI, but found the approach to be impractical. His search for a practical method of specification resulted in the 1983 paper "Specifying Concurrent Programming Modules", which introduced the idea of describing state transitions as Boolean-valued functions of primed and unprimed variables. Work continued throughout the 1980s, and Lamport began publishing papers on the temporal logic of actions in 1990; however, it was not formally introduced until "The Temporal Logic of Actions" was published in 1994. TLA enabled the use of actions in temporal formulas, which according to Lamport "provides an elegant way to formalize and systematize all the reasoning used in concurrent system verification." TLA specifications mostly consisted of ordinary non-temporal mathematics, which Lamport found less cumbersome than a purely temporal specification. TLA provided a mathematical foundation for the specification language TLA+, introduced with the paper "Specifying Concurrent Systems with TLA+" in 1999. Later that same year, Yuan Yu wrote the TLC model checker for TLA+ specifications; TLC was used to find errors in the cache coherence protocol for a Compaq multiprocessor. Lamport published a full textbook on TLA+ in 2002, titled "Specifying Systems: The TLA+ Language and Tools for Software Engineers". PlusCal was introduced in 2009, and the TLA+ proof system (TLAPS) in 2012. TLA+2 was announced in 2014, adding some additional language constructs as well as greatly increasing in-language support for the proof system. Lamport is engaged in creating an updated TLA+ reference, "The TLA+ Hyperbook". The incomplete work is available from his official website. Lamport is also creating The TLA+ Video Course, described therein as "a work in progress that consists of the beginning of a series of video lectures to teach programmers and software engineers how to write their own TLA+ specifications". Language TLA+ specifications are organized into modules. Modules can extend (import) other modules to use their functionality. Although the TLA+ standard is specified in typeset mathematical symbols, existing TLA+ tools use LaTeX-like symbol definitions in ASCII. TLA+ uses several terms which require definition:
State – an assignment of values to variables
Behaviour – a sequence of states
Step – a pair of successive states in a behaviour
Stuttering step – a step during which variables are unchanged
Next-state relation – a relation describing how variables can change in any step
State function – an expression containing variables and constants that is not a next-state relation
State predicate – a Boolean-valued state function
Invariant – a state predicate true in all reachable states
Temporal formula – an expression containing statements in temporal logic
Safety TLA+ concerns itself with defining the set of all correct system behaviours. 
For example, a one-bit clock ticking endlessly between 0 and 1 could be specified as follows:
VARIABLE clock
Init == clock \in {0, 1}
Tick == IF clock = 0 THEN clock' = 1 ELSE clock' = 0
Spec == Init /\ [][Tick]_<<clock>>
The next-state relation Tick sets clock′ (the value of clock in the next state) to 1 if clock is 0, and to 0 if clock is 1. The state predicate Init is true if the value of clock is either 0 or 1. Spec is a temporal formula asserting that all behaviours of the one-bit clock must initially satisfy Init and have all steps either match Tick or be stuttering steps. Two such behaviours are:
0 -> 1 -> 0 -> 1 -> 0 -> ...
1 -> 0 -> 1 -> 0 -> 1 -> ...
The safety properties of the one-bit clock – the set of reachable system states – are adequately described by the spec. Liveness The above spec disallows strange states for the one-bit clock, but does not say the clock will ever tick. For example, the following perpetually stuttering behaviours are accepted:
0 -> 0 -> 0 -> 0 -> 0 -> ...
1 -> 1 -> 1 -> 1 -> 1 -> ...
A clock which does not tick is not useful, so these behaviours should be disallowed. One solution is to disable stuttering, but TLA+ requires that stuttering always be enabled; a stuttering step represents a change to some part of the system not described in the spec, and is useful for refinement. To ensure the clock must eventually tick, weak fairness is asserted for Tick:
Spec == Init /\ [][Tick]_<<clock>> /\ WF_<<clock>>(Tick)
Weak fairness over an action means that if the action is continuously enabled, it must eventually be taken. With weak fairness on Tick, only a finite number of stuttering steps are permitted between ticks. This temporal logical statement about Tick is called a liveness assertion. In general, a liveness assertion should be machine-closed: it shouldn't constrain the set of reachable states, only the set of possible behaviours. Most specifications do not require assertion of liveness properties; safety properties suffice both for model checking and for guidance in system implementation. Operators TLA+ is based on ZF set theory, so operations on variables involve set manipulation. The language includes set membership, union, intersection, difference, powerset, and subset operators. First-order logic operators such as conjunction (/\), disjunction (\/), negation (~), implication (=>), and equivalence (<=>) are also included, as well as the universal and existential quantifiers \A and \E. Hilbert's epsilon operator is provided as the CHOOSE operator, which uniquely selects an arbitrary set element. Arithmetic operators over reals, integers, and natural numbers are available from the standard modules. Temporal logic operators are built into TLA+. Temporal formulas use []P to mean P is always true, and <>P to mean P is eventually true. These operators are combined into []<>P to mean P is true infinitely often, and <>[]P to mean that eventually P will always be true. Other temporal operators include weak and strong fairness. Weak fairness WF_e(A) means that if action A is enabled continuously (i.e. without interruptions), it must eventually be taken. Strong fairness SF_e(A) means that if action A is enabled continually (repeatedly, with or without interruptions), it must eventually be taken. Temporal existential and universal quantification are included in TLA+, although without support from the tools. User-defined operators are similar to macros. Operators differ from functions in that their domain need not be a set: for example, the set membership operator has the category of sets as its domain, which is not a valid set in ZFC (since its existence leads to Russell's paradox). 
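As a small illustration of user-defined operators and CHOOSE (the operator names below are ours, not part of any standard module, and SetMax assumes S is a non-empty finite set of integers):
Max(a, b) == IF a >= b THEN a ELSE b                   \* a simple operator, expanded much like a macro
SetMax(S) == CHOOSE x \in S : \A y \in S : x >= y      \* CHOOSE picks an element of S that is no smaller than any other
With these definitions, SetMax({2, 7, 5}) equals 7.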
Recursive and anonymous user-defined operators were added in TLA+2. Data structures The foundational data structure of TLA+ is the set. Sets are either explicitly enumerated or constructed from other sets using operators or with {x \in S : p} where p is some condition on x, or {e : x \in S} where e is some function of x. The unique empty set is represented as {}. Functions in TLA+ assign a value to each element in their domain, a set. [S -> T] is the set of all functions with f[x] in T, for each x in the domain set S. For example, the TLA+ function Double[x \in Nat] == x*2 is an element of the set [Nat -> Nat] so Double \in [Nat -> Nat] is a true statement in TLA+. Functions are also defined with [x \in S |-> e] for some expression e, or by modifying an existing function [f EXCEPT ![v1] = v2]. Records are a type of function in TLA+. The record [name |-> "John", age |-> 35] is a record with fields name and age, accessed with r.name and r.age, and belonging to the set of records [name : String, age : Nat]. Tuples are included in TLA+. They are explicitly defined with <<e1,e2,e3>> or constructed with operators from the standard Sequences module. Sets of tuples are defined by Cartesian product; for example, the set of all pairs of natural numbers is defined Nat \X Nat. Standard modules TLA+ has a set of standard modules containing common operators. They are distributed with the syntactic analyzer. The TLC model checker uses Java implementations for improved performance. FiniteSets: Module for working with finite sets. Provides IsFiniteSet(S) and Cardinality(S) operators. Sequences: Defines operators on tuples such as Len(S), Head(S), Tail(S), Append(S, E), concatenation, and filter. Bags: Module for working with multisets. Provides primitive set operation analogues and duplicate counting. Naturals: Defines the Natural numbers along with inequality and arithmetic operators. Integers: Defines the Integers. Reals: Defines the Real numbers along with division and infinity. RealTime: Provides definitions useful in real-time system specifications. TLC: Provides utility functions for model-checked specifications, such as logging and assertions. Standard modules are imported with the EXTENDS or INSTANCE statements. Tools IDE An integrated development environment is implemented on top of Eclipse. It includes an editor with error and syntax highlighting, plus a GUI front-end to several other TLA+ tools: The SANY syntactic analyzer, which parses and checks the spec for syntax errors. The LaTeX translator, to generate pretty-printed specs. The PlusCal translator. The TLC model checker. The TLAPS proof system. The IDE is distributed in The TLA Toolbox. Model checker The TLC model checker builds a finite state model of TLA+ specifications for checking invariance properties. TLC generates a set of initial states satisfying the spec, then performs a breadth-first search over all defined state transitions. Execution stops when all state transitions lead to states which have already been discovered. If TLC discovers a state which violates a system invariant, it halts and provides a state trace path to the offending state. TLC provides a method of declaring model symmetries to defend against combinatorial explosion. It also parallelizes the state exploration step, and can run in distributed mode to spread the workload across a large number of computers. As an alternative to exhaustive breadth-first search, TLC can use depth-first search or generate random behaviours. 
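For concreteness, TLC is usually driven by a small configuration file that names the specification and the properties to check and supplies values for declared constants. A minimal sketch is shown below; the names follow the key-value store example later in this article, the model values k1, k2, t1, and t2 are ours, and a real model would also assign the remaining constants such as Val:
SPECIFICATION Spec
INVARIANT TypeInvariant
CONSTANTS Key = {k1, k2}
          TxId = {t1, t2}
TLC then enumerates every reachable state of this finite instance and reports any state that violates the named invariant.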
TLC operates on a subset of TLA+; the model must be finite and enumerable, and some temporal operators are not supported. In distributed mode TLC cannot check liveness properties, nor check random or depth-first behaviours. TLC is available as a command line tool or bundled with the TLA toolbox. Proof system The TLA+ Proof System, or TLAPS, mechanically checks proofs written in TLA+. It was developed at the Microsoft Research-INRIA Joint Centre to prove correctness of concurrent and distributed algorithms. The proof language is designed to be independent of any particular theorem prover; proofs are written in a declarative style, and transformed into individual obligations which are sent to back-end provers. The primary back-end provers are Isabelle and Zenon, with fallback to SMT solvers CVC3, Yices, and Z3. TLAPS proofs are hierarchically structured, easing refactoring and enabling non-linear development: work can begin on later steps before all prior steps are verified, and difficult steps are decomposed into smaller sub-steps. TLAPS works well with TLC, as the model checker quickly finds small errors before verification is begun. In turn, TLAPS can prove system properties which are beyond the capabilities of finite model checking. TLAPS does not currently support reasoning with real numbers, nor most temporal operators. Isabelle and Zenon generally cannot prove arithmetic proof obligations, requiring use of the SMT solvers. TLAPS has been used to prove correctness of Byzantine Paxos, the Memoir security architecture, components of the Pastry distributed hash table, and the Spire consensus algorithm. It is distributed separately from the rest of the TLA+ tools and is free software, distributed under the BSD license. TLA+2 greatly expanded language support for proof constructs. Industry use At Microsoft, a critical bug was discovered in the Xbox 360 memory module during the process of writing a specification in TLA+. TLA+ was used to write formal proofs of correctness for Byzantine Paxos and components of the Pastry distributed hash table. Amazon Web Services has used TLA+ since 2011. TLA+ model checking uncovered bugs in DynamoDB, S3, EBS, and an internal distributed lock manager; some bugs required state traces of 35 steps. Model checking was also used to verify aggressive optimizations. In addition, TLA+ specifications were found to hold value as documentation and design aids. Microsoft Azure used TLA+ to design Cosmos DB, a globally-distributed database with five different consistency models. Altreonic NV used TLA+ to model check OpenComRTOS Examples A key-value store with snapshot isolation --------------------------- MODULE KeyValueStore --------------------------- CONSTANTS Key, \* The set of all keys. Val, \* The set of all values. TxId \* The set of all transaction IDs. VARIABLES store, \* A data store mapping keys to values. tx, \* The set of open snapshot transactions. snapshotStore, \* Snapshots of the store for each transaction. written, \* A log of writes performed within each transaction. missed \* The set of writes invisible to each transaction. ---------------------------------------------------------------------------- NoVal == \* Choose something to represent the absence of a value. CHOOSE v : v \notin Val Store == \* The set of all key-value stores. [Key -> Val \cup {NoVal}] Init == \* The initial predicate. /\ store = [k \in Key |-> NoVal] \* All store values are initially NoVal. /\ tx = {} \* The set of open transactions is initially empty. 
/\ snapshotStore = \* All snapshotStore values are initially NoVal. [t \in TxId |-> [k \in Key |-> NoVal]] /\ written = [t \in TxId |-> {}] \* All write logs are initially empty. /\ missed = [t \in TxId |-> {}] \* All missed writes are initially empty. TypeInvariant == \* The type invariant. /\ store \in Store /\ tx \subseteq TxId /\ snapshotStore \in [TxId -> Store] /\ written \in [TxId -> SUBSET Key] /\ missed \in [TxId -> SUBSET Key] TxLifecycle == /\ \A t \in tx : \* If store != snapshot & we haven't written it, we must have missed a write. \A k \in Key : (store[k] /= snapshotStore[t][k] /\ k \notin written[t]) => k \in missed[t] /\ \A t \in TxId \ tx : \* Checks transactions are cleaned up after disposal. /\ \A k \in Key : snapshotStore[t][k] = NoVal /\ written[t] = {} /\ missed[t] = {} OpenTx(t) == \* Open a new transaction. /\ t \notin tx /\ tx' = tx \cup {t} /\ snapshotStore' = [snapshotStore EXCEPT ![t] = store] /\ UNCHANGED <<written, missed, store>> Add(t, k, v) == \* Using transaction t, add value v to the store under key k. /\ t \in tx /\ snapshotStore[t][k] = NoVal /\ snapshotStore' = [snapshotStore EXCEPT ![t][k] = v] /\ written' = [written EXCEPT ![t] = @ \cup {k}] /\ UNCHANGED <<tx, missed, store>> Update(t, k, v) == \* Using transaction t, update the value associated with key k to v. /\ t \in tx /\ snapshotStore[t][k] \notin {NoVal, v} /\ snapshotStore' = [snapshotStore EXCEPT ![t][k] = v] /\ written' = [written EXCEPT ![t] = @ \cup {k}] /\ UNCHANGED <<tx, missed, store>> Remove(t, k) == \* Using transaction t, remove key k from the store. /\ t \in tx /\ snapshotStore[t][k] /= NoVal /\ snapshotStore' = [snapshotStore EXCEPT ![t][k] = NoVal] /\ written' = [written EXCEPT ![t] = @ \cup {k}] /\ UNCHANGED <<tx, missed, store>> RollbackTx(t) == \* Close the transaction without merging writes into store. /\ t \in tx /\ tx' = tx \ {t} /\ snapshotStore' = [snapshotStore EXCEPT ![t] = [k \in Key |-> NoVal]] /\ written' = [written EXCEPT ![t] = {}] /\ missed' = [missed EXCEPT ![t] = {}] /\ UNCHANGED store CloseTx(t) == \* Close transaction t, merging writes into store. /\ t \in tx /\ missed[t] \cap written[t] = {} \* Detection of write-write conflicts. /\ store' = \* Merge snapshotStore writes into store. [k \in Key |-> IF k \in written[t] THEN snapshotStore[t][k] ELSE store[k]] /\ tx' = tx \ {t} /\ missed' = \* Update the missed writes for other open transactions. [otherTx \in TxId |-> IF otherTx \in tx' THEN missed[otherTx] \cup written[t] ELSE {}] /\ snapshotStore' = [snapshotStore EXCEPT ![t] = [k \in Key |-> NoVal]] /\ written' = [written EXCEPT ![t] = {}] Next == \* The next-state relation. \/ \E t \in TxId : OpenTx(t) \/ \E t \in tx : \E k \in Key : \E v \in Val : Add(t, k, v) \/ \E t \in tx : \E k \in Key : \E v \in Val : Update(t, k, v) \/ \E t \in tx : \E k \in Key : Remove(t, k) \/ \E t \in tx : RollbackTx(t) \/ \E t \in tx : CloseTx(t) Spec == \* Initialize state with Init and transition with Next. 
Init /\ [][Next]_<<store, tx, snapshotStore, written, missed>> ---------------------------------------------------------------------------- THEOREM Spec => [](TypeInvariant /\ TxLifecycle) ============================================================================= A rule-based firewall ------------------------------ MODULE Firewall ------------------------------ EXTENDS Integers CONSTANTS Address, \* The set of all addresses Port, \* The set of all ports Protocol \* The set of all protocols AddressRange == \* The set of all address ranges {r \in Address \X Address : r[1] <= r[2]} InAddressRange[r \in AddressRange, a \in Address] == /\ r[1] <= a /\ a <= r[2] PortRange == \* The set of all port ranges {r \in Port \X Port : r[1] <= r[2]} InPortRange[r \in PortRange, p \in Port] == /\ r[1] <= p /\ p <= r[2] Packet == \* The set of all packets [sourceAddress : Address, sourcePort : Port, destAddress : Address, destPort : Port, protocol : Protocol] Firewall == \* The set of all firewalls [Packet -> BOOLEAN] Rule == \* The set of all firewall rules [remoteAddress : AddressRange, remotePort : PortRange, localAddress : AddressRange, localPort : PortRange, protocol : SUBSET Protocol, allow : BOOLEAN] Ruleset == \* The set of all firewall rulesets SUBSET Rule Allowed[rset \in Ruleset, p \in Packet] == \* Whether the ruleset allows the packet LET matches == {rule \in rset : /\ InAddressRange[rule.remoteAddress, p.sourceAddress] /\ InPortRange[rule.remotePort, p.sourcePort] /\ InAddressRange[rule.localAddress, p.destAddress] /\ InPortRange[rule.localPort, p.destPort] /\ p.protocol \in rule.protocol} IN /\ matches /= {} /\ \A rule \in matches : rule.allow ============================================================================= A multi-car elevator system ------------------------------ MODULE Elevator ------------------------------ (***************************************************************************) (* This spec describes a simple multi-car elevator system. The actions in *) (* this spec are unsurprising and common to all such systems except for *) (* DispatchElevator, which contains the logic to determine which elevator *) (* ought to service which call. The algorithm used is very simple and does *) (* not optimize for global throughput or average wait time. The *) (* TemporalInvariant definition ensures this specification provides *) (* capabilities expected of any elevator system, such as people eventually *) (* reaching their destination floor. *) (***************************************************************************) EXTENDS Integers CONSTANTS Person, \* The set of all people using the elevator system Elevator, \* The set of all elevators FloorCount \* The number of floors serviced by the elevator system VARIABLES PersonState, \* The state of each person ActiveElevatorCalls, \* The set of all active elevator calls ElevatorState \* The state of each elevator Vars == \* Tuple of all specification variables <<PersonState, ActiveElevatorCalls, ElevatorState>> Floor == \* The set of all floors 1 .. 
FloorCount Direction == \* Directions available to this elevator system {"Up", "Down"} ElevatorCall == \* The set of all elevator calls [floor : Floor, direction : Direction] ElevatorDirectionState == \* Elevator movement state; it is either moving in a direction or stationary Direction \cup {"Stationary"} GetDistance[f1, f2 \in Floor] == \* The distance between two floors IF f1 > f2 THEN f1 - f2 ELSE f2 - f1 GetDirection[current, destination \in Floor] == \* Direction of travel required to move between current and destination floors IF destination > current THEN "Up" ELSE "Down" CanServiceCall[e \in Elevator, c \in ElevatorCall] == \* Whether elevator is in position to immediately service call LET eState == ElevatorState[e] IN /\ c.floor = eState.floor /\ c.direction = eState.direction PeopleWaiting[f \in Floor, d \in Direction] == \* The set of all people waiting on an elevator call {p \in Person : /\ PersonState[p].location = f /\ PersonState[p].waiting /\ GetDirection[PersonState[p].location, PersonState[p].destination] = d} TypeInvariant == \* Statements about the variables which we expect to hold in every system state /\ PersonState \in [Person -> [location : Floor \cup Elevator, destination : Floor, waiting : BOOLEAN]] /\ ActiveElevatorCalls \subseteq ElevatorCall /\ ElevatorState \in [Elevator -> [floor : Floor, direction : ElevatorDirectionState, doorsOpen : BOOLEAN, buttonsPressed : SUBSET Floor]] SafetyInvariant == \* Some more comprehensive checks beyond the type invariant /\ \A e \in Elevator : \* An elevator has a floor button pressed only if a person in that elevator is going to that floor /\ \A f \in ElevatorState[e].buttonsPressed : /\ \E p \in Person : /\ PersonState[p].location = e /\ PersonState[p].destination = f /\ \A p \in Person : \* A person is in an elevator only if the elevator is moving toward their destination floor /\ \A e \in Elevator : /\ (PersonState[p].location = e /\ ElevatorState[e].floor /= PersonState[p].destination) => /\ ElevatorState[e].direction = GetDirection[ElevatorState[e].floor, PersonState[p].destination] /\ \A c \in ActiveElevatorCalls : PeopleWaiting[c.floor, c.direction] /= {} \* No ghost calls TemporalInvariant == \* Expectations about elevator system capabilities /\ \A c \in ElevatorCall : \* Every call is eventually serviced by an elevator /\ c \in ActiveElevatorCalls ~> \E e \in Elevator : CanServiceCall[e, c] /\ \A p \in Person : \* If a person waits for their elevator, they'll eventually arrive at their floor /\ PersonState[p].waiting ~> PersonState[p].location = PersonState[p].destination PickNewDestination(p) == \* Person decides they need to go to a different floor LET pState == PersonState[p] IN /\ ~pState.waiting /\ pState.location \in Floor /\ \E f \in Floor : /\ f /= pState.location /\ PersonState' = [PersonState EXCEPT ![p] = [@ EXCEPT !.destination = f]] /\ UNCHANGED <<ActiveElevatorCalls, ElevatorState>> CallElevator(p) == \* Person calls the elevator to go in a certain direction from their floor LET pState == PersonState[p] IN LET call == [floor |-> pState.location, direction |-> GetDirection[pState.location, pState.destination]] IN /\ ~pState.waiting /\ pState.location /= pState.destination /\ ActiveElevatorCalls' = IF \E e \in Elevator : /\ CanServiceCall[e, call] /\ ElevatorState[e].doorsOpen THEN ActiveElevatorCalls ELSE ActiveElevatorCalls \cup {call} /\ PersonState' = [PersonState EXCEPT ![p] = [@ EXCEPT !.waiting = TRUE]] /\ UNCHANGED <<ElevatorState>> OpenElevatorDoors(e) == \* Open the elevator doors if there is a 
call on this floor or the button for this floor was pressed. LET eState == ElevatorState[e] IN /\ ~eState.doorsOpen /\ \/ \E call \in ActiveElevatorCalls : CanServiceCall[e, call] \/ eState.floor \in eState.buttonsPressed /\ ElevatorState' = [ElevatorState EXCEPT ![e] = [@ EXCEPT !.doorsOpen = TRUE, !.buttonsPressed = @ \ {eState.floor}]] /\ ActiveElevatorCalls' = ActiveElevatorCalls \ {[floor |-> eState.floor, direction |-> eState.direction]} /\ UNCHANGED <<PersonState>> EnterElevator(e) == \* All people on this floor who are waiting for the elevator and travelling the same direction enter the elevator. LET eState == ElevatorState[e] IN LET gettingOn == PeopleWaiting[eState.floor, eState.direction] IN LET destinations == {PersonState[p].destination : p \in gettingOn} IN /\ eState.doorsOpen /\ eState.direction /= "Stationary" /\ gettingOn /= {} /\ PersonState' = [p \in Person |-> IF p \in gettingOn THEN [PersonState[p] EXCEPT !.location = e] ELSE PersonState[p]] /\ ElevatorState' = [ElevatorState EXCEPT ![e] = [@ EXCEPT !.buttonsPressed = @ \cup destinations]] /\ UNCHANGED <<ActiveElevatorCalls>> ExitElevator(e) == \* All people whose destination is this floor exit the elevator. LET eState == ElevatorState[e] IN LET gettingOff == {p \in Person : PersonState[p].location = e /\ PersonState[p].destination = eState.floor} IN /\ eState.doorsOpen /\ gettingOff /= {} /\ PersonState' = [p \in Person |-> IF p \in gettingOff THEN [PersonState[p] EXCEPT !.location = eState.floor, !.waiting = FALSE] ELSE PersonState[p]] /\ UNCHANGED <<ActiveElevatorCalls, ElevatorState>> CloseElevatorDoors(e) == \* Close the elevator doors once all people have entered and exited the elevator on this floor. LET eState == ElevatorState[e] IN /\ ~ENABLED EnterElevator(e) /\ ~ENABLED ExitElevator(e) /\ eState.doorsOpen /\ ElevatorState' = [ElevatorState EXCEPT ![e] = [@ EXCEPT !.doorsOpen = FALSE]] /\ UNCHANGED <<PersonState, ActiveElevatorCalls>> MoveElevator(e) == \* Move the elevator to the next floor unless we have to open the doors here. LET eState == ElevatorState[e] IN LET nextFloor == IF eState.direction = "Up" THEN eState.floor + 1 ELSE eState.floor - 1 IN /\ eState.direction /= "Stationary" /\ ~eState.doorsOpen /\ eState.floor \notin eState.buttonsPressed /\ \A call \in ActiveElevatorCalls : \* Can move only if other elevator servicing call /\ CanServiceCall[e, call] => /\ \E e2 \in Elevator : /\ e /= e2 /\ CanServiceCall[e2, call] /\ nextFloor \in Floor /\ ElevatorState' = [ElevatorState EXCEPT ![e] = [@ EXCEPT !.floor = nextFloor]] /\ UNCHANGED <<PersonState, ActiveElevatorCalls>> StopElevator(e) == \* Stops the elevator if it's moved as far as it can in one direction LET eState == ElevatorState[e] IN LET nextFloor == IF eState.direction = "Up" THEN eState.floor + 1 ELSE eState.floor - 1 IN /\ ~ENABLED OpenElevatorDoors(e) /\ ~eState.doorsOpen /\ nextFloor \notin Floor /\ ElevatorState' = [ElevatorState EXCEPT ![e] = [@ EXCEPT !.direction = "Stationary"]] /\ UNCHANGED <<PersonState, ActiveElevatorCalls>> (***************************************************************************) (* This action chooses an elevator to service the call. The simple *) (* algorithm picks the closest elevator which is either stationary or *) (* already moving toward the call floor in the same direction as the call. *) (* The system keeps no record of assigning an elevator to service a call. *) (* It is possible no elevator is able to service a call, but we are *) (* guaranteed an elevator will eventually become available. 
*) (***************************************************************************) DispatchElevator(c) == LET stationary == {e \in Elevator : ElevatorState[e].direction = "Stationary"} IN LET approaching == {e \in Elevator : /\ ElevatorState[e].direction = c.direction /\ \/ ElevatorState[e].floor = c.floor \/ GetDirection[ElevatorState[e].floor, c.floor] = c.direction } IN /\ c \in ActiveElevatorCalls /\ stationary \cup approaching /= {} /\ ElevatorState' = LET closest == CHOOSE e \in stationary \cup approaching : /\ \A e2 \in stationary \cup approaching : /\ GetDistance[ElevatorState[e].floor, c.floor] <= GetDistance[ElevatorState[e2].floor, c.floor] IN IF closest \in stationary THEN [ElevatorState EXCEPT ![closest] = [@ EXCEPT !.floor = c.floor, !.direction = c.direction]] ELSE ElevatorState /\ UNCHANGED <<PersonState, ActiveElevatorCalls>> Init == \* Initializes people and elevators to arbitrary floors /\ PersonState \in [Person -> [location : Floor, destination : Floor, waiting : {FALSE}]] /\ ActiveElevatorCalls = {} /\ ElevatorState \in [Elevator -> [floor : Floor, direction : {"Stationary"}, doorsOpen : {FALSE}, buttonsPressed : {{}}]] Next == \* The next-state relation \/ \E p \in Person : PickNewDestination(p) \/ \E p \in Person : CallElevator(p) \/ \E e \in Elevator : OpenElevatorDoors(e) \/ \E e \in Elevator : EnterElevator(e) \/ \E e \in Elevator : ExitElevator(e) \/ \E e \in Elevator : CloseElevatorDoors(e) \/ \E e \in Elevator : MoveElevator(e) \/ \E e \in Elevator : StopElevator(e) \/ \E c \in ElevatorCall : DispatchElevator(c) TemporalAssumptions == \* Assumptions about how elevators and people will behave /\ \A p \in Person : WF_Vars(CallElevator(p)) /\ \A e \in Elevator : WF_Vars(OpenElevatorDoors(e)) /\ \A e \in Elevator : WF_Vars(EnterElevator(e)) /\ \A e \in Elevator : WF_Vars(ExitElevator(e)) /\ \A e \in Elevator : SF_Vars(CloseElevatorDoors(e)) /\ \A e \in Elevator : SF_Vars(MoveElevator(e)) /\ \A e \in Elevator : WF_Vars(StopElevator(e)) /\ \A c \in ElevatorCall : SF_Vars(DispatchElevator(c)) Spec == \* Initialize state with Init and transition with Next, subject to TemporalAssumptions /\ Init /\ [][Next]_Vars /\ TemporalAssumptions THEOREM Spec => [](TypeInvariant /\ SafetyInvariant /\ TemporalInvariant) ============================================================================= See also Alloy (specification language) B-Method Computation tree logic PlusCal Temporal logic Temporal logic of actions Z notation References External links The TLA Home Page, Leslie Lamport's webpage linking to the TLA+ tools and resources The TLA+ Hyperbook, a TLA+ textbook by Leslie Lamport How Amazon Web Services Uses Formal Methods, an article in the April 2015 Communications of the ACM Thinking for Programmers, a talk by Leslie Lamport at Build 2014 Thinking Above the Code, a talk by Leslie Lamport at the 2014 Microsoft Research faculty summit Who Builds a Skyscraper without Drawing Blueprints?, a talk by Leslie Lamport at React San Francisco 2014 Programming Should Be More than Coding, a 2015 talk at Stanford by Leslie Lamport Euclid Writes an Algorithm: A Fairytale, a TLA+ introduction by Leslie Lamport included in a festschrift for Manfred Broy The TLA+ Google Group Formal methods Formal methods tools Software using the BSD license Specification languages Formal specification languages Concurrency (computer science)
599421
https://en.wikipedia.org/wiki/Ghost%20%28disambiguation%29
Ghost (disambiguation)
A ghost is a spirit of a dead person that may appear to the living. Ghost or Ghosts may also refer to: People Ghost (producer), British hip hop producer Ghost (singer) (born 1974), singer Robert Guerrero (born 1983), a.k.a. The Ghost, American boxer Ivan Moody (born 1980), a.k.a. Ghost, member of Five Finger Death Punch Kelly Pavlik (born 1982), a.k.a. The Ghost, American boxer Styles P (born 1974), a.k.a. The Ghost, American rapper Matt Urban (1919–1995), a.k.a. The Ghost, United States Army Lieutenant Colonel Arts, entertainment, and media Fictional entities Ghost (comics), several characters and publications Ghost (Dungeons & Dragons) Ghost (Hamlet), character from William Shakespeare's play Hamlet Ghost (The Matrix), a character in Enter the Matrix Ghost, the robotic companion of guardians in Destiny Ghost, a character in the novel Lost Souls Ghost, a type of Pokémon Haunter (Pokémon), a Pokémon known in Japan as Ghost Ghosts (Pac-Man), the recurring antagonists in the Pac-Man franchise Ghosts, group of Gaunt's Ghosts characters Ghosts, type of Terran soldier in StarCraft Simon "Ghost" Riley, a character in Call of Duty: Modern Warfare 2 The Ghost, a VCX-100 light freighter from Star Wars: Rebels Films Ghosts (1915 film), silent American film starring Henry B. Walthall The Ghost (1963 film), Italian horror film The Ghost (1982 film), German drama film Ghosts… of the Civil Dead, 1988 Australian political suspense film Ghost (1990 film), American romantic fantasy film Michael Jackson's Ghosts, 1997 short film Ghost (1998 film), Iranian film The Ghost (2004 film), South Korean horror film Ghosts (2005 film), German drama film Ghosts (2006 film), British drama film The Ghost (2008 film), Russian thriller film The Ghost (2010 film), French-German-British political thriller film Ghost: In Your Arms Again, 2010 Japanese remake of the 1990 American film Ghost Ghost (2012 film), Indian horror film Ghosts, a 2014 film also known as Jessabelle Ghosts (2014 film), Iranian drama film Ghost (2015 film), Russian comedy film Ghost (2019 film), Indian horror thriller film Ghost (2020 film), British independent film about the first day of freedom for an ex-con Gaming Ghost (game), word game Ghost (video gaming), game feature GHOSTS (video game), an upcoming game Call of Duty: Ghosts, a 2013 game in the Call of Duty franchise Ghost 1.0, video game Ghosts (board game) StarCraft: Ghost, indefinitely suspended video game Literature Ghost (Reynolds novel), by Jason Reynolds "Ghost", a story by Larry Niven in Crashlander Ghost, by John Ringo Ghosts (Aira novel), 1990 novel by César Aira Ghosts (Auster novel), by Paul Auster Ghosts (Banville novel), 1993 novel by Irish writer John Banville Ghosts (play), by Henrik Ibsen The Ghost (novel), by Robert Harris The Ghost, novel series by George Mann The Ghosts, 1969 novel by Antonia Barber Music Groups and labels Ghost (1984 band), Japanese experimental rock group Ghost (2004 band), Japanese visual kei rock group Ghost (production team), Swedish producing and songwriting team Ghost (Swedish band), heavy metal group Ghosts (band), British indie/pop group The Ghost (American band), American punk rock group The Ghost (Faroese band), electropop duo Ghost9, South Korean boygroup Albums Ghost (soundtrack), to the 1990 film Ghost (Crack the Sky album), 2001 Ghost (Devin Townsend Project album), 2011 Ghost (Gary Numan album), 1988 Ghost (Ghost album), 1990 Ghost (In Fiction EP), 2007 Ghost (Kate Rusby album), 2014 Ghost (Radical Face album), 2007 Ghost (Sky Ferreira EP), 
2012 Ghost (Third Eye Foundation album), 1997 Ghosts (Albert Ayler album), 1965 Ghosts (Ash Riser album), 2017 Ghosts (Big Wreck album), 2014 Ghosts (Cowboy Junkies album), 2020 Ghosts (The Marked Men album), 2009 Ghosts (Monolake album), 2012 Ghosts (Rage album), 1999 Ghosts (Siobhán Donaghy album), 2007 Ghosts (Sleeping at Last album), 2003 Ghosts (Strawbs album), 1975 Ghosts (Techno Animal album), 1991 Ghosts (Wendy Matthews album), 1997 Ghosts, a series of albums by Nine Inch Nails Ghosts I–IV, 2008 Ghosts V: Together, 2020 Ghosts VI: Locusts, 2020 The Ghost (Songs: Ohia album), 1999 The Ghost (Before the Dawn album), 2006 Songs "Ghost" (Ella Henderson song), 2014 "Ghost" (Fefe Dobson song), 2010 "Ghost" (Gackt song), 2009 "Ghost" (Halsey song), 2015 "Ghost" (Jamie-Lee Kriewitz song), 2015 "Ghost" (Justin Bieber song), 2021 "Ghost" (Mystery Skulls song), 2013 "Ghost" (Phish song), 1998 "Ghost", by The 69 Eyes from Angels, 2007 "Ghost", by Beat Crusaders from EpopMAKING ~ Pop Tono Sogu ~, 2007 "Ghost", by Buckethead from Colma, 1998 "Ghost", by Clutch from Blast Tyrant, 2004 "Ghost", by Depeche Mode from Sounds of the Universe, 2009 "Ghost", by Guttermouth from Full Length LP, 1991 "Ghost", by Hollywood Undead from Day of the Dead, 2015 "Ghost", by House of Heroes from The End Is Not the End, 2009 "Ghost", by Howie Day from Australia, 2000 "Ghost", by Indigo Girls from Rites of Passage, 1992 "Ghost", by Ingrid Michaelson from Human Again, 2011 "GHOST", a single by American rapper Jaden "Ghost", by Katy Perry from Prism, 2013 "Ghost", by Neutral Milk Hotel from In the Aeroplane Over the Sea, 1998 "Ghost", by Little Boots from Hands, 2009 "Ghost", by Live from Secret Samadhi, 1997 "Ghost", by Mark Owen from The Art of Doing Nothing, 2013 "Ghost", by Mystery Skulls from Forever, 2014 "Ghost", by Pearl Jam from Riot Act, 2002 "Ghost", by Plastic Tree from Chandelier, 2005 "Ghost", by Sir Sly from Gold, 2013 "Ghost", by Skip the Use from Can Be Late, 2012 "Ghost", by Slash from Slash, 2010 "Ghost", by Sleeping with Sirens from How It Feels to Be Lost, 2019 "Ghost", by Tom Swoon, 2014 "Ghost", the first part of the 2013 song "Haunted" by Beyoncé "Ghost" (Zoe Wees song), by Zoe Wees from Golden Wings, 2021 "GHOST!", by Kid Cudi from Man on the Moon II: The Legend of Mr. 
Rager, 2010 "Ghosts" (Bruce Springsteen song), 2020 "Ghosts" (Dirty Vegas song), 2002 "Ghosts" (Japan song), 1982 "Ghosts" (Laura Marling song), 2007 "Ghosts" (Michael Jackson song), 1997 "Ghosts", by 1 Giant Leap from 1 Giant Leap, 2002 "Ghosts", by Assemblage 23 from Meta, 2007 "Ghosts", by Albert Ayler from Spiritual Unity, 1965 "Ghosts", by Big Wreck from Ghosts, 2014 "Ghosts", by Caravan Palace from Chronologic, 2019 "Ghosts", by Funeral for a Friend from Memory and Humanity, 2008 "Ghosts", by Ghosts from The World Is Outside, 2007 "Ghosts", by The Jam from The Gift, 1982 "Ghosts", by Kansas from In the Spirit of Things, 1988 "Ghosts", by Ladytron from Velocifero, 2008 "Ghosts", by Mike Shinoda from Post Traumatic, 2018 "Ghosts", by The Presets from Pacifica, 2012 "Ghosts", by PVRIS from White Noise, 2014 "Ghosts", by Rage from Ghosts, 1999 "Ghosts", by Robbie Williams from Intensive Care, 2005 "Ghosts", by Shellac from 1000 Hurts, 2000 "Ghosts", by Susumu Hirasawa from Sword-Wind Chronicle BERSERK Original Soundtrack, 1997 Other music Ghost, a piano trio by Beethoven Ghost note, a type of musical note Ghost the Musical, a 2011 stage musical Television Series Ghosts (1995 TV series), a BBC series Ghost (2008 TV series), a Malaysian mystery series Ghost (Korean TV series), 2012 police procedural series Ghosts (2019 TV series), a 2019 BBC sitcom Ghosts (2021 TV series), a 2021 CBS sitcom based on BBC sitcom Kamen Rider Ghost, a 2015-16 TV Asahi tokusatsu series Episodes "Ghost" (Dollhouse) "Ghosts", an episode of Dark "Ghosts" (Gotham) "Ghosts" (Hidden Palms) "Ghosts" (Justified) "Ghosts", an episode of One Day at a Time "Ghosts" (Person of Interest) "Ghosts" (Psych) "Ghosts", a season two episode of The Protector "The Ghost" (Agents of S.H.I.E.L.D.) "The Ghost" (Miracles) Other television Ghosting (television), an unwanted image Computing and technology Ghost (blogging platform), blogging software built in JavaScript Ghost (disk utility), a disk cloning program Ghost (operating system), an operating system project G.ho.st, an operating system IAI Ghost, a rotary mini UAV Science Biology Ghost cell, a necrotic cell that retains its cellular architecture but has no nucleus Ghost lineage, an inferred phylogenetic lineage Ghost population, an inferred statistical population Physics Ghost (physics), an unphysical state in quantum field theory Faddeev–Popov ghost, a type of unphysical field in quantum field theory Other uses Ghost (fashion brand) Ghost (mascot), a joke mainly used by sports fans in Latin America Global horizontal sounding technique, an atmospheric field research project in the late 1960s Juliet Marine Systems Ghost, a stealth ship Rolls-Royce Ghost, a car See also Apparition (disambiguation) Ghost in the machine (disambiguation) Ghost Island (disambiguation) Ghost pepper (Bhut jolokia) Ghost town (disambiguation) Ghost train (disambiguation) Ghosted (disambiguation) Ghosting (disambiguation) Ghostly (disambiguation) Phantom (disambiguation)
62698487
https://en.wikipedia.org/wiki/Software%20Sudheer
Software Sudheer
Software Sudheer is a 2019 Indian Telugu-language romantic comedy film directed by P Rajasekhar Reddy and produced by K Sekhar Raju under the Sekhara Art Creations banner. The cast includes Sudigali Sudheer and Dhanya Balakrishna in the lead roles, with music scored by Bheems. The film was released on 28 December 2019 and opened to mixed reviews from critics. Plot Chandu (Sudigali Sudheer) is a happy-go-lucky software employee. He loves his colleague Swathi (Dhanya Balakrishna) and the two decide to get married, but they then learn that Chandu's horoscope predicts he will be killed soon. As a precautionary measure, they decide to perform a pooja with the help of a Swamiji. At the same time, Chandu gets drawn into a 1,000-crore scam. Facing one problem after another, Chandu sets out to prove his innocence and escape the fate foretold in his horoscope. Cast Sudigali Sudheer as Chandu Dhanya Balakrishna as Swathi Nassar as Rajanna Sayaji Shinde as Chandu's father Indraja as Chandu's mother Sanjay Swaroop in a supporting role Posani Krishna Murali as Chandu's uncle Ravi Kale as Dubai Don Prudhvi Raj as Doctor Soundtrack The official soundtrack, consisting of five songs, was composed by Bheems Ceciroleo, with lyrics written by Suresh Upadhyaya, Bheems Ceciroleo, and Gaddar. The soundtrack was released on 25 December 2019 at Prasad Labs, Hyderabad, with the film's cast and crew in attendance. Reception Software Sudheer received mixed reviews from critics. In his review for The Times of India, Thadhagath Pathi rated the film 1.5 stars out of 5 and wrote: "There's nothing good about Software Sudheer except for some funny dance moves." A reviewer from NTV Telugu stated: "Software Sudheer is about gags that will make you squirm in your seats. Most of the jokes just don't land. Director P Rajasekhar Reddy's efforts to tickle the funny bone end up frustrating the audience. Even the story is told in a rather haphazard manner. The emotional scenes are hardly emotional." 123Telugu.com rated it 2.5/5 and wrote: "Software Sudheer is a good launchpad for Sudigali Sudheer and he proves his mettle as a solo hero with his impressive dances and performance." References External links Telugu-language films Indian romantic comedy-drama films 2010s Telugu-language films 2019 films 2019 romantic comedy-drama films Films shot in Hyderabad, India Films shot at Ramoji Film City Indian films 2019 comedy films 2019 drama films
52491
https://en.wikipedia.org/wiki/Non-repudiation
Non-repudiation
Non-repudiation refers to a situation where a statement's author cannot successfully dispute its authorship or the validity of an associated contract. The term is often seen in a legal setting when the authenticity of a signature is being challenged. In such an instance, the authenticity is being "repudiated". For example, Mallory buys a cell phone for $100, writes a paper cheque as payment, and signs the cheque with a pen. Later, she finds that she can't afford it, and claims that the cheque is a forgery. The signature guarantees that only Mallory could have signed the cheque, and so Mallory's bank must pay the cheque. This is non-repudiation; Mallory cannot repudiate the cheque. In practice, pen-and-paper signatures are not especially hard to forge, but digital signatures can be extremely difficult to forge. In security In general, non-repudiation involves associating actions or changes with a unique individual. For example, a secure area may use a key card access system where non-repudiation would be violated if key cards were shared or if lost and stolen cards were not immediately reported. Similarly, the owner of a computer account must not allow others to use it, such as by giving away their password, and a policy should be implemented to enforce this. In digital security In digital security, non-repudiation means: A service that provides proof of the integrity and origin of data. An authentication that can be said to be genuine with high confidence. Proof of data integrity is typically the easiest of these requirements to accomplish. A data hash such as SHA-2 usually ensures that the data will not be changed undetectably. Even with this safeguard, it is possible to tamper with data in transit, either through a man-in-the-middle attack or phishing. Because of this, data integrity is best asserted when the recipient already possesses the necessary verification information, such as after being mutually authenticated. The common method of providing non-repudiation for digital communications or storage is the digital signature, a more powerful tool that provides non-repudiation in a publicly verifiable manner. Message authentication codes (MACs), useful when the communicating parties have arranged to use a shared secret that they both possess, do not give non-repudiation. A common misconception is that encryption, by itself, provides authentication ("if the message decrypts properly then it is authentic"); it does not. Unauthenticated encryption can be subject to attacks such as message reordering, block substitution and block repetition, so a MAC is needed to provide message integrity and authentication, but even a MAC does not provide non-repudiation. To achieve non-repudiation one must trust a service (a certificate generated by a trusted third party (TTP) called a certificate authority (CA)) which prevents an entity from denying previous commitments or actions (e.g. sending message A to B). The essential difference between a MAC and a digital signature is that the MAC uses symmetric keys while the digital signature uses asymmetric keys (certified by the CA). Note that the goal is not to achieve confidentiality: in both cases (MAC or digital signature), one simply appends a tag to the otherwise plaintext, visible message. If confidentiality is also required, then an encryption scheme can be combined with the digital signature, or some form of authenticated encryption could be used. Verifying the digital origin means that the certified/signed data likely came from someone who possesses the private key corresponding to the signing certificate.
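The contrast between the two mechanisms can be made concrete with a short sketch. The example below is illustrative only: it uses Python's standard hmac and hashlib modules together with the third-party cryptography package (an assumption about the available tooling), and the message and key values are invented for the demonstration.

# Sketch: why a digital signature can support non-repudiation while a MAC cannot.
# Requires the third-party "cryptography" package; message and keys are illustrative.
import hmac
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

message = b"Pay 100 to Bob"

# Digital signature: only the holder of the private key can produce the signature,
# yet anyone holding the public key can verify it.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(message)
public_key.verify(signature, message)  # raises InvalidSignature if message or signature was altered

# MAC: sender and receiver share the same secret, so either of them could have
# produced the tag. It gives integrity and authenticity between the two parties,
# but proves nothing about authorship to an outside third party.
shared_secret = b"secret known to both parties"
tag = hmac.new(shared_secret, message, hashlib.sha256).digest()
expected = hmac.new(shared_secret, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

Because both parties hold the shared secret, a third party shown a valid HMAC tag cannot tell which of them produced the message, whereas a valid signature can only have been produced by a holder of the private key.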
If the key used to digitally sign a message is not properly safeguarded by the original owner, digital forgery can occur. Trusted third parties (TTPs) To mitigate the risk of people repudiating their own signatures, the standard approach is to involve a trusted third party. The two most common TTPs are forensic analysts and notaries. A forensic analyst specializing in handwriting can compare some signature to a known valid signature and assess its legitimacy. A notary is a witness who verifies an individual's identity by checking other credentials and affixing their certification that the person signing is who they claim to be. A notary provides the extra benefit of maintaining independent logs of their transactions, complete with the types of credentials checked, and another signature that can be verified by the forensic analyst. This double security makes notaries the preferred form of verification. For digital information, the most commonly employed TTP is a certificate authority, which issues public key certificates. A public key certificate can be used by anyone to verify digital signatures without a shared secret between the signer and the verifier. The role of the certificate authority is to authoritatively state to whom the certificate belongs, meaning that this person or entity possesses the corresponding private key. However, a digital signature is forensically identical in both legitimate and forged uses. Someone who possesses the private key can create a valid digital signature. Protecting the private key is the idea behind some smart cards such as the United States Department of Defense's Common Access Card (CAC), which never lets the key leave the card. That means that to use the card for encryption and digital signatures, a person needs the personal identification number (PIN) code necessary to unlock it. See also Plausible deniability Shaggy defense Designated verifier signature Information security Undeniable signature References External links "Non-repudiation in Electronic Commerce" (Jianying Zhou), Artech House, 2001 'Non-repudiation' taken from Stephen Mason, Electronic Signatures in Law (3rd edn, Cambridge University Press, 2012) 'Non-repudiation' in the legal context in Stephen Mason, Electronic Signatures in Law (4th edn, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2016) now open source Public-key cryptography Contract law Notary
44939247
https://en.wikipedia.org/wiki/Gunther%20Schmidt
Gunther Schmidt
Gunther Schmidt (born 1939, Rüdersdorf) is a German mathematician who works also in informatics. Life Schmidt began studying Mathematics in 1957 at Göttingen University. His academic teachers were in particular Kurt Reidemeister, Wilhelm Klingenberg and Karl Stein. In 1960 he transferred to Ludwig-Maximilians-Universität München where he studied functions of several complex variables with Karl Stein. Schmidt wrote a thesis on analytic continuation of such functions. In 1962 Schmidt began work at TU München with students of Robert Sauer, in the beginning in labs and tutorials, later in mentoring and administration. Schmidt's interests turned toward programming when he collaborated with Hans Langmaack on rewriting and the braid group in 1969. Friedrich L. Bauer and Klaus Samelson were establishing software engineering at the university and Schmidt joined their group in 1974. In 1977 he submitted his Habilitation "Programs as partial graphs". He became a professor in 1980. Shortly after that, he was appointed to hold the chair of the late Klaus Samelson for one and a half years. From 1988 until his retirement in 2004, he held a professorship at the Faculty for Computer Science of the Universität der Bundeswehr München. He was a classroom instructor for beginners courses as well as special courses in mathematical logic, semantics of programming languages, construction of compilers, and algorithmic languages. Working with Thomas Strohlein, he authored a textbook on relations and graphs, published in German in 1989 and English in 1993 and again in 2012. In 2001 he became involved in a large project (17 nations) with the European Cooperation in Science and Technology: Schmidt was chairman of project COST 274 TARSKI (Theory and Application of Relational Structures as Knowledge Instruments). In 2014 a festschrift was organized to celebrate his 75th year. The calculus of relations had a relatively low profile among mathematical topics in the twentieth century, but Schmidt and others have raised that profile. The partial order of binary relations can be organized by grouping through closure. In 2018 Schmidt and Michael Winter published Relational Topology which reviews classical mathematical structures, such as binary operations and topological space, through the lens of calculus of relations. Work In 1981 he participated in the International Summer School Marktoberdorf, and edited the lecture notes Theoretical Foundations of Programming Methodology with Manfred Broy. Gunther Schmidt is mainly known for his work on Relational Mathematics; he was co-founder of the RAMiCS conference series in 1994. His textbooks on calculus of relations exhibit applications and potential of algebraic logic. Books 1989: (with T. Ströhlein) Relationen und Graphen, Mathematik für Informatiker, Springer Verlag, , 1993: (with T. Ströhlein) Relations and Graphs Discrete Mathematics for Computer Scientists, EATCS Monographs on Theoretical Computer Science, Springer Verlag, 2011: Relational Mathematics, Encyclopedia of Mathematics and its Applications, vol. 132, Cambridge University Press 2018: (with M. Winter) Relational Topology, Lecture Notes in Mathematics vol. 2208, Springer Verlag, 2020: Rückblick auf die Anfänge der Münchner Informatik, Die blaue Stunde der Informatik, Springer-Vieweg, , Editorships 2006: (with de Swart, H. C. M., Orłowska, E., and Roubens, M.) 
Theory and Application of Relational Structures as Knowledge Instruments II, Wrap-up volume of the COST Action 274: TARSKI, Lecture Notes in Computer Science #4342, Springer , 2003: (with de Swart, H. C. M., Orłowska, E., and Roubens, M.) Theory and Application of Relational Structures as Knowledge Instruments, Kickoff volume of the COST Action 274: TARSKI, Lecture Notes in Computer Science #2929, Springer, 2001: (with Parnas, D., Kahl, W.) Relational Methods in Software, Special Issue of Electronic Notes in Theoretical Computer Science,, vol. 44, numbers 3, 1999: (with Jaoua, A.) Relational Methods in Computer Science, Special Issue of Information Sciences, vol. 119, numbers 3+4, Elsevier 1997: with Brink, C., Kahl, W.: Relational Methods in Computer Science, Advances in Computing Science. Springer 1994: (with Mayr, E. W., and Tinhofer, G.) Graph-Theoretic Concepts in Computer Science, vol. 903 of Lecture Notes in Computer Science, Proc. 20th Intern. Workshop WG '94, Jun 17–19, Herrsching, Springer 1994, 1991: (with Berghammer, R.) Graph-Theoretic Concepts in Computer Science, vol. 570 of Lecture Notes in Computer Science, Proc. 17th Intern. Workshop WG '91, Jun 17-19, Richterheim Fischbachau, Springer 1991, , 1987: (with Tinhofer, G) Graph-Theoretic Concepts in Computer Science vol. 246 of Lecture Notes in Computer Science, Proc. 12th Intern. Workshop WG '86, Jun 17–19, Kloster Bernried, Springer, , 1982: (with Broy, M.) Theoretical Foundations of Programming Methodology. Reidel Publishers, . 1981: (with Bauer, F. L.) Erinnerungen an Robert Sauer, Beiträge zum Gedächtniskolloquium anläßlich seines 10. Todestages, Springer References External links Homepage at Universität der Bundeswehr München with access to a full list of publications and talks researchr 1939 births 20th-century German mathematicians 21st-century German mathematicians German computer scientists People from Märkisch-Oderland Living people German Lutherans Ludwig Maximilian University of Munich alumni Technical University of Munich faculty Bundeswehr University Munich faculty Formal methods people Programming language researchers German textbook writers Computer science writers Theoretical computer scientists
6467957
https://en.wikipedia.org/wiki/Cartes%20du%20Ciel
Cartes du Ciel
Cartes du Ciel ("CDC" and "SkyChart") is a free and open source planetarium program for Linux, macOS, and Windows. With the change to version 3, Linux has been added as a target platform, licensing has changed from freeware to GPLv2 and the project moved to a new website. CDC includes the ability to control computerized GoTo telescope mounts, is ASCOM and INDI compliant, and supports the USNO's UCAC catalogs and ESA Gaia data, along with numerous other catalogs and utilities. The "red bulb" feature is useful when using software outside on a laptop on a dark night. According to the programmer, Patrick Chevalley, it was released as freeware because "I’d rather see amateurs spend their money for a new eyepiece than for astronomy software". Chevalley has also created a lunar atlas program, Virtual Moon Atlas, which is also free and open source software. See also Space flight simulation game List of space flight simulation games Planetarium software List of observatory software References External links Version 2 (archived) Version 4 Free astronomy software Free software programmed in Pascal Planetarium software for Linux Educational software for MacOS Educational software for Windows Science software for MacOS Science software for Windows Formerly proprietary software Pascal (programming language) software
37313008
https://en.wikipedia.org/wiki/Jane%20Fountain
Jane Fountain
Jane E. Fountain is an American political scientist and technology theorist. She is Distinguished University Professor of political science and public policy, the founder and director of the National Center for Digital Government at the University of Massachusetts Amherst, and formerly faculty at the John F. Kennedy School of Government at Harvard University. She is known for her work on institutional change and on the use of technology in governance. Career Fountain earned a bachelor's degree in music from the Boston Conservatory of Music in 1977, serving as concertmaster from 1975–77. She then earned a master's degree in education (administration, planning and social policy) from Harvard University in 1982, and several degrees from Yale University, culminating in a Ph.D. in political science and organizational behavior in 1990. She began teaching at the John F. Kennedy School of Government at Harvard in 1989. In 1998 she established the Women in the Information Age Project and in 2001, with support from the National Science Foundation, established the National Center for Digital Government with David Lazer. In 2005 Fountain moved to the University of Massachusetts Amherst and NCDG was re-established there. Fountain also directs the Science, Technology and Society Initiative at the University of Massachusetts Amherst. Fountain has worked with numerous governmental and NGO organizations. She has been involved with the World Economic Forum for a number of years, serving on the Global Agenda Council on the Future of Government and as Council Chair and Vice Chair. In 2012 she was appointed to the Massachusetts-based Governor's Council on Innovation. She has also worked with the World Bank, the European Commission, the National Science Foundation, and numerous national governments. In 2001, Fountain published Building the Virtual State: Information Technology and Institutional Change, which has been published in multiple languages, and is regarded as a key text in digital government scholarship. Selected works Building the Virtual State: Information Technology and Institutional Change (2001) (awarded "Outstanding Academic Title" by Choice) "Social Capital: A Key Enabler of Innovation" Investing in Innovation: Toward A Consensus Strategy for Federal Technology Policy. Ed. Lewis Branscomb and James Keller. Cambridge, MA: MIT Press, 1998. "Paradoxes of Public Sector Customer Service", Governance, v.14, n.1, pp. 55–73 (Jan. 2001) "Constructing the Information Society: Women, Information Technology and Design." Technology in Society 22: 45-62. 2000. Awards and honors 2014 - Federal 100 Award, Federal Computer Week 2013 - Distinguished University Professor, University of Massachusetts Amherst 2012 - Elected Fellow of the National Academy of Public Administration 2012 - Chancellor's Medal for Academic Excellence and Extraordinary Service to the Campus, University of Massachusetts Amherst. 2011 - Inaugural ITP Section Senior Fellow, Information Technology and Politics Section, American Political Science Association. 2010 - Chancellor's Award for Outstanding Accomplishments in Research and Creative Activity, University of Massachusetts Amherst. 2000 - Fellow, Radcliffe Institute for Advanced Study. Notes External links "Jane Fountain" (faculty profile), University of Massachusetts Dept. of Political Science. Jane Fountain personal website. Jane Fountain Selected Works (institutional repository page). Digital-Government.net. National Center for Digital Government. 
Science, Technology, and Society Initiative. University of Massachusetts Amherst faculty Harvard Kennedy School faculty Yale University alumni Living people 1955 births American women political scientists American political scientists E-government in the United States Politics and technology Harvard Graduate School of Education alumni Boston Conservatory at Berklee alumni American women academics 21st-century American women
271151
https://en.wikipedia.org/wiki/Rich%20Skrenta
Rich Skrenta
Richard Skrenta (born June 6, 1967 in Pittsburgh, Pennsylvania) is a computer programmer and Silicon Valley entrepreneur who created the web search engine blekko. Biography Richard J Skrenta Jr was born in Pittsburgh on June 6, 1967. In 1982, at age 15, as a high school student at Mt. Lebanon High School, Skrenta wrote the Elk Cloner virus that infected Apple II machines. It is widely believed to have been one of the first large-scale self-spreading personal computer viruses ever created. In 1989, Skrenta graduated with a B.A. in computer science from Northwestern University. Between 1989 and 1991, Skrenta worked at Commodore Business Machines with Amiga Unix. In 1989, Skrenta started working on a multiplayer simulation game. In 1994, it was launched under the name Olympia as a pay-for-play PBEM game by Shadow Island Games. Between 1991 and 1995, Skrenta worked at Unix System Labs and from 1996 to 1998 with IP-level encryption at Sun Microsystems. He later left Sun and became one of the founders of DMOZ. He stayed on board after the Netscape acquisition, and continued to work on the directory as well as Netscape Search, AOL Music, and AOL Shopping. After his stint at AOL, Skrenta went on to cofound Topix LLC, a Web 2.0 company in the news aggregation & forums market. In 2005, Skrenta and his fellow cofounders sold a 75% share of Topix to a newspaper consortium made up of Tribune, Gannett, and Knight Ridder. In the late 2000s, Skrenta headed the startup company Blekko Inc, which was an Internet search engine. Blekko received early investment support from Marc Andreessen and began public beta testing on November 1, 2010. In 2015, IBM acquired both the Blekko company and search engine for their Watson computer system. Skrenta was involved in the development of VMS Monster, an old MUD for VMS. VMS Monster was part of the inspiration for TinyMUD. He is also known for his role in developing TASS, an ancestor of tin, the popular threaded Usenet newsreader for Unix systems. References External links Skrenta.com American computer programmers MUD developers 1967 births Living people Northwestern University alumni People from Mt. Lebanon, Pennsylvania DMOZ
3974806
https://en.wikipedia.org/wiki/Granville%20Technology%20Group
Granville Technology Group
Granville Technology Group Ltd was a British computer retailer and manufacturer based in Simonstone, near Burnley, Lancashire, marketing its products under the brand names Time, Tiny, Colossus, Omega and MJN. It sold mainly through mail order, though late in its life the firm added a chain of shops in the United Kingdom that traded as The Computer Shop and rapidly grew to over 300 stores. Its main competitors were PC World, Comet, and Maplin. Granville Technology Group comprised three manufacturing companies operating from the same site: VMT, GTG and OMT, all subsidiaries of Granville Technology Group Limited. VMT was the manufacturing arm and GTG the office operation of the group. History Formerly known as Time Group, the company was formed in 1994 by Tahir Mohsan. The manufacturing unit was located at Time Technology Park, in Simonstone. The company produced computers under the Tiny and Time Computers brands, although tracing the ownership of these brands later proved difficult for administrators Grant Thornton due to the group's convoluted and opaque ownership structure. In the year ending June 2003, the group made a pre-tax profit of £2.5m on a turnover of £207m. The company went into receivership on 27 July 2005, due to the fall in demand for personal computers. That month, it employed 1,600 people and was one of the largest retailers of computers in the United Kingdom. Time UK Time UK (also known as Time Computers) was the main supplier and manufacturer for GTG. Its products included desktop computers, notebooks, and flat-screen televisions, and the company supported consumers who purchased Time products from Granville Technology Group Ltd. The company's manufacturing plant and headquarters were based at Simonstone, Lancashire, in a private industrial park, the Time Technology Park, named after its brand. With a turnover of £750 million during the 1990s, the firm became Britain's largest computer manufacturer, establishing markets in the United Kingdom, the Middle East, and the Far East. It sold computers through retailers and through a chain of stores, The Computer Shop, established by Granville Technology Group. In 1999, Time offered a "free PC" deal under which customers could obtain a Windows 98 PC for no upfront cost but were required to sign up to Time's internet service provider for 24 months. Mohsan subdivided the company into Time Computers and Time Computer Systems in order to separate manufacturing and retailing. The company went into administration on 27 July 2005. In March 2006, the brand was relaunched as Time UK. Time UK's television commercials featured an "outer space" duo called Dr. Apocalypse & Egore. Another advert used Leonard Nimoy, of Star Trek fame, who reprised his Mr. Spock character. Before the adverts starring Dr. Apocalypse & Egore, Time UK ran a campaign with the slogan "we're on your side"; its adverts featured a man, bearing a resemblance to Greg Pope, as a computer salesman. In August 2000, Time UK handed its advertising account to HHCL & Partners, which saw the end of the adverts featuring Leonard Nimoy. The "we're on your side" slogan featured in many Time adverts from around 1998 to 2000 and conveyed that there were no middlemen: the company built the computers itself and stood with the customer.
Time Computer Systems were notable for the full packages of components and software available with every system ordered. Very often a digital camera, colour inkjet printer and flatbed scanner were included as standard to help home users get the fullest benefit from their investment, and very large bundles of additional software were supplied on CD-ROM, almost always full products from publishers such as Europress, GSP, Dorling Kindersley and many others, in contrast to the manufacturer-sponsored "crippleware", "trialware", "demos" and generally ad-laden applications supplied with new PCs and laptops today. These software packages were supplied in addition to the machine recovery discs and the driver CDs included with the major components themselves. The systems were generally supplied with Windows 98 SE, Windows ME or Windows XP, according to the date of manufacture and release. Only a basic "Quick Start Guide" was supplied in booklet form; consumers were expected to obtain help and support from the machine itself or from the help files supplied with each major component or software application, although a chargeable technical support line was available at extra cost. Lawsuits In 2000, Time UK sued IBM for supplying faulty components; the dispute ended in an out-of-court settlement in Time UK's favour. References Defunct companies of England Defunct retail companies of the United Kingdom Companies that have entered administration in the United Kingdom Retail companies established in 1994 Retail companies disestablished in 2005 Computer companies of the United Kingdom Companies based in Lancashire
275715
https://en.wikipedia.org/wiki/Digital%20Radio%20Mondiale
Digital Radio Mondiale
Digital Radio Mondiale (DRM; mondiale being Italian and French for "worldwide") is a set of digital audio broadcasting technologies designed to work over the bands currently used for analogue radio broadcasting including AM broadcasting—particularly shortwave—and FM broadcasting. DRM is more spectrally efficient than AM and FM, allowing more stations, at higher quality, into a given amount of bandwidth, using xHE-AAC audio coding format. Various other MPEG-4 and Opus codecs are also compatible, but the standard now specifies xHE-AAC. Digital Radio Mondiale is also the name of the international non-profit consortium that has designed the platform and is now promoting its introduction. Radio France Internationale, TéléDiffusion de France, BBC World Service, Deutsche Welle, Voice of America, Telefunken (now Transradio) and Thomcast (now Ampegon) took part at the formation of the DRM consortium. The principle of DRM is that bandwidth is the limiting factor, and computer processing power is cheap; modern CPU-intensive audio compression techniques enable more efficient use of available bandwidth, at the expense of processing resources. Features DRM can broadcast on frequencies below 30 MHz (long wave, medium wave and short wave), which allow for very-long-distance signal propagation. The modes for these lower frequencies were previously known as "DRM30". In the VHF bands, the term "DRM+" was used. DRM+ is able to use available broadcast spectra between 30 and 300 MHz; generally this means band I (47 to 68 MHz), band II (87.5 to 108 MHz) and band III (174 to 230 MHz). DRM has been designed to be able to re-use portions of existing analogue transmitter facilities such as antennas, feeders, and, especially for DRM30, the transmitters themselves, avoiding major new investment. DRM is robust against the fading and interference which often plague conventional broadcasting in these frequency ranges. The encoding and decoding can be performed with digital signal processing, so that a low-cost embedded system with a conventional transmitter and receiver can perform the rather complex encoding and decoding. As a digital medium, DRM can transmit other data besides the audio channels (datacasting) — as well as RDS-type metadata or program-associated data as Digital Audio Broadcasting (DAB) does. DRM services can be operated in many different network configurations, from a traditional AM one-service one-transmitter model to a multi-service (up to four) multi-transmitter model, either as a single-frequency network (SFN) or multi-frequency network (MFN). Hybrid operation, where the same transmitter delivers both analogue and DRM services simultaneously is also possible. DRM incorporates technology known as Emergency Warning Features that can override other programming and activates radios which are in standby in order to receive emergency broadcasts. Status The technical standard is available free-of-charge from the ETSI, and the ITU has approved its use in most of the world. Approval for ITU region 2 is pending amendments to existing international agreements. The inaugural broadcast took place on June 16, 2003, in Geneva, Switzerland, at the ITU's World Radio Conference. Current broadcasters include All India Radio, BBC World Service, funklust (formerly known as BitXpress), Radio Exterior de España, Radio New Zealand International, Vatican Radio, Radio Romania International and Radio Kuwait. Until now DRM receivers have typically used a personal computer. 
A few manufacturers have introduced DRM receivers, which have thus far remained niche products due to the limited choice of broadcasts. It is expected that the transition of national broadcasters to digital services on DRM, notably All India Radio, will stimulate the production of a new generation of affordable and efficient receivers. Chengdu NewStar Electronics has offered the DR111 since May 2012; it meets the minimum requirements for DRM receivers specified by the DRM consortium and is sold worldwide. The General Overseas Service of All India Radio broadcasts daily in DRM to Western Europe on 9.95 MHz from 17:45 to 22:30 UTC. All India Radio is in the process of replacing and refurbishing many of its domestic AM transmitters with DRM; the project began in 2012 and was scheduled for completion during 2015. The British Broadcasting Corporation (BBC) has trialled the technology in the United Kingdom by broadcasting BBC Radio Devon in the Plymouth area in the MF band; the trial lasted for a year (April 2007 – April 2008). The BBC also trialled DRM+ in the FM band in 2010 from the Craigkelly transmitting station in Fife, Scotland, over an area which included the city of Edinburgh. In this trial, a 10 kW (ERP) FM transmitter was replaced with a 1 kW DRM+ transmitter in two different modes, and coverage was compared with FM. Digital Radio Mondiale was included in the 2007 Ofcom consultation on the future of radio in the United Kingdom for the AM medium wave band. RTÉ has also run single- and multiple-programme overnight tests during a similar period on the 252 kHz LW transmitter in Trim, County Meath, Ireland, which was upgraded to support DRM after Atlantic 252 closed. The Fraunhofer Institute for Integrated Circuits IIS offers a package for software-defined radios which can be licensed to radio manufacturers. International regulation On 28 September 2006, the Australian spectrum regulator, the Australian Communications and Media Authority, announced that it had "placed an embargo on frequency bands potentially suitable for use by broadcasting services using Digital Radio Mondiale until spectrum planning can be completed", those bands being 5,950–6,200; 7,100–7,300; 9,500–9,900; 11,650–12,050; 13,600–13,800; 15,100–15,600; 17,550–17,900; 21,450–21,850 and 25,670–26,100 kHz. The United States Federal Communications Commission states, in Part 73, Section 758 of its rules (which applies to HF broadcasting only), that: "For digitally modulated emissions, the Digital Radio Mondiale (DRM) standard shall be employed." Technological overview Audio source coding Useful bitrates for DRM30 range from 6.1 kbit/s (Mode D) to 34.8 kbit/s (Mode A) for a 10 kHz bandwidth (±5 kHz around the central frequency). It is possible to achieve bit rates up to 72 kbit/s (Mode A) by using a standard 20 kHz (±10 kHz) wide channel. (For comparison, pure digital HD Radio can broadcast 20 kbit/s using channels 10 kHz wide and up to 60 kbit/s using 20 kHz channels.) The useful bitrate also depends on other parameters, such as the desired robustness to errors (error coding), the power needed (modulation scheme), and robustness with regard to propagation conditions (multipath propagation, Doppler effect). When DRM was originally designed, it was clear that the most robust modes offered insufficient capacity for the then state-of-the-art audio coding format MPEG-4 HE-AAC (High Efficiency Advanced Audio Coding).
Therefore, the standard launched with a choice of three different audio coding systems (source coding) depending on the bitrate: MPEG-4 HE-AAC (High Efficiency Advanced Audio Coding): AAC is a perceptual coder suited for voice and music, and the High Efficiency profile is an optional extension for the reconstruction of high frequencies (SBR: spectral band replication) and of the stereo image (PS: parametric stereo). Sampling frequencies of 24 kHz or 12 kHz can be used for core AAC (no SBR), corresponding respectively to 48 kHz and 24 kHz when SBR oversampling is used. MPEG-4 CELP: a parametric coder (vocoder) suited to voice only, but robust to errors and requiring only a small bit rate. MPEG-4 HVXC: also a parametric coder for speech programmes, using an even smaller bitrate than CELP. However, with the development of MPEG-4 xHE-AAC, which is an implementation of MPEG Unified Speech and Audio Coding, the DRM standard was updated and the two speech-only coding formats, CELP and HVXC, were replaced. USAC is designed to combine the properties of a speech coder and a general audio coder according to bandwidth constraints, and so is able to handle all kinds of programme material. Given that there were few CELP and HVXC broadcasts on air, the decision to drop the speech-only coding formats has passed without issue. Many broadcasters still use the HE-AAC coding format because it still offers acceptable audio quality, somewhat comparable to FM broadcast at bitrates above about 15 kbit/s. However, it is anticipated that in future most broadcasters will adopt xHE-AAC. Additionally, as of v2.1, the popular Dream software can broadcast using the Opus coding format. Whilst not within the current DRM standard, the inclusion of this codec is provided for experimentation. Aside from perceived technical advantages over the MPEG family, such as low latency (the delay between coding and decoding), the codec is royalty-free and has an open source implementation. It is an alternative to the proprietary MPEG family, whose use is permitted at the discretion of the patent holders. Unfortunately, Opus has substantially lower audio quality than xHE-AAC at the low bitrates that are key to conserving bandwidth; at 8 kbit/s it can sound worse than analogue shortwave radio. Equipment manufacturers currently pay royalties for incorporating the MPEG codecs. Bandwidth DRM broadcasting can be done using a choice of different bandwidths: 4.5 kHz. Gives the broadcaster the ability to simulcast, using the lower-sideband area of a 9 kHz raster channel for AM with a 4.5 kHz DRM signal occupying the area traditionally taken by the upper sideband. However, the resulting bit rate and audio quality are poor. 5 kHz. Gives the broadcaster the ability to simulcast, using the lower-sideband area of a 10 kHz raster channel for AM with a 5 kHz DRM signal occupying the area traditionally taken by the upper sideband. The resulting bit rate and audio quality are marginal (7.1–16.7 kbit/s for 5 kHz). This technique could be used on the shortwave bands throughout the world. 9 kHz. Occupies half the standard bandwidth of a region 1 long wave or medium wave broadcast channel. 10 kHz. Occupies half the standard bandwidth of a region 2 broadcast channel, and could be used to simulcast with an analogue audio channel restricted to NRSC5. Occupies a full worldwide short wave broadcast channel (giving 14.8–34.8 kbit/s). 18 kHz.
Occupies the full bandwidth of a region 1 long wave or medium wave channel according to the existing frequency plan. This offers better audio quality. 20 kHz. Occupies the full bandwidth of a region 2 or region 3 AM channel according to the existing frequency plan. This offers the highest audio quality of the DRM30 standard (giving 30.6–72 kbit/s). 100 kHz for DRM+. This bandwidth can be used in bands I, II and III; DRM+ can transmit four different programmes in this bandwidth, or even one low-definition digital video channel. Modulation The modulation used for DRM is coded orthogonal frequency-division multiplexing (COFDM), where every carrier is modulated with quadrature amplitude modulation (QAM) with selectable error coding. The choice of transmission parameters depends on the desired signal robustness and on propagation conditions. The transmitted signal is affected by noise, interference, multipath wave propagation and the Doppler effect. It is possible to choose among several error coding schemes and several modulation patterns: 64-QAM, 16-QAM and 4-QAM. OFDM modulation has parameters that must be adjusted to the propagation conditions: the carrier spacing, which determines robustness against the Doppler effect (which causes frequency offsets, the Doppler spread), and the OFDM guard interval, which determines robustness against multipath propagation (which causes delay offsets, the delay spread). The larger the carrier spacing, the more resistant the system is to the Doppler effect (Doppler spread); the larger the guard interval, the greater the resistance to long multipath propagation errors (delay spread). The DRM consortium has defined four different profiles corresponding to typical propagation conditions: A: Gaussian channel with very little multipath propagation and Doppler effect. This profile is suited for local or regional broadcasting. B: multipath propagation channel. This mode is suited for medium-range transmission and is nowadays frequently used. C: similar to mode B, but with better robustness to Doppler (larger carrier spacing). This mode is suited for long-distance transmission. D: similar to mode B, but resistant to large delay spread and Doppler spread. This case arises under adverse propagation conditions on very long distance transmissions; the useful bit rate for this profile is reduced. The trade-off between these profiles is between robustness against propagation conditions and the useful bit rate available for the service. The resulting low-bit-rate digital information is modulated using COFDM. It can run in simulcast mode by switching between DRM and AM, and it is also prepared for linking to other alternatives (e.g., DAB or FM services). DRM has been tested successfully on shortwave, mediumwave (with 9 as well as 10 kHz channel spacing) and longwave. There is also a lower-bandwidth two-way communication version of DRM intended as a replacement for SSB communications on HF; note that it is not compatible with the official DRM specification. It may be possible at some future time for the 4.5 kHz bandwidth DRM version used by the amateur radio community to be merged with the existing DRM specification. The Dream software can receive the commercial versions and also offers a limited transmission mode using the FAAC AAC encoder. Error coding Error coding can be chosen to be more or less robust.
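As a rough illustration of how the carrier modulation and the error-coding rate trade robustness against useful bitrate, the short Python sketch below estimates payload rates for a few combinations. The carrier count, code rates and symbol duration used here are placeholder assumptions chosen only to give figures of the right order of magnitude; they are not the exact values from the DRM specification.

def useful_bitrate(data_carriers, bits_per_carrier, code_rate, symbol_duration_s):
    # Payload bits delivered per OFDM symbol, divided by the symbol duration.
    return data_carriers * bits_per_carrier * code_rate / symbol_duration_s

# 64-QAM carries 6 bits per carrier, 16-QAM carries 4, and 4-QAM carries 2.
examples = [
    ("64-QAM, light error coding", 6, 0.6),
    ("16-QAM, stronger error coding", 4, 0.5),
    ("4-QAM, very robust coding", 2, 0.4),
]
for label, bits, rate in examples:
    bps = useful_bitrate(data_carriers=200, bits_per_carrier=bits,
                         code_rate=rate, symbol_duration_s=0.0267)
    print(f"{label}: about {bps / 1000:.1f} kbit/s")

With these placeholder values the estimates span roughly 6 to 27 kbit/s, the same order of magnitude as the DRM30 figures quoted above; a real calculation would also have to account for pilot carriers, the guard interval and framing overhead.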
Useful bitrates vary with the protection class for each combination of OFDM propagation profile (A or B), carrier modulation (16-QAM or 64-QAM) and channel bandwidth (9 or 10 kHz); the lower the protection class, the higher the level of error correction. DRM+ While the initial DRM standard covered the broadcasting bands below 30 MHz, the DRM consortium voted in March 2005 to begin the process of extending the system to the VHF bands up to 108 MHz. On 31 August 2009, DRM+ (Mode E) became an official broadcasting standard with the publication of the technical specification by the European Telecommunications Standards Institute; this is effectively a new release of the whole DRM specification, with the additional mode permitting operation above 30 MHz up to 174 MHz. Wider-bandwidth channels are used, which allows radio stations to use higher bit rates, thus providing higher audio quality. A 100 kHz DRM+ channel has sufficient capacity to carry one low-definition mobile TV channel of about 0.7 megabit/s: it would be feasible to distribute mobile TV over DRM+ rather than DMB or DVB-H. However, DRM+ (DRM Mode E) as designed and standardized only provides bitrates between 37.2 and 186.3 kbit/s depending on the robustness level, using 4-QAM or 16-QAM modulation and 100 kHz bandwidth. DRM+ has been successfully tested in all the VHF bands, which gives the DRM system the widest frequency usage; it can be used in bands I, II and III. DRM+ can coexist with DAB in band III, and the present FM band can also be utilised. The ITU has published three recommendations on DRM+, known in the documents as Digital System G. This marks the introduction of the full DRM system (DRM30 and DRM+). ITU-R Rec. BS.1114 is the ITU recommendation for sound broadcasting in the frequency range 30 MHz to 3 GHz; DAB, HD Radio and ISDB-T were already recommended in this document as Digital Systems A, C and F, respectively. In 2011, the pan-European organisation Community Media Forum Europe recommended to the European Commission that DRM+ be used for small-scale broadcasting (local radio, community radio) rather than DAB/DAB+. See also AMSS AM signalling system Digital Audio Broadcasting (DAB) Digital Multimedia Broadcasting (DMB) DVB-H (Digital Video Broadcasting - Handhelds) DVB-T (Digital Video Broadcasting - Terrestrial) ETSI Satellite Digital Radio (SDR) HD Radio, American system for digital radio ISDB-Tsb, Japanese system for digital radio Cliff effect, which affects digital communications such as radio Shortwave Radio In-band on-channel References External links Digital Radio Mondiale (DRM) - official homepage How to receive DRM on the long-, medium- and shortwave bands Diorama DRM receiver, an open source DRM receiver written by the Institute of Telecommunications of the University of Kaiserslautern (Germany) WinDRM DRM software for amateur radio users Dream - an open-source software DRM receiver gr-drm GNU Radio transmitter implementation DRM Software DRM software collection Global DRM transmissions schedule Digital radio International broadcasting Open standards Radio hobbies
9552360
https://en.wikipedia.org/wiki/Periphere%20Computer%20Systeme
Periphere Computer Systeme
Periphere Computer Systeme (PCS) was founded in Munich by the brothers Georg and Eberhard Färber in 1969. In the 1980s and 1990s it was a manufacturer of a line of UNIX-based workstations called "Cadmus". Their flavor of System V was called MUNIX; it was the first port of System V performed in Germany. They also developed a networking protocol that was based on the Newcastle Connection ("UNIXes of the World Unite!") and dubbed MUNIX/net, at the time competing with Sun Microsystems' NFS. In addition to UNIX computers, PCS also manufactured industrial terminals. In 1985, PCS founded a US subsidiary named Cadmus Computer Systems to distribute the workstations in the US. Eventually, PCS was bought out by Mannesmann-Kienzle, which in turn was bought by Ken Olsen's Digital Equipment Corporation (DEC). The main driver for the buyouts was a client/server ERP product developed by a dynamic young team at Mannesmann Kienzle Software, competing with SAP R/3. Olsen had planned to diversify the corporation, but was ousted by shareholders who did not share his vision of moving away from increasingly commoditised computer systems sales and jumping on the ERP bandwagon early. One of the reasons for naming DEC's last line of CPUs "Alpha AXP" was that they were intended to be sold as the Alpha and the Omega (Omega being the codename for the ERP system). As a result, Digital-Kienzle, including its PCS subsidiary, entered into a staff buy-out of the company, sponsored in part by some of the German states. The timing of this decision pulled the financial floor out from under the ERP system just as it was gaining a decent foothold in the market, and as a result little more was heard of it. In the late 1980s, whenever Mannesmann-Kienzle's representatives spoke at conferences before SAP's presenters, SAP usually had to state during Q&A that, in its development of R/3, it was still planning to reach the point the "Omega" team had already arrived at. Prominent figures of the former PCS include Jürgen Gulbins, who authored several books on Unix and related tools, and Jordan Hubbard, who spent several years at PCS (in the X11 group) before departing for Ireland, where he co-founded the FreeBSD project. See also Super-root (a feature of MUNIX) Karlsruhe Accurate Arithmetic (for Cadmus computer) Pascal-SC (for Cadmus computer) References Defunct computer companies of Germany
25123244
https://en.wikipedia.org/wiki/MLR%20Institute%20of%20Technology
MLR Institute of Technology
MLR Institute of Technology (MLRIT) is located at Dundigal, Hyderabad, Telangana, India. The institution was started in 2005 by the KMR Education Trust, headed by Marri Laxman Reddy. The institute offers six undergraduate (UG) courses along with seven postgraduate (PG) courses. It is affiliated with Jawaharlal Nehru Technological University, Hyderabad (JNTUH), and was granted autonomous status by the University Grants Commission (India) in 2015. Courses The institute offers a four-year Bachelor of Technology degree in nine key disciplines: Department of Aeronautical Engineering. Department of Computer Science & Engineering (CSE). Department of Computer Science & Engineering (AI & ML). Department of Computer Science & Engineering (Data Science). Department of Computer Science & Engineering (Cyber Security). Information Technology (IT). Department of Electronics and Communication Engineering (ECE). Department of Mechanical Engineering (ME). Department of Electrical and Electronics Engineering (EEE). Every semester has a minimum of eight courses, including at least two laboratory courses. Students are required to attend the institute for a duration of not less than four years and not more than eight years to be considered for the award of the degree. The institute follows JNTUH's strict attendance regulation, which requires students to maintain a minimum class attendance of 75% to progress to the next semester. External (university) exams are held every semester on the campus, and consolidated results in all these exams count towards the final aggregate grading. Admissions The minimum criterion for admission is 50% marks in the Intermediate/10+2 examination. Students are admitted primarily on the basis of their ranks in the Common Entrance Test (EAMCET), held by JNTUH, Telangana every year. Departments The institute has 13 departments: Department of Aeronautical Engineering Department of Artificial Intelligence & Machine Learning Department of Computer Science & Engineering Department of Computer Science & Engineering - Cyber Security Department of Computer Science & Engineering - Data Science Department of Electronics and Communication Engineering Department of Information Technology Department of Computer Science & Information Technology Department of Mechanical Engineering Department of Electrical and Electronics Engineering Department of Mathematics and Humanities Department of Physical Education Department of R&D Industry Associations The institute has associations and MoUs with a number of companies in various fields for imparting professional-level training to students; these include IBM, Microsoft, Tata Advanced Systems, National Instruments and Dassault Systèmes, among others. External links Universities and colleges in Hyderabad, India 2005 establishments in Andhra Pradesh Educational institutions established in 2005
166087
https://en.wikipedia.org/wiki/Software%20development%20kit
Software development kit
A software development kit (SDK) is a collection of software development tools in one installable package. SDKs facilitate the creation of applications by providing a compiler, debugger, and perhaps a software framework. They are normally specific to a hardware platform and operating system combination. To create applications with advanced functionality such as advertisements, push notifications, and so on, most application software developers use specific software development kits. Some SDKs are required for developing a platform-specific app. For example, the development of an Android app on the Java platform requires a Java Development Kit. For iOS applications (apps) the iOS SDK is required. For the Universal Windows Platform, the .NET Framework SDK might be used. There are also SDKs that add additional features and can be installed in apps to provide analytics, data about application activity, and monetization options. Some prominent creators of these types of SDKs include Google, Smaato, InMobi, and Facebook. Details An SDK can take the form of application programming interfaces (APIs) in the form of on-device libraries of reusable functions used to interface to a particular programming language, or it may be as complex as hardware-specific tools that can communicate with a particular embedded system. Common tools include debugging facilities and other utilities, often presented in an integrated development environment (IDE). SDKs may include sample software and/or technical notes along with documentation and tutorials to help clarify points made by the primary reference material. SDKs often include licenses that make them unsuitable for building software intended to be developed under an incompatible license. For example, a proprietary SDK is generally incompatible with free software development, while a GPL-licensed SDK could be incompatible with proprietary software development, for legal reasons. However, SDKs built under the GNU Lesser General Public License (LGPL) are typically usable for proprietary development. In cases where the underlying technology is new, SDKs may include hardware. For example, AirTag's 2021 NFC SDK included both the paying and the reading halves of the necessary hardware stack. The average Android mobile app implements 15.6 separate SDKs, with gaming apps implementing on average 17.5 different SDKs. The most popular SDK categories for Android mobile apps are analytics and advertising. SDKs can be unsafe (because they are implemented within apps, yet run separate code). SDKs that misbehave, whether intentionally malicious or not, can violate users' data privacy, damage app performance, or even cause apps to be banned from Google Play or the App Store. New technologies allow app developers to control and monitor client SDKs in real time. Providers of SDKs for specific systems or subsystems sometimes substitute a more specific term instead of software. For instance, both Microsoft and Citrix provide a driver development kit (DDK) for developing device drivers. Notable examples Notable examples of software development kits for various platforms include: Android NDK iOS SDK Java Development Kit Java Web Services Development Pack Microsoft Windows SDK VaxTele SIP Server SDK Visage SDK Vuforia Augmented Reality SDK Windows App SDK Xbox Development Kit See also Application programming interface Game development kit Widget toolkit (or GUI toolkit) References Computer libraries Software development
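To make the preceding description concrete, here is a minimal illustrative sketch (not part of the original article) of application code calling into a vendor SDK rather than a raw web service. It assumes the AWS SDK for Python, boto3, is installed and that AWS credentials are already configured on the machine; the bucket listing is only an example of the kind of convenience wrapper an SDK provides.

import boto3  # third-party SDK package, assumed installed via "pip install boto3"

def list_bucket_names():
    # The SDK hides authentication, request signing, retries, and HTTP details
    # behind a small client object and ordinary method calls.
    s3 = boto3.client("s3")
    response = s3.list_buckets()
    return [bucket["Name"] for bucket in response["Buckets"]]

if __name__ == "__main__":
    for name in list_bucket_names():
        print(name)

Used this way, the SDK plays the role described above: it bundles documented library calls with the platform-specific plumbing, leaving the application code free of low-level protocol concerns.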
48639018
https://en.wikipedia.org/wiki/CircuitMaker
CircuitMaker
CircuitMaker is electronic design automation software for printed circuit board design, targeted at the hobby, hacker, and maker community. CircuitMaker is available as freeware, and the hardware designed with it may be used for commercial and non-commercial purposes without limitations. It is currently available publicly as version 2.0 from Altium Limited, with the first non-beta release made on January 17, 2016. History MicroCode CircuitMaker CircuitMaker, TraxMaker and SimCode were originally developed by the Orem-based MicroCode Engineering, Inc., beginning in 1988. CircuitMaker 5 for Windows 3.1, 9x and NT became available in 1997; CircuitMaker 6, CircuitMaker PRO, TraxMaker 3 and TraxMaker PRO followed in 1998. Protel CircuitMaker Electronic design automation (EDA) software developer Protel marketed CircuitMaker 2000, a schematic capture tool, together with TraxMaker, its PCB layout counterpart, as a powerful yet affordable solution for circuit board needs. Its ease of use and comparatively low cost quickly gained it popularity among students, and the software suite was commonly used to teach circuit board design to engineering students in universities. The wide availability of plug-ins and component libraries accelerated adoption, and the software quickly amassed a worldwide community. When Protel was renamed Altium Limited in the early 2000s, engineering efforts were redirected towards the development of DXP 2004, and CircuitMaker 2000 was eventually discontinued. Due to its new status as abandonware, CircuitMaker 2000 remained popular among hobby users and students. Altium observed this popularity, and the most successful features of CircuitMaker 2000 were integrated into DXP 2004 and later incorporated into Altium Designer. Altium CircuitMaker Open-source hardware and easy-to-use development boards such as the Arduino and the Raspberry Pi have increased community interest in electronics, particularly in fablabs, hackerspaces and makerspaces. The leading EDA software vendors traditionally lacked free versions, and professional licenses are unaffordable for amateurs. This resulted in high piracy rates for professional software packages, or in users sticking to outdated software, including CircuitMaker 2000. Several initiatives such as EAGLE attempted to fill this void by releasing restricted versions of semi-professional EDA tools, and the rise of KiCad further fragmented the market. This pressure eventually provided the incentive for Altium to release a simplified and more user-friendly version of its professional EDA software package and flagship product, Altium Designer, targeted at less complex circuit board projects. This culminated in the rebirth of CircuitMaker as schematic capture and PCB design software. Despite the resemblance in naming, the current CircuitMaker differs entirely from CircuitMaker 2000 in features and graphical user interface: the SPICE simulation module has been removed; the library system has been overhauled; and the controls changed from classic menus to a more modern and visually appealing ribbon interface. Merge with Upverter On 14 May 2018, Altium announced plans to merge CircuitMaker and Upverter into a single, free-to-use design platform. However, in a blog post on May 11, 2019, Altium COO Ted Pawela stated that the plans had evolved, and the products would remain separate, with interoperability features for the design files. 
Features CircuitMaker implements schematic capture and PCB design using the same engine as Altium Designer, providing an almost identical user experience. The schematic editor includes basic component placement and circuit design as well as advanced multi-channel design and hierarchical schematics. All schematics are uploaded to the Altium server and can be viewed by anyone with a CircuitMaker account, stimulating design re-use. CircuitMaker supports integration with the Octopart search engine and allows drag-and-drop placement of components from the Octopart search results if schematic models are attached to them. Users can build missing schematic symbols and commit them to the server, called the Community Vault, making them available to other users. The continuously growing part database eliminates the need for custom schematic symbol or footprint design for common parts, increasing user-friendliness for beginners. Concurrent editing was added in version 1.3, allowing multiple users to collaborate on a schematic or PCB document simultaneously and exchange thoughts through an integrated comment and annotation system. Transfer of schematics to a PCB is a straightforward process in CircuitMaker, since PCB footprints are automatically attached to any component on the schematic that was picked from the Octopart library. PCB footprints may have simple 3D models or complex STEP models attached to them, enabling real-time 3D rendering of the PCB during development. CircuitMaker supports design rule configuration and real-time design rule checking. Some advanced features, including differential pair routing, interactive length tuning, and polygon pour management, are also available. Production files can be exported directly, although an external Gerber viewer must be used to check the exports. The entire PCB can also be exported as a 3D STEP model for further use in mechanical 3D CAD software. CircuitMaker is only available for the Windows operating system, so users need access to a Windows license to run it. As of 2020, CircuitMaker can be run in Wine on Ubuntu, with limitations, but some users have reported that it does not work on their Linux distribution. Unofficial support for Linux and BSD users is provided by Altium staff and volunteers on the CircuitMaker forum. While users can import resources from competing EDA software packages, CircuitMaker does not support exporting design resources itself. A workaround for this limitation is provided by Altium Designer 15 and 16, which do support the import of CircuitMaker files. Open source hardware CircuitMaker requires a free account to represent its users in the community. An active internet connection is required to start and use the software. Users are allowed five private projects, a so-called sandbox mode for practicing. By default, all schematics and PCBs are uploaded to the server and can be viewed by other users as soon as they are committed through the internal SVN engine. While this renders CircuitMaker undesirable for closed-source projects, it encourages collaboration in the community. Users are allowed to fork existing projects or to request permission to collaborate on them. Importing schematic documents and PCBs from other EDA packages (OrCAD, PADS, P-CAD, EAGLE) is supported. Users are allowed to own unlimited projects, and there is no hard limit on board complexity. However, Altium warns that users may experience a performance drop for large projects. 
All documents are under version control by default, allowing users to revert changes made in their projects, and build new versions of existing schematic symbols or footprints in the Community Vault. Users can comment on each other's projects and parts, rate them, and propose improvements. CircuitMaker supports direct generation of production files in industry standard formats such as Gerber and NC Drill, as well as printing of stencils for DIY circuit board etching. See also Altium Limited Cloud storage Altium Designer Comparison of EDA software References External links Electronic design automation software Printed circuit board manufacturing
57191798
https://en.wikipedia.org/wiki/Marie-Claude%20Gaudel
Marie-Claude Gaudel
Marie-Claude Gaudel (born 1946) is a French computer scientist. She is a professor emerita at the University of Paris-Sud. She helped develop the PLUSS language for software specifications and was involved in both theoretical and applied computer science. Gaudel is still active in professional societies. Early life and education Marie-Claude Gaudel was born in 1946 in Nancy, France, into a family of scientists and mathematicians. She attended the University of Nancy and graduated with a master's degree in Mathematics and Fundamental Applications in 1968. She obtained three more degrees from the University of Nancy: a DEA in Mathematics in 1969, a postgraduate doctorate in Computer Science in 1971, and a State Doctorate (doctorat d'État) in 1980. Career In 1973, while still studying at the University of Nancy, Gaudel began working as a researcher at the French Institute for Research in Computer Science and Automation (INRIA). From 1981 to the beginning of 1984, Gaudel managed the Software Engineering group at the industrial research centre of Alcatel-Alsthom in Marcoussis, France. In 1984, she became a professor at the University of Paris-Sud at Orsay. Her work there focused on software testing, particularly testing based on formal specifications. In the 1980s and 1990s, Gaudel helped to develop the PLUSS language, which is used for software specifications, and the ASSPEGIQUE specification environment. She worked on both the theoretical and the practical sides of computer science, developing a theory of software testing and formal testing, and applying her insights to real-world industrial problems. Her research group also developed the LOFT system for selecting test data. In the 2000s, Gaudel worked on three main projects: she tested software specified in the Circus language with researchers from the University of York, researched approximate software verification, and developed algorithms for random software testing and analysis. Gaudel retired from the University of Paris-Sud in March 2007 but continues to be a member of a number of programme committees, including serving as chair for several conferences on formal testing. She is an editor for the journals Science of Computer Programming and Formal Aspects of Computing and continues to be active in the scientific community. Awards and honors Doctor Honoris Causa from EPFL, Switzerland, 1995 Silver Medal of the CNRS, 1996 Knight of the Legion of Honour, 2011 Honorary Member of the Société Informatique de France, 2013 Doctor Honoris Causa from the University of York, UK, 2013 Selected publications Gaudel has authored or co-authored numerous publications during her time at the University of Paris-Sud and since her retirement. Some of the most cited are listed below: G. Bernot, M.-C. Gaudel and B. Marre (1991): "Software Testing Based on Formal Specifications: A theory and a tool," Software Engineering Journal, vol. 9, no. 6, pp. 387–405. M.-C. Gaudel (1995): "Testing can be formal, too", Colloquium on Trees in Algebra and Programming, pp. 82–96. L. Bougé, N. Choquet, L. Fribourg, and M.-C. Gaudel (1986): "Test sets generation from algebraic specifications using logic programming", Journal of Systems and Software, vol. 6, no. 4, pp. 343–360. References 1946 births Living people Nancy-Université alumni University of Paris faculty Chevaliers of the Légion d'honneur French computer scientists French women computer scientists
59227784
https://en.wikipedia.org/wiki/Manchester%20%281805%20ship%29
Manchester (1805 ship)
Manchester was originally built at Falmouth in 1805, and served the Post Office Packet Service. Hence, she was generally referred to as a packet ship, and often as a Falmouth packet. In 1813 an American privateer captured her after a single-ship action, but the British Royal Navy recaptured her quickly. She returned to the packet trade until 1831 when she became a whaler, making one whaling voyage to the Seychelles. From 1835 she was a merchantman, trading between London and Mauritius. She was last listed in 1841. Career Because packet ships sailed under contract with the Royal Mail, they did not carry marine insurance. Lloyd's Register first listed Manchester only in 1812. At that time it showed her master as Elphinstone, her owner as Carne & Co., and her trade as Falmouth–Cadiz. Still, there were earlier references in Lloyd's List. For instance, on 20 June 1806, Lloyd's List reported that the "Manchester Packet" had arrived at Falmouth from Tortola, which she had left on 18 May. On 9 September 1810 Captain Richard L. Davies sailed Manchester from Falmouth for Brazil. She was at Madeira on 21–22 September, and Bahia on 12–23 October. She arrived at Rio de Janeiro on 2 December and left on 20 December. She returned to Falmouth on 13 January 1813. Cadiz experienced a terrible gale between 27 and 29 March 1811. Many vessels were damaged, including the "Manchester Packet", which lost her foremast, bowsprit, etc. Capture and recapture On 24 June 1813 Manchester encountered the American privateer Yorktown, of 500 tons (bm), 16 guns, and 116 men. After a 20-hour running fight and 67-minute close engagement, Manchester struck at , after first having thrown her mails overboard. She had had a passenger and two crew members slightly wounded. Three days earlier Yorktown had captured Lavinia, Connell, master, from Saint John's, Newfoundland, to Oporto. Yorktown put prize crews on Manchester, Lavinia, and some other prizes, and sent them to America. On the 27th, Yorktown captured Apollo, Aikin, master, at . Apollo had been sailing from New Providence to London. Yorktown put on Apollo all the prisoners from the prizes she had taken and sent Apollo to England. Apollo arrived at Falmouth on 9 July. According to the London Gazette, , , and recaptured Manchester on 18 August. Other records confirm that Manchester was a packet brig, R. Elphinstone, master, and represented a recapture. Manchester had been a prize to the American privateer Yorktown. also recaptured Lavinia. On 5 September 1814 Captain Robert P.R. Elphinston sailed from Falmouth for Brazil. She was at Madeira on 8–9 November. She sailed on to Rio de Janeiro, which she left on 29 November. On 20 December she fought off an American privateer at , about 200 miles SSE of Salvador. She then put into Bahia for repairs. She left Bahia on 3 January 1815 and arrived back at Falmouth on 16 March. Captain Elphinston sailed Manchester from Falmouth on 18 January 1816, bound for Brazil. She was at Madeira on 14–15 February, and Bahia on 10–13 March. She arrived at Rio de Janeiro on 22 March and left on 7 April. She returned to Falmouth on 15 June. Captain Elphinston sailed Manchester from Falmouth on 19 August 1817, bound for Brazil. She was at Madeira on 4–5 September, and Bahia on 14–16 October. She left Rio de Janeiro on 13 November. She was at Fowey on 15 January 1818 and returned to Falmouth on 18 January. Captain Elphinston sailed Manchester from Falmouth on 21 December 1820, bound for Brazil. 
She was at Madeira on 15–16 January 1821, and reached Pernambuco on 6 February. She was at Rio de Janeiro between 17 February and 4 March. She arrived back at Falmouth on 2 June. In February 1822 "Manchester Packet" was refloated without damage, after going ashore at Falmouth in a gale. Captain Elphinston sailed Manchester from Falmouth on 11 August 1822, bound for Brazil. She was at Madeira on 21–22 August, and reached Pernambuco on 18 September. She left Rio de Janeiro on 21 October. She arrived back at Falmouth on 9 December. Captain Elphinston sailed Manchester from Falmouth on 8 June 1823, bound for Brazil. She was at Madeira on 16–17 June and Teneriffe on 19 June. She was at Rio de Janeiro between 1 and 10 August. She left Bahia on 23 August and Pernambuco on 1 September. She arrived back at Falmouth on 11 October. Whaler By 1831 Blyth & Co. had acquired Manchester. She had undergone a repair in 1830 and her new owners decided to employ her as a whaler in the Southern Whale Fishery, under the command of Captain Brown. Brown sailed Manchester from London in 1832, bound for the Seychelles. Manchester returned on 2 May 1834 with at least 700 barrels of whale oil. Lloyd's Register for 1833 showed her trade as London–New South Wales, but this may represent only a part of her voyage. Lloyd's Register for 1834 showed Manchester's master changing from Brown to Livesay, and her trade as London–Mauritius. In 1834 the British East India Company (EIC) had lost the last vestiges of its control over the trade between England and the Far East and had exited the shipping business. Manchester, therefore, did not need a license from the EIC to trade with Mauritius. Merchantman Fate Manchester was last listed in 1841 with data unchanged from 1840. Notes, citations, and references Notes Citations References 1805 ships Age of Sail merchant ships of England Captured ships Whaling ships Packet (sea transport) Falmouth Packets
104587
https://en.wikipedia.org/wiki/Muscle%20Shoals%2C%20Alabama
Muscle Shoals, Alabama
Muscle Shoals is the largest city in Colbert County, Alabama, United States. It is located along the Tennessee River in the northern part of the state and, as of the 2010 census, the population of Muscle Shoals was 13,146. The estimated population in 2019 was 14,575. Both the city and the Florence-Muscle Shoals Metropolitan Area (including four cities in Colbert and Lauderdale counties) are commonly called "the Shoals". Northwest Alabama Regional Airport serves the Shoals region, located in the northwest section of the state. Since the 1960s, the city has been known for music. Local studios and artists developed the "Muscle Shoals Sound", including FAME Studios in the late 1950s and Muscle Shoals Sound Studio in 1969. They produced hit records that shaped the national and international history of popular music. Due to its strategic location along the Tennessee River, Muscle Shoals had long been territory of historic Native American tribes. In the late 18th and early 19th centuries, as Europeans entered the area in greater numbers, it became a center of historic land disputes. The new state of Georgia had ambitions to anchor its western claims (to the Mississippi River) by encouraging European-American development here, but that project did not succeed. In 1922 Muscle Shoals was the site of an attempted community development project by Henry Ford. He wanted to take over a dam and other infrastructure built by the War Department but did not like the terms offered. Due to Ford's influence in the area, some streets were named after streets in Detroit, Michigan. As in Detroit, Woodward Avenue is the name of the main road through the city. Ford conceived of a 75-mile industrial corridor from Decatur, Alabama, to the tri-state border of Pickwick Lake. Under President Franklin D. Roosevelt's administration during the Great Depression, the Tennessee Valley Authority was established to create infrastructure and jobs, resulting in electrification of a large rural area along the river. The Ford Motor Company did build and operate a plant for many years in the Listerhill community, three miles east of Muscle Shoals; it closed in 1982 as part of industrial restructuring when jobs moved out of the country. Etymology There are several explanations as to how the city got its name. One is that it was named for a former natural feature of the Tennessee River, a shallow zone where mussels were gathered, which settlers named Muscle Shoals. When the area was first settled, the distinct spelling of "mussel" to refer to the shellfish had not yet been locally adopted. Cherokee people knew of this place as ᏓᎫᎾᏱ, "the place of clams or mussels". History Like other areas along waterways, this site had long been important to indigenous peoples. The area of Muscle Shoals was a part of the historic Cherokee hunting grounds dating to at least the early eighteenth century, if not earlier. Many Cherokee fought against the rebels during the late American Revolutionary War, hoping to expel them from their territories. After the Revolution, Cherokee attitudes toward the new U.S. republic were divided, as settlers increasingly encroached on their territory. An anti-American faction, dubbed the Chickamauga, separated from more conciliatory Cherokees, and moved into present-day south-central and southeastern Tennessee. Most of this band settled along the Chickamauga River, from which their name was derived. They claimed Muscle Shoals as part of their domain. 
When Anglo-Americans attempted to settle the region in the 1780s and 1790s, the Chickamaugas bitterly resisted them. The Upper Creek, residing in what is now north and central Alabama, also resented any European or Euro-American presence in the region. A major incident occurred in 1790, when U.S. President George Washington sent an expedition under Major John Doughty in an attempt to establish a fort and trading post at Muscle Shoals. This expedition was nearly annihilated by a Chickamauga and Creek party sent to destroy it, and the administration abandoned the project. Anglo-American settlers in Tennessee continued to agitate for control of this region. The site was particularly desirable, as it controlled access to fine cotton-producing land immediately to its south. In 1797, John Sevier, the first governor of Tennessee, complained to Andrew Jackson that "The prevention of a settlement at or near the Muscle Shoals is a manifest injury done the whole western country." At Sevier's behest, Jackson attempted to persuade Congress and President John Adams to fund a new expedition to take control of the site, but to no avail. U.S. officials finally took control of the region in the wake of the U.S. invasion of Creek country during the War of 1812. Jackson and General John Coffee obtained cession of the land from both the Cherokee and Creek (who had continued to dispute possession) by treaty, without permission from the federal government. Secretary of War William H. Crawford refused to recognize the cession, and reconfirmed Cherokee ownership, leading to personal enmity between him and Jackson. The political struggle over the lands was eventually won by Jackson and his backers, who gained passage in Congress of the Indian Removal Act in 1830. When Jackson, as president, implemented the policy of Indian Removal, Muscle Shoals was used as a site from which to exile the Upper Creek to Indian Territory (now Oklahoma). During World War I, President Wilson authorized a dam on the Tennessee River just downstream of Muscle Shoals to help power nitrate plants for munitions. The first plant started producing nitrates two weeks after the armistice, but the dam was not completed until 1924. Meanwhile, in 1922 Henry Ford tried to buy the nitrate works and the unfinished dam. The Michigan car manufacturer and industrialist proposed leasing the uncompleted hydro-electric dam at Muscle Shoals on the Tennessee River in Alabama. The US War Department had begun the project during World War I, and engineers estimated a cost of $40 million to complete. At this time, public projects were financed either through raising taxes, which Congress was unwilling to do at the time, or by issuing bonds. For the Muscle Shoals project, the proposal was for 30-year bonds at 4% interest. Ford and his friend and fellow inventor Thomas Edison balked at the idea that the US government should have to pay $48 million in interest on top of the $40 million they would have to pay back, all for a project that would benefit the public (the argument being that the hydro-electric dam and accompanying fertilizer plants would create jobs and revitalize the area). Responding to the bond issue, Edison remarked: “Any time we wish to add to the national wealth, we are compelled to add to the national debt.” Edison and Ford hoped that a new monetary system could be created where dollar bills were issued directly to workers and manufacturers, with the money being backed by the goods they produced rather than the gold and silver held in bank vaults. 
Congress eventually rejected Ford's idea. The project of area development based on hydroelectric power languished until the Great Depression. President Franklin D. Roosevelt's administration created the Tennessee Valley Authority in 1933 to construct needed infrastructure and install an electrical system in the rural area, using newly generated electricity from the dam complex. Music Residents in Muscle Shoals created two studios that have worked with numerous artists to record many hit songs from the 1960s to today. These are FAME Studios, founded by Rick Hall, where Arthur Alexander, Percy Sledge, Aretha Franklin, Wilson Pickett, Otis Redding and numerous others recorded; and Muscle Shoals Sound Studio, founded by the musicians known as The Swampers. They worked with Bob Dylan, Paul Simon, Rod Stewart, the Rolling Stones, The Allman Brothers, and others. In addition to being home to the country music band Shenandoah, the city has been a destination for numerous artists to write and record. Both FAME Studios and Muscle Shoals Sound Studio are still in operation in the city. They have recorded recent hit songs such as "Before He Cheats" by Carrie Underwood and "I Loved Her First" by Heartland, continuing the city's musical legacy. George Michael recorded an early, unreleased version of "Careless Whisper" with Jerry Wexler in Muscle Shoals in 1983. Bettye LaVette recorded her Grammy-nominated album "Scene of the Crime" at FAME in 1972. The original Muscle Shoals Sound Studios were located at 3614 Jackson Highway in Sheffield, but that site was closed in 1979 when the studio relocated to 1000 Alabama Avenue in Sheffield. The studio in the Alabama Avenue building closed in 2005; it houses a movie production company, which also hosts tours and concerts at the venue. Muscle Shoals encouraged the cross-pollination of musical styles: black artists from the area, such as Arthur Alexander and James Carr, used white country music styles in their work, and white artists from the Shoals frequently borrowed from the blues/gospel influences of their black contemporaries, creating a distinct sound. Sam Phillips, founder of Sun Records, was born in and lived in the area. He stated that the Muscle Shoals radio station WLAY (AM), which played both "white" and "black" music, and where he worked as a disc jockey in the 1940s, influenced his merging of these sounds at Sun Records with Elvis Presley, Jerry Lee Lewis, and Johnny Cash. Rolling Stone editor David Fricke wrote that if one wanted to play a single recording that would "epitomize and encapsulate the famed Muscle Shoals Sound", that record would be "I'll Take You There" by The Staple Singers in 1972. After hearing that song, American songwriter Paul Simon phoned his manager and asked him to arrange a recording session with the musicians who had performed it. Simon was surprised to learn that he would have to travel to Muscle Shoals to work with the artists. After arriving in the small town, he was introduced to the Muscle Shoals Rhythm Section ("The Swampers") who had recorded this song with Mavis Staples. Expecting black musicians (the original Rhythm Section consisted only of white musicians), and assuming that he had been introduced to the office staff, Simon politely asked to "meet the band". Once things were sorted out, Simon recorded a number of tracks with the Muscle Shoals band, including "Kodachrome" and "Loves Me Like a Rock". 
When Bob Dylan told his record label that he intended to record Christian music, the label executives insisted that if he planned to pursue the project, he must, at least, record the work in Muscle Shoals. They believed this would provide the work "some much-needed credibility". Dylan had not previously expressed a religious attitude, and the executives feared that his new work might be taken as satirical. Recording in the Bible Belt, they thought, might avert a disaster. Dylan subsequently recorded two Christian albums at Muscle Shoals Sound Studios, Slow Train Coming (1979) and Saved (1980). In the early 21st century, Florence native Patterson Hood, son of "Swamper" David Hood, found fame as a member of the alternative rock group Drive-By Truckers. Siblings and Muscle Shoals natives Angela Hacker (winner) and Zac Hacker (second place) were the top two finalists on the 2007 season of Nashville Star, a country-music singing competition. In 2008, State Line Mob, a Southern rock duo formed by singer-songwriters Phillip Crunk (a Florence native) and Dana Crunk (a Rogersville native), released their first CD, Ruckus, and won two Muscle Shoals Music Awards for 2008: Best New Artist and Best New Country Album of the year. Band of Horses recorded a portion of their album Infinite Arms at Muscle Shoals. Artists signed to the FAME label in 2017 include Holli Mosley, Dylan LeBlanc, Jason Isbell, Angela Hacker, Gary Nichols, and James LeBlanc. Although Muscle Shoals is no longer the "Hit Recording Capital of the World" (as it was in the 1960s and 1970s), the music continues. Groups and artists include Drive-By Truckers, The Civil Wars, Dylan LeBlanc, Gary Nichols, Jason Isbell, State Line Mob, Eric "Red Mouth" Gebhardt, Fiddleworms, and BoomBox. A number of rock, R&B and country music celebrities have homes in the area surrounding Muscle Shoals (Tuscumbia), or riverside estates along the Tennessee River. They may be seen performing in area nightclubs, typically rehearsing new material. Sister city Florence, Alabama, is frequently referred to as "the birthplace of the Blues". W. C. Handy was born in Florence and is generally regarded as the "Father of the Blues". Every year since 1982, the W. C. Handy Music Festival has been held in the Florence/Sheffield/Muscle Shoals area, featuring blues, jazz, country, gospel, rock music and R&B. The roster of jazz musicians known as the "Festival All-Stars", or as the W. C. Handy Jazz All-Stars, includes musicians from all over the United States, such as guitarist Mundell Lowe, drummer Bill Goodwin, pianist/vocalist Johnny O'Neal, vibraphone player Chuck Redd, pianist/vocalist Ray Reach, and flutist Holly Hofmann. On January 6, 2010, Muscle Shoals was added to the Mississippi Blues Trail. After FAME studio founder Rick Hall died in early 2018, The New Yorker concluded its retrospective with this comment: "Muscle Shoals remains remarkable not just for the music made there but for its unlikeliness as an epicenter of anything; that a tiny town in a quiet corner of Alabama became a hotbed of progressive, integrated rhythm and blues still feels inexplicable. Whatever Hall conjured there—whatever he dreamt, and made real—is essential to any recounting of American ingenuity. It is a testament to a certain kind of hope." Al.com commented that Hall is survived by his family "and a Muscle Shoals music legacy like no other". 
An editorial in the Anniston Star concludes with this epitaph: "If the world wants to know about Alabama – a state seldom publicized for anything but college football and embarrassing politics – the late Rick Hall and his legacy are worthy models to uphold". 3614 Jackson Highway Studio The original location of Muscle Shoals Sound Studios in Sheffield has been listed on the National Register of Historic Places since June 2006. From the early 2000s to 2013, it had been partly restored and open for tours. In 2013, the documentary Muscle Shoals raised public interest in a major restoration of the original studio. In the same year, the Muscle Shoals Music Foundation was formed to raise funds to purchase the building and to complete major renovations. In June 2013, the property's owner since 1999 sold it, without the historic recording equipment, to the Foundation. A grant from Beats Electronics, a headphone manufacturer founded by Dr. Dre and Jimmy Iovine and owned by Apple Inc., provided an essential $1 million. The state tourism director said in 2015 that the 2013 film Muscle Shoals had been a significant influence. "The financial support from Beats is a direct result of their film." Additional donations were made by other groups and individuals. Tours were still visiting the partly restored studio on Jackson Highway. It was closed when major restoration work started in September 2015. Muscle Shoals Sound Studio reopened as a finished tourist attraction on January 9, 2017. The studio is owned and operated by the foundation, and its interior is reminiscent of the 1970s, with relevant recording equipment and paraphernalia. There are plans for future recording projects. Even before the Jackson Highway studio reopened, the Alabama Tourism Department named Muscle Shoals Sound Studio as the state's top attraction in 2017. The Swampers The members of the Muscle Shoals Rhythm Section were Pete Carr (lead guitar), Jimmy Johnson (rhythm guitar), Roger Hawkins (drums), David Hood (bass guitar) and Barry Beckett (keyboards). Affectionately called The Swampers, the Muscle Shoals Rhythm Section was a local group of first-call studio musicians (initially working at FAME and then at Muscle Shoals Sound Studios) who were available for back-up. They were given the nickname The Swampers by music producer Denny Cordell during the Leon Russell sessions because of their "funky, soulful Southern 'swamp' sound". In the song "Sweet Home Alabama" by Lynyrd Skynyrd, a verse states: Muscle Shoals has got the Swampers. And they've been known to pick a song or two. Lord, they get me off so much, They pick me up when I'm feelin' blue. Now, how 'bout you? When Lynyrd Skynyrd recorded at Muscle Shoals Sound Studios once early in their career, they saw the various gold and platinum records on the walls bearing the words "To The Swampers", and later included the nickname in the song as a tribute. Geography Muscle Shoals is located on the south bank of the Tennessee River at . According to the U.S. Census Bureau, the city has a total area of , of which , or 0.13%, is water. The local hardiness zone is 7b. Demographics 2020 census As of the 2020 United States census, there were 16,275 people, 5,371 households, and 3,738 families residing in the city. 2010 census As of the census of 2010, there were 13,146 people, 5,321 households, and 3,769 families residing in the city. The population density was 845.4 per square mile (326.4/km2). 
There were 5,653 housing units at an average density of 363.5 per square mile (140.4/km2). The racial makeup of the city was 80.6% White, 15.3% Black or African American, 0.3% Native American, 0.9% Asian, 1.3% from other races, and 1.6% Hawaiian or Pacific Islander. Hispanic or Latino of any race were 2.7% of the population. There were 5,321 households, out of which 31.1% had children under the age of 18 living with them, 54.4% were married couples living together, 12.9% had a female householder with no husband present, and 29.2% were non-families. 26.2% of all households were made up of individuals, and 11.3% had someone living alone who was 65 years of age or older. The average household size was 2.44 and the average family size was 2.93. In the city, the population was spread out, with 23.6% under the age of 18, 8.1% from 18 to 24, 24.9% from 25 to 44, 27.3% from 45 to 64, and 16.0% who were 65 years of age or older. The median age was 40.1 years. For every 100 females, there were 90.5 males. For every 100 females age 18 and over, there were 91.9 males. The median income for a household in the city was $48,134, and the median income for a family was $60,875. Males had a median income of $41,061 versus $37,576 for females. The per capita income for the city was $23,237. About 8.3% of families and 10.6% of the population were below the poverty line, including 19.9% of those under age 18 and 4.8% of those age 65 or over. 2000 census As of the census of 2000, there were 11,924 people, 4,710 households and 3,452 families residing in the city. The population density was 979.7 per square mile (378.3/km2). There were 5,010 housing units at an average density of 411.6 per square mile (158.9/km2). The racial makeup of the city was 83.88% White, 14.16% Black or African American, 0.38% Native American, 0.56% Asian, 0.31% from other races, and 0.70% from two or more races. Hispanic or Latino of any race were 1.16% of the population. There were 4,710 households, out of which 34.7% had children under the age of 18 living with them, 59.4% were married couples living together, 11.3% had a female householder with no husband present, and 26.7% were non-families. 23.8% of all households were made up of individuals, and 8.2% had someone living alone who was 65 years of age or older. The average household size was 2.48 and the average family size was 2.95. In the city, the population was spread out, with 24.8% under the age of 18, 8.6% from 18 to 24, 29.6% from 25 to 44, 23.9% from 45 to 64, and 13.1% who were 65 years of age or older. The median age was 37 years. For every 100 females, there were 88.9 males. For every 100 females age 18 and over, there were 86.1 males. The median income for a household in the city was $40,210, and the median income for a family was $48,113. Males had a median income of $38,063 versus $21,933 for females. The per capita income for the city was $21,113. About 5.4% of families and 7.3% of the population were below the poverty line, including 8.1% of those under age 18 and 7.2% of those age 65 or over. Schools The Muscle Shoals City School District is currently led by Superintendent Dr. Chad Holden. There are seven schools in the district: Muscle Shoals High School Muscle Shoals Career Academy Muscle Shoals Middle School McBride Elementary School Highland Park Elementary School Webster Elementary School Howell Graves Preschool Transportation The city is served by Northwest Alabama Regional Airport, which is one mile east from the town and is served by one commercial airline. 
Representation in other media Muscle Shoals is where The Black Keys filmed their music video for the song "Lonely Boy" (2011). It was recorded outside a motel, and stars a local security guard from the city named Derrick T. Tuggle, who is dancing and lip-syncing the song. Muscle Shoals (2013) is an American documentary film about FAME Studios and Muscle Shoals Sound Studio in this city. Directed by Greg 'Freddy' Camalier, the film was released by Magnolia Pictures. Notable people Jason Allen, former University of Tennessee and former NFL player Boyd Bennett, rockabilly singer Levi Colbert, Chickasaw Bench Chief Rece Davis, ESPN commentator (QB for the Trojans' football squad) Alecia Elliott, country music singer Dennis Homan, Alabama All-America wide receiver and Dallas Cowboys' player Patterson Hood, singer-songwriter, co-founder of the Drive-By Truckers Ozzie Newsome, American football player, general manager & executive VP for the Baltimore Ravens Gary Nichols, country music singer Leigh Tiffin, American football placekicker Chris Tompkins, songwriter Steve Trash, magician, environmental activist, children's entertainer. Kim Tribble, country music songwriter John Wyker, musician Rachel Wammack, country music singer-songwriter Donna Godchaux, singer for the Grateful Dead from 1972-1979 References External links Article about Muscle Shoals written by Ernest Hemingway City of Muscle Shoals official website Muscle Shoals City Schools Shoals Music Magazine, publication dedicated to covering the Muscle Shoals Sound Cities in Alabama Cities in Colbert County, Alabama Florence–Muscle Shoals metropolitan area Populated places established in 1918 Muscle Shoals National Heritage Area Alabama populated places on the Tennessee River
609188
https://en.wikipedia.org/wiki/Kinemage
Kinemage
A kinemage (short for kinetic image) is an interactive graphic scientific illustration. It is often used to visualize molecules, especially proteins, although it can also represent other types of 3-dimensional data (such as geometric figures, social networks, or tetrahedra of RNA base composition). The kinemage system is designed to optimize ease of use, interactive performance, and the perception and communication of detailed 3D information. The kinemage information is stored in a text file, human- and machine-readable, that describes the hierarchy of display objects and their properties, and includes optional explanatory text. The kinemage format is a defined chemical MIME type of 'chemical/x-kinemage' with the file extension '.kin'. Early history Kinemages were first developed by David Richardson at Duke University School of Medicine for the Protein Society's journal Protein Science, which premiered in January 1992. For its first five years (1992–1996), each issue of Protein Science included a supplement on floppy disk of interactive, kinemage 3D computer graphics to illustrate many of the articles, plus the Mage software (cross-platform, free, open-source) to display them; kinemage supplementary material is still available on the journal web site. Mage and RasMol were the first widely used macromolecular graphics programs to support interactive display on personal computers. Kinemages are used for teaching, textbook supplements, individual exploration, and analysis of macromolecular structures. Research uses More recently, with the availability of many other molecular graphics tools, presentation use of kinemages has been overtaken by a wide variety of research uses, concomitant with new display features and with the development of software that produces kinemage-format output from other types of molecular calculations. All-atom contact analysis adds and optimizes explicit hydrogen atoms, and then uses patches of dot surface to display the hydrogen bond, van der Waals, and steric clash interactions between atoms. The results can be used visually (in kinemages) and quantitatively to analyze the detailed interactions between molecular surfaces, most extensively for the purpose of validating and improving the molecular models from experimental x-ray crystallography data. Both Mage and KiNG (see below) have been enhanced for kinemage display of data in more than three dimensions (moving between views in various 3-D projections, coloring and selecting candidate clusters of datapoints, and switching to a parallel coordinates representation), used, for instance, to define clusters of favorable RNA backbone conformations in the 7-dimensional space of backbone dihedral angles between one ribose and the next. Online web use KiNG is an open-source kinemage viewer, written in the programming language Java by Ian Davis and Vincent Chen, that can work interactively either standalone on a user machine with no network connection, or as a web service in a web page. The interactive nature of kinemages is their primary purpose and attribute. To appreciate their nature, the "KiNG in browser" demonstration has two examples that can be moved around in 3D, plus instructions for how to embed a kinemage in a web page. The figure below shows KiNG being used to remodel a lysine sidechain in a high-resolution crystal structure. KiNG is one of the viewers provided on each structure page at the Protein Data Bank site, and displays validation results in 3D on the MolProbity site. 
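Because a kinemage is just a structured plain-text file, simple kinemages can be generated programmatically and then opened in a viewer such as Mage or KiNG. The short sketch below is not taken from the article: the keyword lines follow the publicly documented kinemage vocabulary (@kinemage, @group, @balllist), but the exact point syntax and attribute spellings are assumptions that should be checked against the format description on the kinemage web site before real use.

def write_toy_kinemage(path="toy.kin"):
    # Assumed layout: a header, one group, and one ball list whose points
    # are written as "{label} x y z". Verify details against the official spec.
    lines = [
        "@kinemage 1",
        "@caption A toy kinemage: three points on a diagonal.",
        "@group {demo}",
        "@balllist {points} color= red radius= 0.2",
        "{A} 0.0 0.0 0.0",
        "{B} 1.0 1.0 1.0",
        "{C} 2.0 2.0 2.0",
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    write_toy_kinemage()  # the resulting .kin file could then be opened in a kinemage viewer

Because the format is plain text, such files can be produced by scripts, inspected in any editor, and versioned alongside the data they visualize, which is part of what made kinemages practical as journal supplements and web content.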
Kinemages can also be shown in immersive virtual reality systems, with the open-source KinImmerse software. All of the kinemage display and all-atom contact software is available free and open-source on the kinemage web site. See also Molecular graphics Ribbon diagram Comparison of software for molecular mechanics modeling References External links Duke University original, with examples and software kinemage example in a browser RCSB Protein Data Bank MolProbity: structure validation, with KiNG on-line kinemages Chemistry software