1513755
https://en.wikipedia.org/wiki/Multiuser%20DOS
Multiuser DOS
Multiuser DOS is a real-time multi-user multi-tasking operating system for IBM PC-compatible microcomputers. An evolution of the older Concurrent CP/M-86, Concurrent DOS and Concurrent DOS 386 operating systems, it was originally developed by Digital Research and acquired and further developed by Novell in 1991. Its ancestry lies in the earlier Digital Research 8-bit operating systems CP/M and MP/M, and the 16-bit single-tasking CP/M-86, which evolved from CP/M. When Novell abandoned Multiuser DOS in 1992, the three master value-added resellers (VARs) DataPac Australasia, Concurrent Controls and Intelligent Micro Software were allowed to take over and continued independent development into Datapac Multiuser DOS and System Manager, CCI Multiuser DOS, and IMS Multiuser DOS and REAL/32. The FlexOS line, which evolved from Concurrent DOS 286 and Concurrent DOS 68K, was sold off to Integrated Systems, Inc. (ISI) in July 1994. Concurrent CP/M-86 The initial version of CP/M-86 1.0 (with BDOS 2.x) was adapted and became available for the IBM PC in 1982. It was commercially unsuccessful as IBM's PC DOS 1.0 offered much the same facilities for a considerably lower price. Like PC DOS, CP/M-86 did not fully exploit the power and capabilities of the new 16-bit machine. It was soon supplemented by an implementation of CP/M's multitasking 'big brother', MP/M-86 2.0, which had been available since September 1981. This turned a PC into a multiuser machine capable of supporting multiple concurrent users using dumb terminals attached by serial ports. The environment presented to each user made it seem as if they had the entire computer to themselves. Since terminals cost a fraction of the then-substantial price of a complete PC, this offered considerable cost savings, as well as facilitating multi-user applications such as accounts or stock control in a time when PC networks were rare, very expensive and difficult to implement. CP/M-86 1.1 (with BDOS 2.2) and MP/M-86 2.1 were merged to create Concurrent CP/M-86 3.0 (also known as CCP/M-86) with BDOS 3.0 in late 1982. Kathryn Strutynski, the project manager for CP/M-86, was also the project manager for Concurrent CP/M-86. One of its designers was Francis "Frank" R. Holsworth. Initially, this was a single-user operating system supporting true multi-tasking of up to four (in its default configuration) CP/M-86 compatible programs. Like its predecessors, it could be configured for multi-processor support (see e.g. Concurrent CP/M-86/80) and also added "virtual screens", letting an operator switch between the interactions of multiple programs. Later versions supported dumb terminals and so could be deployed as multiuser systems. Concurrent CP/M-86 3.1 (BDOS 3.1) shipped on 21 February 1984. Adaptations Concurrent CP/M-86 with Windows In February 1984, Digital Research also offered a version of Concurrent CP/M-86 with windowing capabilities named Concurrent CP/M with Windows for the IBM Personal Computer and Personal Computer XT. Concurrent CP/M-86/80 This was an adaptation of Concurrent CP/M-86 for the LSI-M4, LSI Octopus and CAL PC computers. These machines had both 16-bit and 8-bit processors, because in the early days of 16-bit personal computing, 8-bit software was more available and often ran faster than the corresponding 16-bit software. Concurrent CP/M-86/80 allowed users to run both CP/M (8-bit) and CP/M-86 (16-bit) applications. 
When a command was entered, the operating system ran the corresponding application on either the 8-bit or the 16-bit processor, depending on whether the executable file had a .COM or .CMD extension. It emulated a CP/M environment for 8-bit programs by translating CP/M system calls into CP/M-86 system calls, which were then executed by the 16-bit processor. Concurrent DOS In August 1983, Bruce Skidmore, Raymond D. Pedrizetti, Dave Brown and Gordon Edmonds teamed up to create PC-MODE, an optional module for Concurrent CP/M-86 3.1 (with BDOS 3.1) to provide basic compatibility with PC DOS 1.1 (and MS-DOS 1.1). This was shown publicly at COMDEX in December 1983 and shipped in March 1984 as Concurrent DOS 3.1 (a.k.a. CDOS with BDOS 3.1) to hardware vendors. Simple DOS applications, which did not directly access the screen or other hardware, could be run. For example, although a console program such as PKZIP worked perfectly and offered more facilities than the CP/M-native ARC archiver, applications which performed screen manipulations, such as the WordStar word processor for DOS, would not, and native Concurrent CP/M (or CP/M-86) versions were required. While Concurrent DOS 3.1 up to 4.1 had been developed in the US, OEM adaptations and localizations were carried out by DR Europe's OEM Support Group in Newbury, UK, since 1983. Digital Research positioned Concurrent DOS 4.1 with GEM as an alternative to IBM's TopView in 1985. Concurrent PC DOS Concurrent DOS 3.2 (with BDOS 3.2) in 1984 was compatible with applications for CP/M-86 1.x, Concurrent CP/M-86 3.x and PC DOS 2.0. It was available for many different hardware platforms. The version with an IBM PC compatible BIOS/XIOS was named Concurrent PC DOS 3.2. Kathryn Strutynski was the product manager for Concurrent PC DOS. Concurrent DOS 68K and FlexOS 68K A cooperation with Motorola begun in 1984 led to the development of Concurrent DOS 68K in Austin, Texas, as a successor to CP/M-68K, written in C. One of its main architects was Francis "Frank" R. Holsworth (using siglum FRH). Concurrent DOS 68K 1.0 became available for OEM evaluation in early 1985. The effort received considerable funding worth several million dollars from Motorola and was designed for their 68000/68010 processors. Like the earlier GEMDOS system for 68000 processors, it initially ran on the Motorola VME/10 development system. Concurrent DOS 68K 1.20/1.21 was available in April 1986 and was offered to OEMs. This system evolved into FlexOS 68K in late 1986. Known versions include: Concurrent DOS 68K 1.0 (1985) Concurrent DOS 68K 1.1 Concurrent DOS 68K 1.20 (April 1986, 1986-05-27) Concurrent DOS 68K 1.21 (1986) Concurrent DOS 286 and FlexOS 286 In parallel with the Concurrent DOS 68K effort, Digital Research also previewed Concurrent DOS 286 in cooperation with Intel in January 1985. This was based on MP/M-286 and Concurrent CP/M-286, on which Digital Research had worked since 1982. Concurrent DOS 286 was a complete rewrite in the C language based on a new system architecture with dynamically loadable device drivers instead of a static BIOS or XIOS. One of its main architects was Francis "Frank" R. Holsworth. The operating system would function strictly in 80286 native mode, allowing protected mode multi-user, multitasking operation while running 8086 emulation. 
While this worked on the B-1 step of prototype chip samples, Digital Research, with evaluation copies of their operating system already shipping in April, discovered problems with the emulation on the production level C-1 step of the processor in May, which would not allow Concurrent DOS 286 to run 8086 software in protected mode. The release of Concurrent DOS 286 had been scheduled for late May, but was delayed until Intel could develop a new version of the chip. In August, after extensive testing of E-1 step samples of the 80286, Digital Research said that Intel had corrected all documented 286 errata, but that there were still undocumented chip performance problems with the prerelease version of Concurrent DOS 286 running on the E-1 step. Intel said that the approach Digital Research wished to take in emulating 8086 software in protected mode differed from the original specifications; nevertheless they incorporated into the E-2 step minor changes in the microcode that allowed Digital Research to run emulation mode much faster (see LOADALL). These same limitations affected FlexOS 286 version 1.x, a reengineered derivation of Concurrent DOS 286, which had been developed by Digital Research's new Flexible Automation Business Unit in Monterey, California, since 1986. Later versions added compatibility with PC DOS 2.x and 3.x. Known versions include: Concurrent DOS 286 1.0 (1985) Concurrent DOS 286 1.1 (1986-01-07) Concurrent DOS 286 1.2 (1986) FlexOS 286 1.3 (November 1986) FlexOS 286 1.31 (May 1987) Concurrent DOS XM and Concurrent DOS 386 The OEM Support Group was relocated into Digital Research's newly created European Development Centre (EDC) in Hungerford, UK, in 1986, which took over further development of the Concurrent DOS family starting with Concurrent DOS 4.11, including siblings like DOS Plus and successors. Developed in Hungerford, UK, versions 5 and 6 (Concurrent DOS XM, with XM standing for Expanded Memory) could bank switch up to 8 MB of EEMS to provide a real-mode environment to run multiple CP/M-86 and DOS programs concurrently and support up to three users (one local and up to two hooked up via serial terminals). In 1987, Concurrent DOS 86 was rewritten to become Concurrent DOS 386, still a continuation of the classical XIOS & BDOS architecture. This ran on machines equipped with the Intel 80386 and later processors, using the 386's hardware facilities for virtualizing the hardware, allowing most DOS applications to run unmodified under Concurrent DOS 386, even on terminals. The OS supported concurrent multiuser file access, allowing multiuser applications to run as if they were on individual PCs attached to a network server. Concurrent DOS 386 allowed a single server to support a number of users on dumb terminals or inexpensive low-specification PCs running terminal emulation software, without the need for expensive workstations and then-expensive network cards. It was a true multiuser system; several users could use a single database with record locking to prevent mutual interference. Concurrent DOS 6.0 also represented the starting point for the DR DOS family, which was carved out of it. 
Known versions include: DR Concurrent PC DOS XM 5.0 (BDOS 5.0) DR Concurrent DOS XM 5.0 (BDOS 5.0, October 1986) DR Concurrent DOS XM 5.1 (BDOS 5.1?, January 1987) DR Concurrent DOS XM 5.2 (BDOS 5.2?, September 1987) DR Concurrent DOS XM 6.0 (BDOS 6.0, 1987-11-18), 6.01 (1987) DR Concurrent DOS XM 6.2 (BDOS 6.2), 6.21 DR Concurrent DOS 386 1.0 (BDOS 5.0?, 1987) DR Concurrent DOS 386 1.1 (BDOS 5.2?, September 1987) DR Concurrent DOS 386 2.0 (BDOS 6.0, 1987-11-18), 2.01 DR Concurrent DOS 386 3.0 (BDOS 6.2, December 1988, January 1989), 3.01 (1989-05-19), 3.02 (1989) Concurrent PC DOS XM 5.0 emulated IBM PC DOS 2.10, whereas Concurrent DOS XM 6.0 and Concurrent DOS 386 2.0 were compatible with IBM PC DOS 3.30. Adaptations Known CCI Concurrent DOS adaptations by Concurrent Controls, Inc. include: CCI Concurrent DOS 386 1.12 (BDOS 5.0?, October 1987) CCI Concurrent DOS 386 2.01 (BDOS 6.0?, May 1988) CCI Concurrent DOS 386 3.01 (BDOS 6.2?, March 1989) CCI Concurrent DOS 386 3.02 (April 1990) CCI Concurrent DOS 386 3.03 (March 1991) CCI Concurrent DOS 386 3.04 (July 1991) aka "CCI Concurrent DOS 4.0" CCI Concurrent DOS 3.05 R1 (1992-02), R2 (1992), R3+R4 (1992), R5+R6 (1992), R7+R8 (1993), R9+R10 (1993), R11 (August 1993) CCI Concurrent DOS 3.06 R1 (December 1993), R2+R3 (1994), R4+R5+R6 (1994), R7 (July 1994) CCI Concurrent DOS 3.07 R1 (March 1995), R2 (1995), R3 (1996), R4 (1996), R5 (1997), R6 (1997), R7 (June 1998) CCI Concurrent DOS 3.08 CCI Concurrent DOS 3.10 R1 (2003-10-05) Other adaptations include: Apricot Concurrent DOS 386 2.01 (1987) for Apricot Quad Version Level 4.3 Multiuser DOS Later versions of Concurrent DOS 386 incorporated some of the enhanced functionality of DR's later single-user PC DOS clone DR DOS 5.0, after which the product was given the more explanatory name "Multiuser DOS" (a.k.a. MDOS), starting with version 5.0 (with BDOS 6.5) in 1991. Multiuser DOS suffered from several technical limitations that restricted its ability to compete with LANs based on PC DOS. It required its own special device drivers for much common hardware, as PC DOS drivers were not multiuser or multi-tasking aware. Driver installation was more complex than the simple PC DOS method of copying the files onto the boot disk and modifying CONFIG.SYS appropriately: it was necessary to relink the Multiuser DOS kernel (known as a nucleus) using the SYSGEN command. Multiuser DOS was also unable to use many common PC DOS additions such as network stacks, and it was limited in its ability to support later developments in the PC-compatible world, such as graphics adaptors, sound cards, CD-ROM drives and mice. Although many of these were soon rectified (for example, graphical terminals were developed, allowing users to use CGA, EGA and VGA software), it was less flexible in this regard than a network of individual PCs, and as the prices of these fell, it became less and less competitive, although it still offered benefits in terms of management and lower total cost of ownership. As a multi-user operating system, its price was higher than a single-user system, of course, and it required special device drivers, unlike single-user multitasking DOS add-ons such as Quarterdeck's DESQview. Unlike MP/M, it never became popular for single-user multitasking use. 
When Novell acquired Digital Research in 1991 and abandoned Multiuser DOS in 1992, the three Master VARs DataPac Australasia, Concurrent Controls and Intelligent Micro Software were allowed to license the source code of the system to take over and continue independent development of their derivations in 1994. Known versions include: DR Multiuser DOS 5.00 (1991), 5.01 Novell DR Multiuser DOS 5.10 (1992-04-13), 5.11 Novell DR Multiuser DOS 5.13 (BDOS 6.6, 1992) All versions of Digital Research and Novell DR Multiuser DOS reported themselves as "IBM PC DOS" version 3.31. Adaptations DataPac Australasia Known versions by DataPac Australasia Pty Limited include: Datapac Multiuser DOS 5.0 Datapac Multiuser DOS 5.1 (BDOS 6.6) Datapac System Manager 7.0 (1996-08-22) In 1997, Datapac was bought by Citrix Systems, Inc., and System Manager was abandoned soon after. In 2002 the Sydney-based unit was spun out into Citrix' Advanced Products Group. Concurrent Controls Known CCI Multiuser DOS versions by Concurrent Controls, Inc. (CCI) include: CCI Multiuser DOS 7.00 CCI Multiuser DOS 7.10 CCI Multiuser DOS 7.21 CCI Multiuser DOS 7.22 R1 (September 1996), R2 (1996), R3 (1997), R4 GOLD/PLUS/LITE (BDOS 6.6, 1997-02-10), R5 GOLD (1997), R6 GOLD (1997), R7 GOLD (June 1998), R8 GOLD, R9 GOLD, R10 GOLD, R11 GOLD (2000-09-25), R12 GOLD (2002-05-15), R13 GOLD (2002-07-15), R14 GOLD (2002-09-13), R15 GOLD, R16 GOLD (2003-10-10), R17 GOLD (2004-02-09), R18 GOLD (2005-04-21) All versions of CCI Multiuser DOS report themselves as "IBM PC DOS" version 3.31. Similar to SETVER under DOS, this can be changed using the Multiuser DOS utility. In 1999, CCI changed its name to Applica, Inc. In 2002 Applica Technology became Aplycon Technologies, Inc. Intelligent Micro Software, Itera and Integrated Solutions DOS 386 Professional IMS Multiuser DOS Known adaptations of IMS Multiuser DOS include: IMS Multiuser DOS Enhanced Release 5.1 (1992) IMS Multiuser DOS 5.11 IMS Multiuser DOS 5.14 IMS Multiuser DOS 7.0 IMS Multiuser DOS 7.1 (BDOS 6.7, 1994) All versions of IMS Multiuser DOS report themselves as "IBM PC DOS" version 3.31. REAL/32 Intelligent Micro Software Ltd. (IMS) of Thatcham, UK, acquired a license to further develop Multiuser DOS from Novell in 1994 and renamed their product REAL/32 in 1995. Similar to FlexOS/4690 OS before, IBM in 1995 licensed REAL/32 7.50 to bundle it with their 4695 POS terminals. IMS REAL/32 versions: IMS REAL/32 7.50 (BDOS 6.8, 1995-07-01), 7.51 (BDOS 6.8), 7.52 (BDOS 6.9), 7.53 (BDOS 6.9, 1996-04-01), 7.54 (BDOS 6.9, 1996-08-01) IMS REAL/32 7.60 (BDOS 6.9, February 1997), 7.61, 7.62, 7.63 IMS REAL/32 7.70 (November 1997), 7.71, 7.72, 7.73, 7.74 (1998) IMS REAL/32 7.80, 7.81 (February 1999), 7.82, 7.83 (BDOS 6.10) IMS REAL/32 7.90 (1999), 7.91, 7.92 ITERA IMS REAL/32 7.93 (June 2002), 7.94 (BDOS 6.13, 2003-01-31) Integrated Solutions IMS REAL/32 7.95 REAL/32 7.50 to 7.74 report themselves as "IBM PC DOS" version 3.31, whereas 7.80 and higher report a version of 6.20. LBA and FAT32 support was added with REAL/32 7.90 in 1999. On 19 April 2002, Intelligent Micro Software Ltd. filed for insolvency and was taken over by one of its major customers, Barry Quittenton's Itera Ltd. This company was dissolved on 2006-03-28. As of 2010 REAL/32 was supplied by Integrated Solutions of Thatcham, UK, but the company, at the same address, was later listed as builders. REAL/NG REAL/NG was IMS' attempt to create the "Next Generation" of REAL/32, also named "REAL/32 for the internet age". 
REAL/NG promised "increased range of hardware from PCs to x86 multi-processor server systems". Advertised feature list, as of 2003: Runs with Red Hat 7.3 or later version of Linux Backward compatible with DOS and REAL/32 Max 65535 virtual consoles; each of these can be a user No Linux expertise required Administration/setup/upgrade by web browser (local and remote) Supplied with TCP/IP Linux-/Windows-based terminal emulator for the number of users purchased Print and file sharing built in Drive mapping between Linux and REAL/NG servers built in User hardware support Increased performance Vastly increased TPA Multi-processor support Improved hardware support Built-in firewall support Very low cost per seat Low total cost of ownership Supplied on CD Supplied with a set of Red Hat CDs By 10 December 2003, IMS made "REALNG V1.60-V1.19-V1.12" available, which, based on the Internet Archive, seems to be the latest release. By 2005, the realng.com website was mirroring the IMS main website, and had no mention of REAL/NG, only REAL/32. Application software While the various releases of this operating system had increasing ability to run DOS programs, software written for the platform could take advantage of its features by using function calls specifically suitable for multiuser operation. It used pre-emptive multitasking, preventing badly-written applications from delaying other processes by retaining control of the processor. To this day, Multiuser DOS is supported by popular SSL/TLS libraries such as wolfSSL. The API provided support for blocking and non-blocking message queues, mutual-exclusion queues, the ability to create sub-process threads which executed independently from the parent, and a method of pausing execution which did not waste processor cycles, unlike idle loops used by single-user operating systems. Applications were started as "attached" to a console. However, if an application did not need user interaction it could "detach" from the console and run as a background process, later reattaching to a console if needed. Another key feature was that the memory management supported a "shared" memory model for processes (in addition to the usual models available to normal DOS programs). In the shared memory model the "code" and "data" sections of a program were isolated from each other. Because the "code" contained no modifiable data, code sections in memory could be shared by several processes running the same program, thereby reducing memory requirements. Programs written, or adapted, for any multitasking platform need to avoid the technique used by single-tasking systems of going into endless loops until interrupted when, for example, waiting for a user to press a key; this wasted processor time that could be used by other processes. Instead, Concurrent DOS provided an API call which a process could call to "sleep" for a period of time. Later versions of the Concurrent DOS kernel included Idle Detection, which monitored DOS API calls to determine whether the application was doing useful work or in fact idle, in which case the process was suspended allowing other processes to run. Idle Detection was the catalyst for the patented DR-DOS Dynamic Idle Detection power management feature invented in 1989 by Roger Alan Gross and John P. Constant and marketed as BatteryMAX. 
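To illustrate the difference described above between a single-tasking idle loop and the cooperative style Concurrent DOS expected, the following minimal C sketch contrasts a busy-wait poll with a loop that yields between polls. It is not the actual Concurrent DOS or Multiuser DOS API: key_available() and drmdos_delay() are hypothetical stand-ins for whatever console-status and delay services a given system provides, stubbed here so the example compiles and runs anywhere as a plain simulation.

#include <stdbool.h>
#include <stdio.h>

static int polls_until_key = 5;          /* pretend a key arrives after a few polls */

/* Hypothetical stand-in for a non-blocking console-status check. */
static bool key_available(void)
{
    return --polls_until_key <= 0;
}

/* Hypothetical stand-in for a "suspend this process for n clock ticks" call. */
static void drmdos_delay(unsigned ticks)
{
    printf("  (yielding the CPU for %u tick(s) so other processes can run)\n", ticks);
}

/* Single-tasking habit: spin at full speed until input arrives, starving other users. */
static void wait_busy(void)
{
    while (!key_available())
        ;                                /* every iteration wastes a time slice */
    puts("busy-wait: key received");
}

/* Multitasking-friendly version: sleep between polls so the scheduler can run others. */
static void wait_cooperative(void)
{
    while (!key_available())
        drmdos_delay(1);                 /* hand the rest of the time slice back */
    puts("cooperative wait: key received");
}

int main(void)
{
    wait_busy();
    polls_until_key = 5;                 /* reset the simulated keyboard */
    wait_cooperative();
    return 0;
}

Consistent with the description above, Idle Detection effectively applied the second pattern on behalf of programs that only ever did the first, suspending a process whose stream of DOS calls showed it was merely polling rather than doing useful work.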
See also CP/M MP/M Concurrent DOS V60 FlexOS DR DOS PC DOS – IBM's OEM version of (single-user) MS-DOS MS-DOS 4.0 (multitasking) PC-MOS/386 – unrelated multitasking DOS clone VM/386 – unrelated multitasking DOS environment Virtual DOS machine Multiuser DOS Federation Timeline of operating systems List of mergers and acquisitions by Citrix References Further reading External links https://web.archive.org/web/20190401161050/http://www.imsltd.com/ former Intelligent Micro Software (IMS) website (vendors of IMS Multiuser DOS, IMS REAL/32, and REAL/NG) https://web.archive.org/web/20010515195706/http://www.lii.com/ former Logan Industries (LLI) website (IMS REAL/32 US distributor up to 2002-05-01) https://web.archive.org/web/20071213140207/http://www.conctrls.com/ former Concurrent Controls website (CCI Multiuser DOS) https://web.archive.org/web/*/https://applica.com Applica, Inc. website https://web.archive.org/web/20040412051935/http://www.aplycon.com/ former Aplycon Technologies, Inc. website CP/M variants Disk operating systems DOS variants Real-time operating systems Digital Research operating systems Novell operating systems Microcomputer software Discontinued operating systems Proprietary operating systems
16633326
https://en.wikipedia.org/wiki/SpectSoft
SpectSoft
SpectSoft is a software development company started in 1997 to create tools for the professional film and video markets. They are located in Oakdale, Stanislaus County, California. SpectSoft develops Linux-based uncompressed recording, playback, and processing systems that are designed to capture and play images and sound from digital cinematography cameras, decks, Telecine and other devices. Products support compressed and uncompressed SD, HD, 2K, 4K and beyond images in 2D or 3D. Products CanvasRT Real time video processing, VTR emulation, 3D (Stereoscopic) Support and Dailies creation RaveHD A Linux-based SD and HD uncompressed video disk recorder. Rave2K A Linux-based SD, HD, Dual Link-HD and 2K Image, uncompressed video disk recorder. DiceHD An SD and HD, 6-channel video graphics overlay device. Articles Linux Devices Credits The Spiderwick Chronicles Sin City Starship Troopers 2 References External links SpectSoft website RaveHD Software companies based in California Entertainment companies based in California Companies based in Stanislaus County, California Oakdale, California Software companies established in 1997 1997 establishments in California Software companies of the United States
63044859
https://en.wikipedia.org/wiki/Eneida
Eneida
Eneida (Ukrainian for "Aeneid") is a Ukrainian burlesque poem, written by Ivan Kotliarevsky in 1798. This mock-heroic poem is considered to be the first literary work published wholly in the modern Ukrainian language. Although Ukrainian was an everyday language for millions of people in Ukraine, it was officially discouraged from literary use in the area controlled by Imperial Russia. Eneida is a parody of Virgil's Aeneid, where Kotliarevsky transformed the Trojan heroes into Zaporozhian Cossacks. Critics believe that it was written in the light of the destruction of the Zaporizhian Host by the order of Catherine the Great. The poem was written during the formation of romanticism and nationalism in Europe. At that time, part of the Ukrainian elite was gripped by nostalgia for the Cossack state, which was liquidated by Russia in 1775–1786. The first three parts of the poem were published in 1798 in St. Petersburg, without the author's knowledge. The complete Eneida was published after Kotliarevsky's death in 1842. The poem is included in the top-100 list "From Skovoroda to modern time: 100 most important creative art in Ukrainian". Synopsis І After the destruction of Troy by the Greeks, Aeneas (Enei) fled with a troop of Trojans by sea. Juno, who did not love Aeneas, the son of Venus, ran to the wind god Aeolus to raise a storm and drown the Trojans. Aeolus let loose the winds and made a terrible storm. But Aeneas gave a bribe to the god of the sea, Neptune, and the storm subsided. Venus, worried about her son, went to complain about Juno to Zeus. He said that the fate of Aeneas had already been decided — he would go to Rome to "build a strong kingdom", "drive the whole world into serfdom" and "they will all be leaders". After long wanderings, the Trojans reached Carthage, where Dido ruled. The queen fell in love with Aeneas and walked with him so that he forgot about his main goal — the construction of Rome. Zeus, accidentally looking at the earth from Olympus, saw this, got angry and sent Mercury to remind Aeneas of his appointed task. Aeneas and the Trojans fled Carthage at night, and Dido burned herself with grief. ІІ The Trojans sailed the sea and landed in Sicily, where King Acest ruled. The Sicilians received them hospitably. Aeneas decided to hold a wake (ceremony) for his father Anchises. During the Trojan feast and games, Juno sent her maid to earth, who persuaded the Trojan women to burn the boats. There was a big fire. Aeneas became angry and went to curse the gods, asking for rain. The rain came down and some of the ships survived. Aeneas went to bed in grief and saw his father in a dream. Anchises promised that all would be well and asked to visit him in hell. III Leaving Sicily, the Trojans sailed on the sea for a long time, until they landed at Cumae. Aeneas went looking for a way to hell and met Sibyl the soothsayer. She promised to take Aeneas to hell in exchange for a bribe to the sun god Phoebus and a gift for herself. They both went down the road to hell, where Drowsiness, Yawning, and Death lived, and behind them stood the plague, war, cold, famine, and other calamities. Across the Styx, the mythical ferryman Charon transported Aeneas and Sibyl to hell. At the entrance they were met by a terrible Cerberus, to whom the soothsayer threw bread. There they saw sinners tormented in hell: lords, liars, the stingy, stupid parents… In hell, Aeneas also met Dido and his slain fellow Trojans. 
Finally he met his father Anchises, who said that Aeneas would found «a great and zealous family,» that «will rule the whole world.» IV After boarding the boats, the Trojans, led by Aeneas, sail on. The guide sailor sees an island ruled by the cruel queen Circe, who turns people into animals. The island could not be bypassed. Aeneas turns to Aeolus and asks to avert trouble. Aeolus helps and the army continues its journey. Aeneas and his Trojans sail to the island ruled by the Latin king. Together with his wife Amata, he is going to marry his daughter Lavinia to King Turnus. Meanwhile, Aeneas sends soldiers on reconnaissance. They tell him that the locals speak Latin. Aeneas and his army learn Latin in a week. Then Aeneas sends gifts to the king, and he receives the Cossacks with honors, wishing that Aeneas became his son-in-law. Meanwhile, Juno, seeing that Aeneas is already allowing himself too much, decides to give him a good lesson for his impudent behavior. The goddess sends Erinys Telphousia, who possesses first Amata and then visits Turnus. Turnus sees a dream in which his future bride chooses Aeneas as her fiancé. Offended, he sends a letter to the Latin king, declaring war. Aeneas' men accidentally set their greyhounds on Amata's nanny's dog. In turn, she begins to turn people against Aeneas. The Latin people are preparing for war. V Aeneas is thinking about how to defeat Turnus, because the Olympic gods were in no hurry to help. Aeneas fell asleep, and in a dream an old man advises Aeneas to make friends with the Arcadians, who were enemies of the Latins. Thus he decides to seek help from the Arcadians (Evander is the king of the Arcadians, Pallant is his son). Aeneas sacrifices to the gods and goes to Evander. He agrees to help and sends his son Pallant with the army. Venus asks Vulcan the blacksmith to make her son Aeneas a strong weapon. Juno sends a maid to warn Turnus about a possible attack by Aeneas and advises to strike first. He besieges the Trojan fortress, but cannot take it. Then he burns the Trojan fleet. Venus complains to Cybella (mother of the gods), and she, in turn, complains to Zeus. The supreme god turns the ships of the Trojans into sirens, and the Rutuls flee in fear. Then there was silence again. Nyz and Evrial, young warriors, are on guard. Nyz offers to get into the Rutul camp and beat the enemies. He wants to do it himself, because Evrial has an old mother, and he has no one. However, his comrade does not agree, and they go together. Nyz and Evrial showed great courage, cut out many enemies, and when they returned, they came across Latins going to their camp. The young men try to hide in the woods, but the Latins tracked them down, surrounded the forest, from which "you can not slip away", and began to look for a "brave couple". When Evrial was caught, Nyz climbed a willow tree, dropped his spear, and thus revealed himself. Colonel Wolsent executed Evrial, and Nyz thrust his sword into the enemy and fell in battle. A fierce battle begins. Turnus goes with the army to assault the fortress, and the Trojans bravely defend themselves; Juno intervenes again and defends Turnus. The Rutuls are beating the Trojans and they already want to leave the fortress. Then the artillery chief begins to embarrass them, to remind them that Aeneas "considers us soldiers, the grandchildren of the most glorious grandfathers". 
The embarrassed Trojans rallied and went on the offensive, and Turnus fled. VI Angered at the gods for their intervention, Zeus forbids the gods to do anything for either party. Venus comes to the supreme god, begins to flatter him and complain about Juno. Juno, hearing this, starts a quarrel. At this time, Aeneas sails on a ship with Pallant to help the Trojans. When everyone is asleep, he thinks about how to defeat Turnus. Suddenly he sees a mavka in the water. She tells him that Turnus and his soldiers have already started fighting the Trojans and nearly burned their fleet. Aeneas rushes to the rescue and immediately rushes into battle. Turnus kills the brave Pallant. Jul told Aeneas what had happened in his absence. Zeus, drunk, went to apologize to his wife Juno. She deceived the god with her cunning and put him to sleep. Juno turned into the image of Aeneas and lured Turnus to the ship so that he would sail home and not die. The next day, Aeneas buried the dead. Envoys from Latin came to him. He tells them that he is not fighting against the Latins, but against Turnus, and offers to arrange a one-on-one duel with him. The ambassadors liked it and they retold the words of Aeneas to Latin and Turnus. The latter reluctantly prepares for a duel. Amata opposes the marriage of her daughter and Turnus, because she is secretly in love with him. The next day, both sides took up positions to watch the battle. Juno sends Juturna to help Turnus. She starts a fight between the Trojans and the Rutuls again. Aeneas was wounded during the skirmish. Venus collects for him all sorts of potions that help heal the wound. Thinking that Turnus is dead, Amata decides to hang herself. This news frightened everyone. Aeneas goes to a duel with Turnus and knocks the sword out of his hands. Juno, with the hand of Juturna, gives Turnus another sword. For this, Zeus quarrels with Juno and says: "We have already told all the gods: Aeneas will be with us in Olympus to eat the same pies that I tell you to bake." The duel continues. Having knocked Turnus to the ground, Aeneas is going to kill him, but the Rutulian's words touched his heart. Suddenly he notices Pallant's armor on Turnus' body and kills him. English translation Partial translations of Eneida date back to 1933 when a translation of the first few stanzas of Kotliarevsky's Eneida by Wolodymyr Semenyna was published in the Ukrainian diaspora's American newspaper Ukrainian Weekly on October 20, 1933. Another partial translation was published by University of Toronto Press in 1963 in the anthology Ukrainian Poets 1189–1962, by C. H. Andrusyshen and Watson Kirkconnell. However, the first full English translation of Kotliarevsky's magnum opus Eneida was published only in 2006 in Canada by Ukrainian-Canadian Bohdan Melnyk, best known for his English translation of Ivan Franko's Ukrainian fairy tale "Mykyta the Fox" (Ukrainian: Лис Микита). List of English translations Ivan Kotliarevsky. Aeneid: [Translated into English from Ukrainian by Bohdan Melnyk]. — Canada, Toronto: The Basilian Press, 2004. — 278 pages. References History of Ukrainian literature Ukrainian poems Works based on the Aeneid Poetry based on works by Virgil
342663
https://en.wikipedia.org/wiki/BlackBerry%20Limited
BlackBerry Limited
BlackBerry Limited is a Canadian software company specializing in cybersecurity. Originally known as Research In Motion (RIM), it developed the BlackBerry brand of interactive pagers, smartphones, and tablets. It transitioned to a cybersecurity enterprise software and services company under Chief Executive Officer John S. Chen. Its products are used by various businesses, car manufacturers, and government agencies to prevent hacking and ransomware attacks. They include BlackBerry Cylance's artificial intelligence-based cybersecurity solutions; the BlackBerry AtHoc emergency communication system (ECS) platform; the QNX real-time operating system; and BlackBerry Enterprise Server (BlackBerry Unified Endpoint Manager), a Unified Endpoint Management (UEM) platform. BlackBerry was founded in 1984 as Research In Motion by Mike Lazaridis and Douglas Fregin. In 1992, Lazaridis hired Jim Balsillie, and Lazaridis and Balsillie served as co-CEOs until January 22, 2012, when Thorsten Heins became president and CEO. In November 2013, John S. Chen took over as CEO. His initial strategy was to subcontract manufacturing to Foxconn, and to focus on software technology. Currently, his strategy includes forming licensing partnerships with device manufacturers such as TCL Communication and unifying BlackBerry's software portfolio. On January 4, 2022, BlackBerry decommissioned the infrastructure and operating system used by its non-Android phones. History 1984–2001: Early years and growth Research In Motion Limited was founded in March 1984 by Mike Lazaridis and Douglas Fregin. At the time, Lazaridis was an engineering student at the University of Waterloo while Fregin was an engineering student at the University of Windsor. In 1988, RIM became the first wireless data technology developer in North America and the first company outside Scandinavia to develop connectivity products for Mobitex wireless packet-switched data communications networks. In 1991, RIM introduced the first Mobitex protocol converter. In 1992, RIM introduced the first Mobitex point-of-sale solution, a protocol converter box that interfaced with existing point-of-sale terminal equipment to enable wireless communication. In 1993, RIM introduced the RIMGate, the first general-purpose Mobitex X.25 gateway. In the same year, RIM launched the Ericsson Mobidem AT and an Intel wireless modem containing RIM modem firmware. In 1994, RIM introduced the first Mobitex mobile point-of-sale terminal. In the same year, RIM received the Emmy Award for Technical Innovation and the KPMG High Technology Award. In 1995, RIM introduced the DigiSync Film KeyKode Reader. In the same year, RIM introduced Freedom, the first Type II PCMCIA radio modem for Mobitex. In 1995, RIM was financed by Canadian institutional and venture capital investors through a private placement in the privately held company. Working Ventures Canadian Fund Inc. led the first venture round with a C$5,000,000 investment with the proceeds being used to complete the development of RIM's two-way paging system hardware and software. A total of C$30,000,000 in pre-IPO financing was raised by the company prior to its initial public offering on the Toronto Stock Exchange in January 1998 under the symbol RIM. In 1996, RIM introduced the Interactive Pager, the first two-way messaging pager, and the RIM 900 OEM radio modem. The company worked with RAM Mobile Data and Ericsson to turn the Ericsson-developed Mobitex wireless data network into a two-way paging and wireless e-mail network. 
Pivotal in this development was the release of the Inter@ctive Pager 950, which started shipping in August 1998. About the size of a bar of soap, this device competed against the Skytel two-way paging network developed by Motorola. In 1999, RIM introduced the BlackBerry 850 pager. Named in reference to the resemblance of its keyboard's keys to the druplets of the blackberry fruit, the device could receive push email from a Microsoft Exchange Server using its complementary server software, BlackBerry Enterprise Server (BES). The introduction of the BlackBerry set the stage for future enterprise-oriented products from the company, such as the BlackBerry 957 in April 2000, the first BlackBerry smartphone. The BlackBerry OS platform and BES continued to increase in functionality, while the incorporation of encryption and S/MIME support helped BlackBerry devices gain increased usage by governments and businesses. During fiscal 1999-2001, total assets declared in RIM's balance sheet grew eight-fold due to massive capacity expansion. 2001–2011: Global expansion and competition RIM soon began to introduce BlackBerry devices aimed towards the consumer market as well, beginning with the BlackBerry Pearl 8100, the first BlackBerry phone to include multimedia features such as a camera. The introduction of the Pearl series was highly successful, as were the subsequent Curve 8300 series and Bold 9000. Extensive carrier partnerships fuelled the rapid expansion of BlackBerry users globally in both enterprise and consumer markets. Despite the arrival of the first Apple iPhone in 2007, BlackBerry sustained unprecedented market share growth well into 2011. The introduction of Apple's iPhone on the AT&T network in the fall of 2007 in the United States prompted RIM to produce its first touchscreen smartphone for the competing network in 2008, the BlackBerry Storm. The Storm sold well but suffered from mixed to poor reviews and poor customer satisfaction. The iPhone initially lagged behind the BlackBerry in both shipments and active users, due to RIM's head start and larger carrier distribution network. In the United States, the BlackBerry user base peaked at approximately 21 million users in the fall of 2010. That quarter, the company's global subscriber base stood at 36 million users. As the iPhone and Google Android accelerated their growth in the United States, BlackBerry users there began to turn to other smartphone platforms. Nonetheless, the BlackBerry line as a whole continued to enjoy success, spurred on by strong international growth. As of December 1, 2012, the company had 79 million BlackBerry users globally with only 9 million remaining in the United States. Even as the company continued to grow worldwide, investors and media became increasingly alarmed about the company's ability to compete with devices from rival mobile operating systems iOS and Android. CNN cited BlackBerry as one of six endangered US-Canadian brands. Analysts were also worried about the strategic direction of the co-CEOs' management structure. The company also lost Larry Conlee, who had served as COO of engineering and manufacturing from 2001 to 2009, to retirement. Conlee was not only key to the company's platform manufacturing strategy; he was also a pragmatic taskmaster who ensured deadlines were met and had the clout to push back if Lazaridis had unrealistic expectations. 
Following numerous attempts to upgrade their existing Java platform, the company made numerous acquisitions to help it create a new, more powerful BlackBerry platform, centered around its recently acquired real-time operating system QNX. In March 2011, Research In Motion Ltd.'s then-co-CEO Jim Balsillie suggested during a conference call that the "launch of some powerful new BlackBerrys" (eventually released as BlackBerry 10) would be in early 2012. However analysts were "worried that promoting the mysterious, supposedly game-changing devices too early might hurt sales of existing BlackBerrys" (similar to the Osborne effect). The initial launch date was seen in retrospect as too ambitious, and hurt the company's credibility at a time when its existing aging products steadily lost market share. On September 27, 2010, RIM announced the long-rumoured BlackBerry PlayBook tablet, the first product running on the new QNX platform known as BlackBerry Tablet OS. The BlackBerry PlayBook was officially released to U.S. and Canadian consumers on April 19, 2011. The PlayBook was criticized for being rushed to market in an incomplete state and sold poorly. Following the shipments of 900,000 tablets during its first three quarters on market, slow sales and inventory pileups prompted the company to reduce prices and to write down the inventory value by $485 million. Primary competition The primary competitors of the BlackBerry are smartphones running Android and the Apple iPhone. For a number of years, the BlackBerry was the leading smartphone in many markets, particularly the United States. The arrival of the Apple iPhone and later Google's Android platform caused a slowdown in BlackBerry growth and a decline in sales in some markets, most notably the United States. This led to negative media and analyst sentiment over the company's ability to continue as an independent company. When Apple's iPhone was first introduced in 2007, it generated substantial media attention, with numerous media outlets calling it a "BlackBerry killer". While BlackBerry sales continued to grow, the newer iPhone grew at a faster rate and the 87 percent drop in BlackBerry's stock price between 2010 and 2013 is primarily attributed to the performance of the iPhone handset. The first three models of the iPhone (introduced in 2007) generally lagged behind the BlackBerry in sales, as RIM had major advantages in carrier and enterprise support; however, Apple continued gaining market share. In October 2008, Apple briefly passed RIM in quarterly sales when they announced they had sold 6.9 million iPhones to the 6.1 million sold by RIM, comparing partially overlapping quarters between the companies. Though Apple's iPhone sales declined to 4.3 million in the subsequent quarter and RIM's increased to 7.8 million, for some investors this indicated a sign of weakness. Apple's iPhone began to sell more phones quarterly than the BlackBerry in 2010, brought on by the release of the iPhone 4. In the United States, the BlackBerry hit its peak in September 2010, when almost 22 million users, or 37% of the 58.7 million American smartphone users at the time, were using a BlackBerry. BlackBerry then began to decline in use in the United States, with Apple's installed base in the United States finally passing BlackBerry in April 2011. Sales of the iPhone continued to accelerate, as did the smartphone market, while the BlackBerry began to lose users continuously in the United States. 
By February 2016, only 1.59 million (0.8%) of the 198.9 million smartphone users in the United States were running BlackBerry compared to 87.32 million (43.9%) on an iPhone. Google's Android mobile operating system, running on hardware by a range of manufacturers including Sony, Motorola, HTC, Samsung, LG and many others, ramped up the competition for BlackBerry. In January 2010, barely 3 million (7.1%) of the 42.7 million smartphones in use at the time in the United States were running Android, compared to 18 million BlackBerry devices (43%). Within a single year Android had passed the installed base of the BlackBerry in the United States. By February 2016, only 1.59 million (0.8%) of the 198.9 million smartphone users in the United States were running BlackBerry compared to 104.82 million (52.7%) running Android. While RIM's secure encrypted network was attractive to corporate customers, their handsets were sometimes considered less attractive to consumers than iPhone and Android smartphones. Developers often developed consumer applications for those platforms and not the BlackBerry. During the 2010s, even enterprise customers had begun to adopt BYOD policies due to employee feedback. The company also faced criticism that its hardware and operating system were outdated and unappealing compared to the competition, as well as that the browsing capabilities were poorer. 2011–2015: Strategic changes and restructuring Slowing growth prompted the company to undertake a lay-off of 2,000 employees in the summer of 2011. In September 2011, the company's BlackBerry Internet Service suffered a massive outage, impacting millions of customers for several days. The outage embarrassingly occurred as Apple prepared to launch the iPhone 4S, causing fears of mass defections from the platform. Shortly afterwards, in October 2011, RIM unveiled BBX, a new platform for future BlackBerry smartphones that would be based on the same QNX-based platform as the PlayBook. However, due to an accusation of trademark infringement regarding the name BBX, the platform was renamed BlackBerry 10. The task proved to be daunting, with the company delaying the launch in December 2011 to some time in 2012. On January 22, 2012, Mike Lazaridis and Jim Balsillie resigned as the CEOs of the company, handing the reins over to executive Thorsten Heins. On March 29, 2012, the company reported its first net loss in years. Heins set about the task of restructuring the company, including announcing plans to lay off 5,000 employees, replacing numerous executives, and delaying the new QNX-based operating system for phones ("BlackBerry 10") a second time into January 2013. BlackBerry 10 After much criticism and numerous delays, RIM officially launched BlackBerry 10 and two new smartphones based on the platform, the BlackBerry Z10 and Q10, on January 30, 2013. The BlackBerry Z10, the first BlackBerry smartphone running BlackBerry 10, debuted worldwide in January 2013, going on sale immediately in the U.K. with other countries following. A marked departure from previous BlackBerry phones, the Z10 featured a fully touch-based design, a dual-core processor, and a high-definition display. BlackBerry 10 had 70,000 applications available at launch, which the company expected would rise to 100,000 by the time the device made its debut in the United States. In support of the launch, the company aired its first Super Bowl television advertisement in the U.S. and Canada during Super Bowl XLVII. 
In discussing the decision to create a proprietary operating system instead of adopting an off-the-shelf platform such as Android, Heins noted, "If you look at other suppliers' ability to differentiate, there's very little wiggle room. We looked at it seriously, but if you understand what the promise of BlackBerry is to its user base it's all about getting stuff done. Games, media, we have to be good at it but we have to support those guys who are ahead of the game. Very little time to consume and enjoy content. If you stay true to that purpose you have to build on that basis. And if we want to serve that segment we can't do it on a me-too approach." Chief Operating Officer Kristian Tear remarked "We want to regain our position as the number one in the world", while Chief Marketing Officer Frank Boulben proclaimed "It could be the greatest comeback in tech history. The carriers are behind us. They don't want a duopoly" (referring to Apple and Samsung). During the BlackBerry 10 launch event, the company also announced that it would change its public brand from Research In Motion to BlackBerry. The name change was made to "put the BlackBerry brand at the centre" of the company's diverse brands, and because customers in some markets "already know the company as BlackBerry". While a shareholder vote on an official name change to BlackBerry Limited was to be held at its next annual general meeting, its ticker symbols on the TSX and NASDAQ changed to BB and BBRY respectively on February 4, 2013. On August 12, 2013, the company announced that it was open to being purchased and stated in an official news release to Canada's securities administrators: The company’s board of directors has formed a special committee to explore strategic alternatives to enhance value and increase scale in order to accelerate BlackBerry 10 deployment. These alternatives could include, among others, possible joint ventures, strategic partnerships or alliances, a sale of the Company or other possible transactions. Prem-Watsa/Fairfax Deal Canada Pension Plan Investment Board's CEO Mark Wiseman stated that he would consider investing in BlackBerry if the company became private. Also on August 12, 2013, foremost shareholder Prem Watsa resigned from BlackBerry's board. On September 20, 2013, the company announced it would lay off 4,500 staff and take a CAD$1 billion operating loss. Three days later, the company announced that it had signed a letter of intent to be acquired by a consortium led by Prem Watsa-owned Fairfax Financial Holdings for a $9 per share deal. This deal was also confirmed by Watsa. On September 29, 2013, the company began operating a direct sales model for customers in the United States, where unlocked Q10 and Z10 smartphones were sold directly from the BlackBerry website. On October 15, 2013, the company published an open letter in 30 publications in nine countries to reassure customers that BlackBerry would continue to operate. Anthony Michael Sabino, St. John's University business professor, stated in the Washington Post: "This is BlackBerry’s last-ditch attempt to simply survive in the face of crushing competition in a market it essentially invented." John Chen joins BlackBerry On November 4, 2013, the Fairfax/Prem Watsa deal was scrapped in favor of a US$1 billion cash injection which, according to one analyst, represented the level of confidence BlackBerry's largest shareholder had in the company. At the same time, BlackBerry installed John Chen as CEO to replace the laid-off Heins. 
According to the Globe and Mail, BlackBerry's hope was that Chen, with his reputation as a turnaround artist, could save the company. "John Chen knows how to manage a mobile company, and perhaps most importantly, can make things happen in the industry," J. Gold Associates Principal Analyst Jack Gold told the publication. "We have begun moving the company to embrace a multi-platform, BYOD world by adopting a new mobility management platform and a new device strategy," Chen explained in an open letter published shortly after his appointment. "I believe in the value of this brand. With the right team and the right strategy in place, I am confident that we will rebuild BlackBerry for the benefit of all our constituencies." In April 2014, Chen spoke of his turnaround strategy in an interview with Reuters, explaining that he intended to invest in or partner with other companies in regulated industries such as healthcare, financial, and legal services. He later clarified that BlackBerry's device division remained part of his strategy and that his company was also looking to invest in "emerging solutions such as machine to machine technologies that will help power the backbone of the Internet of Things." He would later expand on this idea at a BlackBerry Security Summit in July 2014. In May 2014, the low-cost BlackBerry Z3 was introduced onto the Indonesian market, where the brand had been particularly popular. The budget handset was produced in partnership with Taiwanese manufacturer Foxconn Technology Group, which handled the design and distribution of the product. A New York Times analysis stated that the model was an attempt by Chen to generate revenue while he tried "to shift the organization’s focus to services and software." An analyst with London's ABI Research said: "John Chen is just sustaining the handset business as he sorts out the way ahead." As part of the localization effort for the promotion of the Z3, the handset's back panel was engraved with the word "Jakarta", but skepticism still emerged, as the handset was still more than twice as expensive as Android models in Indonesia at the time of release. 2015–present: Software transition In the first quarter of the 2015 fiscal year, Chen stated: "This is, of course, the very beginning of our task and we hope that we will be able to report better results going forward ... We feel pretty good about where we are." Quartz reported that stock was up by 30 percent, compared to the same period in the previous year, while Chen expressed enthusiasm for the release of two new handsets, both with keyboards and touch screens, in the second half of 2014. Chen did not provide sales figures for the Z3 phone in Indonesia. In September 2015, Chen unveiled the BlackBerry Priv, a keyboard-slider smartphone utilizing the Android operating system with BlackBerry-developed software enhancements, including a secure bootloader, full-disk encryption, integrity protection, and the BlackBerry HUB. In 2020, BlackBerry signed a new licensing agreement for smartphones with the US-based startup company, OnwardMobility. The company never released a device before shutting down in 2022. As of June 2021, Cybersecurity ($107 million) and IoT ($43 million) revenue accounted for a combined 86% of Q1 2022 earnings ($174 million). Chen reiterated: "Now, we are pivoting the organization more heavily toward the market by creating two business units, Cybersecurity and IoT ... we will provide revenue and gross margin by business unit as well as other selected metrics. 
We believe that this additional color will help investor gain better understanding of the underlying performance of the business units, ultimately driving shareholder value." Strategic acquisitions During this time, BlackBerry also expanded its software and services offerings with several key acquisitions. These included file security firm WatchDox, crisis communications leader AtHoc, and rival EMM vendor Good Technology. The products offered by these firms were gradually re-branded and integrated into BlackBerry's own portfolio. Trefis, an analyst team and Forbes contributor, called Good "a nice strategic fit for BlackBerry's software business", noting that the acquisition would "help improve BlackBerry's cross-platform EMM support and bring in a relatively large and diverse customer base, while also helping drive incremental revenue growth". It also noted that the acquisition – the largest in BlackBerry's history – indicated the company's commitment to a software-focused turnaround plan. It remained ambivalent about the company's outlook overall. In January 2016, Chen stated that BlackBerry did not plan on developing any new devices running BlackBerry 10 and that the company would release two new Android devices at most during 2016. BlackBerry also announced the release of the Good Secure EMM Suites, consolidating WatchDox and Good Technology's products into several tiered offerings alongside its existing software. Hardware licensing partnerships BlackBerry announced the DTEK50, a mid-range Android smartphone, on July 26, 2016. Unlike the Priv, the DTEK50 was a re-branded version of an existing smartphone, the Alcatel Idol 4 as manufactured by TCL Corporation, one of the company's hardware partners. It was to be the second-last phone ever developed in-house at BlackBerry, followed by the DTEK60 in October 2016 - on September 28, 2016, BlackBerry announced that it would cease in-house hardware development to focus on software, delegating development, design, and manufacturing of its devices to third-party partners. The first of these partners was BB Merah Putih, a joint venture in Indonesia. Chen stated that the company was "no longer just about the smartphone, but the smart in the phone". On December 15, 2016, BlackBerry announced that it had reached a long-term deal with TCL to continue producing BlackBerry-branded smartphones for sale outside of Bangladesh, India, Indonesia, Nepal, and Sri Lanka. This partnership was followed by an agreement with Optiemus Infracom on February 6, 2017 to produce devices throughout India and neighbouring markets including Sri Lanka, Nepal, and Bangladesh. Since the partnerships were announced, TCL has released the BlackBerry KeyONE and BB Merah Putih has released the BlackBerry Aurora. Cybersecurity consulting In February 2016, BlackBerry acquired UK-based cybersecurity firm Encription, with the intention of branching out into the security consulting business. It later released BlackBerry SHIELD, an IT risk assessment program for its corporate clients. In April 2017, BlackBerry's cybersecurity division partnered with Allied World Assurance Company Holdings, a global insurance and reinsurance provider. This agreement saw BlackBerry's SHIELD self-assessment tool integrated into Allied World's FrameWRX cyber risk management solution. BlackBerry Secure On December 8, 2016, BlackBerry announced the release of BlackBerry Secure. 
Billed as a "comprehensive mobile security platform for the Enterprise of Things", BlackBerry Secure further deepens the integration between BlackBerry's acquisitions and its core portfolio. According to Forbes, it brings all of BlackBerry's products "under a single umbrella". On February 7, 2017, BlackBerry announced the creation of the BBM Enterprise SDK, a Communication-Platform-as-a-Service development tool. The Enterprise SDK allows developers to incorporate BBM Enterprise's messaging functionality into their applications. It was released to BlackBerry's partners on February 21, 2017, and officially launched on June 12, 2017. Also in February 2017, analyst firm 451 Research released a report on BlackBerry's improved financial position and product focus. The report identified BlackBerry's position in the Internet of Things and its device licensing strategy as strengths. The BBM Enterprise SDK was also highlighted, alongside several challenges still facing the company. Financials Until 2013, the number of active BlackBerry users increased over time. For the fiscal period in which the Apple iPhone was first released (in 2007), RIM reported a user base of 10.5 million BlackBerry subscribers. At the end of 2008, when Google Android first hit the market, RIM reported that the number of BlackBerry subscribers had increased to 21 million. In the fourth quarter of fiscal year ended March 3 , 2012, RIM shipped 11.1 million BlackBerry smartphones, down 21 percent from the previous quarter and it was the first decline in the quarter covering Christmas since 2006. For its fourth quarter, RIM announced a net loss of US$125 million (the last loss before this occurred in the fourth quarter of the fiscal year 2005). RIM's loss of market share accelerated in 2011, due to the rapidly growing sales of Samsung and HTC Android handsets; RIM's annual market share in the U.S. dropped to just 3 percent, from 9 percent. In the quarter ended June 28, 2012, RIM announced that the number of BlackBerry subscribers had reached 78 million globally. Furthermore, RIM reported its first quarter revenue for the 2013 fiscal year, showing that the company incurred a GAAP net loss of US$518 million for the quarter, and announced a plan to implement a US$1 billion cost-saving initiative. The company also announced the delay of the new BlackBerry 10 OS until the first quarter of 2013. After the release of the Apple iPhone 5 in September 2012, RIM CEO Thorsten Heins announced that the number of global users was up to 80 million, which sparked a 7% jump in shares. On December 2, 2012, the company reported a decline in revenue of 5% from the previous quarter and 47% from the same period the previous year. The company reported a GAAP profit of US$14 million (adjusted net loss of US$115 million), which was an improvement over previous quarters. The company also grew its cash reserves to US$2.9 billion, a total that was eventually increased to nearly US$600 million in the quarter. The global subscriber base of BlackBerry users declined slightly for the first time to 79 million, after peaking at an all-time high of 80 million the previous quarter. In September 2013, the company announced that its growing BBM instant messaging service will be available for Android and iPhone devices. BlackBerry stated that the service has 60 million monthly active customers who send and receive more than 10 billion messages a day. 
The "BBM Channels" enhancement is expected in late 2013, whereby conversations are facilitated between users and communities, based on factors such as common interests, brands, and celebrities. On September 28, 2013, media reports confirmed that BlackBerry lost US$1.049 billion during the second fiscal quarter of 2013. In the wake of the loss, Heins stated: "We are very disappointed with our operational and financial results this quarter and have announced a series of major changes to address the competitive hardware environment and our cost structure." Between 2010 and 2013, the stock price of the company dropped by 87 percent due to the widespread popularity of the iPhone. Goldman Sachs estimated that, in June 2014, BlackBerry accounted for 1 percent share of smartphone sales, compared to a peak of around 20 percent in 2009. With the release of its financial results for the first fiscal quarter of 2015 in June 2014, Chen presented a more stable company that had incurred a lower amount of loss than previous quarters. The New York Times described "a smaller-than-expected quarterly loss", based on the June 19, 2014 news release: Revenue for the first quarter of fiscal 2015 was $966 million, down $10 million or 1% from $976 million in the previous quarter ... During the first quarter, the Company recognized hardware revenue on approximately 1.6 million BlackBerry smartphones compared to approximately 1.3 million BlackBerry smartphones in the previous quarter. Ian Austin of the New York Times provided further clarity on BlackBerry's news release: "Accounting adjustments enabled BlackBerry to report a $23 million, or 4 cents a share, profit for its last quarter. Without those noncash charges, however, the company lost $60 million, or 11 cents a share, during the period." Following the news release, Chen stated that BlackBerry is comfortable with its position, and it is understood that his plan for the company mainly involves businesses and governments, rather than consumers. Organizational changes Leadership changes The company was often criticized for its dual CEO structure. Under the original organization, Mike Lazaridis oversaw technical functions, while Jim Balsillie oversaw sales/marketing functions. Some saw this arrangement as a dysfunctional management structure and believed RIM acted as two companies, slowing the effort to release the new BlackBerry 10 operating system. On June 30, 2011, an investor push for the company to split its dual-CEO structure was unexpectedly withdrawn after an agreement was made with RIM. RIM announced that after discussions between the two groups, Northwest & Ethical Investments would withdraw its shareholder proposal before RIM's annual meeting. On January 22, 2012, RIM announced that its CEOs Balsillie and Lazaridis had stepped down from their positions. They were replaced by Thorsten Heins. Heins hired investment banks RBC Capital Markets and JP Morgan to seek out potential buyers interested in RIM, while also redoubling efforts on releasing BlackBerry 10. On March 29, 2012, RIM announced a strategic review of its future business strategy that included a plan to refocus on the enterprise business and leverage on its leading position in the enterprise space. Heins noted, "We believe that BlackBerry cannot succeed if we tried to be everybody's darling and all things to all people. Therefore, we plan to build on our strength." Balsillie resigned from the board of directors in March 2012, while Lazaridis remained on the board as vice chairman. 
After assuming the role of CEO, Heins made substantial changes to the company's leadership team. Changes included the departures of Chief Technology Officer David Yach; Chief Operating Officer Jim Rowan; Senior Vice President of Software Alan Brenner; Chief Legal Officer Karima Bawa; and Chief Information Officer Robin Bienfait. Following the leadership changes, Heins hired Kristian Tear to assume the role of Chief Operating Officer, Frank Boulben to fill the Chief Marketing Officer role, and appointed Dan Dodge, the CEO of QNX, to take over as Chief Technology Officer. On July 28, 2012, Steven E. Zipperstein was appointed as the new Vice President and Chief Legal Officer. On March 28, 2013, Lazaridis relinquished his position as vice chairman and announced his resignation from the board of directors. Later in the year, Heins was replaced by John S. Chen, who assumed the CEO role in the first week of November. Chen's compensation package consists mainly of BlackBerry shares (13 million in total), to which he becomes fully entitled after serving the company for five years. Heins received an exit package of $22 million. Chen has a reputation as a "turnaround" CEO, having turned the struggling enterprise software and services organization Sybase into enough of a success to sign a merger with SAP in 2010. Chen was open about his plans for BlackBerry upon joining the company, announcing his intent to move away from hardware manufacturing to focus on enterprise software such as QNX, BlackBerry UEM, and AtHoc. He has firm views on net neutrality and lawful access, and has been described by former colleagues as a "quick thinker who holds people accountable".

Workforce reductions
In June 2011, RIM announced its prediction that Q1 2011 revenue would drop for the first time in nine years, and also unveiled plans to reduce its workforce. In July 2011, the company cut 2,000 jobs, the biggest lay-off in its history and the first major layoff since November 12, 2002, when the company laid off 10% of its workforce (200 employees). The lay-off reduced the workforce by around 11%, from 19,000 employees to 17,000. On June 28, 2012, the company announced a planned workforce reduction of 5,000 by the end of its fiscal 2013, as part of a $1 billion cost-saving initiative. On July 25, 2013, 250 employees from BlackBerry's research and development department and new product testing were laid off. The layoffs were part of the turnaround efforts. On September 20, 2013, BlackBerry confirmed that it would lay off 4,500 employees by the end of 2013, approximately 40 percent of the company's workforce. BlackBerry had about 20,000 employees at its peak; after John Chen became CEO in late 2013, further rounds of layoffs followed, including one in February 2015 as the company struggled to compete in the smartphone market, leaving roughly 6,225 employees. On July 21, 2015, BlackBerry announced an additional layoff of an unspecified number of employees, with another 200 laid off in February 2016. As of August 2017, the company had 4,044 employees.

Stock fluctuations
In June 2011, RIM stock fell to its lowest point since 2006. On December 16, 2011, RIM shares fell to their lowest price since January 2004. Overall in 2011, the share price tumbled 80 percent from January to December, causing its market capitalization to fall below book value. By March 2012, shares were worth less than $14, from a high of over $140 in 2008.
From June 2008 to June 2011, RIM's shareholders lost almost $70 billion, or 82 percent, as the company's market capitalization dropped from $83 billion to $13.6 billion, the biggest decline among communications-equipment providers. The share price fell further on July 16, 2012, closing at $7.09 on the Toronto Stock Exchange, the lowest level since September 8, 2003, after a jury in California said RIM must pay $147.2 million as a result of a patent infringement judgment that was subsequently overturned. On November 22, 2012, shares of RIM/BlackBerry surged 18%, the largest gain of the stock in over three years. This was due to National Bank of Canada analyst Kris Thompson's announcement that the new BB10 devices were expected to sell better than anticipated, along with his raising of the target stock price. On June 28, 2013, after BlackBerry announced net losses of approximately $84 million, its shares plunged 28%. On April 12, 2017, shares surged more than 19% as BlackBerry won an arbitration case against Qualcomm. It was decided that BlackBerry had been overpaying the company in royalty payments, and BlackBerry was awarded $814.9 million.

Rumoured Samsung buyout
On January 14, 2015, in the final hour of trading in U.S. markets, Reuters reported that Samsung was in talks with BlackBerry to buy the latter for between $13.35 and $15.49 per share. The article caused shares of BlackBerry to rally 30%. Later that evening, BlackBerry issued a press release denying the media reports. Samsung responded, saying that the report was "groundless".

Mobile OS transition
BlackBerry OS (Java)
The existing Java-based BlackBerry OS was intended to operate under very different, simpler conditions, such as low-powered devices, narrow network bandwidth, and high-security enterprises. However, as the needs of the mobile user evolved, the aging platform struggled with emerging trends like mobile web browsing, consumer applications, multimedia and touch screen interfaces. Users could experience performance issues, usability problems and instability. The company tried to enhance the aging platform with a better web browser, faster performance, a bundled application store and various touch screen enhancements, but ultimately decided to build a new platform with QNX at its core. While most other operating systems are monolithic – the malfunction of one area would cause the whole system to crash – QNX is more stable because it uses independent building blocks or "kernels", preventing a domino effect if one kernel breaks. RIM's final major OS release before BlackBerry 10 was BlackBerry 7, which was often criticized as dated and referred to as a temporary stopgap.

BlackBerry Tablet OS (QNX)
The BlackBerry PlayBook, launched in April 2011 as an alternative to the Apple iPad, was the first RIM product to run the QNX-based BlackBerry Tablet OS. However, it was criticized for having incomplete software (it initially lacked native email, calendaring and contacts) and a poor app selection. It fared poorly until prices were substantially reduced, like most other tablet computers released that year (Android tablets such as the Motorola Xoom and Samsung Galaxy Tab, and the HP TouchPad). The BlackBerry Tablet OS received a major update in February 2012, as well as numerous minor updates.

BlackBerry 10 (QNX)
BlackBerry 10, a substantially updated version of BlackBerry Tablet OS intended for the next-generation BlackBerry smartphones, was originally planned for release in early 2012.
The company delayed the product several times, mindful of the criticism faced by the BlackBerry PlayBook launch and citing the need for the new OS to be perfect in order to stand a chance in the market. The most recent model with this OS was the BlackBerry Leap.

Android
In September 2015, BlackBerry announced the Priv, a handset running Android 5.1.1 "Lollipop" (and compatible with an upgrade to Android Marshmallow). It was the first phone from the company not to run an operating system built in-house. BlackBerry's Android is almost stock Android, with the company's own tweaks to improve productivity and security. BlackBerry implemented some of the features of BlackBerry 10 within Android, such as the BlackBerry Hub, the BlackBerry Virtual Keyboard, the BlackBerry Calendar and the BlackBerry Contacts app. On April 1, 2016, BlackBerry reported that it sold 600,000 phones in its fiscal fourth quarter, amid expectations of 750,000–800,000 handset sales for the first full quarter of reporting since the Priv's release. On July 26, 2016, a new, mid-range model with only an on-screen keyboard was introduced, the unusually slim BlackBerry DTEK50, powered by the then-latest version of Android (6.0, Marshmallow) and featuring a 5.2-inch full high-definition display. BlackBerry chief security officer David Kleidermacher stressed data security during the launch, indicating that this model included built-in malware protection and encryption of all user information. By then, the BlackBerry Classic, which used the BlackBerry 10 OS, had been discontinued. In July 2016, industry observers expected the company to announce two additional smartphones over the subsequent 12 months, presumably also with the Android OS. However, BlackBerry COO Marty Beard told Bloomberg that "The company's never said that we would not build another BB10 device." At MWC Barcelona 2017, TCL announced the BlackBerry KEYone, the last phone designed in-house by BlackBerry.

Acquisitions
Through the years, particularly as the company evolved towards its new platform, BlackBerry has made numerous acquisitions of third-party companies and technology.

Slipstream Data Inc.
Slipstream Data Inc. was a network optimization/data compression/network acceleration software company. BlackBerry acquired the company as a wholly owned subsidiary on July 11, 2006. The company continues to operate out of Waterloo.

Certicom
Certicom Corp. is a cryptography company founded in 1985 by Gordon Agnew, Ron Mullin and Scott Vanstone. The Certicom intellectual property portfolio includes over 350 patents and patents pending worldwide that cover key aspects of elliptic-curve cryptography (ECC). The National Security Agency (NSA) has licensed 26 of Certicom's ECC patents as a way of clearing the way for the implementation of elliptic curves to protect U.S. and allied government information. Certicom's current customers include General Dynamics, Motorola, Oracle, Research In Motion and Unisys. On January 23, 2009, VeriSign entered into an agreement to acquire Certicom. Research In Motion put in a counter-offer, which was deemed superior. VeriSign did not match this offer, and so Certicom announced an agreement to be acquired by RIM. Upon the completion of this transaction, Certicom became a wholly owned subsidiary of RIM, and was de-listed from the Toronto Stock Exchange on March 25, 2009.

Dash Navigation
In June 2009, RIM announced it would acquire Dash Navigation, makers of the Dash Express.
Torch Mobile
In August 2009, RIM acquired Torch Mobile, developer of Iris Browser, enabling the inclusion of a WebKit-based browser on its BlackBerry devices; this became the web browser in the subsequent Java-based operating systems (BlackBerry 6, BlackBerry 7) and in the QNX-based operating systems (BlackBerry Tablet OS and BlackBerry 10). The first product to contain this browser, the BlackBerry Torch 9800, was named after the company.

DataViz
On September 8, 2010, DataViz, Inc. sold its office suite Documents To Go and other assets to Research In Motion for $50 million. Subsequently, the application, which allows users to view and edit Microsoft Word, Microsoft Excel and Microsoft PowerPoint files, was bundled on BlackBerry smartphones and tablets.

Viigo
On March 26, 2010, the company announced its acquisition of Viigo, a Toronto-based company that developed the popular Viigo for BlackBerry applications, which aggregated news content from around the web. Terms of the deal were not disclosed.

QNX
RIM reached an agreement with Harman International on April 12, 2010, for RIM to acquire QNX Software Systems. The acquired company was to serve as the foundation for the next-generation BlackBerry platform that crossed devices. QNX became the platform for the BlackBerry PlayBook and BlackBerry 10 smartphones.

The Astonishing Tribe
The Astonishing Tribe (TAT), a user interface design company based in Malmö, Sweden, was acquired by the company on December 2, 2010. With a history of creating user interfaces and applications for mobile, TAT contributed heavily to the user experience of BlackBerry 10 as well as the development of its GUI framework, Cascades.

JayCut
In July 2011, RIM acquired JayCut, a Sweden-based company that developed an online video editor. JayCut technology was incorporated into the media software of BlackBerry 10.

Paratek Microwave
In March 2012, RIM acquired Paratek Microwave, bringing its adaptive RF tuning technology into BlackBerry handsets.

Tungle.me
On September 18, 2012, it was announced that the RIM social calendaring service Tungle.me would be shut down on December 3, 2012. RIM acquired Tungle.me in April 2011.

NewBay
In July 2011, RIM acquired NewBay, an Irish company that provided online video, photo and other digital-content services for media and network operators. RIM subsequently sold NewBay to Synchronoss in December 2012 for $55.5 million.

Scoreloop
On June 7, 2011, Scoreloop was acquired by BlackBerry for US$71 million. It provided tools for adding social elements to any game (achievements, rewards, etc.) and was central to BlackBerry 10's Games app. On December 1, 2014, all Scoreloop services were shut down.

Gist
Gist was acquired by BlackBerry in February 2011. Gist is a tool that helps users organise and view all their contacts in one place. Gist's services closed down on September 15, 2012, in order for the company to focus on BlackBerry 10.

Scroon
BlackBerry Ltd. acquired Scroon in May 2013. The French startup managed Facebook, Twitter and other social-media accounts for large clients such as luxury-goods maker LVMH Moët Hennessy Louis Vuitton SA, wireless operator Orange SA (ORA) and Warner Bros. Entertainment. The deal was publicly announced in November 2013. According to Scroon founder Alexandre Mars, the purchase had not been disclosed earlier because of the "delicate media buzz" around BlackBerry. Scroon is part of BlackBerry's strategy to profit from the BlackBerry Messenger instant-messaging service by utilizing the newly unveiled BBM Channels.
Financial terms were not disclosed.

Movirtu
Movirtu was acquired by BlackBerry in September 2014. Movirtu is a U.K. startup whose technology allows multiple phone numbers to be active on a single mobile device. At the time of the acquisition, BlackBerry announced it would expand this functionality beyond BlackBerry 10 to other mobile platforms such as Android and iOS.

Secusmart
Secusmart was acquired in September 2014. The acquisition of the German company was one of the steps taken to position BlackBerry as the most secure provider in the mobile market. Secusmart held the agreement to equip the German government with highly secure mobile devices that encrypt voice as well as data on BlackBerry 10 devices. Those phones are in use by Angela Merkel and most of the ministers, as well as several departments and the parliament.

WatchDox
WatchDox was an Israel-based Enterprise File Synchronization and Sharing company which specialized in securing access to documents on a cloud basis. BlackBerry acquired the company in April 2015. On December 8, 2016, BlackBerry renamed WatchDox to BlackBerry Workspaces. In August 2019, BlackBerry closed down its Israel development center.

AtHoc
On July 22, 2015, BlackBerry announced that it had acquired AtHoc, a provider of secure, networked emergency communications.

Good Technology
On September 4, 2015, BlackBerry announced the acquisition of mobile security provider Good Technology for $425 million. On December 8, 2016, it rebranded Good's products and integrated them into the BlackBerry Enterprise Mobility Suite, a set of tiered software offerings for its enterprise customers.

Encription
On February 24, 2016, BlackBerry acquired UK-based cybersecurity consultancy Encription.

Cylance
On November 16, 2018, Cylance was purchased by BlackBerry Limited for US$1.4 billion in an all-cash deal. The technology behind Cylance would enable BlackBerry to add artificial intelligence capabilities to its existing software products for IoT applications and other services. Cylance would run as a separate division within BlackBerry Limited's current operations.

Software
BlackBerry Unified Endpoint Manager (UEM)
An Enterprise Mobility Management platform that provides provisioning and access control over smartphones, tablets, laptops, and desktops, with support for all major platforms including iOS, Android (including Android for Work and Samsung KNOX), BlackBerry 10, Windows 10, and macOS. UEM (formerly known as BES) also acts as a unified management console and server for BlackBerry Dynamics, BlackBerry Workspaces, and BlackBerry 2FA.

BlackBerry Dynamics (Formerly Good Dynamics)
A Mobile Application Management platform that manages and secures app data through application virtualization. The BlackBerry Dynamics suite of apps includes email, calendar, contacts, tasks, instant messaging, browsing, and document sharing. The BlackBerry Dynamics SDK allows developers to utilize the platform's security and add functionality from BlackBerry's other solutions into their applications.

BlackBerry Workspaces (Formerly WatchDox)
An Enterprise File Synchronization and Sharing (EFSS) platform, Workspaces provides file-level digital rights management controls alongside file synchronization and sharing functionality.

BlackBerry 2FA (Formerly Strong Authentication)
A two-factor, certificate-based VPN authentication solution that allows users to authenticate without requiring PINs or passwords.
BBM Enterprise
An IP-based enterprise instant messaging platform that provides end-to-end encryption for voice, video, and text-based communication. On February 7, 2017, BlackBerry released the BBM Enterprise SDK, a "Communications Platform as a Service" kit that allows developers to incorporate BBM Enterprise's messaging capabilities into their own applications. These capabilities include secure messaging, voice, video, file sharing, and presence information.

BlackBerry AtHoc
An emergency communication system, AtHoc provides two-way messaging and notifications across a range of devices and platforms. On May 17, 2017, BlackBerry released AtHoc Account to help businesses more easily keep track of their staff in an emergency.

SecuSUITE
An anti-eavesdropping solution that provides voice, data, and SMS encryption.

BlackBerry QNX
A real-time embedded operating system, QNX drives multiple software systems in modern vehicles, and forms the basis of solutions like BlackBerry Radar, an IoT-based asset tracking system for the transportation industry.

Patent litigation
Since the turn of the century, RIM has been embroiled in a series of suits relating to alleged patent infringement.

Glenayre Electronics
In 2001, Research In Motion sued competitor Glenayre Electronics Inc. for patent infringement, partly in response to an earlier infringement suit filed by Glenayre against RIM. RIM sought an injunction to prevent Glenayre from infringing on RIM's "Single Mailbox Integration" patent. The suit was ultimately settled in favour of RIM.

Good Technology
In June 2002, Research In Motion filed suit against competitor Good Technology, a start-up founded in 2000. RIM filed additional complaints throughout the year. In March 2004, Good agreed to a licensing deal, thereby settling the outstanding litigation.

Handspring
On September 16, 2002, Research In Motion was awarded a patent pertaining to keyboard design on hand-held e-mail devices. Upon receiving the patent, it proceeded to sue Handspring over its Treo device. Handspring eventually agreed to license RIM's patent and avoid further litigation in November of the same year.

NTP
NTP, a patent-holding company, had sued RIM over its wireless e-mail patents and won a jury verdict of infringement, which RIM appealed. During the appeals, RIM discovered new prior art that raised a "substantial new question of patentability" and filed for a reexamination of the NTP patents in the United States Patent and Trademark Office. That reexamination was conducted separately from the court cases for infringement. In February 2006, the USPTO rejected all of NTP's claims in three disputed patents. NTP appealed the decision, and the reexamination process was still ongoing as of July 2006 (See NTP, Inc. for details). On March 3, 2006, RIM announced that it had settled its BlackBerry patent dispute with NTP. Under the terms of the settlement, RIM agreed to pay NTP US$612.5 million in a "full and final settlement of all claims". In a statement, RIM said that "all terms of the agreement have been finalized and the litigation against RIM has been dismissed by a court order this afternoon. The agreement eliminates the need for any further court proceedings or decisions relating to damages or injunctive relief."

Xerox
On July 17, 2003, while still embroiled in litigation with NTP and Good Technology, RIM filed suit against Xerox in the U.S. District Court in Hartford, Connecticut. The suit was filed in response to discussions about patents held by Xerox that might affect RIM's business, and also asked that the Xerox patents be invalidated.

Visto
On May 1, 2006, RIM was sued by Visto for infringement of four patents.
Though the patents were widely considered invalid and in the same vein as the NTP patents – with a judgement going against Visto in the U.K. – RIM settled the lawsuit in the United States on July 16, 2009, with RIM agreeing to pay Visto US$267.5 million plus other undisclosed terms.

Motorola
On January 22, 2010, Motorola requested that all BlackBerry smartphones be banned from being imported into the United States for infringing upon five of Motorola's patents. The patents in question, covering "early-stage innovations", included UI, power management and Wi-Fi. RIM countersued later the same day, alleging anti-competitive behaviour and that Motorola had broken a 2003 licensing agreement by refusing to extend licensing terms beyond 2008. The companies settled out of court on June 11, 2010.

Eatoni
On December 5, 2011, Research In Motion obtained an order granting its motion to dismiss plaintiff Eatoni's claims that RIM violated Section 2 of the Sherman Antitrust Act and equivalent portions of New York's Donnelly Act. Eatoni alleged that RIM's purported infringement of its '317 patent constituted an antitrust violation. Eatoni Ergonomics, Inc. v. Research In Motion Corp., No. 08-Civ. 10079 (WHP) (S.D.N.Y. 2011), Memorandum and Order, p. 1 (Pauley, J.).

Mformation
In July 2012, a U.S. federal court jury awarded damages (later overturned) of $147 million against Research In Motion. The jury decided that Research In Motion had violated patents of Mformation and calculated damages of $8 each on 18.4 million units for royalties on past sales of devices to nongovernment U.S. customers only, not including future royalty payments inside and outside the U.S. On August 9, 2012, that verdict was overturned by the trial court. RIM had argued that Mformation's patent claims were invalid because the processes were already being used when Mformation filed its patent application. Judge James Ware said Mformation failed to establish that RIM had infringed on the company's patent.

Qualcomm
On May 26, 2017, BlackBerry announced that it had reached an agreement with Qualcomm Incorporated resolving all amounts payable in connection with the interim arbitration decision announced on April 12, 2017. Following a joint stipulation by the parties, the arbitration panel issued a final award providing for the payment by Qualcomm to BlackBerry of a total amount of US$940,000,000 including interest and attorneys' fees, net of certain royalties due from BlackBerry for calendar 2016 and the first quarter of calendar 2017.

Facebook
On March 8, 2018, BlackBerry Limited sued Facebook Inc. in federal court in Los Angeles. According to BlackBerry, Facebook built swaths of its empire on messaging technology originally developed by BlackBerry while Facebook chief Mark Zuckerberg was still living in a Harvard University dorm room. BlackBerry alleged that many features of the Facebook messaging service infringe on BlackBerry patents. In January 2021, BlackBerry shares jumped 20% after the company settled the patent dispute with Facebook.

Controversies
Stock option scandal settlement
In 2007, co-CEO Jim Balsillie was forced to resign as chairman as the company announced a $250-million earnings restatement relating to mistakes in how it granted stock options. Furthermore, an internal review found that hundreds of stock-option grants had been backdated, timed to a low share price to make them more lucrative.
In January 2009, Canadian regulators stated that they were seeking a record penalty of US$80 million from the top two executives, co-CEOs Jim Balsillie and Mike Lazaridis. Furthermore, the Ontario Securities Commission (OSC) pushed for Balsillie to pay the bulk of any penalty and relinquish his seat on RIM's board of directors for a period of time. On February 5, 2009, several executives and directors of Research In Motion agreed to pay the penalties to settle an investigation into the backdating of stock options. The Ontario Securities Commission approved the arrangement in a closed-door meeting. Under the terms of a settlement agreement with the OSC, RIM co-chief executive officers Jim Balsillie and Mike Lazaridis, as well as chief operating officer Dennis Kavelman, would jointly pay a total of C$68 million to RIM to reimburse the company for losses from the backdating and for the costs of a long internal investigation. The three were also required to pay C$9 million to the OSC. Initially, Balsillie stepped down from RIM's board of directors temporarily for a year while remaining in his executive role. Balsillie left the board in January 2012 and stepped down from his executive role in March 2012.

Environmental record
In November 2011, RIM was ranked 15th out of 15 electronics manufacturers in Greenpeace's re-launched Guide to Greener Electronics. The guide ranks manufacturers according to their policies and practices to reduce their impact on the climate, produce greener products, and make their operations more sustainable. RIM appeared for the first time in 2011 with a score of 1.6 out of 10. In the Energy section the company was criticized by Greenpeace for not seeking external verification for its data on greenhouse gas (GHG) emissions, for not having a clean electricity plan and for not setting a target to reduce GHG emissions. RIM performed badly in the Products category, only scoring points for the energy efficiency of its products, as it reported that its BlackBerry charger received the European Commission IPP 4-star rating. Meanwhile, on Sustainable Operations the company scored well for its stance on conflict minerals and received points for its Paper Procurement Policy and its mail-back programme for e-waste. Nevertheless, RIM was given no points for the management of GHG emissions from its supply chain. In its 2012 report on progress relating to conflict minerals, the Enough Project rated RIM the sixth highest of 24 consumer electronics companies.

Anonymous open letter to management
On June 30, 2011, an anonymous open letter, allegedly written by a senior RIM employee, was addressed to the company's senior management. The writer's main objective was getting co-CEOs Mike Lazaridis and Jim Balsillie to seriously consider his or her suggestions and complaints about the current state and future direction of the company.

Service outages
On October 10, 2011, RIM experienced one of the worst service outages in the company's history. Tens of millions of BlackBerry users in Europe, the Middle East, Africa, and North America were unable to receive or send emails and BBM messages through their phones. The outage was caused by a core switch failure; "a transition to a back-up switch did not function as tested, causing a large backlog of data," RIM said. Service was restored on October 13, with RIM announcing a package of free premium apps worth $100 for users, as well as enterprise support extensions.
Government access to communication After a four-year stand-off with the Indian government over access to RIM's secure networks, the company demonstrated a solution that can intercept consumer email and messaging traffic between BlackBerry handsets, and make these encrypted communications available to Indian security agencies. There continues to be no access to secure encrypted BlackBerry enterprise communications or corporate emails, except through the Canadian Mutual Legal Assistance in Criminal Matters Act. See also List of multinational corporations List of mobile phone brands by country BlackBerry (article about the brand of electronic devices) BlackBerry Mobile List of BlackBerry products List of mergers and acquisitions by BlackBerry Index of articles related to BlackBerry OS Science and technology in Canada References Further reading External links Hoovers - BlackBerry Ltd. profile 1984 establishments in Ontario Canadian brands Canadian companies established in 1984 Companies listed on the Toronto Stock Exchange Electronics companies established in 1984 Electronics companies of Canada Mobile phone manufacturers Multinational companies headquartered in Canada 1998 initial public offerings Companies based in Waterloo, Ontario Shorty Award winners Academy Award for Technical Achievement winners
84539
https://en.wikipedia.org/wiki/Toma%C5%BE%20Pisanski
Tomaž Pisanski
Tomaž (Tomo) Pisanski (born May 24, 1949 in Ljubljana, Yugoslavia, which is now in Slovenia) is a Slovenian mathematician working mainly in discrete mathematics and graph theory. He is considered by many Slovenian mathematicians to be the "father of Slovenian discrete mathematics."

Biography
As a high school student, Pisanski competed in the 1966 and 1967 International Mathematical Olympiads as a member of the Yugoslav team, winning a bronze medal in 1967. He studied at the University of Ljubljana, where he obtained a B.Sc., M.Sc. and PhD in mathematics. His 1981 PhD thesis in topological graph theory was written under the guidance of Torrence Parsons. He also obtained an M.Sc. in computer science from Pennsylvania State University in 1979. Currently, Pisanski is a professor of discrete and computational mathematics and Head of the Department of Information Sciences and Technology at the University of Primorska in Koper. In addition, he is a Professor at the University of Ljubljana Faculty of Mathematics and Physics (FMF). He has been a member of the Institute of Mathematics, Physics and Mechanics (IMFM) in Ljubljana since 1980, and the leader of several IMFM research projects. In 1991 he established the Department of Theoretical Computer Science at IMFM, of which he has served as both head and deputy head. He has taught undergraduate and graduate courses in mathematics and computer science at the University of Ljubljana, University of Zagreb, University of Udine, University of Leoben, California State University, Chico, Simon Fraser University, University of Auckland and Colgate University. Pisanski has been an adviser for M.Sc. and PhD students in both mathematics and computer science. Notable students include John Shawe-Taylor (B.Sc. in Ljubljana), Vladimir Batagelj, Bojan Mohar, Sandi Klavžar, and Sandra Sattolo (M.Sc. in Udine).

Research
Pisanski's research interests span several areas of discrete and computational mathematics, including combinatorial configurations, abstract polytopes, maps on surfaces, chemical graph theory, and the history of mathematics and science. In 1980 he calculated the genus of the Cartesian product of any pair of connected, bipartite, d-valent graphs using a method that was later called the White–Pisanski method. In 1982 Vladimir Batagelj and Pisanski proved that the Cartesian product of a tree and a cycle is Hamiltonian if and only if no degree of the tree exceeds the length of the cycle (a short computational sketch of this criterion is given at the end of this entry). They also proposed a conjecture concerning cyclic Hamiltonicity of graphs. Their conjecture was proved in 2005. With Brigitte Servatius he is the co-author of the book Configurations from a Graphical Viewpoint (2013).

Selected publications
Pisanski, T. Genus of Cartesian products of regular bipartite graphs, Journal of Graph Theory 4 (1), 1980, 31-42. doi:10.1002/jgt.3190040105
Graovac, A., T. Pisanski. On the Wiener index of a graph, Journal of Mathematical Chemistry 8 (1), 1991, 53-62. doi:10.1007/BF01166923
Boben, M., B. Grunbaum, T. Pisanski, A. Zitnik, Small triangle-free configurations of points and lines, Discrete & Computational Geometry 35 (3), 2006, 405-427. doi:10.1007/s00454-005-1224-9
Conder, M., I. Hubard, T. Pisanski. Constructions for chiral polytopes, Journal of the London Mathematical Society 77 (1), 2007, 115-129. doi:10.1112/jlms/jdm093
Pisanski, T. A classification of cubic bicirculants, Discrete Mathematics 307 (3-5), 2007, 567-578.
doi:10.1016/j.disc.2005.09.053

Professional life
From 1998 to 1999, Pisanski was chairman of the Society of Mathematicians, Physicists and Astronomers of Slovenia (DMFA Slovenije); he was appointed an honorary member in 2015. He is a founding member of the International Academy of Mathematical Chemistry, serving as its Vice President from 2007 to 2011. In 2008, together with Dragan Marušič, he founded Ars Mathematica Contemporanea, the first international mathematical journal to be published in Slovenia. In 2012 he was elected to the Academia Europaea. He is currently president of the Slovenian Discrete and Applied Mathematics Society (SDAMS), the first Eastern European mathematical society not wholly devoted to theoretical mathematics to be accepted as a full member of the European Mathematical Society (EMS).

Awards and honors
In 2005, Pisanski was decorated with the Order of Merit (Slovenia), and in 2015 he received the Zois award for exceptional contributions to discrete mathematics and its applications. In 2016, he received the Donald Michie and Alan Turing Prize for lifetime achievements in Information Science in Slovenia.

References

External links
Pisanski's CV
International Academy of Mathematical Chemistry - List of Members
Slovenian Academy of Engineering - List of Members
Images of Knowledge: Tomaž Pisanski - RTV radio interview
8th European Congress of Mathematics website
Maps ∩ Configurations ∩ Polytopes ∩ Molecules ⊆ Graphs: The mathematics of Tomaž Pisanski on the occasion of his 70th birthday
Ars Mathematica Contemporanea website
Slovenian Society for Discrete and Applied Mathematics (SDAMS) website

1949 births 20th-century mathematicians 21st-century mathematicians Graph theorists Living people Pennsylvania State University alumni Slovenian mathematicians Slovenian computer scientists Scientists from Ljubljana Mathematical chemistry University of Ljubljana alumni Members of Academia Europaea University of Ljubljana faculty University of Primorska faculty University of Zagreb faculty Montanuniversität Leoben faculty California State University, Chico faculty Simon Fraser University faculty University of Auckland faculty Colgate University faculty International Mathematical Olympiad participants Computational chemists
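As a small illustration of the Batagelj–Pisanski criterion mentioned in the Research section above, the short Python sketch below checks whether the Cartesian product of a tree and a cycle is Hamiltonian by comparing the tree's maximum degree with the cycle length. It is an explanatory aid only, not code by Pisanski or his co-authors; the edge-list representation and function names are chosen for this example.

# Batagelj-Pisanski (1982): the Cartesian product T x C_k of a tree T and a
# cycle C_k is Hamiltonian if and only if the maximum degree of T is at most k.

def max_degree(tree_edges):
    """Maximum vertex degree of a tree given as a list of edges."""
    degree = {}
    for u, v in tree_edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return max(degree.values())

def product_with_cycle_is_hamiltonian(tree_edges, cycle_length):
    """True iff the Cartesian product of the tree and a cycle of the given length is Hamiltonian."""
    return max_degree(tree_edges) <= cycle_length

star3 = [(0, 1), (0, 2), (0, 3)]          # the star K_{1,3}, maximum degree 3
star4 = [(0, 1), (0, 2), (0, 3), (0, 4)]  # the star K_{1,4}, maximum degree 4
print(product_with_cycle_is_hamiltonian(star3, 3))  # True:  3 <= 3
print(product_with_cycle_is_hamiltonian(star4, 3))  # False: 4 > 3

The sketch only evaluates the degree condition; it does not construct the Hamiltonian cycle itself.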
3897604
https://en.wikipedia.org/wiki/OpenOSPFD
OpenOSPFD
OpenOSPFD is an ISC-licensed implementation of the Open Shortest Path First (OSPF) routing protocol. It is a network routing software suite which allows ordinary general-purpose computers to be used as routers, exchanging routes with other computer systems speaking the OSPF protocol. OpenOSPFD was developed by Esben Nørby and Claudio Jeker for the OpenBSD project. It is a companion daemon of OpenBGPD. The software was developed as an alternative to packages such as Quagga, a routing software suite which is licensed under the GPL. OpenOSPFD is developed on OpenBSD, and ports exist for FreeBSD and NetBSD.

Goals
The design goals of OpenOSPFD include being secure (non-exploitable), reliable, lean and easy to use. The configuration language is intended to be both powerful and easy enough for most users; a minimal example configuration is sketched after the external links below.

External links
openospfd for FreeBSD
OpenOSPFd and FreeBSD
Routing with OpenBSD using OpenOSPFD and OpenBGPD - Paper (pdf) by Claudio Jeker (2006)
OpenOSPF Presentation - by Claudio Jeker

BSD software OSPFD Free routing software OpenBSD software using the ISC license
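To give a sense of the configuration language mentioned above, the following is a minimal sketch of what an ospfd.conf for OpenOSPFD might look like. It is illustrative only: the router ID, interface names and metric are placeholder values chosen for this example, and the authoritative syntax is documented in the ospfd.conf(5) manual page of the running system.

# Example ospfd.conf (illustrative; addresses and interface names are placeholders)
router-id 192.0.2.1

area 0.0.0.0 {
        interface em0 {
                metric 10
        }
        interface em1
}

On OpenBSD the file can typically be syntax-checked with "ospfd -n" before the daemon is started through the normal rc framework.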
12837452
https://en.wikipedia.org/wiki/Ilium/Olympos
Ilium/Olympos
Ilium/Olympos is a series of two science fiction novels by Dan Simmons. The events are set in motion by beings who appear to be ancient Greek gods. Like Simmons' earlier series, the Hyperion Cantos, it is a form of "literary science fiction"; it relies heavily on intertextuality, in this case with Homer and Shakespeare as well as references to Marcel Proust's À la recherche du temps perdu (or In Search of Lost Time) and Vladimir Nabokov's novel Ada or Ardor: A Family Chronicle. As with most of his science fiction, and in particular with Hyperion, Ilium demonstrates that Simmons writes in the soft science fiction tradition of Ray Bradbury and Ursula K. Le Guin. Ilium is based on a literary approach similar to most of Bradbury's work, but describes larger segments of society and broader historical events. As in Le Guin's Hainish series, Simmons places the action of Ilium in a vast and complex universe made of relatively plausible technological and scientific elements. Yet Ilium is different from any of the works of Bradbury and Le Guin in its exploration of the very far future of humanity, and in the extra-human or post-human themes associated with this. It deals with the concept of technological singularity, where technological change starts to occur beyond the ability of humanity to presently predict or comprehend. The first book, Ilium, received the Locus Award for Best Science Fiction novel in 2004.

Plot introduction
The series centers on three main character groups: that of the scholic Hockenberry, Helen, and the Greek and Trojan warriors from the Iliad; Daeman, Harman, Ada and the other humans of Earth; and the moravecs, specifically Mahnmut the Europan and Orphu of Io. The novels are written in the first person and present tense when centered on Hockenberry's character, but feature a third-person, past-tense narrative in all other instances. Much as in Simmons' Hyperion, where the characters' stories are told over the course of the novels while the actual events serve as a frame, the stories of the three character groups unfold over the course of the novels and do not begin to converge until the end.

Characters in Ilium/Olympos
Old-style humans
The "old-style" humans of Earth exist at what the post-humans claimed would be a stable, minimum herd population of one million. In reality, their numbers are much smaller than that, around 300,000, because each woman is allowed to have only one child. Their DNA incorporates moth genetics, which allows sperm storage and the choice of father-sperm years after sexual intercourse has actually occurred. This reproductive method causes many children not to know their father, and also helps to break incest taboos, in that the firmary, which controls the fertilization, protects against a child of close relatives being born. The old-style humans never appear any older than about 40, since every twenty years they are physically rejuvenated. Ada: the owner of Ardis Hall and Harman's lover. She is just past her first twenty. She hosts Odysseus/Noman for his time on Earth. Daeman: a pudgy man approaching his second twenty. Both a ladies' man and a lepidopterist. Also terrified of dinosaurs. At the start of Ilium he is a pudgy, immature man-child who wishes to have sex with his cousin Ada (incest taboos have all but ceased to exist in his society), with whom he had a brief relationship when she was a teenager; by the end of the tale he is a mature leader who is very fit and strong. His mother's name is Marina. Hannah: Ada's younger friend.
Both inventor and artist. Develops a romantic interest in Odysseus. Harman: Ada's lover. 99 years old. Only human with the ability to read, other than Savi. Savi: the Wandering Jew. The only old-style human not gathered up in the final fax 1,400 years earlier. She has survived the years by spending most of them sleeping in cryo crèches and spending only a few months awake at a time every few decades. Moravecs Named after the roboticist Hans Moravec, they are autonomous, sentient, self-evolving biomechanical organisms that dwell on the Jovian moons. They were seeded throughout the outer Solar System by humans during the Lost Age. Most moravecs are self-described humanists and study Lost Age culture, including literature, television programs and movies. Mahnmut the Europan: explorer of Europa's oceans and skipper of the submersible, The Dark Lady. An amateur Shakespearean scholar. Orphu of Io: a heavily armored, 1,200-year-old hard-vac moravec that is shaped not unlike a crab. Weighing eight tons and measuring six meters in length, Orphu works in the sulfur-torus of Io, and is a Proust enthusiast. rockvecs: a subgroup of the moravecs, the rockvecs live on the Asteroid Belt and are more adapted for combat and hostile environments than the moravecs. Scholics Dead scholars from previous centuries that were rebuilt by the Olympian gods from their DNA. Their duties are to observe the Trojan War and report the discrepancies that occur between it and Homer's Iliad. Dr. Thomas Hockenberry: Ph.D. in classical studies and a Homeric scholar. Died of cancer in 2006 and is resurrected by the Olympian Gods as a scholic. Lover of Helen of Troy. Dr. Keith Nightenhelser: Hockenberry's oldest friend and a fellow scholic. (The real Nightenhelser was Simmons' roommate at Wabash College and is currently a professor at DePauw University.) Others Achaeans and Trojans: the heroes and minor characters are drawn from Homer's epics, as well as the works of Virgil, Proclus, Pindar, Aeschylus, Euripides, and classical Greek mythology. Ariel: a character from The Tempest and the avatar of the evolved, self-aware biosphere. Using locks of Harman's hair, Daeman's hair, and her own hair, Savi makes a deal with Ariel in order that they might pass without being attacked by the calibani. Caliban: a monster, son of Sycorax and servant of Prospero, whom John Clute describes as "a cross between Gollum and the alien of Alien." He is cloned to create the calibani, weaker clones of himself. Caliban speaks in strange speech patterns, with much of his dialogue taken from the dramatic monologue "Caliban upon Setebos" by Robert Browning. Simmons chooses not to portray Caliban as the "oppressed but noble native soul straining under the yoke of capitalist-colonial-imperialism" that current interpretations employ to portray him, which he views as "a weak, pale, politically correct shadow of the slithery monstrosity that made audiences shiver in Shakespeare's day ... Shakespeare and his audiences understood that Caliban was a monsterand a really monstrous monster, ready to rape and impregnate Prospero's lovely daughter at the slightest opportunity." Odysseus: Odysseus after his Odyssey, ten years older than the Odysseus who fights in the Trojan War. In Olympos, he adopts the name Noman, which is a reference to the name Odysseus gives to Polyphemus the Cyclops on their encounter, in Greek, Outis (), meaning "no man" or "nobody". Olympian Gods: former post-humans who were transformed into gods by Prospero's technology. 
They do not remember the science behind their technology, save for Zeus and Hephaestus, and they are described both as preliterate and post-literate, for which reason they enlist the services of Thomas Hockenberry and other scholics. They dwell on Olympus Mons on Mars and use quantum teleportation in order to get to the recreation of Troy on an alternate Earth. Though the events of the Trojan War are being recreated with the knowledge of Homer's Iliad, the only ones who know its outcome are the scholics and Zeus, as Zeus has forbidden the other gods from knowing it. post-humans: former humans who enhanced themselves far beyond the normal bounds of humanity and dwelt in orbital rings above the Earth until Prospero turned some into Olympian gods. The others were slaughtered by Caliban. They had no need of bodies, but when they took on human form they only took on the shape of women. Prospero: a character from The Tempest who is the avatar of the self-aware, post-Internet logosphere, a reference to Vladimir Vernadsky's idea of the noosphere. Setebos: Sycorax and Caliban's god. The god is described as "many-handed as a cuttlefish" in reference to "Caliban upon Setebos" by Robert Browning and is described by Prospero as being an "arbitrary god of great power, a September eleven god, an Auschwitz god." Sycorax: a witch and Caliban's mother. Also known as Circe or Demyx or Calypso. The Quiet: an unknown entity (presumably God, judging from the Demogorgon's speeches and the words of Prospero) said to incarnate himself in different forms all across the universe. He is Setebos' nemesis, which could create a kind of God-against-the-Devil picture, as Setebos is the background antagonist and Prospero and Ariel, servants of The Quiet, are the background protagonists. zeks: the Little Green Men of Mars. A chlorophyll-based lifeform that comes from the Earth of an alternate universe. Their name comes from a slang term related to the Russian word sharashka, which is a scientific or technical institute staffed with prisoners. The prisoners of these Soviet labor camps were called zeks. (This description of the term's origin is a mistake by the author: zek was not limited to sharashka prisoners but was a common term for all Gulag camp prisoners, derived from the word zaklyuchennyi, "inmate". The camp described in One Day in the Life of Ivan Denisovich is a regular labor camp, not a sharashka.)

Science of Ilium/Olympos
As much of the action derives from fiction involving gods and wizards, Simmons rationalises most of this through his use of far-future technology and science, including:
String theory: interdimensional transport is conducted via Brane Holes.
Nanotechnology provides the gods' immortality and powers, and many of the cybernetic functions possessed by some of the humans.
Reference to Vladimir Vernadsky's idea of the noosphere is made to explain the origins of powerful entities such as Ariel and Prospero, the former arising from a network of datalogging mote machines, and the latter deriving from a post-Internet logosphere.
Quantum theory and quantum gravity are also used to account for a number of other things, from Achilles' immortality (his mother, Thetis, set the quantum probability for his death to zero for all means of death other than by Paris' bow) to teleportation and shapeshifting powers.
ARNists use recombinant DNA techniques to resurrect long-dead and prehistoric animals.
Pantheistic solipsism is used to explain how 'mythical' characters have entered the "real" world.
Weapons
Old-style humans: Other than flechette rifles scavenged from caches, crossbows are the main form of weapon, as the old-style humans have forgotten almost everything and can only build crossbows.
Gods: Tasers, energy shields and titanium lances.
Moravecs: Weapons of mass destruction including the Device, ship-based weapons and kinetic missiles.

Miscellaneous
What follows is a definition of terms that are either used within Ilium or are related to its science, technology and fictional history:
ARNists: short for "recombinant RNA artists". ARNists use recombinant DNA techniques to resurrect long-dead and prehistoric animals. Simmons borrows this term from his Hyperion Cantos.
E-ring/P-ring: short for "equatorial ring" or "polar ring" respectively. The rings described are not solid, but rather similar to the rings around Jupiter or Saturn: hundreds of thousands of large individual solid elements, built and occupied by the post-humans before Caliban and Prospero were stranded there and Caliban began murdering the post-humans. The rings are visible from the Earth's surface, but the old-style humans do not know exactly what they are.
Faxnodes: much as the transporter of Star Trek works, the faxnode system takes a living organism, maps out its structure, breaks down its atoms and assembles a copy at the faxport at the intended destination. This copy is a facsimile, or fax, of the original. (Unlike most science-fiction transporter technology, it is revealed late in the story that the matter is not "changed into energy" or "sent" anywhere; a traveler's body is completely destroyed, and re-created from scratch at the destination.)
Final fax: the 9,113 Jews of Savi's time who lived through the Rubicon virus are suspended in a fax beam by Prospero and Ariel with the understanding that once the two get the Earth back into order, they will be released.
Firmary: short for "infirmary". A room in the e-ring that the humans of Earth fax to every Twenty (every twentieth birthday) for physical rejuvenation, or when hurt or killed in order to be healed. If they have been killed, the firmary removes all memory of their death in order to lessen the psychological impact of the event.
Global Caliphate: an empire that, among other things, attempts to destroy the Jewish population of Earth. It released the Rubicon virus to kill all Jews on Earth and programmed the voynix to kill any remaining Jews who escaped the infection.
Quantum theory and quantum gravity: used to account for a number of other things, including Achilles' immortality (in that Thetis set the quantum probability for his death to zero for all means of death other than by Paris' bow), teleportation, and shapeshifting powers.
Rubicon virus: created by the Global Caliphate and released with the intention of exterminating those of Jewish descent. It had the reverse effect, killing eleven billion people (ninety-seven percent of the world's population). Israeli scientists were able to develop an inoculation against the virus and inoculate their own people's DNA, but did not have the time to save the rest of humanity.
Turin cloth: a cloth used by the people of Earth that, when draped over the eyes, allows them to view the events of the Trojan War, which they believe is just a drama being created for their entertainment. Named after the Shroud of Turin.
Voynix: named after the Voynich manuscript. The voynix are biomechanical, self-replicating, programmable robots.
They originated in an alternate universe, and were brought into the Ilium universe before 3000 A.D. The Global Caliphate somehow gained access to these proto-voynix and, after replicating three million of them, battled the New European Union around 3000 A.D. In 3200 A.D., the Global Caliphate upgraded the voynix and programmed them to kill Jews. Using time travel technology acquired from the French (previously used to investigate the Voynich Manuscript, which resulted in the destruction of Paris), the Global Caliphate sent the voynix forward in time to 4600 A.D. Upon their arrival they began to replicate rapidly in the Mediterranean Basin. As the post-human operations there were put at risk, Prospero and Sycorax created the calibani to fend off the voynix, and eventually Prospero reprogrammed them into inactivity. After the final fax, they were reprogrammed to serve the new old-style humans. Literary and cultural influences Simmons references such historical figures, fictional characters and works as Christopher Marlowe, Bram Stoker's Dracula, Plato, Gollum, the Disney character Pluto, Samuel Beckett, and William Butler Yeats' "The Second Coming", among others. As well as referencing these works and figures, he uses others more extensively, shaping his novel by the examples he chooses, such as 9/11 and its effects on the Earth and its nations. Ilium is thematically influenced by extropianism, peopled as it is with post-humans of the far future. It therefore continues to explore the theme pioneered by H. G. Wells in The Time Machine, a work which is also referenced several times in the novel. One of the most notable references is when the old woman Savi calls the current people of Earth eloi, using the word as an expression of her disgust at their self-indulgent society, lack of culture and ignorance of their past. Ilium also includes allusions to the work of Nabokov. The most apparent of these are the inclusion of Ardis Hall and the names of Ada, Daeman and Marina, all borrowed from Ada or Ardor: A Family Chronicle. The society that the old-style humans live in also resembles that of Antiterra, a parallel of our Earth circa the 19th century, a society lacking repression and Christian morality, shown by Daeman's intent to seduce his cousin. Simmons also includes references to Nabokov's fondness for butterflies, such as the butterfly genetics incorporated in the old-style humans and Daeman's enthusiasm as a lepidopterist. Mahnmut of Europa is identified as a Shakespearean scholar; he is introduced in the first chapter analyzing Sonnet 116 in order to send it to his correspondent, Orphu of Io, and it is here that Shakespeare's influence on Ilium begins. Mahnmut's submersible is named The Dark Lady, an allusion to a figure in Shakespeare's sonnets. There is also, of course, The Tempest's presence in the characters of Prospero, Ariel and Caliban. There are also multiple references to other Shakespeare works and characters such as Falstaff, Henry IV, Part I and Twelfth Night. Shakespeare himself even makes an appearance in a dream to Mahnmut and quotes from Sonnet 31. Proustian memory investigations had a heavy hand in the novel's making, which helps explain why Simmons chose Ada or Ardor: A Family Chronicle over a better-known work of Nabokov's, such as Pale Fire. Ada or Ardor was structured to mimic someone recalling their own memories, a subject which Proust explores in his work À la recherche du temps perdu. 
Orphu of Io is more interested in Proust than in Mahnmut's Shakespeare, as he considers Proust "perhaps the ultimate explorer of time, memory, and perception." Simmons' portrayal of Odysseus speaking to the old-style humans at Ardis Hall is also reminiscent of the Bible's accounts of Jesus teaching his disciples. Odysseus is even addressed as "Teacher" by one of his listeners in a way reminiscent of Jesus being addressed as "Rabbi," which is commonly translated as "Teacher". Movie adaptation In January 2004, it was announced that the screenplay Simmons wrote for his novels Ilium and Olympos would be made into a film by Digital Domain and Barnet Bain Films, with Simmons acting as executive producer. Ilium is described as an "epic tale that spans 5,000 years and sweeps across the entire solar system, including themes and characters from Homer's The Iliad and Shakespeare's The Tempest." Awards and recognition Ilium: Locus Award winner and Hugo Award nominee, 2004. Olympos: Locus Award shortlist, 2006. References Science fiction book series Works by Dan Simmons Science fantasy novels Novels set on Mars Classical mythology in popular culture Greek and Roman deities in fiction Nanotechnology in fiction Quantum fiction Resurrection in fiction Teleportation in fiction Biological weapons in popular culture Self-replicating machines in fiction Novels about time travel Novels based on the Iliad Novels based on the Odyssey Modern adaptations of the Odyssey Modern adaptations of the Iliad
38751971
https://en.wikipedia.org/wiki/FORAN%20System
FORAN System
The FORAN System is an integrated CAD/CAM/CAE system developed by SENER for the design and production of practically any naval ship and offshore unit. It is a multidisciplinary and integrated system that can be used in all ship design and production phases and disciplines. The system collects all the information in a single database. FORAN is mainly focused on the design of: Merchant ships: Ro-Pax, Ro-Ro, bulk carriers, chemical tankers, container ships, and cement and oil tankers. Navy vessels (surface ships and submarines), for which the system allows designers to carry out configuration control, analyze different design alternatives (prototypes), handle advanced hull forms, manage materials and special standards, and introduce customized criteria. Special vessels: tugs and workboats, hotel vessels, fishing vessels, fish transport vessels, oceanographic vessels, etc. Offshore units such as floating platforms (both anchored and fixed), as well as vessels for staff transportation, anchor handling, and applications such as supply, rescue, firefighting or anti-pollution. The latest version of the integrated CAD/CAM/CAE system is FORAN V80; however, FORAN V70 is still widely used. FORAN V70 Common tools: this version supports Unicode characters; this functionality enables entering text and generating information in languages using non-Latin characters such as Chinese, Russian or Korean. FORAN dialogues and menu names can also be translated. Moreover, this update includes FVIEWER, a virtual reality module that replaces the former VISUAL3D module. This application takes advantage of state-of-the-art graphics card capabilities and allows the management of very large amounts of model data. Drafting: this update also includes a new 2D environment, based on the QCAD application and compatible with AutoCAD, developed for use in the module for the definition of norms and structure standards (FNORM), in the General Arrangement module (FGA), and for the definition of electrical and P&I diagrams. It also includes developments for interim product drawings, symbolic drawings and 3D model view drawings. Project: one of the most relevant developments of the new version is the General Arrangement module (FGA) for the definition of spaces and the general ship arrangement, in either 2D or 3D environments, with all data stored in the FORAN database. The application allows the general arrangement drawing to be generated efficiently. In addition, the module for probabilistic damage stability calculations (FSUBD) now offers the possibility of considering intermediate stages of flooding, in accordance with the SOLAS standard. The automatic assignment of spaces to subzones has also been improved. Hull structure: the FNORM module for the definition of structural standards is provided with a user interface including multi-dock windows and snap points; furthermore, it allows geometrical restrictions to be added and includes layer management. Other capabilities of this module include longer identifications and descriptions of blocks, materials and geometrical norms, as well as a hierarchical structure for the definition of standards and geometrical norms. 
Moreover, the following features of hull structure modeling must be highlighted: an algorithm to represent corrugated parts more accurately, commands for checking the edge preparation of plates and profiles, options for the definition of face bars, and an algorithm to represent curved shell and deck plates more accurately. Regarding profile and plate nesting, the NEST module allows the nesting of identical parts assigned to different interim products and retains the information needed to recognize each individual part. Outfitting: FORAN V70 includes piping design tools for pipeline routing. In the new update, polygonal lines are no longer needed: pipelines are now routed dynamically, displaying the pipeline as a solid model with significant snap points. The version incorporates design functionality adapted to the production circumstances of each shipyard, such as a command for smart splitting of pipe segments based on the standard pipe length defined in the components library, checking utilities to control spool fabrication restrictions before generating drawings, and greater flexibility in the creation of sets of piping elements. Electrical design: the electrical design module now allows the generation of cable conduits for special non-standard cross-section cableways and the definition of conduits with cables inside cable trays, taking them into account in the cross-section filling calculations. In cable routing, the definition of cable splitting has also been improved, and the management of partially routed cables and of the connection between cables and terminal blocks works better. Product lifecycle management (PLM): FORAN V70 allows integration with different PLM systems through a neutral solution built on standards such as CORBA and web services. FORAN FVIEWER VR The use of a Virtual Reality (VR) environment offers significant advantages in shipbuilding. The most important advantage is the ability to review the model and detect errors at early design stages, with a significant cost reduction. VR allows an intuitive and quick evaluation of the model, queries, measurement of distances, ergonomic studies, collision detection, evaluation of design changes, and simulation of mounting, dismantling and operation tasks. The viewer, called FORAN FVIEWER VR and part of the FORAN System, includes stereoscopic capability, allowing 3D navigation around the model of a ship, and can be used with tracking devices. The module offers a high degree of user-model interaction during 3D navigation. The solution can be used in any kind of VR room as well as on portable setups, workstations, etc. Head Mounted Display (HMD) The development of virtual reality applications within the shipbuilding industry is currently booming. After developing the second generation of the viewer for interactive navigation through the ship 3D model (FORAN FVIEWER VR), SENER and Ingevideo developed a Head Mounted Display (HMD) solution. A virtual reality HMD is a device that lets the user view and interact with 3D simulation environments, in this case applied to 3D models generated in FORAN. Working in virtual reality environments provides benefits and cost reductions, since it allows design reviews to be carried out virtually in an interactive and intuitive way. FORAN system references The FORAN system is used in more than 150 ship design offices and shipyards in 30 countries. 
Some of its most significant references are: Strategic projection ship for Navantia. Cruise salvage vessel for the China Ship Development and Design Center (CSDDC). CVF aircraft carrier for the Royal Navy (client: BAE Systems Babcock Marine). “Bahía Uno”, oil tanker for Astilleros de Murueta. “Ruiloba”, a 1,350 TEU container ship for the client Hijos de J. Barreras Shipyard. Frigate F-310, built at the Navantia shipyards for the Royal Norwegian Navy. Semi-submersible platform GM 400 for Global Maritime. See also Comparison of computer-aided design editors List of 3D computer graphics software List of 3D rendering software List of 3D modeling software References Marine engineering Virtual reality Computer-aided design software Computer-aided manufacturing software Computer-aided engineering software Product lifecycle management
61963
https://en.wikipedia.org/wiki/List%20of%20Greek%20mythological%20figures
List of Greek mythological figures
The following is a list of gods, goddesses and many other divine and semi-divine figures from ancient Greek mythology and ancient Greek religion. Immortals The Greeks created images of their deities for many purposes. A temple would house the statue of a god or goddess, or multiple deities, and might be decorated with relief scenes depicting myths. Divine images were common on coins. Drinking cups and other vessels were painted with scenes from Greek myths. Major gods and goddesses Greek primordial deities Titans and Titanesses The Titan gods and goddesses are depicted in Greek art less commonly than the Olympians. Gigantes The Gigantes were the offspring of Gaia (Earth), born from the blood that fell when Uranus (Sky) was castrated by their Titan son Cronus. They fought the Gigantomachy, their war with the Olympian gods for supremacy of the cosmos. They include: Alcyoneus (Ἀλκυονεύς), a giant usually considered to be one of the Gigantes, slain by Heracles. Chthonius (Χθόνιος). Damysus (Δάμυσος), the fastest of all the Giants in Greek mythology. Enceladus (Ἐγκέλαδος), typically slain by Athena, said to be buried under Mount Etna in Sicily. Mimas (Μίμας), according to Apollodorus killed by Hephaestus, or according to others by Zeus or Ares. Pallas (Πάλλας), according to Apollodorus flayed by Athena, who used his skin as a shield. Polybotes (Πολυβώτης), typically slain by Poseidon. Porphyrion (Πορφυρίων), one of the leaders of the Gigantes, typically slain by Zeus. Thoas/Thoon (Θόων), killed by the Moirai. For a more complete list of Gigantes see Giants (Greek mythology)#Named Giants. Other "giants" Aloadae (Ἀλῳάδαι), twin giants who attempted to climb to Olympus by piling mountains on top of each other. Otus or Otos (Ότος). Ephialtes (Εφιάλτης). Anax (Αναξ), a giant of the island of Lade near Miletos in Lydia, Anatolia. Antaeus (Ἀνταῖος), a Libyan giant who wrestled all visitors to the death until he was slain by Heracles. Antiphates (Ἀντιφάτης), the king of the man-eating giants known as the Laestrygones, who were encountered by Odysseus on his travels. Argus Panoptes (Ἄργος Πανόπτης), a hundred-eyed giant tasked with guarding Io. Asterius (Αστεριος), a Lydian giant. Cacus (Κακος), a fire-breathing Latin giant slain by Heracles. Cyclopes (Hesiodic), three one-eyed giants who forged the lightning bolts of Zeus, the trident of Poseidon and the helmet of Hades. Arges (Ἄργης). Brontes (Βρόντης). Steropes (Στερόπης). Cyclopes (Homeric), a tribe of one-eyed, man-eating giants who herded flocks of sheep on the island of Sicily. Polyphemus (Πολύφημος), a Cyclops who briefly captured Odysseus and his men, only to be overcome and blinded by the hero. The Gegenees (Γηγενέες), a tribe of six-armed giants fought by the Argonauts on Bear Mountain in Mysia. Geryon (Γηρυων), a three-bodied giant who dwelt on the sunset isle at the ends of the earth. He was slain by Heracles when the hero arrived to fetch the giant's cattle as one of his twelve labours. The Hekatoncheires (Ἑκατόγχειρες), or Centimanes (Latin), the Hundred-Handed Ones, giant gods of violent storms and hurricanes. Three sons of Uranus and Gaia, each with his own distinct character. Briareus (Βριάρεως) or Aigaion (Αἰγαίων), The Vigorous. Cottus (Κόττος), The Furious. Gyges (Γύγης), The Big-Limbed. The Laestrygonians (Λαιστρυγόνες), a tribe of man-eating giants encountered by Odysseus on his travels. Orion (Ὠρίων), a giant huntsman whom Zeus placed among the stars as the constellation of Orion. 
Talos (Τάλως), a giant forged from bronze by Hephaestus, and given by Zeus to his lover Europa as her personal protector. Tityos (Τίτυος), a giant slain by Apollo and Artemis when he attempted to violate their mother Leto. Typhon (Τυφῶν), a monstrous immortal storm-giant who attempted to launch an attack on Mount Olympus but was defeated by the Olympians and imprisoned in the pits of Tartarus. Personified concepts Chthonic deities Sea deities Sky deities Aeolus (Aiolos) (Αίολος), god of the winds Aether (Αιθήρ), primeval god of the upper air Alectrona (Αλεκτρονα), solar goddess of the morning or waking up Anemoi, (Άνεμοι), gods of the winds Aparctias (Απαρκτίας), another name for the north wind (not identified with Boreas) Apheliotes (Αφηλιώτης), god of the east wind (when Eurus is considered southeast) Argestes (Αργέστης), another name for the west or northwest wind Boreas (Βορέας), god of the north wind and of winter Caicias (Καικίας), god of the northeast wind Circios (Κίρκιος) or Thraskias (Θρασκίας), god of the north-northwest wind Euronotus (Ευρονότος), god of the southeast wind Eurus (Εύρος), god of the unlucky east or southeast wind Lips (Λίψ), god of the southwest wind Notus (Νότος) god of the south wind Skeiron (Σκείρων), god of the northwest wind Zephyrus (Ζέφυρος), god of the west wind Arke (Άρκη), messenger of the Titans and sister of Iris Astraios (Ἀστραῖος), god of stars and planets, and the art of astrology The Astra Planeti (Αστρα Πλανετοι), gods of the five wandering stars or planets Stilbon (Στιλβών), god of Hermaon, the planet Mercury Eosphorus (Ηωσφόρος), god of Venus the morning star Hesperus (Ἓσπερος), god of Venus the evening star Pyroeis (Πυρόεις), god of Areios, the planet Mars Phaethon (Φαέθων), god of Dios, the planet Jupiter Phaenon (Φαίνων), god of Kronion, the planet Saturn Aurai (Αὖραι), nymphs of the cooling breeze Aura (Αὖρα), goddess of the breeze and the fresh, cool air of early morning Chione (Χιόνη), goddess of snow and daughter of Boreas Eos (Ἠώς), goddess of the dawn Ersa (Ἕρση), goddess of the morning dew Helios (Ἥλιος), god of the sun and guardian of oaths Hemera (Ημέρα), primeval goddess of the day Hera (Ήρα), queen of the gods The Hesperides, (´Εσπερίδες), nymphs of the evening and sunset Iris (Ίρις), goddess of the rainbow and divine messenger Men (Μήν), a lunar deity worshiped in the western interior parts of Anatolia Nephele (Νεφέλη), cloud nymph Nyx, (Νύξ), goddess of night Pandia (Πανδία), daughter of Selene and Zeus The Pleiades (Πλειάδες), goddesses of the star cluster Pleiades and were associated with rain Alcyone (Αλκυόνη) Sterope (Στερόπη) Celaeno (Κελαινώ) Electra (Ηλέκτρα) Maia (Μαία) Merope (Μερώπη) Taygete (Ταϋγέτη) Sabazios (Σαβάζιος), the nomadic horseman and sky father god of the Phrygians and Thracians Selene (Σελήνη), goddess of the moon Uranus (Ουρανός), primeval god of the heavens Zeus (Ζεύς), King of Heaven and god of the sky, clouds, thunder, and lightning Rustic deities Aetna (Αἴτνη), goddess of the volcanic Mount Etna in Sicily Amphictyonis (Αμφικτυονίς), goddess of wine and friendship between nations, a local form of Demeter Anthousai (Ανθούσαι), flower nymphs Aristaeus (Ἀρισταῖος), god of bee-keeping, cheese-making, herding, olive-growing, and hunting Attis (Άττις), vegetation god and consort of Cybele Britomartis (Βριτόμαρτις), Cretan goddess of hunting and nets used for fishing, fowling and the hunting of small game Cabeiri (Κάβειροι), gods or spirits who presided over the Mysteries of the islands of Lemnos and 
Samothrace Aitnaios (Αιτναιος) Alkon (Αλκων) Eurymedon (Ευρυμεδών) Onnes (Όννης) Tonnes (Τόννης) Chloris (Χλωρίς), minor flower nymph and wife of Zephyrus Comus (Κόμος), god of revelry, merrymaking, and festivity Corymbus (Κόρυμβος), god of the fruit of the ivy The Curetes (Κουρέτες), guardians of infant Zeus on Mount Ida, barely distinguished from the Dactyls and the Corybantes Cybele (Κυβέλη), a Phrygian mountain goddess The Dactyls (Δάκτυλοι) "fingers", minor deities originally representing fingers of a hand Acmon (Ακμών) Damnameneus (Δαμναμενεύς) Delas (Δήλας) Epimedes (Επιμήδης) Heracles (not to be confused with the hero Heracles) Iasios (Ιάσιος) Kelmis (Κελμις) Skythes (Σκύθης) companions of Cybele Titias (Τιτίας) Cyllenus (Κύλληνος) Dionysus (Διόνυσος), god of wine, drunken orgies, and wild vegetation Dryades (Δρυάδες), tree and forest nymphs Gaia (Γαία), primeval goddess of the earth Epimeliades (Επιμελίδες), nymphs of highland pastures and protectors of sheep flocks Hamadryades (Αμαδρυάδες), oak tree dryades Hecaterus (Ηεκατερος), minor god of the hekateris — a rustic dance of quickly moving hands — and perhaps of the skill of hands in general Hermes (Ερμής), god of herds and flocks, of roads and boundary stones, and the god of thieves Korybantes (Κορύβαντες), the crested dancers who worshipped Cybele Damneus (Δαμνεύς) "the one who tames(?)" Idaios (Ιδαίος) "of Mount Ida" Kyrbas (Κύρβας), whose name is probably a variant of Korybas, singular for "Korybantes" Okythoos (Ωκύθοος) "the one running swiftly" Prymneus (Πρυμνεύς) "of lower areas(?)" Pyrrhichos (Πυρῥιχος), god of the rustic dance Ma, a local goddess at Comana in Cappadocia Maenades (μαινάδες), crazed nymphs in the retinue of Dionysus Methe (Μέθη), nymph of drunkenness Meliae (Μελίαι), nymphs of honey and the ash tree Naiades (Ναιάδες), fresh water nymphs Daphne (Δάφνη) Metope (Μετώπη) Minthe (Μίνθη) The Nymphai Hyperboreioi (Νύμφαι Υπερβόρειοι), who presided over aspects of archery Hekaerge (Εκαέργη), represented distancing Loxo (Λοξώ), represented trajectory Oupis (Ουπις), represented aim Oreades (Ὀρεάδες), mountain nymphs Adrasteia (Αδράστεια), a nursemaid of the infant Zeus Echo (Ηχώ), a nymph cursed never to speak except to repeat the words of others The Ourea (Ούρος), primeval gods of mountains The Palici (Παλικοί), a pair of rustic gods who presided over the geysers and thermal springs in Sicily Pan (Πάν), god of shepherds, pastures, and fertility Potamoi (Ποταμοί), river gods Achelous (Αχέλους) Acis (Άκις) Alpheus (Αλφειός) Asopus (Ασωπός) Cladeus (Κλάδεος) Eurotas (Ευρώτας) Nilus (Νείλος) Peneus (Πηνειός) Scamander (Σκάμανδρος) For a more complete list, see Potamoi#List of potamoi Priapus (Πρίαπος), god of garden fertility Satyrs (Σάτυροι) / Satyress, rustic fertility spirits Krotos (Κρότος), a great hunter and musician who kept the company of the Muses on Mount Helicon Silenus (Σειληνός), an old rustic god of the dance of the wine-press Telete (Τελέτη), goddess of initiation into the Bacchic orgies Zagreus (Ζαγρεύς), in the Orphic mysteries, the first incarnation of Dionysus Agricultural deities Adonis (Άδωνις), a life-death-rebirth deity Aphaea (Αφαία), minor goddess of agriculture and fertility Cyamites (Κυαμίτης), demi-god of the bean Demeter (Δημήτηρ), goddess of fertility, agriculture, grain, and harvest Despoina (Δέσποινη), daughter of Poseidon and Demeter, goddess of mysteries in Arcadia Dionysus (Διόνυσος), god of viticulture and wine Eunostus (Εύνοστος), goddess of the flour mill Persephone (Περσεφόνη), 
queen of the underworld, wife of Hades and goddess of spring growth Philomelus (Φιλόμελος), agricultural demi-god inventor of the wagon and the plough Plutus (Πλοῦτος), god of wealth, including agricultural wealth, son of Demeter Triptolemus (Τριπτόλεμος), god of farming and agriculture, he brought agriculture to Greece Health deities Apollo (Ἀπόλλων), god of disease and healing Asclepius (Ασκληπιός), god of medicine Aceso (Ἀκεσώ), goddess of the healing of wounds and the curing of illnesses Aegle (Αἴγλη), goddess of radiant good health Chiron (Χείρων), god of healing (up for debate if it is a god) Epione (Ἠπιόνη), goddess of the soothing of pain Hygieia (Ὑγεία), goddess of cleanliness and good health Iaso (Ἰασώ), goddess of cures, remedies, and modes of healing Paean (Παιάν), physician of the gods Panacea (Πανάκεια), goddess of healing Telesphorus (Τελεσφόρος), demi-god of convalescence, who "brought to fulfillment" recuperation from illness or injury Sleep deities Empusa (Ἔμπουσα), goddess of shape-shifting Epiales (Ἐφιάλτης), goddess of nightmares Hypnos (Ὕπνος) god of sleep Pasithea (Πασιθέα) goddess of relaxing meditation and hallucinations Oneiroi (Ὀνείρων) god of dreams Morpheus (μορφή) god of dreaming Charities Charites (Χάριτες), goddesses of charm, beauty, nature, human creativity, and fertility Aglaea (Αγλαΐα), goddess of beauty, adornment, splendor and glory Euphrosyne (Εὐφροσύνη), goddess of good cheer, joy, mirth, and merriment Thalia (Θάλεια), goddess of festive celebrations and rich and luxurious banquets Hegemone (Ηγεμόνη) "mastery" Antheia (Άνθεια), goddess of flowers and flowery wreaths Pasithea (Πασιθέα), goddess of rest and relaxation Cleta (Κλήτα) "the glorious" Phaenna (Φαέννα) "the shining" Eudaimonia (Ευδαιμονία) "happiness" Euthymia (Ευθυμία) "good mood" Calleis (Καλλείς) "beauty" Paidia (Παιδία) "play, amusement" Pandaisia (Πανδαισία) "banquet for everyone" Pannychis (Παννυχίς) "all-night (festivity)" Horae The Horae (Ώρες), The Hours, the goddesses of natural order Eunomia (Ευνομία), spirit of good order, and springtime goddess of green pastures Dike (Δίκη), spirit of justice, may have represented springtime growth Eirene (Ειρήνη), spirit of peace and goddess of the springtime The goddesses of springtime growth Thallo (Θαλλώ), goddess of spring buds and shoots, identified with Eirene Auxo (Αυξώ), goddess of spring growth Karpo (Καρπώ), goddess of the fruits of the earth The goddesses of welfare Pherousa (Φέρουσα) "the bringer" Euporie (Ευπορίη) "abundance" Orthosie (Ορθοσίη) "prosperity" The goddesses of the natural portions of time and the times of day Auge (Αυγή), first light of the morning Anatole (Ανατολή) or Anatolia (Ανατολία), sunrise Mousika or Musica (Μουσική), the morning hour of music and study Gymnastika, Gymnastica (Γυμναστίκή) or Gymnasia (Γυμνασία), the morning hour of gymnastics/exercise Nymphe (Νυμφή), the morning hour of ablutions (bathing, washing) Mesembria (Μεσημβρία), noon Sponde (Σπονδή), libations poured after lunch Elete, prayer, the first of the afternoon work hours Akte, Acte (Ακτή) or Cypris (Κυπρίς), eating and pleasure, the second of the afternoon work hours Hesperis (Έσπερίς), evening Dysis (Δύσις), sunset Arktos (Άρκτος), night sky, constellation The goddesses of seasons of the year Eiar (Είαρ), spring Theros (Θέρος), summer Pthinoporon (Φθινόπωρον), autumn Cheimon (Χειμών), winter Muses Other deities Mortals Deified mortals Achilles (), hero of the Trojan War Aiakos (), a king of Aegina, appointed as a Judge of the Dead in the 
Underworld after his death Aeolus (), a king of Thessaly, made the immortal king of all the winds by Zeus Alabandus (), he was the founder of the town of Alabanda Amphiaraus (), a hero of the war of the Seven against Thebes who became an oracular spirit of the Underworld after his death Ariadne (Αριάδνη), a Cretan princess who became the immortal wife of Dionysus Aristaeus (Ἀρισταῖος), a Thessalian hero, his inventions saw him immortalised as the god of bee-keeping, cheese-making, herding, olive-growing, and hunting Asclepius (), a Thessalian physician who was struck down by Zeus, to be later recovered by his father Apollo Attis (), a consort of Cybele, granted immortality as one of her attendants Bolina (), a mortal woman transformed into an immortal nymph by Apollo The Dioscuri (), divine twins Castor () Pollux () Endymion (), lover of Selene, granted eternal sleep so as never to age or die Ganymede (), a handsome Trojan prince, abducted by Zeus and made cup-bearer of the gods Glaucus (), the fisherman's sea god, made immortal after eating a magical herb Hemithea () and Parthenos (), princesses of the Island of Naxos who leapt into the sea to escape their father's wrath; Apollo transformed them into demi-goddesses Heracles (), ascended hero Ino (), a Theban princess who became the sea goddess Leucothea Lampsace (), a semi-historical Bebrycian princess honored as goddess for her assistance to the Greeks The Leucippides (), wives of the Dioscuri Phoebe (), wife of Pollux Hilaera (), wife of Castor Minos (), a king of Crete, appointed as a Judge of the Dead in the Underworld after his death Orithyia (), an Athenian princess abducted by Boreas and made the goddess of cold, gusty mountain winds Palaemon (), a Theban prince, made into a sea god along with his mother, Ino Philoctetes (), was the son of King Poeas of Meliboea in Thessaly, a famous archer, fought at the Trojan War Phylonoe (), daughter of Tyndareus and Leda, made immortal by Artemis Psyche (), goddess of the soul Semele (), mortal mother of Dionysus, who later was made the goddess Thyone () Tenes (), was a hero of the island of Tenedos Heroes Abderus, aided Heracles during his eighth labour and was killed by the Mares of Diomedes Achilles (Αχιλλεύς or Αχιλλέας), hero of the Trojan War and a central character in Homer's Iliad Aeneas (Αινείας), a hero of the Trojan War and progenitor of the Roman people Ajax the Great (Αίας ο Μέγας), a hero of the Trojan War and king of Salamis Ajax the Lesser (Αίας ο Μικρός), a hero of the Trojan War and leader of the Locrian army Amphitryon (Αμφιτρύων), Theban general who rescued Thebes from the Teumessian fox; his wife was Alcmene, mother of Heracles Antilochus (Ἀντίλοχος), Son of Nestor sacrificed himself to save his father in the Trojan War along with other deeds of valor Bellerophon (Βελλεροφῶν), hero who slew the Chimera Bouzyges, a hero credited with inventing agricultural practices such as yoking oxen to a plough Castor, the mortal Dioscuri twin; after Castor's death, his immortal brother Pollux shared his divinity with him in order that they might remain together Chrysippus (Χρύσιππος), a divine hero of Elis Daedalus (Δαίδαλος), creator of the labyrinth and great inventor, until King Minos trapped him in his own creation Diomedes (Διομήδης), a king of Argos and hero of the Trojan War Eleusis (Ἐλευσῖνι or Ἐλευσῖνα), eponymous hero of the town of Eleusis Eunostus, a Boeotian hero Ganymede (Γανυμήδης), Trojan hero and lover of Zeus, who was given immortality and appointed cup-bearer to the 
gods Hector (Ἕκτωρ), hero of the Trojan War and champion of the Trojan people Icarus (Ἴκαρος), the son of the master craftsman Daedalus Iolaus (Ἰόλαος), nephew of Heracles who aided his uncle in one of his Labors Jason (Ἰάσων), leader of the Argonauts Meleager (Μελέαγρος), a hero who sailed with the Argonauts and killed the Calydonian boar Odysseus (Ὀδυσσεύς or Ὀδυσεύς), a hero and king of Ithaca whose adventures are the subject of Homer's Odyssey; he also played a key role during the Trojan War Orpheus (Ὀρφεύς), a legendary musician and poet who attempted to retrieve his dead wife from the Underworld Pandion (Πανδίων), the eponymous hero of the Attic tribe Pandionis, usually assumed to be one of the legendary Athenian kings Pandion I or Pandion II Perseus (Περσεύς), son of Zeus and the founder-king of Mycenae and slayer of the Gorgon Medusa Theseus (Θησεύς), son of Poseidon and a king of Athens and slayer of the Minotaur Notable women Alcestis (Άλκηστις), daughter of Pelias and wife of Admetus, who was known for her devotion to her husband Amymone, the one daughter of Danaus who refused to murder her husband, thus escaping her sisters' punishment Andromache (Ανδρομάχη), wife of Hector Andromeda (Ανδρομέδα), wife of Perseus, who was placed among the constellations after her death Antigone (Αντιγόνη), daughter of Oedipus and Jocasta Arachne (Αράχνη), a skilled weaver, transformed by Athena into a spider for her blasphemy Ariadne (Αριάδνη), daughter of Minos, king of Crete, who aided Theseus in overcoming the Minotaur and became the wife of Dionysus Atalanta (Αταλάντη), fleet-footed heroine who participated in the Calydonian boar hunt and the quest for the Golden Fleece Briseis, a princess of Lyrnessus, taken by Achilles as a war prize Caeneus, formerly Caenis, a woman who was transformed into a man and became a mighty warrior Cassandra, a princess of Troy cursed to see the future but never to be believed Cassiopeia (Κασσιόπεια), queen of Æthiopia and mother of Andromeda Clytemnestra, sister of Helen and unfaithful wife of Agamemnon Danaë, the mother of Perseus by Zeus Deianeira, the third wife and unwitting killer of Heracles Electra, daughter of Agamemnon and Clytemnestra, she aided her brother Orestes in plotting revenge against their mother for the murder of their father Europa, a Phoenician woman, abducted by Zeus Hecuba (Ἑκάβη), wife of Priam, king of Troy, and mother of nineteen of his children Helen, daughter of Zeus and Leda, whose abduction brought about the Trojan War Hermione (Ἑρμιόνη), daughter of Menelaus and Helen; wife of Neoptolemus, and later Orestes Iphigenia, daughter of Agamemnon and Clytemnestra; Agamemnon sacrificed her to Artemis in order to appease the goddess Ismene, sister of Antigone Jocasta, mother and wife of Oedipus Medea, a sorceress and wife of Jason, who killed her own children to punish Jason for his infidelity Medusa, a mortal woman transformed into a hideous gorgon by Athena Niobe, a daughter of Tantalus who declared herself to be superior to Leto, causing Artemis and Apollo to kill her fourteen children Pandora, the first woman Penelope, loyal wife of Odysseus Phaedra, daughter of Minos and wife of Theseus Polyxena, the youngest daughter of Priam, sacrificed to the ghost of Achilles Semele, mortal mother of Dionysus Thrace, the daughter of Oceanus and Parthenope, and sister of Europa Kings Abas, a king of Argos Acastus, a king of Iolcus who sailed with the Argonauts and participated in the Calydonian boar hunt Acrisius, a king of Argos Actaeus, first 
king of Attica Admetus (Άδμητος), a king of Pherae who sailed with the Argonauts and participated in the Calydonian boar hunt Adrastus (Άδραστος), a king of Argos and one of the Seven against Thebes Aeacus (Αιακός), a king of the island of Aegina in the Saronic Gulf; after he died, he became one of the three judges of the dead in the Underworld Aeëtes, a king of Colchis and father of Medea Aegeus (Αιγεύς), a king of Athens and father of Theseus Aegimius, a king of Thessaly and progenitor of the Dorians Aegisthus (Αίγισθος), lover of Clytemnestra, with whom he plotted to murder Agamemnon and seized the kingship of Mycenae Aegyptus (Αίγυπτος), a king of Egypt Aeson, father of Jason and rightful king of Iolcus, whose throne was usurped by his half-brother Pelias Aëthlius, first king of Elis Aetolus (Αιτωλός), a king of Elis Agamemnon (Ἀγαμέμνων), a king of Mycenae and commander of the Greek armies during the Trojan War Agasthenes, a king of Elis Agenor (Αγήνωρ), a king of Phoenicia Alcinous (Αλκίνους or Ἀλκίνοος), a king of Phaeacia Alcmaeon, a king of Argos and one of the Epigoni Aleus, a king of Tegea Amphiaraus (Ἀμφιάραος), a seer and king of Argos who participated in the Calydonian boar hunt and the war of the Seven against Thebes Amphictyon (Ἀμφικτύων), a king of Athens Amphion and Zethus, twin sons of Zeus and kings of Thebes, who constructed the city's walls Amycus, son of Poseidon and king of the Bebryces Anaxagoras (Ἀναξαγόρας), a king of Argos Anchises (Αγχίσης), a king of Dardania and father of Aeneas Arcesius, a king of Ithaca and father of Laertes Argeus, a king of Argos Argus, a son of Zeus and king of Argos after Phoroneus Assaracus, a king of Dardania Asterion, a king of Crete Athamas (Ἀθάμας), a king of Orchomenus Atreus (Ἀτρεύς), a king of Mycenae and father of Agamemnon and Menelaus Augeas (Αυγείας), a king of Elis Autesion, a king of Thebes Bias, a king of Argos Busiris, a king of Egypt Cadmus, founder-king of Thebes Car, a king of Megara Catreus, a king of Crete, prophesied to die at the hands of his own son Cecrops, an autochthonous king of Athens Ceisus, a king of Argos Celeus, a king of Eleusis Cephalus, a king of Phocis who accidentally killed his own wife Cepheus, a king of Ethiopia Cepheus, a king of Tegea and an Argonaut Charnabon, a king of the Getae Cinyras, a king of Cyprus and father of Adonis Codrus, a king of Athens Corinthus, founder-king of Corinth Cranaus, a king of Athens Creon, a king of Thebes, brother of Jocasta and uncle of Oedipus Creon, a king of Corinth who was hospitable towards Jason and Medea Cres, an early Cretan king Cresphontes, a king of Messene and descendant of Heracles Cretheus, founder-king of Iolcus Criasus, a king of Argos Cylarabes, a king of Argos Cynortas, a king of Sparta Cyzicus, king of the Dolionians, mistakenly killed by the Argonauts Danaus, a king of Egypt and father of the Danaides Dardanus, founder-king of Dardania, and son of Zeus and Electra Deiphontes, a king of Argos Demophon of Athens, a king of Athens Diomedes, a king of Argos and hero of the Trojan War Echemus, a king of Arcadia Echetus, a king of Epirus Eetion, a king of Cilician Thebe and father of Andromache Electryon, a king of Tiryns and Mycenae; son of Perseus and Andromeda Elephenor, a king of the Abantes of Euboea Eleusis, eponym and king of Eleusis, Attica Epaphus, a king of Egypt and founder of Memphis, Egypt Epopeus, a king of Sicyon Erechtheus, a king of Athens Erginus, a king of Minyean Orchomenus in Boeotia Erichthonius, a king of Athens, born of 
Hephaestus' attempt to rape Athena Eteocles, a king of Thebes and son of Oedipus; he and his brother Polynices killed each other Eteocles, son of Andreus, a king of Orchomenus Eurotas, a king of Sparta Eurystheus, a king of Tiryns Euxantius, a king of Ceos, son of Minos and Dexithea Gelanor, a king of Argos Haemus, a king of Thrace Helenus, seer and twin brother of Cassandra, who later became king of Epirus Hippothoön, a king of Eleusis Hyrieus, a king of Boeotia Ilus, founder-king of Troy Ixion, a king of the Lapiths who attempted to rape Hera and was bound to a flaming wheel in Tartarus Laërtes, father of Odysseus and king of the Cephallenians; he sailed with the Argonauts and participated in the Calydonian boar hunt Laomedon, a king of Troy and father of Priam Lycaon of Arcadia, a deceitful Arcadian king who was transformed by Zeus into a wolf Lycurgus of Arcadia, a king of Arcadia Lycurgus, a king of Nemea, and/or a priest of Zeus at Nemea Makedon, a king of Macedon Megareus of Onchestus, a king of Onchestus in Boeotia Megareus of Thebes, a king of Thebes Melampus, a legendary soothsayer and healer, and king of Argos Melanthus, a king of Messenia Memnon, a king of Ethiopia who fought on the side of Troy during the Trojan War Menelaus, a king of Sparta and the husband of Helen Menestheus, a king of Athens who fought on the side of the Greeks during the Trojan War Midas, a king of Phrygia granted the power to turn anything to gold with a touch Minos, a king of Crete; after his death, became one of the judges of the dead in the Underworld Myles, a king of Laconia Nestor, a king of Pylos who sailed with the Argonauts, participated in the Calydonian boar hunt and fought with the Greek armies in the Trojan War Nycteus, a king of Thebes Odysseus, a hero and king of Ithaca whose adventures are the subject of Homer's Odyssey; he also played a key role during the Trojan War Oebalus, a king of Sparta Oedipus, a king of Thebes fated to kill his father and marry his mother Oeneus, a king of Calydon Oenomaus, a king of Pisa Oenopion, a king of Chios Ogygus, a king of Thebes Oicles, a king of Argos Oileus, a king of Locris Orestes, a king of Argos and a son of Clytemnestra and Agamemnon; he killed his mother in revenge for her murder of his father Oxyntes, a king of Athens Pandion I, a king of Athens Pandion II, a king of Athens Peleus, king of the Myrmidons and father of Achilles; he sailed with the Argonauts and participated in the Calydonian boar hunt Pelias, a king of Iolcus and usurper of Aeson's rightful throne Pelops, a king of Pisa and founder of the House of Atreus Pentheus, a king of Thebes who banned the worship of Dionysus and was torn apart by Maenads Periphas, legendary king of Attica who Zeus turned into an eagle Perseus (Περσεύς), founder-king of Mycenae and slayer of the Gorgon Medusa Phineus, a king of Thrace Phlegyas, a king of the Lapiths Phoenix, son of Agenor, founder-king of Phoenicia Phoroneus, a king of Argos Phyleus, a king of Elis Pirithoös, king of the Lapiths and husband of Hippodamia, at whose wedding the Battle of Lapiths and Centaurs occurred Pittheus, a king of Troezen and grandfather of Theseus Polybus of Corinth, a king of Corinth Polybus of Sicyon, a king of Sicyon and son of Hermes Polybus of Thebes, a king of Thebes Polynices, a king of Thebes and son of Oedipus; he and his brother Eteocles killed each other Priam, king of Troy during the Trojan War Proetus, a king of Argos and Tiryns Pylades, a king of Phocis and friend of Orestes Rhadamanthys, a king of Crete; 
after his death, he became a judge of the dead in the Underworld Rhesus, a king of Thrace who sided with Troy in the Trojan War Sarpedon, a king of Lycia and son of Zeus who fought on the side of the Greeks during the Trojan War Sisyphus, a king of Thessaly who attempted to cheat death and was sentenced to an eternity of rolling a boulder up a hill, only to watch it roll back down Sithon, a king of Thrace Talaus, a king of Argos who sailed with the Argonauts Tegyrios, a king of Thrace Telamon, a king of Salamis and father of Ajax; he sailed with the Argonauts and participated in the Calydonian boar hunt Telephus, a king of Mysia and son of Heracles Temenus, a king of Argos and descendant of Heracles Teucer, founder-king of Salamis who fought alongside the Greeks in the Trojan War Teutamides, a king of Larissa Teuthras, a king of Mysia Thersander, a king of Thebes and one of the Epigoni Theseus, a king of Athens and slayer of the Minotaur Thyestes, a king of Mycenae and brother of Atreus Tisamenus, a king of Argos, Mycenae, and Sparta Tyndareus, a king of Sparta Seers/oracles Amphilochus (Ἀμφίλοχος), a seer and brother of Alcmaeon who died in the war of the Seven against Thebes Anius, son of Apollo who prophesied that the Trojan War would be won in its tenth year Asbolus, a seer Centaur Bakis Branchus, a seer and son of Apollo Calchas, an Argive seer who aided the Greeks during the Trojan War Carnus, an Acarnanian seer and lover of Apollo Carya, a seer and lover of Dionysus Cassandra, a princess of Troy cursed to see the future but never to be believed Ennomus, a Mysian seer, killed by Achilles during the Trojan War Halitherses, an Ithacan seer who warned Penelope's suitors of Odysseus' return Helenus, seer and twin brother of Cassandra, who later became king of Epirus Iamus, a son of Apollo possessing the gift of prophecy, he founded the Iamidai Idmon, a seer who sailed with the Argonauts Manto, seer and daughter of Tiresias Melampus, a legendary soothsayer and healer, and king of Argos Mopsus, the name of two legendary seers Polyeidos, a Corinthian seer who saved the life of Glaucus Pythia, the oracle of Delphi Telemus, a seer who foresaw that the Cyclops Polyphemus would be blinded by Odysseus Theoclymenus, an Argive seer Tiresias, blind prophet of Thebes Amazons Inmates of Tartarus The Danaides, forty-nine daughters of Danaus who murdered their husbands and were condemned to an eternity of carrying water in leaky jugs Ixion, a king of the Lapiths who attempted to rape Hera and was bound to a flaming wheel in Tartarus Sisyphus, a king of Thessaly who attempted to cheat death and was sentenced to an eternity of rolling a boulder up a hill, only to watch it roll back down Tantalus, a king of Anatolia who butchered his son Pelops and served him as a meal to the gods; he was punished with the torment of starvation, food and drink eternally dangling just out of reach Minor figures See List of minor Greek mythological figures See also Classical mythology Family tree of the Greek gods List of deities List of Greek mythological creatures List of Mycenaean deities List of Philippine mythological figures List of Philippine mythological creatures List of Roman deities List of Trojan War characters References External links Lists of deities Greek mythology-related lists
33630916
https://en.wikipedia.org/wiki/Sustainability%20at%20American%20Colleges%20and%20Universities
Sustainability at American Colleges and Universities
"Sustainability," was defined as “development which implies meeting the needs of the present without compromising the ability of future generations to meet their own needs”as defined by the 1983 Brundtland Commission (formally the World Commission on Environment and Development (WCED)). As sustainability gains support and momentum worldwide, universities across the United States have expanded initiatives towards more sustainable campuses, commitments, academic offerings, and student engagement. In the past several decades, drastic changes in higher education administration, resource efficiency, food, recycling, and student projects have sprung up in colleges and universities of all types and sizes. In the U.S., the Association for the Advancement of Sustainability in Higher Education (AASHE) serves as the primary professional organization and resource hub for these universities. Specific to climate action, the 2007 American College & University Presidents' Climate Commitment (ACUPCC) was a very visible effort for colleges and universities to collaboratively address global climate change by making institutional commitments to reduce net campus greenhouse gas emissions and promote the research and educational efforts of higher education to prepare society to re-stabilize the earth's climate. Today, the ACUPCC lives on in Second Nature's Presidents' Leadership Climate Commitments and the Climate Leadership Network. There were many early leaders in college and university sustainability efforts, including: Oberlin College in Ohio had the first Leadership in Energy and Environmental Design (LEED) Gold certified music facility and Carnegie Mellon University had the first LEED dorm (Silver). Yale University in New Haven, Connecticut pledged that all new buildings would meet these same Gold standards. Princeton and Ohio University have both made strides toward cutting yearly carbon emissions on campus. Florida Gulf Coast University has implemented solar energy throughout various buildings. A number of universities across the U.S. have created bicycle rental stations for students and employees to help reduce greenhouse gas emissions from automobile traffic while helping reduce roadway congestion as well. College and university sustainability efforts can provide these higher education institutions moral and ethical fulfillment alongside financial, environmental, social, and community benefits. Likewise, these universities are responsible for training future generations in sustainable practice, with an increasing number of formal certificate, minor, and major offerings. By providing undergraduate and graduate students more options focused at the nexus of equity, environment, and economics, higher education is providing more systems thinking and approaches as part of the educational and campus experience, helping ensure the responsible stewardship of land, resources, and communities for generations to come. 2022 study has concluded that universities can play a key role in regional and global agendas with their contribution through the incorporation of sustainability strategies, since universities "can not only achieve carbon neutrality, but they can help other organisations by delivering graduates who are aware of sustainability and provide specific training towards building a sustainability culture." Climate change It has been evident that climate change has become the main focus in the pursuit of sustainable solutions. 
As evidence grows of significant changes in weather patterns over the past few decades, and of their impacts in disrupting human and natural systems far more quickly than predicted, action is needed to sustain efforts to eliminate greenhouse gas emissions. With little help from government in enforcing policies to further reduce these emissions, colleges and universities have taken it upon themselves to act through efforts on their own campuses. Several programs have been put into action to track and encourage these institutions' commitments to reduce greenhouse gas emissions ('mitigation'). The American College & University Presidents' Climate Commitment (ACUPCC) is an effort to address global climate disruption undertaken by a network of colleges and universities that have made commitments to eliminate greenhouse gas emissions from specified campus operations and "to promote the research and educational efforts of higher education to equip society to re-stabilize the earth's climate. Its mission is to accelerate progress towards climate neutrality and sustainability by empowering the higher education sector to educate students, create solutions, and provide leadership-by-example for the rest of society." With 673 active signatories to date, 1,343 GHG inventories submitted, and 419 climate action plans submitted, the commitment aims for campus greenhouse gas emissions eventually to be eliminated altogether. University of California at Berkeley "The university has made great progress on climate action by completing an inventory of greenhouse gas emissions (GHG) and formalizing its commitment to reduce these emissions to 1990 levels by 2014. The campus will meet its 2014 target through a series of mitigation strategies including energy efficiency projects, installation of on-site renewables, reducing fuel usage by the campus fleet and commuters, and educational projects led by students aimed at changing behavior. So far, strategies are working. Campus greenhouse gas emissions were down by 4.5%, reaching the lowest level since 2005." As part of these efforts, Berkeley has implemented the CalCAP program, in which administration, faculty, and students collaborate to reduce gas emissions on campus by "conducting an annual ten-source greenhouse gas emission inventory to track progress, developing and implementing infrastructure and behavioral strategies to reduce the climate impacts of buildings and transportation, and setting and meeting a series of emission reduction targets until climate neutrality is achieved." Clemson University Founded in 1889 in South Carolina, Clemson University has committed to reducing campus greenhouse gas emissions by 20 percent from 2000 levels by the year 2020 as part of the sustainable energy policy it adopted in 2008. As part of its plan to further reduce emissions, Clemson adopted a Sustainable Building Policy Plan in 2004, pledging that "all new construction and major renovations would achieve at least a Silver LEED certification from the U.S. Green Building Council." Among the earliest results of this policy, the university's Advanced Materials Laboratory was the first LEED-certified building in the state, and eight more projects have since been certified as either LEED Silver or LEED Gold. 
"Clemson will also pursue opportunities to produce carbon-free energy through three primary approaches: determine "big-swing" projects to source renewable/clean energy at less than or equal to current prices; evaluate other sourcing alternatives; and evaluate possible on-site power generation and/or energy storage options including combined heat and power (CHP), including microturbines; geothermal heat pump systems (heating and cooling); bio-fuels; solar thermal and photovoltaics; and energy storage alternatives (batteries and thermal)." Ohio University Ohio University plans to reduce greenhouse gas emissions by 20 percent from 2004 levels by 2014. Alden Library, located on campus, has a program to automatically shut down all computers once the library closes, and most of its computers already have energy-efficient LED monitors. A campus-wide efficiency upgrade beginning in April 2000 started a 10-year campus renewal project through a performance contract with the company Vestar. Completed projects through the 10-year plan include: "Carbon-dioxide monitors were installed in Morton and Boyd Halls; heat recovery and heat exchangers were installed in the Life Science Center; installation of a power factor correction capacitor bank—the second half of the 69 kV system had new 1,340 kVAR capacitors installed to reduce KVA billing demand costs; heat economizers were installed in Lausche Heating Plant", to name just a few. Ohio has also developed the Green House Project, which offers incentives of at least $500 to landlords for improving the energy efficiency of off-campus housing and has achieved annual reductions of more than 67,000 kilowatt-hours of electricity and more than 74 tons of avoided carbon emissions. Princeton University This Ivy League school has committed to reducing its greenhouse gas emissions to 1990 levels by 2020. In doing so, Princeton is focusing on alternative technologies and fuels to decrease emissions from the central power facility and the buildings it heats, cools, and electrifies, which account for 85 percent of the University's emissions. Another strategy the university is planning is to expand energy conservation through retrofits of existing buildings across campus, along with designing new construction and renovations to use 50% less energy than required by the current energy code, with a commitment to design all projects to at least LEED Silver equivalency. A new step Princeton has proposed in its climate effort is applying an internal voluntary "CO2 tax" when conducting the financial cost-benefit analysis used to determine whether to undertake energy-efficient designs and technologies. This allows the university to place a monetary value on environmental impact, "which in turn will increase the 'savings' that we would achieve by undertaking the project." Transportation is another main focus, as it is the second-largest source of the university's campus emissions footprint. To address this, Princeton plans to replace retiring campus fleet vehicles with appropriate zero- or low-emission vehicles, and to encourage walking and biking as means of commuting through incentives and enhanced bike lanes and walking paths. Yale University Also leading by example, Yale aims for a 43 percent reduction in greenhouse gas emissions by the year 2020, having already achieved a 7 percent reduction since 2005. 
"Yale University has established a fifteen-year action plan to take responsibility for its emissions and will focus on increasing efficiency of on-campus energy production and distribution, energy conservation initiatives, testing renewable energy technologies, and requiring a LEED Gold minimum standard for new construction and large renovations." One of the most significant contributors to Yale's greenhouse gas reduction was the conversion of its Sterling Power Plant into a combined heat and power (co-generation) facility. Powering the university's medical campus, the Sterling Plant was originally built as a coal-burning steam plant but was converted to "accommodate cleaner-burning fuels over the last 88 years." Since then, the plant's new co-generation unit has "operated at about 79% thermal efficiency and the boiler operates at around 90% thermal efficiency." The increase in efficiency from the plant yields an estimated reduction of 15,000 metric tons of carbon dioxide equivalent per year, "equivalent to taking more than 2,600 cars off the road." According to the EPA's Green Power Partnership, American University is the largest east coast school to purchase 100 percent green power and the second largest in the nation, behind the University of California, Santa Cruz (57 million kilowatt-hours of annual electricity usage). In the Washington, D.C., metropolitan area, Catholic University is the only other university green power purchaser (13 million kilowatt-hours). Energy American universities are working towards becoming more sustainable in the face of global warming. One way that many of them tackle the problem is by using more renewable energy such as solar, wind, biomass, or geothermal energy. Many are also working on getting 'green fees' so that they can purchase renewable energy if they do not create their own. Some universities, like Middle Tennessee State University, the University of North Carolina, and The Evergreen State College, already have green fees in place. Other schools, like the University of Florida and universities in Texas, are working hard to add a green fee to their tuition so that they can improve their sustainability ratings. According to Focus.com, "Of 149 schools which reported the use of renewable energy, a score of A- was awarded to Amherst College (MA), Arizona State University-Tempe (AZ), University of California-San Diego (CA), University of Colorado (CO), Dickinson College (PA), Harvard University (MA), Luther College (IA), Macalester College (MN), Middlebury College (VT), University of Minnesota (MN), University of New Hampshire (NH), the University of North Carolina at Chapel Hill (NC), Oberlin College (OH), Pacific Lutheran University (WA), Pomona College (CA), Smith College (MA), Stanford University (CA), University of Washington (WA), Wesleyan University (CT), Williams College (MA), and Yale University (CT). Another 28 schools scored B+ and 7 institutions scored a mark of D on renewable energy use." Two out of every five schools purchase renewable energy, either through the purchase of green power directly from their utility or through renewable energy credits equivalent to a percentage of their energy use. Nearly half of the schools produce renewable energy on campus, with solar, wind, bioenergy or geothermal systems in operation. Wind Wind energy is a source of renewable power which comes from air currents flowing across the earth's surface. 
Wind turbines harvest this kinetic energy and convert it into usable power which can provide electricity for homes. Many universities have invested in wind energy because it is clean, renewable, and good for the local economy. Wind energy is known for being 'clean' because it produces no pollution or greenhouse gases. Wind is a renewable energy resource: it is inexhaustible and requires no "fuel" besides the wind that blows across the earth. It is also beneficial for the local economy because the energy is harvested and produced locally. For universities, this means lower energy costs as well as greater sustainability. Yale Yale University, in Connecticut, decided to expand its renewable energy efforts beyond solar panels, and in 2009 installed ten 1-kilowatt micro wind turbines on top of the Engineering and Applied Science Center. At only 6.5 feet tall and 60 pounds each, these ten turbines generate up to 26 megawatt-hours of electricity annually, reducing Yale's carbon footprint by 20,000 pounds a year. University of Vermont The University of Vermont has erected a small-scale, 10-kilowatt wind turbine on its campus. It is expected to generate 3,000-5,000 kilowatt-hours of electricity per year, enough to power an energy-efficient home for 12 months. “The project is part of the Vermont Department of Public Service (DPS) Wind Development Program, which supports the installation of small-scale turbines to demonstrate the benefits of wind energy. Funding for the wind turbine was provided by a $30,000 matching grant from the DPS. The funds are a portion of $1.5 million in U.S. Department of Energy funds secured by Senator James Jeffords for wind projects.” University of Colorado–Boulder In 2000, students voted to purchase renewable wind-energy credits to match the power used in all major campus construction after 2000, making CU-Boulder the first university in the nation to purchase wind energy. Solar Florida Gulf Coast University Florida Gulf Coast University, located in Fort Myers, Florida, has made environmental sustainability its top priority. In January 2010, the university installed a 2-megawatt solar energy system on a 16-acre field. These 10,080 solar panels will generate enough electricity to power 600 homes. FGCU formed a public-private partnership with Regenesis Power, resulting in the construction of the $17 million project. Regenesis Power is a national alternative energy company headquartered in Simi Valley, California, with regional offices in Florida. The solar energy field is projected to save the institution $22 million over a 30-year period. Its impact will be felt immediately, as the electrical cost will be reduced from 10.5 cents per kilowatt-hour to two cents per kilowatt-hour. As a clean energy source, the solar energy field will annually prevent an estimated 9,000 pounds of nitrogen oxide, 14,000 pounds of sulfur dioxide, and 5.1 million pounds of carbon dioxide from being introduced into the environment. University of Florida In 2010, the University of Florida, under President Bernie Machen, installed solar panels on the roof of Powell Hall, which is home to the Florida Museum of Natural History. The 75-kilowatt panels generate one-third of the museum's electricity. University of Colorado-Boulder Solar panels installed on University of Colorado-Boulder buildings have contributed to a 23 percent reduction in energy consumption despite 14 percent growth in campus size since 2005. 
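Several of the installations described in this Energy section quote both a rated capacity and an annual output, and a standard capacity-factor calculation connects the two. The sketch below is a rough back-of-the-envelope check using only the figures quoted above; the capacity-factor formula is general background, not data published by the schools.

```python
# Capacity factors implied by the figures quoted in this section.
# capacity factor = annual energy delivered / (rated power * hours in a year)

HOURS_PER_YEAR = 8760

installations = {
    # name: (rated power in kW, reported annual output in kWh)
    "Yale micro wind turbines (10 x 1 kW)": (10, 26_000),   # "up to 26 megawatt-hours" per year
    "University of Vermont turbine (10 kW)": (10, 4_000),   # midpoint of the quoted 3,000-5,000 kWh
}

for name, (rated_kw, annual_kwh) in installations.items():
    theoretical_max_kwh = rated_kw * HOURS_PER_YEAR
    capacity_factor = annual_kwh / theoretical_max_kwh
    print(f"{name}: implied capacity factor of about {capacity_factor:.0%}")
```

The same arithmetic applies to the solar arrays above once an annual output figure is known; a rated capacity alone does not determine how much energy an installation actually delivers.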
Geothermal Ball State University Ball State University, located in Indiana, started using geothermal energy in place of four coal-fired burners in 2009. The new system will save the university an estimated $2 million a year. It is the first university to build a geothermal system of this size; 50 buildings spread over 660 acres of campus are heated by the system. Although the project is estimated to have cost between $65 and $70 million, switching from coal to geothermal energy is estimated to cut the school's greenhouse gas emissions by 50 percent, keeping 75,000 tons of carbon from being released into the atmosphere annually. Lipscomb University Other schools, such as Lipscomb University in Nashville, Tennessee, are looking to use geothermal energy for cooling. The university installed a geothermal pump system in March 2005. Built into the 77,000-square-foot Ezell Center, the system of 144 wells sunk 300 feet below the softball field keeps the temperature steady year-round. The geothermal pump cost the university $1.2 million but saves on average $90,000 a year. The pump also helps the environment because it uses 40%-60% less energy than a standard heat pump. Green fees In fall 2005, 89% of voting students at Middle Tennessee State University supported an $8 per semester fee increase to purchase renewable energy and fund the installation of renewable energy and energy conservation technologies on campus. In January 2005, 91% of voting students at Evergreen State College supported a $1 per credit fee increase (up to $20.00 maximum per quarter) to purchase renewable energy and fund the installation of renewable energy and energy conservation technologies on campus. In February 2003, a renewable energy referendum passed at the University of North Carolina at Chapel Hill, bringing around $200,000 for renewable energy projects every year beginning in 2004-2005, pending approval by the UNC Board of Trustees and the UNC system Board of Governors. The university now has a $4 per semester green fee to help it buy energy from more sustainable sources and lower its carbon impact. Students at the University of Florida, through the club I.D.E.A.S. for UF, are working to establish a 50-cent-per-credit-hour fee to be able to buy renewable energy for the school. UF hopes to be carbon neutral by 2025. Students in the organization I.D.E.A.S. at the University of Central Florida have also been working to pass a green fee in Orlando. Working hand in hand with the administration, SGA and student organizations, they aim to pass a 75-cent-per-credit-hour fee to provide an outlet for students and staff to implement renewable energy on campus. This fee will also help to increase energy efficiency and reach goals set out in the university's Climate Action Plan. Recycling and Materials Diversion Recycling is the process by which used materials are processed and made into new products. Recycling reduces the use of raw materials, energy consumption, and pollution (air, water and land). Recycling is carried out through drop-offs for various materials, buy-back centers, curbside collection, etc. 
All of these operate alongside regular landfill waste collection; for instance, there may be a recycling bin next to a garbage can. Across the nation, universities are making efforts to create recycling programs that will help create a more “green” university. Almost all programs are unique, and many are progressing quickly toward their goals. For instance, the University of Florida is working to send zero waste to landfills by the year 2015. Reuse programs Universities and colleges everywhere are creating programs for the reuse of various materials. Reuse of materials involves either reusing something for its intended purpose, or repurposing an item for another use. For instance, a reusable water bottle is reuse in the strict sense, while an old wire reel used as a table (not its original purpose) is a good example of repurposing a material. University of Michigan The University of Michigan in Ann Arbor, Michigan, has introduced a program called “Recycle Write!,” an initiative for students, faculty and staff to recycle pens, pencils and markers. The University Recycling Program joined forces with TerraCycle and Procurement Services to create the program. Unlike the SMART program at the University of Minnesota, Recycle Write! has no initial cost. To use the program, the university simply places bins or boxes, with a flyer attached, where they are expected to get the most use. When a bin is full, Procurement Services provides a pre-paid label to mail the contents to a plant where they are reused. The program doesn't just encourage the reuse of writing utensils: for every item donated, TerraCycle donates $0.02 to C.S. Mott Children's Hospital, located at the University of Michigan. Cornell University At Cornell University in Ithaca, New York, the Cornell Computer Reuse Association is sending used computers and computer lab equipment all over the world to those in need. The club's mission is “to donate computers and other computer-related technology to humanitarian organizations in the developing world and in the local Ithaca community.” Since its founding in 2007, the club has sent computers to countries including Mali, Tanzania, Nigeria, Jamaica, Nicaragua, Iraq and Afghanistan, among others. All of the computers and equipment are donated to the club, where they are refurbished and shipped out. The club has sent printers, scanners, projectors, monitors, laptops, and computers to recipients ranging from Habitat for Humanity in Ithaca, New York, to the Women's University in Iraq. Universities everywhere are doing their part to make a greener planet, from banning the sale of water bottles to donating computers around the world. Waste management While other universities' programs revolve around the reuse of various materials, a more common route is waste management. Waste management programs are designed to decrease the amount of material going into landfills on a daily basis. Options for management, including recycling bins and cans, are relatively easy to implement and have high success rates. University of Florida Through bins located all across campus, the University of Florida, located in Gainesville, Florida, has recycled over 200 million pounds of paper, glass, plastic, concrete, metal, and other materials since the introduction of recycling at the university in 1989. The University of Florida is also focusing on using the Three R's as part of its recycling program. 
Not only are recycling bins used across campus, but the university has also implemented programs to recycle non-consumer waste. The university uses United States Environmental Protection Agency and United States National Research Council approved methods for disposing of chemical waste from its various labs. At clinics and research labs across campus, there are also biomedical waste pick-ups. With these programs, hazardous and toxic materials are kept out of landfills. The university also composts or otherwise repurposes all yard waste; grass clippings and other trimmings are often left out as a source of nutrients for the remaining landscaping. University of Minnesota The University of Minnesota has introduced the SMART (self-managed activities for recyclables and trash) program across campus to increase recycling efforts. The program is designed to take recycling to the individual – those who create waste are those responsible for recycling it. SMART operates on “The Quad System” – instead of having trash cans under desks and near shelves, there are four trash cans at each location: one for newspaper, one for office paper, one for cans and bottles, and another for trash only. Custodial workers collect from the bins once per day, instead of emptying individual trash cans from under desks. The program has taken recycling rates at the University of Minnesota from 60 to 90 percent since its introduction. Two months after the introduction of SMART, the program distributed surveys to students, faculty and staff. Over half said that with the SMART program they were recycling more, and 86% of those polled liked the SMART system. The bins cost $250,000; however, after three years the cost of the program is expected to be recouped through lower disposal fees and reduced custodial labor costs. Through the program, university students and employees have become more aware of the waste they create, and the campus has become more sustainable because of it. University of Oregon The University of Oregon, located in Eugene, Oregon, has a campus recycling program that has made great strides toward a cleaner campus. With over 1500 campus collection sites, the university is meeting both its own and the state's goals on waste management and sustainability. Every day, program staff meet to tally collection sheets to monitor waste management. The program also conducts an annual campus waste analysis, where staff discuss how to improve the program as a whole. In addition to recycling waste, the university is creating new re-use opportunities, another point in the three Rs (reduce, re-use, and recycle). The university has been increasing the use of surplus furniture and working with vendors that use less, and more sustainable, packaging. Currently, the program is working towards having compost bins and water bottle refill spouts in all campus buildings. It is also working to make department offices and buildings more sustainable. Transportation and Mobility According to the Environmental Protection Agency, transportation is the second-largest contributor to greenhouse gas emissions in the United States, at about 29 percent. As a result, American colleges and universities focus on reducing transportation impacts through shared and active transportation modes, including increased walking, bicycle use, public transit including buses and light rail, carpooling and vanpooling programs, and car-sharing programs like Zipcar. 
Bicycle use Colleges and universities across the United States are increasingly encouraging students to ride bicycles to class to reduce CO2 emissions. Many of these colleges are implementing programs that provide students with incentives to do so. For example, Ripon College in Wisconsin provided free mountain bikes to every incoming freshman who signed a pledge that he or she would not bring a vehicle to campus for the entire 2010–11 school year. Colleges such as St. Lawrence University in New York and Castleton State College in Vermont have started programs that allow students and faculty to rent bicycles to ride on or near campus. These programs often include rental helmets and locks as well. The University of Washington's annual Ride in the Rain Challenge encourages students and faculty to ride a bicycle to campus every day, regardless of weather conditions. The university gives out awards for “Most Commute Trips”, “Most Rides in the Rain”, “Most New Riders”, and “Most Commute Miles.” Some universities, such as the University of California, approach sustainability from a leisure perspective. UC encourages bicycle riding by providing students with maps and directions for safe bike routes to popular destinations in the area. The university also auctions off abandoned bikes every semester to further encourage bike use, giving students the opportunity to purchase high-quality bicycles at discounted prices. Public transit Buses and other modes of public transit have always been an important part of college campuses, but awareness of carbon emissions has caused many colleges and universities to take public transit more seriously in recent years. Many colleges, such as Cornell University, provide students with free public bus passes or allow students to use their student ID cards as bus passes. Students at The George Washington University in Washington, D.C., have access to campus shuttle buses as well as two Metro subway stations. Chicago's Loyola University sets an example for public transit as well with its variety of transportation options. Students at the university have access to the Chicago Transit Authority's bus and train routes and the Red Line elevated train. Students there are granted unlimited access to these resources through the CTA U-Pass program. Carpooling programs Carpooling efforts on many college campuses aim to reduce the number of vehicles on the road and reduce colleges’ carbon footprints. Colleges such as the University of Michigan and Florida International University utilize a web-based application called GreenRide that helps commuters coordinate carpools and vanpools rather than driving individually to campus. In addition, a number of other rideshare software programs have been created to help Americans locate potential carpooling arrangements. Many of these, such as Zimride, CommuteSmart, and RideFinders, reach out to university and college students across the United States. Some colleges offer incentives such as discounted parking permits for participants in carpooling programs. Car sharing programs Car sharing programs like Zipcar have a presence on and near American college and university campuses. Zipcar aims to make the United States less dependent on personally owned vehicles by providing vehicles that are available to rent by the hour. According to its website, “every Zipcar takes at least 15 personally-owned vehicles off the road.” Vehicles in these car-sharing programs tend to be small and fuel-efficient. 
Unlike typical rental car companies, car sharing companies often require participants to be only 18 years old, rather than 21, encouraging a larger demographic of students to participate in the program. Car-sharing companies have spread to university campuses across the United States. In 2010, Lewis & Clark College in Oregon even provided free memberships to its U-Haul U Car Share program when it was first implemented. In April 2011, Connect by Hertz (now known as Hertz on Demand) was the first car-sharing company to bring production electric vehicles to a university or college campus. Hertz on Demand cars are Environmental Protection Agency SmartWay certified, meaning that they produce fewer greenhouse gas emissions. Hertz and other car rental companies offer a variety of electric vehicles, “providing a zero emission mobility option for everyone.” Green buildings Green buildings focus on key elements of human and environmental health. These include sustainable site development, energy and water consumption, materials selection, and indoor environmental quality. Natural light, recycled and nontoxic materials, building controls, and efficient technologies are used in green buildings. On a college or university campus, green buildings can reduce consumption, educate students, and promote sustainable practices. Additionally, many universities educate students about green buildings with courses, minors, and majors. Likewise, many universities design and build green buildings to help reduce their energy use, water consumption, and greenhouse gas emissions. Buildings are responsible for about 40% of total U.S. carbon emissions, making them a major focus for climate action. Third-party green building certification programs offer an incentive for building owners to implement green design, construction, efficient operations, and eco-friendly solutions. LEED In 1998, the United States Green Building Council created the now internationally recognized LEED green building rating system to help set green building standards and provide recognition. By building LEED certified green buildings, colleges and universities work to decrease carbon emissions, conserve water and energy, and save money each month. LEED's structure has evolved over time, but has six primary considerations: integrative process, sustainable sites, energy and water management, material and resource use, indoor environmental quality, and innovation. There are now hundreds of U.S. colleges and universities with at least one LEED certified building, with many campuses requiring all new buildings to pursue a LEED certification at certain levels. The LEED green building rating levels are: Certified, 40 to 49 points; Silver, 50 to 59 points; Gold, 60 to 79 points; and Platinum, 80 points and above. Green materials Some ways to reduce consumption, educate students, and promote sustainable practices include: green roofs, a solution to the heat island effect associated with buildings; compact fluorescent light bulbs, which use less energy and give off less heat; using recycled materials; buying and using local materials, which lowers transportation costs; low-flow plumbing fixtures, such as waterless toilets, dual-flush toilets, and reclaimed-water toilets; and low-VOC paints, which limit the amount of irritating emissions. Buildings currently account for 1/6th of the world's freshwater use, 1/4th of all wood harvests, 2/5th of all material flows, and 2/5th of all energy flows. 
By using green buildings, consumption of resources and energy, and the waste produced, could be reduced by 50-60%. Some sustainable materials and technologies used in green building that contribute to these reductions include fly ash, seal tech block, cold-formed metal framing, isosurfaces, reclaimed lumber, composting, waterless urinals, greenscreen PVC-free fabrics, climate upholstery fabrics, evergreen solar panels, dual-flush toilets, and PS400 flat wall forms. Universities with LEED certifications Given that LEED certifications have been awarded for buildings since 2001, many colleges and universities have LEED and other third-party certified green buildings on their campuses. A selection of these, which may now be outdated, is provided below as examples. Duke University has almost 20 buildings that have received a LEED certification and another 10 that are somewhere in the process. All new construction must also be built to LEED Silver standards. Duke's 5,600-square-foot Ocean Science Teaching Center has a LEED Gold rating and was the first marine laboratory building to be entirely green. Harvard University has arguably the greenest buildings of any college, with 26 LEED buildings and about 75 somewhere in the LEED process. Harvard also requires that all new construction be at least LEED Gold. Harvard's Pearson Laboratory has a LEED Gold rating and boasts a 44% reduction in water consumption below EPAct standards, as well as carbon dioxide sensors that vary the outside-air supply based on occupancy and demand. The University of Colorado has 5 LEED buildings (3 of which are Gold) and about a dozen that qualify under LEED but are not certified. All new buildings must adhere to LEED Gold standards. The Science and Engineering Building at the University of Colorado has thin-film solar panels, an ice-storage system, high-efficiency windows, energy-efficient lighting, and low-chemical paints. The University of Florida has 12 LEED buildings, with over two dozen somewhere in the LEED process. All new construction must be built to LEED Gold standards. The James W. Heavener Football Complex at the University of Florida was the first LEED Platinum athletic building in the United States and the first Platinum-rated building in Florida. The complex uses 100% reclaimed water for irrigation, dual-flush toilets, and occupancy sensors to control and reduce lighting. The University of Washington currently has 7 LEED buildings and is planning to add twenty more. The university has also mandated that all new construction meet LEED Silver standards. The Benjamin Hall Interdisciplinary Research Building was the 11th structure in the United States to be rated as LEED-CS Pilot Gold. Energy savings are projected to be at least $220,000 per year, and the building has underground parking, energy-efficient plumbing, recycled content, and refrigerating and air-conditioning systems free of ozone-depleting chemicals. Yale University is also one of the schools most dedicated to green building design, with 9 projects already LEED certified, several more seeking certification, and all new buildings to be built to LEED Gold standards. The Yale University Sculpture Building and Gallery has a green roof, eight-foot operable windows, and a skin that lets in natural light. 
Sustainability Among Student Groups Student Knowledge, Attitudes, and Behaviors Despite the millions spent each year on sustainability initiatives on college campuses, there is very little evidence that this work is being noticed by the students themselves, or that they are learning about the importance of practicing sustainable behaviors in their own lives. For example, at the University of Wisconsin-Eau Claire, Perrault and Clark found that about one-third of students surveyed did not know their campus had a Student Office of Sustainability, and nearly 80% did not know their own student fees had been paying for its $200,000 annual budget since its inception in 2011. When asked to define the term "sustainability," students also had a fairly uni-dimensional understanding of the concept, with the majority discussing the importance of simply maintaining the status quo. While environmental factors were noted in about one-third of responses, the social and economic factors of the sustainability triad were largely not mentioned. For students to participate in more sustainable behaviors, colleges need not only to make performing those behaviors easier, but also to communicate more effectively why they are important. One simple way could be to incorporate more sustainability-related content into existing college courses. For example, having students develop a detailed campaign plan for a campus sustainability partner in a strategic communication planning course led to increased attitudinal shifts toward sustainability over the course of a semester. Stanford University, for example, has been described as “a living lab of sustainability.” Its efforts towards sustainability include plans to reach 100% renewable electricity by 2021 and make Stanford a zero-waste campus by 2030. 
Notes References Higher education in the United States Sustainability at academic institutions
32693647
https://en.wikipedia.org/wiki/Flow%20Science%2C%20Inc.
Flow Science, Inc.
Flow Science, Inc. is a developer of software for computational fluid dynamics, also known as CFD, a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. History The firm was founded by Dr. C. W. "Tony" Hirt, previously a scientist at Los Alamos National Laboratory (LANL). Hirt is known for having pioneered the volume of fluid method (VOF) for tracking and locating the free surface or fluid-fluid interface. Hirt left LANL and founded Flow Science in 1980 to develop CFD software for industrial and scientific applications using the VOF method. The company is located in Santa Fe, New Mexico. The company opened an office in Japan in June 2011, and an office in Germany in 2012. In December 2021, Dr. Flender Holding GmbH of Aachen, Germany, the holding company for the MAGMASOFT platform for simulation and virtual optimization of foundry and metallurgical processes, acquired 100% of Flow Science Inc. shares. Products The company's products include FLOW-3D, a CFD software package for analyzing various physical flow processes; FLOW-3D CAST, a software product for metal casting users; FLOW-3D AM, a software product for simulating additive manufacturing and laser welding processes; and FLOW-3D CLOUD, a cloud computing service installed on Penguin Computing On Demand (POD). There are high-performance computing (HPC) versions of both FLOW-3D and FLOW-3D CAST. FLOW-3D software uses a fractional areas/volumes approach called FAVOR for defining problem geometry, and a free-gridding technique for mesh generation. Desktop Engineering Magazine, in a review of FLOW-3D Version 10.0, said: “Key enhancements include fluid structure interaction (FSI) and thermal stress evolution (TSE) models that use a combination of conforming finite-element and structured finite-difference meshes. You use these to simulate and analyze the deformations of solid components as well as solidified fluid regions and resulting stresses in response to pressure forces and thermal gradients.” Key improvements of FLOW-3D Version 11.0 included increased meshing capabilities, solution sub-domains, an improved core gas model, and an improved surface tension model. FLOW-3D v11.0 also included a new visualization tool, FlowSight. Key improvements of FLOW-3D Version 12.0 included a visual overhaul of the GUI, an immersed boundary method, a sludge settling model, a 2-fluid 2-temperature model, and a steady-state accelerator. Applications Blue Hill Hydraulics used FLOW-3D software to update the design of a fish ladder on Mt. Desert Island, Maine, that helps alewives migrate to their freshwater spawning habitat. AECOM Technology Corporation studied emergency overflows from the Powell Butte Reservoir and demonstrated that the existing energy dissipation structure was not capable of handling the maximum expected daily overflow rate. The FLOW-3D simulation demonstrated that the problem could be solved by increasing the height of the wing walls by exactly one foot. Researchers from the CAST Cooperative Research Centre and M. Murray Associates developed flow and thermal control methods for the high-pressure die casting of thin-walled aluminum components with thicknesses of less than 1 mm. FLOW-3D simulation predicted the complex structure of the metal flow in the die and subsequent casting solidification. Researchers at DuPont used FLOW-3D to optimize coating processes for a solution-coated active-matrix organic light-emitting diode (AMOLED) display technology. 
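The applications above all build on the volume-of-fluid idea introduced earlier: each mesh cell stores the fraction of its volume occupied by liquid, and that fraction is transported with the flow so that the free surface can be located. The following one-dimensional sketch is illustrative only; it is not Flow Science's implementation, and the grid size, velocity, and step count are invented values chosen just to show the bookkeeping.

```python
import numpy as np

# Illustrative 1D volume-of-fluid (VOF) transport sketch.
# Each cell stores a fluid fraction f in [0, 1]: 1 = full of liquid, 0 = empty gas.
# A uniform velocity u advects the liquid to the right with first-order upwind fluxes.
# All numbers (cell count, dx, u, step count) are made-up illustration values.

n, dx, u = 50, 0.02, 0.5
dt = 0.5 * dx / u                    # CFL number of 0.5 keeps the explicit update stable

f = np.zeros(n)
f[:10] = 1.0                         # initial slug of liquid in the first 10 cells

def step(f):
    """Advance the fluid-fraction field by one upwind time step."""
    flux = u * f                                   # volume flux through each cell's right face
    new = f.copy()
    new[1:] += dt / dx * (flux[:-1] - flux[1:])    # inflow from the left minus outflow to the right
    new[0] -= dt / dx * flux[0]                    # leftmost cell: outflow only
    return np.clip(new, 0.0, 1.0)                  # fractions must stay between 0 and 1

for _ in range(40):
    f = step(f)

filled = np.where(f >= 0.5)[0]       # cells that are at least half full of liquid
print("liquid volume per unit area:", round(f.sum() * dx, 3), "m")
print("leading edge of the free surface is near cell", filled[-1] if filled.size else None)
```

A production VOF solver couples this fraction transport to the full flow equations and uses sharper interface-reconstruction schemes than plain upwinding; the sketch only shows the fraction field that gives the method its name.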
Eastman Kodak Company researchers rapidly developed an inkjet printer technology, using FLOW-3D simulations to predict the performance of printhead designs. A research team composed of members from Auburn University, Lamar University and RJR Engineering used Flow Science’s TruVOF method as a virtual laboratory to evaluate the performance of highway pavement and drainage inlets with different geometries. Researchers at Albany Chicago LLC and the University of Wisconsin–Milwaukee used FLOW-3D in conjunction with a one-dimensional algorithm to analyze the slow-shot and fast-shot die casting processes in order to reduce the number of iterations required to achieve desired process parameters. References Companies based in Santa Fe, New Mexico Computational fluid dynamics Software companies based in New Mexico Software companies of the United States
1566437
https://en.wikipedia.org/wiki/Physiologically%20based%20pharmacokinetic%20modelling
Physiologically based pharmacokinetic modelling
Physiologically based pharmacokinetic (PBPK) modeling is a mathematical modeling technique for predicting the absorption, distribution, metabolism and excretion (ADME) of synthetic or natural chemical substances in humans and other animal species. PBPK modeling is used in pharmaceutical research and drug development, and in health risk assessment for cosmetics or general chemicals. PBPK models strive to be mechanistic by mathematically transcribing anatomical, physiological, physical, and chemical descriptions of the phenomena involved in the complex ADME processes. A large degree of residual simplification and empiricism is still present in those models, but they have an extended domain of applicability compared to that of classical, empirical-function-based pharmacokinetic models. PBPK models may have purely predictive uses, but other uses, such as statistical inference, have been made possible by the development of Bayesian statistical tools able to deal with complex models. That is true for both toxicity risk assessment and therapeutic drug development. PBPK models try to rely a priori on the anatomical and physiological structure of the body, and to a certain extent, on biochemistry. They are usually multi-compartment models, with compartments corresponding to predefined organs or tissues and interconnections corresponding to blood or lymph flows (more rarely to diffusions). A system of differential equations for the concentration or quantity of substance in each compartment can be written, and its parameters represent blood flows, pulmonary ventilation rate, organ volumes etc., for which information is available in scientific publications. Indeed, the description they make of the body is simplified, and a balance needs to be struck between complexity and simplicity. Besides the advantage of allowing the recruitment of a priori information about parameter values, these models also facilitate inter-species transpositions or extrapolation from one mode of administration to another (e.g., inhalation to oral). A typical example is a 7-compartment PBPK model suitable for describing the fate of many solvents in the mammalian body. History The first pharmacokinetic model described in the scientific literature was in fact a PBPK model. It led, however, to computations intractable at that time. The focus then shifted to simpler models, for which analytical solutions could be obtained (such solutions were sums of exponential terms, which led to further simplifications). The availability of computers and numerical integration algorithms marked a renewed interest in physiological models in the early 1970s. For substances with complex kinetics, or when inter-species extrapolations were required, simple models were insufficient and research continued on physiological models. By 2010, hundreds of scientific publications had described and used PBPK models, and at least two private companies were basing their business on their expertise in this area. Building a PBPK model The model equations follow the principles of mass transport, fluid dynamics, and biochemistry in order to simulate the fate of a substance in the body. Compartments are usually defined by grouping organs or tissues with similar blood perfusion rates and lipid content (i.e. organs for which chemicals' concentration vs. time profiles will be similar). Ports of entry (lung, skin, intestinal tract...), ports of exit (kidney, liver...) 
and target organs for therapeutic effect or toxicity are often left separate. Bone can be excluded from the model if the substance of interest does not distribute to it. Connections between compartments follow physiology (e.g., blood flow exiting the gut goes to the liver, etc.). Basic transport equations Drug distribution into a tissue can be rate-limited by either perfusion or permeability. Perfusion-rate-limited kinetics apply when the tissue membranes present no barrier to diffusion. Blood flow, assuming that the drug is transported mainly by blood, as is often the case, is then the limiting factor to distribution in the various cells of the body. That is usually true for small lipophilic drugs. Under perfusion limitation, the instantaneous rate of entry for the quantity of drug in a compartment is simply equal to the (blood) volumetric flow rate through the organ times the incoming blood concentration. In that case, for a generic compartment i, the differential equation for the quantity Qi of substance, which defines the rate of change in this quantity, is: dQi/dt = Fi (Cart − Qi / (Pi Vi)), where Fi is the blood flow through compartment i, Cart the incoming arterial blood concentration, Pi the tissue over blood partition coefficient and Vi the volume of compartment i. A complete set of differential equations for the 7-compartment model mentioned above can be obtained by writing one such equation for each compartment. The above equations include only transport terms and do not account for inputs or outputs. Those can be modeled with specific terms, as in the following. Modeling inputs Modeling inputs is necessary to come up with a meaningful description of a chemical's pharmacokinetics. The following examples show how to write the corresponding equations. Ingestion When dealing with an oral bolus dose (e.g. ingestion of a tablet), first-order absorption is a very common assumption. In that case the gut equation is augmented with an input term governed by an absorption rate constant Ka: dQgut/dt = Fgut (Cart − Qgut / (Pgut Vgut)) + Ka Qing. That requires defining an equation for the quantity Qing ingested and present in the gut lumen: dQing/dt = −Ka Qing. In the absence of a gut compartment, input can be made directly into the liver. However, in that case local metabolism in the gut may not be correctly described. The case of approximately continuous absorption (e.g. via drinking water) can be modeled by a zero-order absorption rate Ring (in units of mass over time), which simply replaces the Ka Qing input term. More sophisticated gut absorption models can be used. In those models, additional compartments describe the various sections of the gut lumen and tissue. Intestinal pH, transit times and the presence of active transporters can be taken into account. Skin depot The absorption of a chemical deposited on skin can also be modeled using first-order terms. It is best in that case to separate the skin from the other tissues, to further differentiate exposed skin and non-exposed skin, and to differentiate viable skin (dermis and epidermis) from the stratum corneum (the actual skin upper layer exposed). This is the approach taken in [Bois F., Diaz Ochoa J.G., Gajewska M., Kovarich S., Mauch K., Paini A., Péry A., Sala Benito J.V., Teng S., Worth A., in press, Multiscale modelling approaches for assessing cosmetic ingredients safety, Toxicology. doi: 10.1016/j.tox.2016.05.026]. Unexposed stratum corneum simply exchanges with the underlying viable skin by diffusion; the corresponding rate equation involves the relevant partition coefficient, the total skin surface area, the fraction of skin surface area exposed, ... 
Analogous balance equations are written for the unexposed viable skin, for the exposed stratum corneum, and for the exposed viable skin; the viable skin terms, dt(QSkin_u) and dt(QSkin_e), feed from arterial blood and return to venous blood. More complex diffusion models have been published. Intra-venous injection Intravenous injection is a common clinical route of administration. (to be completed) Inhalation Inhalation occurs through the lung and is hardly dissociable from exhalation. (to be completed) Modelling metabolism There are several ways metabolism can be modeled. For some models, a linear excretion rate is preferred; this can be accomplished with a simple differential equation. Otherwise, a Michaelis-Menten rate of metabolism, Vmax C / (Km + C), where Vmax is the maximum metabolic rate, Km the Michaelis constant, and C the local concentration, is generally appropriate for a more accurate result. Uses of PBPK modeling PBPK models are compartmental models like many others, but they have a few advantages over so-called "classical" pharmacokinetic models, which are less grounded in physiology. PBPK models can first be used to abstract and eventually reconcile disparate data (from physicochemical or biochemical experiments, in vitro or in vivo pharmacological or toxicological experiments, etc.). They also give access to internal body concentrations of chemicals or their metabolites, and in particular at the site of their effects, be it therapeutic or toxic. Finally they also help interpolation and extrapolation of knowledge between: Doses: e.g., from the high concentrations typically used in laboratory experiments to those found in the environment Exposure duration: e.g., from continuous to discontinuous, or single to multiple exposures Routes of administration: e.g., from inhalation exposures to ingestion Species: e.g., transpositions from rodents to human, prior to giving a drug for the first time to subjects of a clinical trial, or when experiments on humans are deemed unethical, such as when the compound is toxic without therapeutic benefit Individuals: e.g., from males to females, from adults to children, from non-pregnant women to pregnant From in vitro to in vivo. Some of these extrapolations are "parametric": only changes in input or parameter values are needed to achieve the extrapolation (this is usually the case for dose and time extrapolations). Others are "nonparametric" in the sense that a change in the model structure itself is needed (e.g., when extrapolating to a pregnant female, equations for the foetus should be added). Owing to the mechanistic basis of PBPK models, another potential use of PBPK modeling is hypothesis testing. For example, if a drug compound showed lower-than-expected oral bioavailability, various model structures (i.e., hypotheses) and parameter values can be evaluated to determine which models and/or parameters provide the best fit to the observed data. If the hypothesis that metabolism in the intestines was responsible for the low bioavailability yielded the best fit, then the PBPK modeling results support this hypothesis over the other hypotheses evaluated. As such, PBPK modeling can be used, inter alia, to evaluate the involvement of carrier-mediated transport, clearance saturation, enterohepatic recirculation of the parent compound, extra-hepatic/extra-gut elimination, higher in vivo solubility than predicted in vitro, drug-induced gastric emptying delays, gut loss, and regional variation in gut absorption. Limits and extensions of PBPK modeling Each type of modeling technique has its strengths and limitations. PBPK modeling is no exception. 
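To make the preceding sections concrete, the sketch below numerically integrates a deliberately reduced PBPK model: a gut lumen with first-order oral absorption, perfusion-limited gut and liver compartments, Michaelis-Menten metabolism in the liver, and a lumped compartment for the rest of the body. It is an illustration only, not a model of any real chemical; the compartment layout is simplified (no explicit blood or lung pool) and every parameter value is invented for the example.

```python
from scipy.integrate import solve_ivp

# Illustrative reduced PBPK model: gut lumen -> gut tissue -> liver -> systemic circulation,
# plus a lumped "rest of body" compartment.  Perfusion-limited transport (dQ/dt = F*(Cart - Cv)),
# first-order oral absorption (Ka) and Michaelis-Menten hepatic metabolism (Vmax, Km).
# ALL parameter values below are invented for the example, not measured data.

F_gut, F_ha, F_rest = 1.1, 0.3, 4.0      # blood flows, L/min (portal, hepatic artery, rest of body)
V_gut, V_liv, V_rest = 1.0, 1.8, 40.0    # tissue volumes, L
P_gut, P_liv, P_rest = 1.0, 2.0, 3.0     # tissue:blood partition coefficients
Ka = 0.05                                # first-order absorption rate constant, 1/min
Vmax, Km = 0.5, 1.0                      # Michaelis-Menten metabolism, mg/min and mg/L
dose = 100.0                             # oral bolus dose placed in the gut lumen, mg

def rhs(t, y):
    q_ing, q_gut, q_liv, q_rest = y
    # venous (exiting) concentrations, assuming tissue/blood equilibrium: Cv = Q / (P * V)
    c_gut = q_gut / (P_gut * V_gut)
    c_liv = q_liv / (P_liv * V_liv)
    c_rest = q_rest / (P_rest * V_rest)
    # arterial concentration = flow-weighted mix of venous returns (no explicit blood pool)
    c_art = ((F_ha + F_gut) * c_liv + F_rest * c_rest) / (F_ha + F_gut + F_rest)
    metabolism = Vmax * c_liv / (Km + c_liv)              # saturable hepatic elimination
    dq_ing = -Ka * q_ing                                  # gut lumen empties by absorption
    dq_gut = F_gut * (c_art - c_gut) + Ka * q_ing         # gut tissue: perfusion + absorption
    dq_liv = F_ha * c_art + F_gut * c_gut - (F_ha + F_gut) * c_liv - metabolism
    dq_rest = F_rest * (c_art - c_rest)                   # lumped rest of the body
    return [dq_ing, dq_gut, dq_liv, dq_rest]

sol = solve_ivp(rhs, (0.0, 24 * 60), [dose, 0.0, 0.0, 0.0], max_step=1.0)

amount_left = sol.y[0, -1] + sol.y[1:, -1].sum()
print(f"fraction of the dose remaining in the body after 24 h: {amount_left / dose:.2f}")
```

Dedicated PBPK software (see the Software list below) solves much larger systems of the same mathematical form, adding species- and organ-specific parameter databases, absorption and metabolism sub-models, and statistical calibration on top of this basic integration step.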
One limitation is the potential for a large number of parameters, some of which may be correlated. This can lead to issues of parameter identifiability and redundancy. However, it is possible (and commonly done) to model explicitly the correlations between parameters (for example, the non-linear relationships between age, body mass, organ volumes and blood flows). After numerical values are assigned to each PBPK model parameter, specialized or general computer software is typically used to numerically integrate a set of ordinary differential equations like those described above, in order to calculate the numerical value of each compartment at specified values of time (see Software). However, if such equations involve only linear functions of each compartmental value, or under limiting conditions (e.g., when input values remain very small) that guarantee such linearity is closely approximated, such equations may be solved analytically to yield explicit equations (or, under those limiting conditions, very accurate approximations) for the time-weighted average (TWA) value of each compartment as a function of the TWA value of each specified input. On one hand, PBPK models can rely on chemical property prediction models (QSAR models or predictive chemistry models); for example, QSAR models can be used to estimate partition coefficients. On the other hand, they extend into, but are not destined to supplant, systems biology models of metabolic pathways. They are also parallel to physiome models, but do not aim at modelling physiological functions beyond fluid circulation in detail. In fact the above four types of models can reinforce each other when integrated. PBPK models are data-hungry (many parameters need to be set) and may take a long time to develop and validate. For that reason, they have been criticized for delaying the development of important regulations. References Further references: Reddy M. et al. (2005) Physiologically Based Pharmacokinetic Modeling: Science and Applications, Wiley-Interscience. Peters S.A. (2012) Physiologically-Based Pharmacokinetic (PBPK) Modeling and Simulations, Wiley. Forums Ecotoxmodels is a website on mathematical models in ecotoxicology. Software Dedicated software: BioDMET GastroPlus Maxsim2 PK-Sim PKQuest PSE: gCOAS Simcyp Simulator ADME Workbench General software: ADAPT 5 Berkeley Madonna COPASI: Biochemical System Simulator Ecolego Free simulation software: GNU MCSIM GNU Octave Matlab PottersWheel ModelMaker PhysioLab R deSolve package SAAM II Phoenix WinNonlin/NLME/IVIVC/Trial Simulator Toxicology Pharmacokinetics Pharmaceutics
46236066
https://en.wikipedia.org/wiki/Periscope%20%28service%29
Periscope (service)
Periscope was an American live video streaming app for Android and iOS developed by Kayvon Beykpour and Joe Bernstein and acquired by Twitter before launch in 2015. History Beykpour and Bernstein came up with the idea for Periscope while traveling abroad in 2013. Beykpour was in Istanbul when protests broke out in Taksim Square. He wanted to see what was happening there, so he turned to Twitter. While he could read about the protests, he could not see them. They started the company in February 2014, under the name Bounty. They raised $1.5 million from Founder Collective, Scott Belsky, Maveron, Google Ventures, Menlo Ventures, Bessemer, Stanford – StartX and Sam Shank in April 2014. Periscope was acquired in January 2015 by Twitter before the product had been publicly launched. One investor source says the acquisition amount was "sizeable", above $50 million. Another says it fell between $75 and $100 million. A third says the deal was "small-ish". The acquisition was officially announced in a tweet from Periscope and retweeted by Twitter CEO Dick Costolo on 13 March, after the rival video streaming app Meerkat was a breakout hit at South by Southwest 2015 (13–17 March). Meerkat became the talk of SXSW partially due to Twitter cutting Meerkat off from its social graph just as the festival was starting. Periscope was launched on 26 March 2015. Later, on 26 May 2015, Periscope was released for Android. On 12 August 2015, Periscope announced that it had surpassed 10 million accounts, four months after it was launched. At the same time, the company noted that the amount of video being watched had reached a level of "40 years per day". On 9 December 2015, Apple named Periscope as the iPhone App of the Year. On 26 January 2016, the company released an update that allows users to stream live from a GoPro. In December 2016, some of Periscope's features were integrated into the main Twitter app. In April 2016, as part of a wider partnership with Twitter to stream selected Thursday Night Football games, the NFL announced that Periscope would feature ancillary behind-the-scenes content from these games. In June 2016, Democratic members of the U.S. House of Representatives staged a sit-in on the House floor to protest the lack of a vote on a gun control bill. The Speaker pro tem, Rep. Ted Poe, declared the House was in recess and subsequently the House video feed to C-SPAN was shut off. However, after a brief interruption, C-SPAN was able to broadcast the sit-in because Rep. Scott Peters streamed the activity and the speakers using his Periscope account. On 12 June 2018, a Turkish court banned Periscope in Turkey for violating the copyright of a Turkish company called Periskop. Periscope had been actively used by the Turkish opposition until an initial ban was put in place in April 2017, weeks before a constitutional referendum to expand presidential powers. On 15 December 2020, Twitter announced it would be discontinuing the service on 31 March 2021 due to declining usage, product realignment, and high maintenance costs. The app was removed from the Android and iOS app stores on 31 March 2021. However, videos from the service can still be watched via Twitter, as most of Periscope's features have been incorporated into that app. Service The services of Periscope are available in the mobile application itself as well as on Twitter. Users of Periscope are able to choose whether to make their video public or viewable only to certain users, such as friends or family. 
Although the "scoper" usually simply uses a handheld device such as a smartphone to broadcast, it is also possible to broadcast through Periscope using a professional vision mixing suite such as Wirecast or Teradek using Periscope Pro. Periscope allows viewers to send "hearts" to the broadcaster by tapping on the mobile screen as a form of appreciation. The maximum number of users that a user can follow is 8,000. Both the "scoper" and viewers of the so-called "scope" are able to block viewers. When blocked by the "scoper", users are added to a blocked list and booted from the "scope". If enough "scopers" block a user, they are blocked from the "scope". If they receive more than four blocks from four different "scopes" then the user gets shadowbanned. On 8 September 2015, TechCrunch reported and later confirmed that Periscope was building an Apple TV app. This app has been released. On 10 September 2015, Periscope added the ability to broadcast live in landscape view. Copyright issues The app can be misappropriated for copyright infringement, an issue that was raised around the time of the app's launch when several users of the service used it to air the fifth-season premiere of HBO's Game of Thrones live. HBO has stated the service needs better tools and policies to deal with copyrighted content. These issues were magnified further by a professional boxing event on 2 May 2015, Floyd Mayweather, Jr. vs. Manny Pacquiao, which was televised via a pay per view that cost approximately US$90, but saw wide unauthorised distribution through streams of various quality on Periscope. Rebroadcasting copyrighted content violates Periscope's written terms of service, and can result in suspension or banning the offending account. Other complaints have come from firms acting on behalf of the NFL, the Premier League, the US Open Tennis Championship and Taylor Swift, according to data from Chilling Effects, which tracks online takedown notices and was started by attorney Wendy Seltzer, several law school clinics and the Electronic Frontier Foundation. The Ultimate Fighting Championship, which has kept a close eye on people it believes are illegally streaming its pay per view mixed martial arts matches, has sent more than 650 takedown notices to Periscope, according to data from Chilling Effects. Resources As of 5 July 2016, Periscope released an update where users can choose whether to save their broadcasts or delete them after 24 hours. (Although "scopes" disappear from www.periscope.tv/username after 24 hours, users were able to capture their "scopes", and other live streaming apps, using Katch.me. It stopped collecting videos on 22 April 2016 and shut down on 4 May 2016.) A television channel based around Periscope is PeriscopeTV. References External links Periscope at Medium Twitter services and applications 2015 software IOS software TvOS software Android (operating system) software Video hosting 2015 mergers and acquisitions Go (programming language) software Live streaming services Livestreaming software
9774359
https://en.wikipedia.org/wiki/Hardware%20virtualization
Hardware virtualization
Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" and "virtual machine monitor" became preferred over time. Concept The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has more recently been called "platform virtualization" or "server virtualization". Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as network access, display, keyboard, and disk storage) is generally managed at a more restrictive level than access to the host processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the device's native capabilities, depending on the hardware access policy implemented by the virtualization host. Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in reduced performance of the virtual machine compared to running natively on the physical machine. Reasons for virtualization In the case of server consolidation, many small physical servers are replaced by one larger physical server to decrease the need for more (costly) hardware resources such as CPUs and hard drives. Although hardware is consolidated in virtual environments, typically OSs are not. Instead, each OS running on a physical server is converted to a distinct OS running inside a virtual machine. In this way, the large server can "host" many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation. In addition to reducing equipment and labor costs associated with equipment maintenance, consolidating servers can also have the added benefit of reducing energy consumption and the environmental footprint of the technology. For example, a typical server runs at 425 W and VMware estimates a hardware reduction ratio of up to 15:1. A virtual machine (VM) can be more easily controlled and inspected from a remote site than a physical machine, and the configuration of a VM is more flexible. This is very useful in kernel development and for teaching operating system courses, including running legacy operating systems that do not support modern hardware. A new virtual machine can be provisioned as required without the need for an up-front hardware purchase. A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to their laptop, without the need to transport the physical computer. 
Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of the OS crashing on the laptop. Because of this ease of relocation, virtual machines can readily be used in disaster recovery scenarios without concern about the state of the original physical hardware.

However, when multiple VMs are concurrently running on the same physical host, each VM may exhibit varying and unstable performance which depends heavily on the workload imposed on the system by other VMs. This issue can be addressed by appropriate techniques for temporal isolation among virtual machines.

There are several approaches to platform virtualization. Examples of virtualization use cases:

Running one or more applications that are not supported by the host OS: A virtual machine running the required guest OS could permit the desired applications to run, without altering the host OS.
Evaluating an alternate operating system: The new OS could be run within a VM, without altering the host OS.
Server virtualization: Multiple virtual servers could be run on a single physical server, in order to more fully utilize the hardware resources of the physical server.
Duplicating specific environments: A virtual machine could, depending on the virtualization software used, be duplicated and installed on multiple hosts, or restored to a previously backed-up system state.
Creating a protected environment: If a guest OS running on a VM becomes damaged in a way that is not cost-effective to repair, such as may occur when studying malware or installing badly behaved software, the VM may simply be discarded without harm to the host system, and a clean copy used upon rebooting the guest.

Full virtualization

In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS designed for the same instruction set to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.

Hardware-assisted virtualization

In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest OSs to be run in isolation. Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. In 2005 and 2006, Intel and AMD provided additional hardware to support virtualization. Sun Microsystems (now Oracle Corporation) added similar features in their UltraSPARC T-Series processors in 2005. In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization.

Paravirtualization

In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying the "guest" OS. For this to be possible, the "guest" OS's source code must be available. If the source code is available, it is sufficient to replace sensitive instructions with calls to VMM APIs (for example, replacing "cli" with "vm_handle_cli()"), then re-compile the OS and use the new binaries. This system call to the hypervisor is called a "hypercall" in TRANGO and Xen; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under VM (which was the origin of the term hypervisor).
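To make the source-level substitution described above concrete, the following C sketch contrasts the two forms a guest kernel routine can take. It is only an illustration under stated assumptions: vm_handle_cli() is the hypothetical hypercall name borrowed from the example in the text, not the real API of Xen, TRANGO, or any other hypervisor, and the stub below merely records the request so that the file compiles and runs on its own.

```c
/*
 * Illustrative sketch of paravirtualization: the guest kernel's sensitive
 * x86 instruction CLI (disable interrupts) is replaced at the source level
 * by a call into a hypervisor API, and the kernel is then recompiled.
 * vm_handle_cli() is a hypothetical name taken from the article's example,
 * not a real hypervisor ABI.
 */
#include <stdio.h>

static int virtual_interrupts_enabled = 1;

/* Hypothetical hypercall: in a real paravirtualized guest this would trap
 * into the hypervisor (for instance via a Xen hypercall or an IBM DIAG
 * instruction) so the VMM can update the guest's virtual interrupt flag. */
static void vm_handle_cli(void)
{
    virtual_interrupts_enabled = 0;
    puts("hypercall: guest asked to disable its virtual interrupts");
}

/* Fully virtualized or native build: the privileged instruction is executed
 * directly and would have to be trapped and emulated by the hypervisor. */
static inline void irq_disable_native(void)
{
#if defined(__x86_64__) || defined(__i386__)
    __asm__ volatile("cli" ::: "memory");   /* privileged: faults in user mode */
#endif
}

/* Paravirtualized build: the same operation becomes an explicit API call,
 * so no trap-and-emulate step is needed. */
static inline void irq_disable_paravirt(void)
{
    vm_handle_cli();
}

int main(void)
{
    /* Only the paravirtualized path is exercised here; executing CLI from an
     * ordinary user process would raise a general-protection fault. */
    irq_disable_paravirt();
    printf("virtual_interrupts_enabled = %d\n", virtual_interrupts_enabled);
    return 0;
}
```

The point of the substitution is that the hypervisor no longer has to trap and emulate the privileged instruction; the guest requests the state change explicitly, which is what makes paravirtualized guests faster but dependent on modified source code.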
Operating-system-level virtualization

In operating-system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" operating system environments share the same running instance of the operating system as the host system. Thus, the same operating system kernel is also used to implement the "guest" environments, and applications running in a given "guest" environment view it as a stand-alone system.

Hardware virtualization disaster recovery

A disaster recovery (DR) plan is often considered good practice for a hardware virtualization platform. DR of a virtualization environment can ensure a high rate of availability during a wide range of situations that disrupt normal business operations. In situations where continued operation of hardware virtualization platforms is important, a disaster recovery plan can ensure that hardware performance and maintenance requirements are met. A hardware virtualization disaster recovery plan involves both hardware and software protection by various methods, including those described below.

Tape backup for software data long-term archival needs
This common method can be used to store data offsite, but data recovery can be a difficult and lengthy process. Tape backup data is only as good as the latest copy stored. Tape backup methods require a backup device and ongoing storage material.

Whole-file and application replication
This method requires control software and storage capacity for replicating application and data files, typically on the same site. The data is replicated to a different disk partition or a separate disk device, can be run as a scheduled activity on most servers, and is used mainly for database-type applications.

Hardware and software redundancy
This method ensures the highest level of disaster recovery protection for a hardware virtualization solution by providing duplicate hardware and software replication in two distinct geographic areas.

See also
Application virtualization
Comparison of platform virtualization software
Desktop virtualization
Dynamic infrastructure
Hardware emulation
Hyperjacking
Instruction set simulator
Popek and Goldberg virtualization requirements
Physicalization
Thin provisioning
Virtual appliance
Virtualization for aggregation
Workspace virtualization

References

External links
An Introduction to Virtualization, by Amit Singh
Xen and the Art of Virtualization, ACM, 2003, by a group of authors

Linux Virtualization Software
1434840
https://en.wikipedia.org/wiki/ADO.NET
ADO.NET
ADO.NET is a data access technology from the Microsoft .NET Framework that provides communication between relational and non-relational systems through a common set of components. ADO.NET is a set of computer software components that programmers can use to access data and data services from a database. It is a part of the base class library that is included with the Microsoft .NET Framework. It is commonly used by programmers to access and modify data stored in relational database systems, though it can also access data in non-relational data sources. ADO.NET is sometimes considered an evolution of ActiveX Data Objects (ADO) technology, but was changed so extensively that it can be considered an entirely new product.

Architecture

ADO.NET is conceptually divided into consumers and data providers. The consumers are the applications that need access to the data, and the providers are the software components that implement the interface and thereby provide the data to the consumer.

Functionality exists in the Visual Studio IDE to create specialized subclasses of the DataSet classes for a particular database schema, allowing convenient access to each field in the schema through strongly typed properties. This helps catch more programming errors at compile time and enhances the IDE's IntelliSense feature.

A provider is a software component that interacts with a data source. ADO.NET data providers are analogous to ODBC drivers, JDBC drivers, and OLE DB providers. ADO.NET providers can be created to access data stores as simple as a text file or spreadsheet, through to databases as complex as Oracle Database, Microsoft SQL Server, MySQL, PostgreSQL, SQLite, IBM DB2, Sybase ASE, and many others. They can also provide access to hierarchical data stores such as email systems. However, because different data store technologies can have different capabilities, not every ADO.NET provider can implement every interface available in the ADO.NET standard. Microsoft describes the availability of an interface as "provider-specific," as it may not be applicable depending on the data store technology involved. Providers may augment the capabilities of a data store; these capabilities are known as "services" in Microsoft parlance.

Object-relational mapping

Entity Framework

Entity Framework (EF) is an open source object-relational mapping (ORM) framework for ADO.NET, and part of the .NET Framework. It is a set of technologies in ADO.NET that supports the development of data-oriented software applications. Architects and developers of data-oriented applications have typically struggled with the need to achieve two very different objectives: modeling the business domain while also working with the underlying relational data store. The Entity Framework enables developers to work with data in the form of domain-specific objects and properties, such as customers and customer addresses, without having to concern themselves with the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code than in traditional applications.

LINQ to SQL

LINQ to SQL (formerly called DLINQ) allows LINQ to be used to query Microsoft SQL Server databases, including SQL Server Compact databases. Because SQL Server data may reside on a remote server, and because SQL Server has its own query engine, LINQ to SQL does not use the query engine of LINQ. Instead, the LINQ query is converted to a SQL query that is then sent to SQL Server for processing.
However, since SQL Server stores the data as relational data and LINQ works with data encapsulated in objects, the two representations must be mapped to one another. For this reason, LINQ to SQL also defines a mapping framework. The mapping is done by defining classes that correspond to the tables in the database and contain all or a subset of the table's columns as data members.

References

External links
ADO.NET for the ADO Programmer
ADO.NET Connection Strings

Data management .NET Framework terminology Microsoft application programming interfaces Microsoft free software SQL data access ADO.NET Data Access technologies Software using the MIT license
3462682
https://en.wikipedia.org/wiki/Jeff%20Trepagnier
Jeff Trepagnier
Jeffery Trepagnier (born July 11, 1979) is a former American professional basketball player. High school Trepagnier played basketball at Compton High School, in Compton, California. College career Trepagnier played college basketball for the USC Trojans. He also took second place at the 2000 Pac-10 Championships for the second year in a row with a high jump of 7-foot-1, tying for the fifth-best jump in USC school history. Professional career Trepagnier was a second round draft pick of the Cleveland Cavaliers in the 2001 NBA draft. At 6'4", Trepagnier was considered by some scouts to be undersized for the NBA shooting guard position, his natural spot. In three seasons for the Cavaliers and the Denver Nuggets, Trepagnier averaged 2.8 points per game. He spent the 2004–05 season for Italian club Eldo Napoli. He signed with the Knights on June 25, 2006. Trepagnier signed a contract with French club Élan Béarnais Pau-Orthez in December 2007. In 2009, Trepagnier was playing for the Iowa Energy of the NBA D-League. In 2010, he joined Scaligera Verona of the Italian second league. In 2011, he signed to play in Brazil. He later joined the Los Angeles Slam of the ABA. Personal Born in Los Angeles, Trepagnier is of African-American descent. References External links Jeff Trepagnier NBA player profile @ NBA.com Jeff Trepagnier NBA D-League player profile @ NBA.com player profile @ Euroleague.net Player profile @ Legabasket.it 1979 births Living people 21st-century African-American sportspeople African-American basketball players American expatriate basketball people in Brazil American expatriate basketball people in France American expatriate basketball people in Italy American expatriate basketball people in Turkey American men's basketball players American people of Creole descent American people of French descent Asheville Altitude players Bakersfield Jam players Basket Napoli players Basketball players from Los Angeles Charlotte Bobcats expansion draft picks Cleveland Cavaliers draft picks Cleveland Cavaliers players Compton High School alumni Denver Nuggets players Élan Béarnais players Iowa Energy players Rio Grande Valley Vipers players Scaligera Basket Verona players Shooting guards Ülker G.S.K. basketball players USC Trojans men's basketball players USC Trojans men's track and field athletes 20th-century African-American sportspeople
48786739
https://en.wikipedia.org/wiki/Alison%20Adam
Alison Adam
Alison Adam is a British researcher in the field of Science and Technology Studies and is particularly known for her work on gender in information systems and the history of forensic science. She is Professor Emerita of science, technology and society at Sheffield Hallam University, Sheffield, UK. Career Adam was a research fellow at Lancaster University, and a lecturer and senior lecturer in the Department of Computation at the University of Manchester Institute of Science and Technology. She was professor of information systems from 2003 to 2008, then professor of science, technology and society from 2008 to 2012 at the University of Salford, where she worked from 2000 until 2012, heading the Information Systems Institute in 2004–6 and then directing the Information Systems, Organisations and Society Research Centre. She has been professor of science, technology and society at Sheffield Hallam University, Sheffield since 2012. Research interests Adam's research has primarily been within the area of Science and Technology Studies, including history and sociology of science and technology. Her predominant focus was the field of information systems. Her main contribution to research in gender and technology was a study of gender in relation to artificial intelligence (AI), in particular how feminist epistemology could be used to challenge the epistemology of AI and a study of feminist ethics and how it relates to computer ethics. Notable publications in this area include the Artificial Knowing: Gender and the Thinking Machine and Gender, Ethics and Information Technology. The book Gender, Ethics and Information Technology explores the "intersection of two areas; firstly gender and information and communication technologies and secondly, computer ethics." In 2012 she completed a three-year Engineering and Physical Sciences Research Council funded research project on online privacy, in collaboration with researchers at Salford, Royal Holloway, University of London, and Cranfield University. She has evaluated the part that gender plays in ethics in on-line behaviour evaluating the gender divide in hacking, cyberstalking, and pornography to evaluate what gender differences exist in on-line experiences. Since 2010 Adam has researched the history and sociology of forensic sciences. Her 2015 book, A History of Forensic Science: British Beginnings in the Twentieth Century, charts the history of forensic sciences in the UK, mainly England, considering the broad spectrum of factors that went into creating the discipline in Britain in the first part of the twentieth century. These influences were criminological, criminalistic, scientific, technological and even fictional. She argues that new interest in managing crime scenes arrived on British shores, from the Continent via British India and Egypt and was channeled into the ‘scientific aids’ movement of the 1930s which can be seen as Continental and Colonial criminalistics in British clothing. The book charts the strategies of the new forensic scientists to gain an authoritative voice in the courtroom and to forge a professional identity in the space between forensic medicine, scientific policing, and independent expert witnessing drawing on the moral voice of the forensic scientist alongside the cultural authority of the fictional scientific detective. She is currently researching the history of forensic science in 20th century Scotland. Textiles are another long-standing interest. 
She states that "there's nothing quite like making a physical artifact. The culture of sharing information about making, on-line, is fascinating." From 2012 to 2014, she was engaged in interdisciplinary research on the culture of mending clothes. Teaching and administration At the University of Salford, Adam taught cybercrime, digital divide, research methodology, and sociology of forensic sciences. Adam served as deputy chair of the Research Excellence Framework (REF) 2014 sub-panel on Communication, Cultural and Media Studies, Library and Information Management (UoA 36) and was a member of the Research Assessment Exercise (RAE) 2008 sub-panel on Library and Information Management (UoA 37). Selected publications Books A History of Forensic Science: British Beginnings in the Twentieth Century, Routledge; 2015 Gender, Ethics and Information Technology, Palgrave Macmillan; 2005 Artificial Knowing: Gender and the Thinking Machine, Routledge; 1998; 2006 References Philosophers of science Sociologists of science Historians of technology Academics of Sheffield Hallam University Academics of the University of Salford Academics of the University of Manchester Institute of Science and Technology Living people British women historians British non-fiction writers Year of birth missing (living people)
36584341
https://en.wikipedia.org/wiki/Education%20in%20Mizoram
Education in Mizoram
Education in Mizoram consists of a diverse array of formal education systems ranging from elementary to university, from training institution to technical courses. The Government of India imposes mandatory education at least up to the basic level. For this public schools are made free of fees, and provided with free textbooks and school lunch. The first formal education was started in 1894 by two British Christian Missionaries at Aizawl. They taught only two select students whom they could trust for further teaching and their own evangelism. The first government school was started in 1897 at Aizawl. The first middle school opened in 1906, and secondary school in 1944. The first higher education institute Pachhunga University College was started in 1958. The first university Mizoram University was established in 2001 by the University Grants Commission of India. The general pattern of education is simply a progression from primary to secondary education. Only after secondary level students are able to pursue their lines of career opportunities or preferences. Industrial Training Institute for craftsmanship training courses (tailoring, mechanic, electrician, cooking, etc.) was started in Aizawl by the state government in 1964 (Mizoram was then under Assam state). Education on technical and vocational courses started only after 1980s. There are now various opportunities including engineering, veterinary, business management, technology, nursing, pharmacy, and other career oriented courses. The College of Veterinary Sciences and Animal Husbandry, Selesih was opened in 1997 as one of the constituent colleges of the Central Agricultural University. National Institute of Electronics and Information Technology, Aizawl was started by the Indian Ministry of Communications and Information Technology in 2000. The Government of Mizoram established The Institute of Chartered Financial Analysts of India University, Mizoram in 2006. National Institute of Technology Mizoram was established in 2010 by the Ministry of Human Resources Development, Government of India. In spite of relatively late education system, as of the latest census in 2011, Mizoram is the second highest in literacy rate (91.58%) among the Indian states. History Before the land of the Mizos was annexed to the British Empire in 1890, Mizos were without written language and were totally illiterate. Knowledge was predominantly imparted orally at the Zawlbuk, the traditional learning centre of the Mizos. In 1894 two English missionaries of Arthington Aborigines Mission Dr. (Rev) J.H. Lorrain and Rev. F.W. Savidge arrived at Aizawl. They immediately worked on creating Mizo alphabets based on Roman script. After a stay of only two and half months, they started the first school on 1st April 1894. Their first and only pupils were Suaka and Thangphunga. The two teachers were surprised that their students had mastered the new alphabets in only a week. The first textbook Mizo Zir Tir Bu (A Lushai Primer) was released on 22 October 1895 and became the first book in Mizo language. A Welsh missionary Rev. D.E. Jones from the Calvinistic Methodist Mission then took up the education under government recognition in 1898. He organised classes for about thirty students at the verandah of his residence. He was assisted by a Khasi couple Rai Bhajur and his wife. A new government school was opened in Lunglei in 1897, and Bengali script was used for teaching. 
In 1901 the government honoured Lalluava, the Chief of Khawngbâwk, for his deed towards the British by establishing primary school in his village. By 1903 there were schools in fifteen villages. In 1903 the British administration started promoting education by waiving forced labour (called kuli) for those who passed class IV (primary school), in addition to scholarship for meritorious students and grants to existing schools. The first scholarship was given to 8 students with the amount of INR 3 each per month for 2 years. The first systematic examination called Lower Primary Exam was conducted on 25 June 1903, with 19 candidates (2 girl among 17 boys). Eleven of them passed. Sir Bamfield Fuller, Assam Chief Commissioner, visited Mizoram (then Lushai Hills) in February 1904, and was so impressed with the mission schools that he immediately issued an order for dissolution of all government schools. He also presented Gold Medal to Chhuahkhama (among boys) and Saii (among girls). In 1904 the entire educational administration was charged under the mission, and Rev. Edwind Rowlands became the first Honorary Inspector of Schools from April 1. The first middle school (was called upper primary) came up in 1906 in Aizawl. The first high school named Mizo High School was opened in February 1944 at Zarkawt. There were 56 students in class VII, under the headmaster Rev David Evan Jones. By 1941 Census of India Lushai had attained highest literacy rate (36%) in India. Till the late 1952 the church managed elementary education through Honorary Inspector of Schools. On 25 April 1952 Lushai Hills became Mizo District Council under the Government of Assam. A post of Deputy Inspector was created by the government. In 1953 the designation of Honorary Inspector was changed to Secretary, Education Management Committee. Under this administration all primary and middle scholarship examinations were coordinated. In 1953 the first teachers' training institute Basic Training Centre was opened. On 15 August 1958 Pachhunga University College (then Aijal College) was inaugurated to become the first institute of higher education. In 1961 Education Officer became the administrative authority of education in the Mizo District Council. After Mizoram became Union Territory (in 1972) a separate Directorate of Education was created in 1973 under a separate ministry. Mizoram Board of School Education was established in 1976. Within a hundred years of education, Mizoram remains at the top list of highest literacy rate in India. School education The office of school education for Mizoram was started in 1973. It became a separate Directorate of School Education in 1989 and is located at McDonald Hill, Zarkawt, Aizawl. The department looks after elementary, secondary, higher education, language development, adult education and physical education within the state. The directorate administers the entire state and divides into 4 (four) education districts, namely (1) Chhimtuipui district, (2) Lunglei district,(3) Aizawl East district, and (4) Aizawl West district. The structure of education in the state is based on the national level pattern with 12 years of schooling (10+2+3), consisting of eight years of elementary education, that is, five years of primary and three years of middle school education for the age groups of 6-11 and 11–14 years, respectively, followed by secondary and higher secondary education of two years each besides two years of pre-primary education. The entry age in class 1 is 5+. 
Pre-primary classes form age group 3 to 4. The higher secondary school certificate enables pupils to pursue studies either in universities or in colleges for higher education in general academic streams and in technical and professional course. SCERT The Mizoram State Council of Educational Research & Training was started in January 1980. It is an academic wing of the Directorate of School Education and is located at Chaltlang since its establishment. SCERT is responsible for qualitative improvement of school education from primary to higher secondary schools, non-formal education and teacher education. It is also responsible for successful implementation of various education projects sponsored by central government, UNICEF as well as state government. District Institute of Education and Training (DIET) The DIET in Mizoram was first established on 1 September 1973 at Chaltlang. It is a centre of training for school teachers which is mandatory for incumbent employees. It was then established at Lunglei and subsequently at all the other district capitals i.e. Saiha, Lawngtlai, Kolasib, Champhai, Serchhip and Mamit in 2005. Mizoram Board of School Education Mizoram Board of School Education is an autonomous academic body under the purview of the Department of Education, Government of Mizoram, India. It is an authority on conducting state level examinations of schools. The Mizoram State Board of School Examination evaluates students' progress by conducting two board examinations-one at the end of class 10 and the other at the end of class 12. Higher and technical education Higher education under the Government of Mizoram is administered by the Directorate of Higher and Technical Education. It became the Department of Higher & Technical Education in 1989, with its head office at MacDonald Hill, Zarkawt. It is responsible for the administration of collegiate education, technical education beyond the higher secondary level and technical education at the diploma level and language development. There are 20 colleges, 2 deficit colleges including one Aizawl Law College, two training colleges (B.Ed. Training College and Hindi Training College) and two polytechnics under its jurisdiction. Right to education The Government of Mizoram adopted the Right of Children to Free and Compulsory Education (RTE) Act, 2009, and based on it has enacted its own Mizoram Right of Children to Free and Compulsory Education Rules, 2011. The rules demand compulsory schooling for children aged between 6 and 14 years, special training for children in need of special development, provision of free textbooks and writing materials, free uniforms for BPL children. SSA in Mizoram Mizoram state education department started implementing Sarva Shiksha Abhiyan from the financial year 2000-2001. Funds were utilised for various activities, such as conducting household survey, training of teachers, preparation of district plan, purchase of vehicles, etc. At the initial stage, when only Saiha district was selected for starting pre-project activities, there was no society constituted for this programme and no district committee was formed either. As a result, District Education Officer (DEO), Saiha and supporting staff in consultation with Directorate of School Education, carried out the pre-project activities. "The Mizoram Sarva Shiksha Abhiyan Raja Mission Rules 2001" was passed by the Mizoram Legislative Assembly and the same was published in the Mizoram Gazette on 1 August 2001. 
In the same year the Mizoram Sarva Shiksha Abhiyan mission was registered under the societies registration (extension to Mizoram) Act 1976 (Mizoram Act No. 3 of 1977). Medium of instruction At elementary level, private schools use English as a medium of instruction, while the government run schools are mostly with Mizo medium. At higher levels, English is strictly the major language. Accreditation All recognised schools belong to one of the following accreditation systems: Central Board of Secondary Education - for all years of study Mizoram Board of School Education - for all years of study Higher education in Mizoram The major institutes for higher education in Mizoram are College of Veterinary Sciences and Animal Husbandry, Selesih, Central Agricultural University ICFAI University, Mizoram Mizoram University National Institute of Electronics and Information Technology, Aizawl National Institute of Technology Mizoram Most institutes are affiliated to Mizoram University including College of Teachers Education, Aizawl Government Aizawl College Government Aizawl North College Government Aizawl West College Government Champhai College Government Hrangbana College Government J. Buana College Government J. Thankima College Government Johnson College Government Kolasib College Government Mizoram Law College Government Saiha College Government Serchhip College Government T. Romana College Government Zirtiri Residential Science College Government Mamit College Government Saitual College Government Lawngtlai College Government Khawzawl College Government Hnahthial College Government Zawlnuam College Helen Lowry College of Arts & Commerce Kamalanagar College Lunglei Government College Mizoram College of Nursing Zoram Medical College National Institute of Electronics and Information Technology, Aizawl Pachhunga University College (constituent college) St. Xavier's College, Lengpui Regional Institute of Paramedical and Nursing Aizawl Higher and Technical Institute of Mizoram References External links Mizoram University Mizoram Board of School Education '''
1064072
https://en.wikipedia.org/wiki/Advanced%20Host%20Controller%20Interface
Advanced Host Controller Interface
The Advanced Host Controller Interface (AHCI) is a technical standard defined by Intel that specifies the operation of Serial ATA (SATA) host controllers, such as those in its motherboard chipsets, in a non-implementation-specific manner. The specification describes a system memory structure for computer hardware vendors to exchange data between host system memory and attached storage devices. AHCI gives software developers and hardware designers a standard method for detecting, configuring, and programming SATA/AHCI adapters. AHCI is separate from the SATA 3 Gbit/s standard, although it exposes SATA's advanced capabilities (such as hot swapping and native command queuing) such that host systems can utilize them. For modern solid state drives, the interface has been superseded by NVMe. The current version of the specification is 1.3.1.

Operating modes

Many SATA controllers offer selectable modes of operation: legacy Parallel ATA emulation (more commonly called IDE Mode), standard AHCI mode (also known as Native Mode), or vendor-specific RAID (which generally enables AHCI in order to take advantage of its capabilities). Intel recommends choosing RAID mode on its motherboards (which also enables AHCI) rather than AHCI/SATA mode, for maximum flexibility. Legacy mode is a software backward-compatibility mechanism intended to allow the SATA controller to run in legacy operating systems which are not SATA-aware or where a driver does not exist to make the operating system SATA-aware. When a SATA controller is configured to operate in IDE Mode, the number of storage devices per controller is usually limited to four (two IDE channels, master device and slave device with up to two devices per channel), compared to the maximum of 32 devices/ports when configured in AHCI mode. However, the chipset's SATA interfaces may emulate more than one "IDE controller" when configured in IDE Mode.

Operating system support

AHCI is supported out of the box on Windows Vista and later, Linux-based operating systems (since version 2.6.19 of the kernel), OpenBSD (since version 4.1), NetBSD (since version 4.0), FreeBSD (since version 8.0), macOS, GNU Mach, ArcaOS, eComStation (since version 2.1), and Solaris 10 (since version 8/07). DragonFlyBSD based its AHCI implementation on OpenBSD's and added extended features such as port multiplier support. Older versions of operating systems require hardware-specific drivers in order to support AHCI. Windows XP and older do not provide AHCI support out of the box.

System drive boot issues

Some operating systems, notably Windows Vista, Windows 7, Windows 8, Windows 8.1 and Windows 10, do not configure themselves to load the AHCI driver upon boot if the SATA controller was not in AHCI mode at the time the operating system was installed. Although this is an easily rectifiable condition, it remains an ongoing issue with the AHCI standard. The most prevalent symptom for an operating system that was installed in IDE mode (in some BIOS firmware implementations otherwise called 'Combined IDE mode') is that the system drive fails to boot, with an ensuing error message, if the SATA controller is inadvertently switched to AHCI mode in the BIOS after OS installation. In Microsoft Windows, if not rectified, the symptom is a boot loop which begins with a Blue Screen error. Technically speaking, this is an avoidable driver-configuration issue, but it has not been fixed.
As an interim resolution, Intel recommends changing the drive controller to AHCI or RAID before installing an operating system. (It may also be necessary to load chipset-specific AHCI or RAID drivers at installation time, for example from a USB flash drive.) On Windows Vista and Windows 7, this can be fixed by configuring the msahci device driver to start at boot time (rather than on-demand). Setting non-AHCI mode (i.e. IDE or Combined mode) in the BIOS will allow the user to boot into Windows, so that the required registry change can be performed. The user then has the option of continuing to use the system in Combined mode or switching to AHCI mode. On Windows 8 and Windows 10, this can likewise be fixed by forcing the correct drivers to reload during Safe Mode. In Windows 8, Windows 8.1 and Windows Server 2012, the controller driver has changed from msahci to storahci, and the procedure to switch to the AHCI controller is similar to that of Windows 7. On Windows 8, 8.1 and Windows Server 2012, changing from IDE mode to AHCI mode without first updating the registry will make the boot drive inaccessible (i.e. resulting in a recurring boot loop, which begins with a Blue Screen error). In Windows 10, after changing the controller to AHCI mode, if the OS is allowed to reboot a couple of times after the start of the boot loop, which starts with an INACCESSIBLE_BOOT_DEVICE BSOD, Windows presents recovery options. If the Startup Repair option is selected from the Advanced options, Windows attempts to fix the issue and the PC begins to function normally. A similar problem can occur on Linux systems if the AHCI driver is compiled as a kernel module rather than built into the kernel image, as it may not be included in the initrd (initial RAM disk) created when the controller is configured to run in Legacy Mode. The solution is either to build a new initrd containing the AHCI module, or to build the AHCI driver into the kernel image.

Power management

Power management is handled by the Aggressive Link Power Management (ALPM) protocol.

See also
Open Host Controller Interface (OHCI)
Universal Host Controller Interface (UHCI)
Enhanced Host Controller Interface (EHCI)
Extensible Host Controller Interface (XHCI)
Wireless Host Controller Interface (WHCI)
Host controller interface (USB, Firewire)

References

External links
"AHCI Specification". Intel.
"AHCI". OSDev Wiki

Serial ATA
48400926
https://en.wikipedia.org/wiki/Miitomo
Miitomo
Miitomo is a discontinued freemium social networking mobile app developed by Nintendo for iOS and Android devices. The app, Nintendo's first, allowed users to converse with friends by answering various questions, and featured Twitter and Facebook integration. The app was released in March 2016 for iOS and two months later for Android, launching alongside the My Nintendo service. Although it was initially a critical and commercial success, with over ten million downloads worldwide a month after release, its popularity dwindled soon after, and it was ultimately discontinued on May 9, 2018.

Features

Miitomo served as a conversational app where users could communicate with friends by answering questions on various topics, such as favourite foods or current interests. As in Tomodachi Life, on which some of the Miitomo development team had also worked, players used a Mii avatar, which they could create from scratch or obtain from their My Nintendo account or a QR code, and gave it a computer-generated voice and personality. Users could add friends to Miitomo by communicating directly with their device or by linking the app to their Facebook and Twitter accounts. By tapping their Mii, users could answer various questions which were shared with their friends, while tapping their thought bubble allowed them to hear answers from other friends. Users could visit, or be visited by, other friends and were able to answer certain questions that would only be shared with a specific friend. Players were also able to take pictures of their Mii, known as MiiFotos, which could be shared with friends as well as posted online. Performing various actions in the app would earn Miitomo Coins, which could also be obtained through in-app purchases. These coins could be spent on various clothing items that could be used to customize the user's Mii. Additional clothing items could be obtained through the Miitomo Drop minigame, which could be played by either spending Miitomo Coins or using Game Tickets earned through play. The app was tied into My Nintendo's rewards scheme, with users able to earn Miitomo Platinum Points by clearing missions such as changing their outfits daily or linking their accounts. Miitomo Platinum Points could be exchanged for special item rewards or additional Game Tickets, or could be combined with standard Platinum Points for other My Nintendo rewards.

Development

Miitomo was initially announced on October 25, 2015. As part of Nintendo's push for development on mobile devices, Nintendo partnered with DeNA, which was responsible for the service's infrastructure and My Nintendo integration, to leverage its understanding of mobile platforms. The app was first released in Nintendo's home market of Japan on March 17, 2016, and was later released in Western territories on March 31, 2016. The development team was headed up by Tomodachi Life's core developers, under the supervision of Super Metroid director Yoshio Sakamoto. Additionally, Nintendo announced plans to update the app further beyond the launch period. Although not required, users who linked their Nintendo Account to Miitomo enjoyed benefits such as cloud saving. The app launched alongside the My Nintendo service in all supported countries, and by the end of the month it became officially available in all sixteen countries that were eligible for My Nintendo's pre-registration period.
The app later became available in Mexico, Switzerland, and South Africa on June 30, 2016, and in Brazil on July 28, 2016. An update in November 2016 added five new features, enabling users to send messages to friends, customize their rooms, share their outfits with the world in "Style Central", publicly answer questions in "Answer Central", and create "Sidekick" Mii characters, which have their own rooms. Along with the major update, Miitomo launched in forty additional countries on the same day without any official announcements. In January 2018, Nintendo announced that the game would be discontinued, with its servers being shut down on May 9, 2018. Nintendo also stated that a browser-based Mii Maker tool would be created in late May following the discontinuation of Miitomo, which could be used to transfer and save Mii characters created within the app.

Reception

In Japan, Miitomo had one million users within three days of its launch, overtaking the instant messenger Line as the most downloaded free app on the Japanese App Store. In the week after its initial launch, Nintendo's shares grew by eight percent following the success of the app. In less than 24 hours after its worldwide launch on March 31, the app already had three million users globally, and also rose to the top of the U.S. App Store, overtaking Snapchat. Miitomo later had 1.6 million downloads within its first four days in the United States. By April 2016, Miitomo had a user base of over 10 million users, with 300 million conversations between friends and 20 million screenshots taken within the app itself. However, later observations conducted by SurveyMonkey found that by May 2016 only a quarter of the people who had downloaded the app still opened it regularly, and that it was used half as much as Candy Crush Saga and Clash Royale.

Notes

References

External links

2016 software 2016 video games Products and services discontinued in 2018 Android (operating system) games Android (operating system) software Communication software Computer-related introductions in 2016 Cross-platform software Defunct iOS software Free-to-play video games Inactive massively multiplayer online games Inactive online games IOS games IOS software Nintendo Entertainment Planning & Development games Nintendo games Video games developed in Japan Video games featuring protagonists of selectable gender Video games with downloadable content
2701254
https://en.wikipedia.org/wiki/Bachelor%20of%20Computer%20Science
Bachelor of Computer Science
The Bachelor of Computer Science or Bachelor of Science in Computer Science (abbreviated BCompSc or BCS or BSCS or B.Sc. CS) is a type of bachelor's degree awarded after collegiate study in computer science. In general, computer science degree programs emphasize the mathematical and theoretical foundations of computing. Typical requirements Because computer science is a wide field, courses required to earn a bachelor of computer science degree vary. A typical list of course requirements includes topics such as: Computer programming Programming paradigms Algorithms Data structures Logic & Computation Computer architecture Some schools may place more emphasis on mathematics and require additional courses such as: Linear algebra Calculus Probability theory and statistics Combinatorics and discrete mathematics Differential calculus and mathematics Beyond the basic set of computer science courses, students can typically choose additional courses from a variety of different fields, such as: Theory of computation Operating systems Numerical computation Compilers, compiler design Real-time computing Distributed systems Computer networking Data communication Computer graphics Artificial intelligence Human-computer interaction Information theory Software testing Information assurance Some schools allow students to specialize in a certain area of computer science. Related degrees Bachelor of Software Engineering Bachelor of Science in Information Technology Bachelor of Computing Bachelor of Information Technology Bachelor of Computer Information Systems See also Computer science Computer science and engineering References Computer Science Computer science education Computer science educators
7984526
https://en.wikipedia.org/wiki/Barcode%20system
Barcode system
A barcode system is a network of hardware and software, consisting primarily of mobile computers, printers, handheld scanners, infrastructure, and supporting software. Barcode systems are used to automate data collection where hand recording is neither timely nor cost-effective. Despite often being provided by the same company, barcode systems are not radio-frequency identification (RFID) systems. Many companies use both technologies as part of larger resource management systems.

History

In 1948, Bernard Silver was a graduate student at Drexel Institute of Technology in Philadelphia. A local food chain store owner had made an inquiry to the Drexel Institute about research into a method of automatically reading product information during checkout. Silver joined together with fellow graduate student Norman Joseph Woodland to work on a solution. Woodland's first idea was to use ultraviolet-light-sensitive ink. The team built a working prototype but decided that the system was too unstable and expensive. They went back to the drawing board. On October 20, 1949, Woodland and Silver filed their patent application for the "Classifying Apparatus and Method", describing their invention as "article classification… through the medium of identifying patterns".

The first commercially successful barcode reading system was patented in November 1969 by John F. Keidel for the General Atronics Corp. It was soon realized that some sort of industry standard would have to be set. In 1970, the Universal Grocery Products Identification Code or UGPIC was written by a company called Logicon Inc. The first company to produce bar code equipment for retail trade use (using UGPIC) was the American company Monarch Marking in 1970, and for industrial use, the British company Plessey Telecommunications was also first in 1970. UGPIC evolved into the U.P.C. symbol set, or Universal Product Code, which is still used in the United States. George J. Laurer is considered the inventor of the U.P.C., or Universal Product Code, which was invented in 1973. In June 1974, the first U.P.C. scanner was installed at a Marsh's supermarket in Troy, Ohio. The first product to have a barcode scanned was a packet of Wrigley's gum.

Hardware

There is a wide range of hardware manufactured today for use in barcode systems. The best-known brand of handheld scanners and mobile computers is Symbol, which is now a division of Motorola. Other manufacturers include Intermec, HHP (Hand Held Products), Microscan Systems, Unitech, Metrologic, PSC and PANMOBIL.

Software

While there is a range of hardware on the market, software is more difficult to find from the hardware manufacturers. Some ERP, MRP, and other inventory management software have built-in support for barcode reading, and some even allow the software to run directly on a mobile computer. Besides full management software, there are more than a few software development kits on the market that allow the developer to easily produce custom mobile interfaces and that handle the connection to the database. One such package is RFgen; another is PeopleVox. Then there is always the option of developing a custom software solution, using a language such as C++, C#, Java, Visual Basic.NET, and many others. Often, developing a custom interface using software such as RFgen, or developing new, personalized software, is the most effective method, since it allows the individual to have a solution fitted to their exact needs.
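To illustrate the kind of processing such custom barcode software performs before data reaches the database, the short C sketch below validates the check digit of a scanned UPC-A code, the symbology discussed in the History section. It is a hedged example rather than part of any product named above: the function and variable names are invented, and C is used only as a stand-in for the general-purpose languages the section mentions.

```c
/*
 * Minimal sketch of one check that custom barcode software typically performs
 * before inserting a scan into a database: validating the UPC-A check digit.
 * The check-digit rule follows the public UPC-A definition (weight 3 on the
 * odd positions, weight 1 on the even positions, counted from the left).
 */
#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Returns true if `code` is a 12-digit UPC-A string with a valid check digit. */
static bool upc_a_is_valid(const char *code)
{
    if (strlen(code) != 12)
        return false;

    int sum = 0;
    for (int i = 0; i < 12; i++) {
        if (!isdigit((unsigned char)code[i]))
            return false;
        int digit = code[i] - '0';
        /* i is 0-based, so even i corresponds to the odd positions 1,3,...,11. */
        sum += (i % 2 == 0) ? 3 * digit : digit;
    }
    return sum % 10 == 0;
}

int main(void)
{
    /* 036000291452 is often quoted as the UPC on the first scanned product,
       a packet of Wrigley's gum; the second string has a corrupted last digit. */
    const char *scans[] = { "036000291452", "036000291453" };

    for (int i = 0; i < 2; i++)
        printf("%s -> %s\n", scans[i],
               upc_a_is_valid(scans[i]) ? "valid, store in database"
                                        : "reject, re-scan");
    return 0;
}
```

The weighted-sum rule (weight 3 on odd positions, weight 1 on even positions, modulo 10) is what allows a scanner or back-end application to reject most single-digit misreads before they are ever stored.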
Typical system

A typical barcode system consists of some infrastructure, either wired or wireless, that connects some number of mobile computers, handheld scanners, and printers to one or many databases that store and analyze the data collected by the system. At some level there must be software to manage the system. The software may be as simple as code that manages the connection between the hardware and the database, or as complex as an ERP, MRP, or some other inventory management software.

References

Barcodes Automatic identification and data capture Mobile computers
69076711
https://en.wikipedia.org/wiki/Deepak%20Gupta%20%28researcher%29
Deepak Gupta (researcher)
Deepak Gupta is an Indian researcher, software developer, and writer. He is known as an author and editor of handbooks such as the Handbook of Computer Networks and Cyber Security and the Handbook of Research on Multimedia Cyber Security. Education Gupta earned a Master of Science in Computer Forensics and Cyber Security from the Illinois Institute of Technology in Chicago. As a graduate student, Gupta worked on and led VOIP-related research projects for Bell Labs. Career After finishing his graduate studies, Gupta worked at Sageworks. As a software developer, he created a centralized integration process for porting customers' data to the company's database. In 2011, Gupta founded CIAM and social API provider LoginRadius with his friend and co-founder Rakesh Soni. At first, it was based in Edmonton, Canada, but then later set up offices in San Francisco, California. It also has offices in Vancouver, Jaipur, and Hyderabad. Publications Gupta has written various articles and books on cybersecurity and various other IT topics. In 2018, Gupta contributed a chapter to the book Computer and Cyber Security: Principles, Algorithm, Applications, and Perspectives. He has served as an editor for the 2019 volume Handbook of Computer Networks and Cyber Security: Principles and Paradigms. Additionally, Gupta has served as an editor for the 2020 volume Handbook of Research on Multimedia Cyber Security. Gupta is also writing another book titled The Power of Digital Identity in 2021. He has also written articles for Forbes, FastCompany, DevOps.com, and others. Books Mishra, A., Gupta, B., & Gupta, D. (2018). Identity theft, malware, and social engineering in dealing with cybercrime. In Gupta, B., Agrawal, D. P., & Wang, H. (eds.). Computer and cyber security: Principles, algorithm, applications, and perspectives. (pp. 627–649). United States: CRC Press. Gupta, D., Agrawal, D. P., Perez, G. M., Gupta, B. B. (2019). Handbook of computer networks and cyber security: Principles and paradigms. Germany: Springer International Publishing. Gupta, B., & Gupta, D. (eds.). (2020). Handbook of research on multimedia cyber security. IGI Global, Information Science Reference (an imprint of IGI Global). Research Gupta's patents include: Method and system for defense against Distributed Denial-of-Service attack, Peraković, D., Gupta, B.B., Gupta, D., Mishra, A., AU2021102049A Method and system of performing a fine-grained searchable encryption for resource-constrained devices in m-health network, Nguyen, T., Castiglione, A., Gupta, B.B., Gupta, D., Mamta, AU2021102048A4 References Indian software engineers Scientists at Bell Labs Illinois Institute of Technology alumni Living people Year of birth missing (living people)
66295925
https://en.wikipedia.org/wiki/JamKazam
JamKazam
JamKazam is proprietary networked music performance software that enables real-time rehearsing, jamming and performing with musicians at remote locations, overcoming latency, the time lapse that occurs while (compressed) audio streams travel to and from each musician. JamKazam is available in free and premium versions; the free version is peer-to-peer only, while the paid version also offers the client-server model, choosing whichever route is faster. It also allows streaming to social media, and has pre-recorded "JamTracks" for subscribers to play along to. The company ran out of capital in 2017 but, like other software of this type, the service saw huge growth during the 2020 COVID-19 pandemic, and its founders raised over $100,000 through crowdfunding on GoFundMe.

See also
Jamstud.io
Jamulus
Ninjam / Ninbot
SonoBus
HPSJam
Comparison of Remote Music Performance Software

References

Audio software 2014 software Music software
47167561
https://en.wikipedia.org/wiki/Exploit%20kit
Exploit kit
An exploit kit is a collection of exploits packaged as a single, all-in-one tool for managing a variety of exploits. Exploit kits act as a kind of repository and make it easy for users without much technical knowledge to use exploits. Users can add their own exploits to a kit and use them alongside the pre-installed ones.

Details

One of the earlier kits was MPack, in 2006. Exploit kits are often designed to be modular and easy to use, enabling the addition of new vulnerabilities and the removal of existing ones. Exploit kits also provide a user interface for the person who controls them, which typically includes information on success rates and other types of statistics, as well as the ability to control their settings. A typical kit is a collection of PHP scripts that target security holes in commonly used programs such as Apple Quicktime or Mozilla Firefox. Widely used software such as Oracle Java and Adobe Systems products are targeted particularly often.

Exploit kits come packed with a variety of tools, ranging from tools that hunt for vulnerabilities to tools that automate the exploitation of the security loopholes they discover. They broadly follow the usual steps of an attack: the kit gathers information on the victim machine, finds vulnerabilities, determines the appropriate exploit, and delivers that exploit, which typically performs a silent drive-by download and executes malware, and may then run post-exploitation modules to maintain remote access to the compromised system. Lastly, as a measure to cover its tracks, a kit may use techniques such as erasing logs to avoid detection. Using a kit does not require any understanding of exploits, and very little computer proficiency. Kits may have a Web interface showing active victims and statistics, and may have a support period and updates, like commercial software.

Exploit kits are sold in cybercriminal circles, often with vulnerabilities already loaded onto them. A study by Solutionary's Security Engineering Research Team (SERT) found that about 70% of exploit kits released in Q4 2012 came from Russia, followed by China and Brazil, with 20% not attributed. A typical, relatively unsophisticated kit may cost US$500 per month. Licenses for advanced kits have been reported to cost as much as $10,000 per month. Exploit kits are often distributed in encoded form, rather than in plain PHP, to prevent unlicensed use and complicate anti-malware analysis.

Further research from Recorded Future's Threat Intelligence Team revealed that Adobe Flash Player provided six of the top 10 vulnerabilities used by exploit kits in 2016. Flash Player remained popular with cybercriminals even after Adobe increased its efforts to mitigate security issues. Kits continue to include exploits for vulnerabilities that were patched long ago, as there continues to be a significant population of unpatched computers.

Exploit kits tend to be deployed covertly on legitimate Web sites that have been hacked, unknown to the site operators and visitors. Named exploit kits include Angler, MPack, Phoenix, Blackhole, Crimepack, RIG, Nuclear, Neutrino, and Magnitude.

See also

References

Malware toolkits Spyware Computer security exploits
64102425
https://en.wikipedia.org/wiki/List%20of%20moths%20of%20Somaliland
List of moths of Somaliland
There are about 380 known moth species of Somaliland. The moths (mostly nocturnal) and butterflies (mostly diurnal) together make up the taxonomic order Lepidoptera. This is a list of moth species which have been recorded in Somaliland. Arctiidae Alpenus diversata (Hampson, 1916) Alpenus investigatorum (Karsch, 1898) Alpenus nigropunctata (Bethune-Baker, 1908) Amata alicia (Butler, 1876) Amata chrysozona (Hampson, 1898) Amata romeii Berio, 1941 Amata velatipennis (Walker, 1864) Amerila vitrea Plötz, 1880 Amsacta melanogastra (Holland, 1897) Amsacta paolii Berio, 1936 Amsactarctia radiosa (Pagenstecher, 1903) Amsactarctia venusta (Toulgoët, 1980) Apisa canescens Walker, 1855 Argina amanda (Boisduval, 1847) Argina astrea (Drury, 1773) Automolis meteus (Stoll, 1781) Estigmene griseata Hampson, 1916 Galtara somaliensis (Hampson, 1916) Micralarctia tolgoeti Watson, 1988 Nanna eningae (Plötz, 1880) Ochrota unicolor (Hopffer, 1857) Paralacydes arborifera (Butler, 1875) Paralacydes fiorii (Berio, 1937) Paralacydes minorata (Berio, 1935) Secusio discoidalis Talbot, 1929 Secusio strigata Walker, 1854 Spilosoma mediopunctata (Pagenstecher, 1903) Spilosoma semihyalina Bartel, 1903 Teracotona rhodophaea (Walker, 1865) Teracotona submacula (Walker, 1855) Trichaeta fulvescens (Walker, 1854) Trichaeta pterophorina (Mabille, 1892) Utetheisa amhara Jordan, 1939 Utetheisa pulchella (Linnaeus, 1758) Bombycidae Ocinara ficicola (Westwood & Ormerod, 1889) Cossidae Aethalopteryx tristis (Gaede, 1915) Nomima prophanes Durrant, 1916 Crambidae Bocchoris inspersalis (Zeller, 1852) Cnaphalocrocis trapezalis (Guenée, 1854) Crocidolomia pavonana (Fabricius, 1794) Herpetogramma mutualis (Zeller, 1852) Hodebertia testalis (Fabricius, 1794) Pyrausta phaenicealis (Hübner, 1818) Gelechiidae Pectinophora gossypiella (Saunders, 1844) Geometridae Acidaliastis subbrunnescens Prout, 1916 Aetheometra iconoclasis Prout, 1931 Antharmostes papilio Prout, 1912 Chiasmia calvifrons (Prout, 1916) Chiasmia inconspicua (Warren, 1897) Chiasmia semialbida (Prout, 1915) Chiasmia subcurvaria (Mabille, 1897) Cyclophora imperialis (Berio, 1937) Hemidromodes subbrunnescens Prout, 1915 Isturgia deerraria (Walker, 1861) Lomographa indularia (Guenée, 1858) Pachypalpella subalbata (Warren, 1900) Phaiogramma stibolepida (Butler, 1879) Prasinocyma perpulverata Prout, 1916 Scopula africana Berio, 1937 Scopula minoa (Prout, 1916) Scopula nepheloperas (Prout, 1916) Scopula sagittilinea (Warren, 1897) Sesquialtera ridicula Prout, 1916 Traminda acuta (Warren, 1897) Traminda neptunaria (Guenée, 1858) Tricentroscelis protrusifrons Prout, 1916 Zamarada mesotaenia Prout, 1931 Zamarada secutaria (Guenée, 1858) Zamarada torrida D. S. 
Fletcher, 1974 Gracillariidae Acrocercops bifasciata (Walsingham, 1891) Lasiocampidae Anadiasa colenettei Hartig, 1940 Anadiasa nicotrai Hartig, 1940 Anadiasa simplex Pagenstecher, 1903 Beralade fulvostriata Pagenstecher, 1903 Beralade sobrina (Druce, 1900) Bombycopsis hyatti Tams, 1931 Chionopsyche grisea Aurivillius, 1914 Gonometa negrottoi Berio, 1940 Odontocheilopteryx myxa Wallengren, 1860 Odontocheilopteryx politzari Gurkovich & Zolotuhin, 2009 Odontopacha fenestrata Aurivillius, 1909 Sena donaldsoni (Holland, 1901) Sena prompta (Walker, 1855) Stenophatna marshalli Aurivillius, 1909 Stoermeriana abyssinicum (Aurivillius, 1908) Stoermeriana collenettei Tams, 1931 Streblote finitorum Tams, 1931 Limacodidae Coenobasis chloronoton Hampson, 1916 Coenobasis postflavida Hampson, 1910 Gavara caprai Berio, 1937 Gavara leucomera Hampson, 1916 Gavara velutina Walker, 1857 Latoia vivida (Walker, 1865) Scotinochroa minor Hampson, 1916 Lymantriidae Aclonophlebia inconspicua Hampson, 1910 Casama impura (Hering, 1926) Casama vilis (Walker, 1865) Cropera confalonierii Berio, 1937 Croperoides negrottoi Berio, 1940 Dasychira daphne Hering, 1926 Knappetra fasciata (Walker, 1855) Laelia subrosea (Walker, 1855) Rhypopteryx rhodea (Hampson, 1905 Metarbelidae Metarbela erecta Gaede, 1929 Noctuidae Acantholipes circumdata (Walker, 1858) Achaea catella Guenée, 1852 Achaea lienardi (Boisduval, 1833) Achaea mercatoria (Fabricius, 1775) Acontia apatelia (Swinhoe, 1907) Acontia basifera Walker, 1857 Acontia berioi Hacker, Legrain & Fibiger, 2008 Acontia caeruleopicta Hampson, 1916 Acontia caffraria (Cramer, 1777) Acontia chiaromontei Berio, 1936 Acontia discoidea Hopffer, 1857 Acontia ectorrida (Hampson, 1916) Acontia hortensis Swinhoe, 1884 Acontia imitatrix Wallengren, 1856 Acontia insocia (Walker, 1857) Acontia lanzai (Berio, 1985) Acontia mascheriniae (Berio, 1985) Acontia miogona (Hampson, 1916) Acontia notha Hacker, Legrain & Fibiger, 2010 Acontia nubila Hampson, 1910 Acontia opalinoides Guenée, 1852 Acontia pergratiosa Berio, 1937 Acontia porphyrea (Butler, 1898) Acontia rigatoi Hacker, Legrain & Fibiger, 2008 Acontia semialba Hampson, 1910 Acontia somaliensis (Berio, 1977) Acontia sublactea Hacker, Legrain & Fibiger, 2008 Acontia transfigurata Wallengren, 1856 Acontia trimaculata Aurivillius, 1879 Adisura bella Gaede, 1915 Aegleoides paolii Berio, 1937 Aegocera brevivitta Hampson, 1901 Aegocera rectilinea Boisduval, 1836 Agrotis bialbifasciata Berio, 1953 Agrotis negrottoi Berio, 1938 Agrotis nicotrai Berio, 1945 Agrotis pictifascia (Hampson, 1896) Amazonides menieri Laporte, 1974 Amyna axis Guenée, 1852 Amyna punctum (Fabricius, 1794) Androlymnia clavata Hampson, 1910 Anoba trigonosema (Hampson, 1916) Anomis erosa (Hübner, 1818) Anomis flava (Fabricius, 1775) Anomis involuta Walker, 1857 Anomis mesogona (Walker, 1857) Anomis sabulifera (Guenée, 1852) Antarchaea digramma (Walker, 1863) Antarchaea fragilis (Butler, 1875) Anticarsia rubricans (Boisduval, 1833) Ariathisa abyssinia (Guenée, 1852) Asplenia melanodonta (Hampson, 1896) Athetis discopuncta Hampson, 1916 Athetis satellitia (Hampson, 1902) Audea melanoplaga Hampson, 1902 Beihania diascota (Hampson, 1916) Brevipecten calimanii (Berio, 1939) Brevipecten cornuta Hampson, 1902 Brevipecten discolora Hacker & Fibiger, 2007 Brevipecten marmoreata Hacker & Fibiger, 2007 Brevipecten tessenei Berio, 1939 Calesia zambesita Walker, 1865 Callhyccoda viriditrina Berio, 1935 Callopistria latreillei (Duponchel, 1827) Callopistria yerburii Butler, 1884 Caranilla 
uvarovi (Wiltshire, 1949) Catephia mesonephele Hampson, 1916 Cerocala albimacula Hampson, 1916 Cerocala grandirena Berio, 1954 Cerocala illustrata Holland, 1897 Cerocala munda Druce, 1900 Cerocala oppia (Druce, 1900) Cetola vicina de Joannis, 1913 Chrysodeixis acuta (Walker, [1858]) Chrysodeixis chalcites (Esper, 1789) Chrysodeixis eriosoma (Doubleday, 1843) Clytie tropicalis Rungs, 1975 Condica capensis (Guenée, 1852) Craterestra definiens (Walker, 1857) Crionica cervicornis (Fawcett, 1917) Ctenoplusia fracta (Walker, 1857) Ctenusa curvilinea Hampson, 1913 Cyligramma fluctuosa (Drury, 1773) Diparopsis castanea Hampson, 1902 Discestra quercii Berio, 1941 Donuctenusa fiorii Berio, 1940 Dysgonia algira (Linnaeus, 1767) Dysgonia torrida (Guenée, 1852) Ecthymia lemonia Berio, 1940 Epharmottomena sublimbata Berio, 1894 Erebus macrops (Linnaeus, 1767) Ethiopica hesperonota Hampson, 1909 Ethiopica ignecolora Hampson, 1916 Ethiopica phaeocausta Hampson, 1916 Eublemma daphoenoides Berio, 1941 Eublemma exigua (Walker, 1858) Eublemma galacteoides Berio, 1937 Eublemma olmii Berio, 1937 Eublemma postrosea Gaede, 1935 Eublemma reninigra Berio, 1945 Eublemma rivula (Moore, 1882) Eublemma scitula (Rambur, 1833) Eudocima materna (Linnaeus, 1767) Eulocastra tamsi Berio, 1938 Eustrotia decissima (Walker, 1865) Eustrotia extranea Berio, 1937 Eutelia discitriga Walker, 1865 Eutelia grisescens Hampson, 1916 Giubicolanta orientalis Berio, 1937 Grammodes exclusiva Pagenstecher, 1907 Grammodes stolida (Fabricius, 1775) Hadjina plumbeogrisea (Hampson, 1916) Helicoverpa zea (Boddie, 1850) Heliothis nubigera Herrich-Schäffer, 1851 Hemituerta mahdi (Pagenstecher, 1903) Heraclia thruppi (Butler, 1886) Heteropalpia robusta Wiltshire, 1988 Heteropalpia vetusta (Walker, 1865) Hypena abyssinialis Guenée, 1854 Hypena lividalis (Hübner, 1790) Hypena obacerralis Walker, [1859] Hypena obsitalis (Hübner, [1813]) Hypotacha bubo Berio, 1941 Hypotacha indecisa Walker, [1858] Hypotacha retracta (Hampson, 1902) Janseodes melanospila (Guenée, 1852) Leucania inangulata (Gaede, 1935) Leucania loreyi (Duponchel, 1827) Leucania melanostrota (Hampson, 1905) Leucania negrottoi (Berio, 1940) Leucania patrizii (Berio, 1935) Lophotavia incivilis Walker, 1865 Lyncestoides unilinea (Swinhoe, 1885) Masalia fissifascia (Hampson, 1903) Masalia leucosticta (Hampson, 1902) Masalia perstriata (Hampson, 1903) Matopo descarpentriesi (Laporte, 1975) Mentaxya muscosa Geyer, 1837 Microraphe fiorii Berio, 1937 Mimasura innotata Hampson, 1910 Mimasura pseudopyralis Berio, 1937 Mimasura unipuncta (Hampson, 1902) Mitrophrys menete (Cramer, 1775) Odontoretha featheri Hampson, 1916 Oedicodia melanographa Hampson, 1916 Oedicodia strigipennis Hampson, 1916 Ophiusa tirhaca (Cramer, 1777) Oraesia intrusa (Krüger, 1939) Oraesia provocans Walker, [1858] Ozarba albimarginata (Hampson, 1896) Ozarba albomediovittata Berio, 1937 Ozarba aloisiisabaudiae Berio, 1937 Ozarba boursini Berio, 1940 Ozarba deficiens Berio, 1935 Ozarba endoplaga Hampson, 1916 Ozarba endoscota Hampson, 1916 Ozarba negrottoi Berio, 1940 Ozarba nicotrai Berio, 1950 Ozarba parvula Berio, 1940 Ozarba pluristriata (Berio, 1937) Ozarba scorpio Berio, 1935 Ozarba semiluctuosa Berio, 1937 Ozarba semitorrida Hampson, 1916 Pericyma mendax (Walker, 1858) Pericyma metaleuca Hampson, 1913 Pericyma umbrina (Guenée, 1852) Phytometra pentheus Fawcett, 1916 Plecoptera reflexa Guenée, 1852 Plecopterodes clytie Gaede, 1936 Polydesma scriptilis Guenée, 1852 Pseudozarba bipartita (Herrich-Schäffer, 1950) Pseudozarba 
mianoides (Hampson, 1893) Pseudozarba opella (Swinhoe, 1885) Rabila albiviridis (Hampson, 1916) Radara subcupralis (Walker, [1866]) Rhesala moestalis (Walker, 1866) Rhynchina albiscripta Hampson, 1916 Sesamia cretica Lederer, 1857 Speia vuteria (Stoll, 1790) Sphingomorpha chlorea (Cramer, 1777) Spodoptera cilium Guenée, 1852 Spodoptera exempta (Walker, 1857) Spodoptera exigua (Hübner, 1808) Spodoptera littoralis (Boisduval, 1833) Spodoptera mauritia (Boisduval, 1833) Teucocranon microcallia Berio, 1937 Thermesia incedens (Walker, 1858) Thiacidas acronictoides (Berio, 1950) Thiacidas cerurodes (Hampson, 1916) Thiacidas fasciata (Fawcett, 1917) Thiacidas roseotincta (Pinhey, 1962) Thiacidas somaliensis Hacker & Zilli, 2010 Thiacidas triangulata (Gaede, 1939) Trichoplusia ni (Hübner, [1803]) Trichoplusia orichalcea (Fabricius, 1775) Trigonodes exportata Guenée, 1852 Trigonodes hyppasia (Cramer, 1779) Tuerta pastocyana Berio, 1940 Tytroca leucoptera (Hampson, 1896) Ulotrichopus primulina (Hampson, 1902) Ulotrichopus tinctipennis (Hampson, 1902) Xanthomera leucoglene (Mabille, 1880) Zethesides bettoni (Butler, 1898) Nolidae Arcyophora longivalvis Guenée, 1852 Bryophilopsis tarachoides Mabille, 1900 Earias insulana (Boisduval, 1833) Leocyma candace Fawcett, 1916 Leocyma discophora Hampson, 1912 Maurilia arcuata (Walker, [1858]) Nola doggeensis Strand, 1920 Notodontidae Antheua grisea (Gaede, 1928) Phalera imitata Druce, 1896 Scrancia discomma Jordan, 1916 Simesia balachowskyi Kiriakoff, 1973 Simesia olmii (Berio, 1937) Plutellidae Paraxenistis africana Mey, 2007 Psychidae Melasina psephota Durrant, 1916 Melasina recondita Durrant, 1916 Pterophoridae Agdistis arabica Amsel, 1958 Arcoptilia gizan Arenberger, 1985 Deuterocopus socotranus Rebel, 1907 Megalorhipida leucodactylus (Fabricius, 1794) Saturniidae Bunaeopsis oubie (Guérin-Méneville, 1849) Gynanisa maja (Klug, 1836) Ludia arguta Jordan, 1922 Ludia hansali Felder, 1874 Ludia jordani Bouyer, 1997 Melanocera menippe (Westwood, 1849) Orthogonioptilum ianthinum Rougeot, 1959 Yatanga smithi (Holland, 1892) Sesiidae Echidgnathia vitrifasciata (Hampson, 1910) Melittia pyropis Hampson, 1919 Melittia ursipes Walker, 1856 Sphingidae Acherontia atropos (Linnaeus, 1758) Agrius convolvuli (Linnaeus, 1758) Cephonodes hylas (Linnaeus, 1771) Ellenbeckia monospila Rothschild & Jordan, 1903 Hippotion celerio (Linnaeus, 1758) Hippotion moorei Jordan, 1926 Hippotion pentagramma (Hampson, 1910) Hippotion rebeli Rothschild & Jordan, 1903 Hippotion rosae (Butler, 1882) Hippotion socotrensis (Rebel, 1899) Hippotion stigma (Rothschild & Jordan, 1903) Leucostrophus alterhirundo d'Abrera, 1987 Likoma crenata Rothschild & Jordan, 1907 Microclanis erlangeri (Rothschild & Jordan, 1903) Nephele argentifera (Walker, 1856) Nephele funebris (Fabricius, 1793) Nephele xylina Rothschild & Jordan, 1910 Poliana micra Rothschild & Jordan, 1903 Poliodes roseicornis Rothschild & Jordan, 1903 Polyptychoides grayii (Walker, 1856) Polyptychoides niloticus (Jordan, 1921) Praedora marshalli Rothschild & Jordan, 1903 Pseudoclanis postica (Walker, 1856) Rufoclanis erlangeri (Rothschild & Jordan, 1903) Thyrididae Kuja hamatypex (Hampson, 1916) Tineidae Phthoropoea halogramma (Meyrick, 1927) Trichophaga abruptella (Wollaston, 1858) Tortricidae Ancylis spinicola Meyrick, 1927 Eucosma somalica Durrant, 1916 Xyloryctidae Eretmocera fasciata Walsingham, 1896 Zygaenidae Epiorna abessynica (Koch, 1865) Saliunca homochroa (Holland, 1897) References Hampson,G.F., 1916, Description of new species 
[in] Poulton, E. B., On a collection of moths made in Somaliland by Mr. W. Feather. Proc. zool. Soc. Lond. 1916: 91: 91-182, 2 Pls. Holland, W. J., 1895, List of the Lepidoptera collected in Somaliland, East Africa, by Mr. William Astor Chanler and Lieut. von Hoehnel. Proc. U.S. nat. Mus. 18: 259-264. Poulton, E. B., 1916, On a collection of moths made in Somaliland by Mr. W. Feather. Proc. zool. Soc. Lond. 1916: 91: 91-182, 2 Pls. Walsingham, M. A. & G. F. Hampson, 1896, On moths collected at Aden and in Somaliland. Proc. zool. Soc. Lond. 1896: 257-283, Pl. 10. External links Moths Moths Somaliland
976880
https://en.wikipedia.org/wiki/Killian%20documents%20controversy
Killian documents controversy
The Killian documents controversy (also referred to as Memogate or Rathergate) involved six documents containing false allegations about President George W. Bush's service in the Texas Air National Guard in 1972–73, allegedly typed in 1973. Dan Rather presented four of these documents as authentic in a 60 Minutes II broadcast aired by CBS on September 8, 2004, less than two months before the 2004 presidential election, but it was later found that CBS had failed to authenticate them. Several typewriter and typography experts soon concluded that they were forgeries. Lt. Col. Bill Burkett provided the documents to CBS, but he claims to have burned the originals after faxing them copies. CBS News producer Mary Mapes obtained the copied documents from Burkett, a former officer in the Texas Army National Guard, while pursuing a story about the George W. Bush military service controversy. Burkett claimed that Bush's commander Lieutenant Colonel Jerry B. Killian wrote them, which included criticisms of Bush's service in the Guard during the 1970s. In the 60 Minutes segment, Rather stated that the documents "were taken from Lieutenant Colonel Killian's personal files", and he falsely asserted that they had been authenticated by experts retained by CBS. The authenticity of the documents was challenged within hours on Internet forums and blogs, with questions initially focused on anachronisms in the typography, and the scandal quickly spread to the mass media. CBS and Rather defended the authenticity and usage of the documents for two weeks, but other news organizations continued to scrutinize the evidence, and USA Today obtained an independent analysis from outside experts. CBS finally repudiated the use of the documents on September 20, 2004. Rather stated, "if I knew then what I know now – I would not have gone ahead with the story as it was aired, and I certainly would not have used the documents in question", and CBS News President Andrew Heyward said, "Based on what we now know, CBS News cannot prove that the documents are authentic, which is the only acceptable journalistic standard to justify using them in the report. We should not have used them. That was a mistake, which we deeply regret." Several months later, a CBS-appointed panel led by Dick Thornburgh and Louis Boccardi criticized both the initial CBS news segment and CBS's "strident defense" during the aftermath. CBS fired producer Mapes, requested resignations from several senior news executives, and apologized to viewers by saying only that there were "substantial questions regarding the authenticity of the Killian documents". The story of the controversy was dramatized in the 2015 film Truth starring Robert Redford as Dan Rather and Cate Blanchett as Mary Mapes, directed by James Vanderbilt. It is based on Mapes' memoir Truth and Duty: The Press, the President, and the Privilege of Power. Former CBS President and CEO Les Moonves refused to approve the film, and CBS refused to air advertisements for it. A CBS spokesman stated that it contained "too many distortions, evasions, and baseless conspiracy theories". Background and timeline The memos, allegedly written in 1972 and 1973, were obtained by CBS News producer Mary Mapes and freelance journalist Michael Smith, from Lt. Col. Bill Burkett, a former US Army National Guard officer. Mapes and Dan Rather, among many other journalists, had been investigating for several years the story of Bush's alleged failure to fulfill his obligations to the National Guard. 
Burkett had received publicity in 2000, after making and then retracting a claim that he had been transferred to Panama for refusing "to falsify personnel records of [then-]Governor Bush", and in February 2004, when he claimed to have knowledge of "scrubbing" of Bush's Texas Air National Guard records. Mapes was "by her own account [aware that] many in the press considered Burkett an 'anti-Bush zealot', his credibility in question". Mapes and Smith made contact with Burkett in late August, and on August 24 Burkett offered to meet with them to share the documents he possessed, and later told reporters from USA Today "that he had agreed to turn over the documents to CBS if the network would arrange a conversation with the Kerry campaign", a claim substantiated by emails between Smith and Mapes detailing Burkett's additional requests for help with negotiating a book deal, security, and financial compensation. During the last week of August, Mapes asked Josh Howard, her immediate superior at CBS, for permission to facilitate contact between Burkett and the Kerry campaign; Howard and Mapes subsequently disputed whether such permission had been given. Two documents were provided by Burkett to Mapes on September 2 and four others on September 5, 2004. At that time, Burkett told Mapes that they were copies of originals that had been obtained from Killian's personal files via Chief Warrant Officer George Conn, another former member of the TexANG. Mapes informed Rather of the progress of the story, which was being targeted to air on September 8 along with footage of an interview with Ben Barnes, a former Lieutenant Governor of Texas, who would publicly state for the first time his opinion that Bush had received preferential treatment to get into the National Guard. Mapes had also been in contact with the Kerry campaign several times between late August and September 6, when she spoke with senior Kerry advisor Joe Lockhart regarding the progressing story. Lockhart subsequently stated he was "wary" of contact with Mapes at this stage, because if the story were true, his involvement might undermine its credibility, and if it were false, "he did not want to be associated with it." Lockhart called Burkett on September 6 at the number provided by Mapes, and both men stated they discussed Burkett's view of Kerry's presidential campaign strategy, not the existence of the documents or the related story. Content of the memos The documents claimed that Bush had disobeyed orders while in the Guard, and that undue influence had been exerted on Bush's behalf to improve his record. The documents included the following: An order directing Bush to submit to a physical examination. A note that Killian had grounded Bush from flying due to "failure to perform to USAF / TexANG standards", and for failure to submit to the physical examination as ordered. Killian also requested that a flight inquiry board be convened, as required by regulations, to examine the reasons for Bush's loss of flight status. A note of a telephone conversation with Bush in which Bush sought to be excused from "drill", The note records that Bush said he did not have the time to attend to his National Guard duties because he had a campaign to do (the Senate campaign of Winton M. Blount in Alabama). A note (labeled "CYA" for "cover your ass") claiming that Killian was being pressured from above to give Bush better marks in his yearly evaluation than he had earned. 
The note attributed to Killian says that he was being asked to "sugarcoat" Bush's performance. "I'm having trouble running interference [for Bush] and doing my job." USA Today also received copies of the four documents used by CBS, reporting this and publishing them the morning after the CBS segment, along with two additional memos. Burkett was assured by USA Today that they would keep the source confidential. CBS investigations prior to airing the segment Mapes and her colleagues began interviewing people who might be able to corroborate the information in the documents, while also retaining four forensic document experts, Marcel J. Matley, James J. Pierce, Emily Will, and Linda James, to determine the validity of the memos. On September 5, CBS interviewed Killian's friend Robert Strong, who ran the Texas Air National Guard administrative office. Among other issues covered in his interview with Rather and Mapes, Strong was asked if he thought the documents were genuine. Strong stated, "they are compatible with the way business was done at the time. They are compatible with the man that I remember Jerry Killian being." Strong had first seen the documents twenty minutes earlier and also said he had no personal knowledge of their content; he later claimed he had been told to assume the content of the documents was accurate. On September 6, CBS interviewed General Robert "Bobby" Hodges, a former officer at the Texas Air National Guard and Killian's immediate superior at the time. Hodges declined CBS' request for an on-camera interview, and Mapes read the documents to him over the telephone—or perhaps only portions of the documents; his recollection and Mapes's differed. According to Mapes, Hodges agreed with CBS's assessment that the documents were real, and CBS reported that Hodges stated that these were "the things that Killian had expressed to me at the time". However, according to Hodges, when Mapes read portions of the memos to him he simply stated, "well if he wrote them, that's what he felt", and he stated he never confirmed the validity of the content of the documents. General Hodges later asserted to the investigatory panel that he told Mapes that Killian had never, to his knowledge, ordered anyone to take a physical and that he had never been pressured regarding Lieutenant Bush, as the documents alleged. Hodges also claims that when CBS interviewed him, he thought the memos were handwritten, not typed, and following the September 8 broadcast, when Hodges had seen the documents and heard of claims of forgery by Killian's wife and son, he was "convinced they were not authentic" and told Rather and Mapes on September 10. Response of the document examiners Prior to airing, all four of the examiners responded to Mapes' request for document analysis, though only two to Mapes directly: Emily Will noted discrepancies in the signatures on the memos, and had questions about the letterhead, the proportional spacing of the font, the superscripted "th" and the improper formatting of the date. Will requested other documents to use for comparison. Linda James was "unable to reach a conclusion about the signature" and noted that the superscripted "th" was not in common use at the time the memos were allegedly written; she later recalled telling CBS, "the two memos she looked at 'had problems.'" James Pierce concluded that both of the documents were written by the same person and that the signature matched Killian's from the official Bush records. 
Only one of the two documents provided to Pierce had a signature. James Pierce wrote, "the balance of the Jerry B. Killian signatures appearing on the photocopied questioned documents are consistent and in basic agreement", and stated that based on what he knew, "the documents in question are authentic". However, Pierce also told Mapes he could not be sure if the documents had been altered because he was reviewing copies, not original documents. Marcel Matley's review was initially limited to Killian's signature on one of the Burkett documents, which he compared to signatures from the official Bush records. Matley "seemed fairly confident" that the signature was Killian's. On September 6, Matley was interviewed by Rather and Mapes and was provided with the other four documents obtained from CBS (he would prove to be the only reviewer to see these documents prior to the segment). Matley told Rather "he could not authenticate the documents due to the fact that they were poor quality copies." In the interview, Matley told Rather that with respect to the signatures, they were relying on "poor material" and that there were inconsistencies in the signatures, but also replied "Yes", when asked if it would be safe to say the documents were written by the person who signed them. Both Emily Will and Linda James suggested to Mapes that CBS contact typewriter expert Peter Tytell (son of Martin Tytell). Associate producer Yvonne Miller left him a voicemail on September 7; he returned the call at 11 am on September 8 but was told they "did not need him anymore". September 8 segment and initial reactions The segment entitled "For the Record" aired on 60 Minutes Wednesday on September 8. After introducing the documents, Rather said, in reference to Matley, "We consulted a handwriting analyst and document expert who believes the material is authentic." The segment introduced Lieutenant Robert Strong's interview, describing him as a "friend of Killian" (without noting he had not worked in the same location and without mentioning he had left the TexANG prior to the dates on the memos). The segment used the sound bite of Strong saying the documents were compatible with how business was done but did not include a disclaimer that Strong was told to assume the documents were authentic. In Rather's narration about one of the memos, he referred to pressure being applied on Bush's behalf by General Buck Staudt, and described Staudt as "the man in charge of the Texas National Guard". Staudt had retired from the guard a year and a half prior to the dates of the memos. Interview clips with Ben Barnes, former Speaker of the Texas House, created the impression "that there was no question but that President Bush had received Barnes' help to get into the TexANG", because Barnes had made a telephone call on Bush's behalf, when Barnes himself had acknowledged that there was no proof his call was the reason, and that "sometimes a call to General Rose did not work". Barnes' disclaimer was not included in the segment. Internet skepticism spreads Discussion quickly spread to various weblogs in the blogosphere, principally Little Green Footballs and Power Line. The initial analysis appeared in posts by "Buckhead", a username of Harry W. MacDougald, an Atlanta attorney who had worked for conservative groups such as the Federalist Society and the Southeastern Legal Foundation, and who had helped draft the petition to the Arkansas Supreme Court for the disbarment of President Bill Clinton. 
MacDougald questioned the validity of the documents on the basis of their typography, writing that the memos were "in a proportionally spaced font, probably Palatino or Times New Roman", and alleging that this was an anachronism: "I am saying these documents are forgeries, run through a copier for 15 generations to make them look old. This should be pursued aggressively." By the following day, questions about the authenticity of the documents were being publicized by the Drudge Report, which linked to the analysis at the Power Line blog in the mid-afternoon, and the story was covered on the website of the magazine The Weekly Standard and broke into mass media outlets, including the Associated Press and the major television news networks. It also was receiving serious attention from conservative writers such as National Review Online's Jim Geraghty. By the afternoon of September 9, Charles Foster Johnson of Little Green Footballs had posted his attempt to recreate one of the documents using Microsoft Word with the default settings. The September 9 edition of ABC's Nightline made mention of the controversy, along with an article on the ABC News website. Thirteen days after the controversy emerged, the national newspaper USA Today published a timeline of events surrounding the CBS story. According to that timeline, on the morning of September 9, the day after the 60 Minutes report, the broadcast was front-page news in the New York Times and Washington Post. Additionally, the story was given two-thirds of a full page in USA Today's news section, which mentioned that it had also obtained copies of the documents. However, the authenticity of the memos was not part of the story carried by major news outlets on that day. Also on that day, CBS published the reaction of Killian's son, Gary, to the documents, reporting that Gary Killian questioned one of the memos but stated that others "appeared legitimate" and characterized the collection as "a mixture of truth and fiction". In an interview with Fox News, Gary Killian expressed doubts about the documents' authenticity on the basis of his father's positive view of Bush. In 2006, the two Free Republic (Rathergate) bloggers, Harry W. MacDougald (username "Buckhead"), an Atlanta-based lawyer, and Paul Boley (username "TankerKC"), were awarded the Reed Irvine Award for New Media by the Accuracy in Media watchdog at the Conservative Political Action Conference (CPAC).
Also on September 10, The Dallas Morning News reported, "the officer named in one memo as exerting pressure to 'sugarcoat' Bush's military record was discharged a year and a half before the memo was written. The paper cited a military record showing that Col. Walter 'Buck' Staudt was honorably discharged on March 1, 1972, while the memo cited by CBS as showing that Staudt was interfering with evaluations of Bush was dated August 18, 1973." In response to the media attention, a CBS memo said that the documents were "backed up not only by independent handwriting and forensic document experts but by sources familiar with their content" and insisted that no internal investigation would take place. On the CBS Evening News of September 10, Rather defended the story and noted that its critics included "partisan political operatives". In the broadcast, Rather stated that Marcel Matley "analyzed the documents for CBS News. He believes they are real", and broadcast additional excerpts from Matley's September 6 interview showing Matley's agreement that the signatures appeared to be from the same source. Rather did not report that Matley had referred to them as "poor material", that he had only opined about the signatures or that he had specifically not authenticated the documents. Rather presented footage of the Strong interview, introducing it by stating Robert Strong "is standing by his judgment that the documents are real", despite Strong's lack of standing to authenticate them and his brief exposure to the documents. Rather concluded by stating, "If any definitive evidence to the contrary of our story is found, we will report it. So far, there is none." In an appearance on CNN that day, Rather asserted "I know that this story is true. I believe that the witnesses and the documents are authentic. We wouldn't have gone to air if they would not have been." However, CBS's Josh Howard spoke at length by telephone with typewriter expert Peter Tytell and later told the panel that the discussion was "an 'unsettling event' that shook his belief in the authenticity of the documents". Producer Mapes dismissed Tytell's concerns. A former vice president of CBS News, Jonathan Klein, dismissed the allegations of bloggers, suggesting that the "checks and balances" of a professional news organization were superior to those of individuals sitting at their home computers "in their pajamas". CBS's defense, apology As media coverage widened and intensified, CBS at first attempted to produce additional evidence to support its claims. On September 11, a CBS News segment stated that document expert Phillip Bouffard thought the documents "could have been prepared on an IBM Selectric Composer typewriter, available at the time". The Selectric Composer was introduced in 1966 for use by typesetting professionals to generate camera-ready copy; according to IBM archives describing this specialized equipment, "To produce copy which can be reproduced with 'justified', or straight left-and right-hand margins, the operator types the copy once and the composer computes the number of spaces needed to justify the line. As the operator types the copy a second time, the spaces are added automatically." Bouffard's comments were also cited by the Boston Globe in an article entitled "Authenticity backed on Bush documents". However, the Globe soon printed a retraction regarding the title. 
CBS noted that although General Hodges was now stating he thought the documents were inauthentic, "we believed General Hodges the first time we spoke with him." CBS reiterated: "we believe the documents to be genuine." By September 13, CBS's position had shifted slightly, as Rather acknowledged "some of these questions come from people who are not active political partisans", and stated that CBS "talked to handwriting and document analysts and other experts who strongly insist the documents could have been created in the '70s". The analysts and experts cited by Rather did not include the original four consulted by CBS. Rather instead presented the views of Bill Glennon and Richard Katz. Glennon, a former typewriter repairman with no specific credentials in typesetting beyond that job, was found by CBS after posting several defenses of the memos on blogs including Daily Kos and Kevin Drum's blog hosted at Washington Monthly. However, in the actual broadcast, neither interviewee asserted that the memos were genuine. As a result, some CBS critics began to accuse CBS of expert shopping. 60 Minutes Wednesday, one week later The original document examiners, however, continued to be part of the story. By September 15, Emily Will was publicly stating that she had told CBS that she had doubts about both the production of the memos and the handwriting prior to the segment. Linda James stated that the memos were of "very poor quality" and that she did not authenticate them, telling ABC News, "I did not authenticate anything and I don't want it understood that I did." In response, 60 Minutes Wednesday released a statement suggesting that Will and James had "misrepresented" their role in the authentication of the documents and had played only a small part in the process. CBS News concurrently amended its previous claim that Matley had authenticated the documents, saying instead that he had authenticated only the signatures. On CNN, Matley stated he had only verified that the signatures were "from the same source", not that they were authentically Killian's: "When I saw the documents, I could not verify the documents were authentic or inauthentic. I could only verify that the signatures came from the same source", Matley said. "I could not authenticate the documents themselves. But at the same time, there was nothing to tell me that they were not authentic." On the evening of September 15, CBS aired a segment that featured an interview with Marian Carr Knox, a secretary at Ellington Air Force Base from 1956 to 1979, and who was Killian's assistant on the dates shown in the documents. Dan Rather prefaced the segment on the recorded interview by stating, "She told us she believes what the documents actually say is, exactly, as we reported." In the aired interview, Knox expressed her belief that the documents reflected Killian's "sentiments" about Bush's service, and that this belief motivated her decision to reach out to CBS to provide the interview.CBS Sept 15 2004: Dan Rather Talks To Lt. Col. Killian's Ex-Secretary About Bush Memos In response to a direct question from Rather about the authenticity of the memo on Bush's alleged insubordination, she stated that no such memo was ever written; she further emphasized that she would have known if such a memo existed, as she had sole responsibility to type Killian's memos in that time period. At this point, she also admitted she had no firsthand knowledge of Bush's time in the Guard. 
However, controversially, Knox said later in the interview, "The information in here was correct, but it was picked up from the real ones." She went on to say, "I probably typed the information and somebody picked up the information some way or another." The New York Times' headline report on this interview, including the phrase "Fake but Accurate", created an immediate backlash from critics of CBS's broadcast. The conservative-leaning Weekly Standard proceeded to predict the end of CBS's news division. At this time, Dan Rather first acknowledged there were problems in establishing the validity of the documents used in the report, stating: "If the documents are not what we were led to believe, I'd like to break that story." CBS also hired a private investigator to look into the matter after the story aired and the controversy began. Copies of the documents were first released to the public by the White House. Press Secretary Scott McClellan stated that the memos had been provided to them by CBS in the days prior to the report and that, "We had every reason to believe that they were authentic at that time." The Washington Post reported that at least one of the documents obtained by CBS had a fax header indicating it had been faxed from a Kinko's copy center in Abilene, Texas, leading some to trace the documents back to Burkett. CBS states that use of the documents was a mistake As a growing number of independent document examiners and competing news outlets reported their findings about the documents, CBS News stopped defending the documents and began to report on the problems with their story. On September 20 they reported that their source, Bill Burkett, "admits that he deliberately misled the CBS News producer working on the report, giving her a false account of the documents' origins to protect a promise of confidentiality to the actual source." While the network did not state that the memos were forgeries, CBS News president Andrew Heyward said, "Based on what we now know, CBS News cannot prove that the documents are authentic, which is the only acceptable journalistic standard to justify using them in the report. We should not have used them. That was a mistake, which we deeply regret." Dan Rather stated, "if I knew then what I know now – I would not have gone ahead with the story as it was aired, and I certainly would not have used the documents in question." In an interview with Rather, Burkett admitted that he misled CBS about the source of the documents, and then claimed that the documents had come to him from someone named "Lucy Ramirez", whom CBS was unable to contact or identify as an actual person. Burkett said he then made copies at the local Kinko's and burned the original documents. Investigations by CBS, CNN and the Washington Post failed to turn up evidence of "Lucy Ramirez" being an actual person (Jonathan V. Last, "Whitewash", The Weekly Standard, January 10, 2005). On September 21, CBS News addressed the contact with the Kerry campaign in its statement, saying "it is obviously against CBS News standards and those of every other reputable news organization to be associated with any political agenda." The next day the network announced it was forming an independent review panel to perform an internal investigation.
Review panel established Soon after, CBS established a review panel "to help determine what errors occurred in the preparation of the report and what actions need to be taken". Dick Thornburgh, a Republican former governor of Pennsylvania and United States Attorney General under George H.W. Bush, and Louis Boccardi, retired president and chief executive officer and former executive editor of the Associated Press, made up the two-person review board. CBS also hired a private investigator, a former FBI agent named Erik T. Rigler, to gather further information about the story. Findings On January 5, 2005, the Report of the Independent Review Panel on the September 8, 2004, 60 Minutes Wednesday Segment "For the Record" Concerning President Bush's Air National Guard Service was released. The purpose of the panel was to examine the process by which the September 8 Segment was prepared and broadcast, to examine the circumstances surrounding the subsequent public statements and news reports by CBS News defending the segment, and to make any recommendations it deemed appropriate. Among the Panel's conclusions were the following: The most serious defects in the reporting and production of the September 8 Segment were: The failure to obtain clear authentication of any of the Killian documents from any document examiner; The false statement in the September 8 Segment that an expert had authenticated the Killian documents when all he had done was authenticate one signature from one document used in the Segment; The failure of 60 Minutes Wednesday management to scrutinize the publicly available, and at times controversial, background of the source of the documents, retired Texas Army National Guard Lieutenant Colonel Bill Burkett; The failure to find and interview the individual who was understood at the outset to be Lieutenant Colonel Burkett's source of the Killian documents, and thus to establish the chain of custody; The failure to establish a basis for the statement in the Segment that the documents "were taken from Colonel Killian's personal files"; The failure to develop adequate corroboration to support the statements in the Killian documents and to carefully compare the Killian documents to official TexANG records, which would have identified, at a minimum, notable inconsistencies in content and format; The failure to interview a range of former National Guardsmen who served with Lieutenant Colonel Killian and who had different perspectives about the documents; The misleading impression conveyed in the Segment that Lieutenant Strong had authenticated the content of the documents when he did not have the personal knowledge to do so; The failure to have a vetting process capable of dealing effectively with the production speed, significance and sensitivity of the Segment; and The telephone call prior to the Segment's airing by the producer of the Segment to a senior campaign official of Democratic presidential candidate John Kerry – a clear conflict of interest – that created the appearance of a political bias. Once questions were raised about the September 8 segment, the reporting thereafter was mishandled and compounded the damage done. 
Among the more egregious shortcomings during the Aftermath were: The strident defense of the September 8 Segment by CBS News without adequately probing whether any of the questions raised had merit; Allowing many of the same individuals who produced and vetted the by-then controversial September 8 Segment to also produce the follow-up news reports defending the Segment; The inaccurate press statements issued by CBS News after the broadcast of the Segment that the source of the documents was "unimpeachable" and that experts had vouched for their authenticity; The misleading stories defending the Segment that aired on the CBS Evening News after September 8 despite strong and multiple indications of serious flaws; The efforts by 60 Minutes Wednesday to find additional document examiners who would vouch for the authenticity of the documents instead of identifying the best examiners available regardless of whether they would support this position; and Preparing news stories that sought to support the Segment, instead of providing accurate and balanced coverage of a raging controversy. Panel's view of the documents The Panel did not undertake a thorough examination of the authenticity of the Killian documents, but consulted Peter Tytell, a New York City-based forensic document examiner and typewriter and typography expert. Tytell had been contacted by 60 Minutes producers prior to the broadcast, and had informed associate producer Yvonne Miller and executive producer Josh Howard on September 10 that he believed the documents were forgeries. The Panel report stated, "The Panel met with Peter Tytell, and found his analysis sound in terms of why he thought the documents were not authentic ... The Panel reaches no conclusion as to whether Tytell was correct in all respects." Aftermath The controversy had long-reaching personal, political and legal consequences. In a 2010 issue of TV Guide, Rather's report was ranked on a list of TV's ten biggest "blunders". CBS personnel and programming changes CBS terminated Mary Mapes and demanded the resignations of 60 Minutes Wednesday Executive Producer Josh Howard and Howard's top deputy, Senior Broadcast Producer Mary Murphy, as well as Senior Vice President Betsy West, who had been in charge of all prime time newscasts. Murphy and West resigned on February 25, 2005, and after settling a legal dispute regarding his level of responsibility for the segment, Josh Howard resigned on March 25, 2005. Dan Rather announced on November 23, 2004, that he would step down in early 2005 and on March 9, his 24th anniversary as anchor, he left the network. It is unclear whether or not Rather's retirement was directly caused by this incident. Les Moonves, CEO of CBS, stated "Dan Rather has already apologized for the segment and taken responsibility for his part in the broadcast. He voluntarily moved to set a date to step down from the CBS Evening News in March of 2005." He added, "We believe any further action would not be appropriate." CBS was originally planning to show a 60 Minutes report critical of the Bush administration justification for going to war in Iraq. This segment was replaced with the Killian documents segment. CBS further postponed airing the Iraq segment until after the election due to the controversy over the Killian documents. "We now believe it would be inappropriate to air the report so close to the presidential election", CBS spokesman Kelli Edwards said in a statement. 
After the Killian documents controversy, the show was renamed 60 Minutes Wednesday to differentiate it from the original 60 Minutes Sunday edition, and reverted to its original title on July 8, 2005, when it was moved to the 8 p.m. Friday timeslot. It was cancelled in 2005 due to low ratings. Mapes's and Rather's view of the documents On November 9, 2005, Mary Mapes gave an interview to ABC News correspondent Brian Ross. Mapes stated that the documents have never been proved to be forgeries. Ross expressed the view that the responsibility is on the reporter to verify their authenticity. Mapes responded with, "I don't think that's the standard." This stands in contrast to the statement of the president of CBS News that proof of authenticity is "the only acceptable journalistic standard." Also in November 2005, Mapes told readers of the Washington Post, "I personally believe the documents are not false" and "I was fired for airing a story that could not definitively be proved false but made CBS's public relations department cringe." As of September 2007, Mapes continued to defend the authenticity of the documents: "the far right blogosphere bully boys ... screamed objections that ultimately proved to have no basis in fact." On November 7, 2006, Rather defended the report in a radio interview, and rejected the CBS investigation's findings. In response, CBS spokesman Kevin Tedesco told the Associated Press, "CBS News stands by the report the independent panel issued on this matter and to this day, no one has been able to authenticate the documents in question." Dan Rather continued to stand by the story, and in subsequent interviews stated that he believed that the documents have never conclusively been proven to be forgeries – and that even if the documents are false, the underlying story is true. Rather's lawsuit against CBS/Viacom On September 19, 2007, Rather filed a $70 million lawsuit against CBS and its former corporate parent, Viacom, claiming they had made him a "scapegoat" over the controversy caused by the 2004 60 Minutes Wednesday report that featured the Killian documents. The suit names as defendants CBS and its CEO, Leslie Moonves; Viacom and Sumner Redstone, chairman of both Viacom and CBS Corporation; and Andrew Heyward, the former president of CBS News. In January 2008, the legal teams for Rather and CBS reached an agreement to produce for Rather's attorneys "virtually all of the materials" related to the case, including the findings of Erik T. Rigler's report to CBS about the documents and the story. On September 29, 2009, the New York State Court of Appeals dismissed Rather's lawsuit and stated that the lower court should have honored CBS's request to throw out the entire lawsuit instead of just throwing out parts. Authentication issues No generally recognized document experts have positively authenticated the memos. Since CBS used only faxed and photocopied duplicates, authentication to professional standards is impossible, regardless of the provenance of the originals. Document experts have challenged the authenticity of the documents as photocopies of valid originals on a variety of grounds, ranging from anachronisms in their typography and their quick reproducibility using modern technology to errors in their content and style (Kurtz, Howard, "Document Experts Say CBS Ignored Memo 'Red Flags'", The Washington Post,
retrieved April 2006). The CBS independent panel report did not specifically take up the question of whether the documents were forgeries, but retained a document expert, Peter Tytell, who concluded the documents used by CBS were produced using current word processing technology. Tytell concluded ... that (i) the relevant portion of the Superscript Exemplar was produced on an Olympia manual typewriter, (ii) the Killian documents were not produced on an Olympia manual typewriter and (iii) the Killian documents were produced on a computer in Times New Roman typestyle [and that] the Killian documents were not produced on a typewriter in the early 1970s and therefore were not authentic. Accusations of bias Some critics of CBS and Dan Rather argued that by proceeding with the story when the documents had not been authenticated, CBS was exhibiting media bias and attempting to influence the outcome of the 2004 U.S. presidential election. Freelance journalist Michael Smith had emailed Mapes, asking, "What if there was a person who might have some information that could possibly change the momentum of an election but we needed to get an ASAP book deal to help get us the information?" Mapes replied, "that looks good, hypothetically speaking of course." The Thornburgh–Boccardi report found that Mapes' contact with Kerry adviser Joe Lockhart was "highly inappropriate", and that it "crossed the line as, at a minimum, it gave the appearance of a political bias and could have been perceived as a news organizations' assisting a campaign as opposed to reporting on a story"; however, the Panel did not "find a basis to accuse those who investigated, produced, vetted or aired the Segment of having a political bias". In a later interview with The Washington Post, when asked about the issue of political bias, review panel member Louis Boccardi said "bias is a hard thing to prove". The panel concluded that the problems occurred "primarily because of a rush to air that overwhelmed the proper application of the CBS News Standards". Some Democratic critics of Bush suggested that the memos were produced by the Bush campaign to discredit the media's reporting on Bush's National Guard service. The chairman of the Democratic National Committee, Terry McAuliffe, suggested that the memos might have originated with long-time Bush strategist Karl Rove. McAuliffe told reporters on September 10, "I can tell you that nobody at the Democratic National Committee or groups associated with us were involved in any way with these documents. I'm just saying that I would ask Karl Rove the same question." McAuliffe later pointed out that Rove and another Republican operative, Ralph E. Reed, Jr., had "a known history of dirty tricks", and he asked whether Republican National Committee chairman Ed Gillespie would rule out any involvement by GOP consultant Roger Stone. At a community forum in Utica, New York in 2005, U.S. Representative Maurice Hinchey (D-NY) pointed out that the controversy served Rove's objectives: "Once they did that, then it undermined everything else about Bush's draft dodging. ... That had the effect of taking the whole issue away." After being criticized, Hinchey responded, "I didn't allege I had any facts. I said this is what I believe and take it for what it's worth." Rove and Stone have denied any involvement. In a 2008 interview in The New Yorker, Stone said "It was nuts to think I had anything to do with those documents ... [t]hose papers were potentially devastating to George Bush.
You couldn't put them out there assuming that they would be discredited. You couldn't have assumed that this would rebound to Bush's benefit. I believe in bank shots, but that one was too big a risk." See also George W. Bush military service controversy Questioned document examination Footnotes External links Killian documents PDF files These are the Killian documents supplied to CBS Reports by Bill Burkett: Memorandum, May 4, 1972 (CBS News) Memo to File, May 19, 1972 (CBS News) Memorandum For Record, August 1, 1972 (CBS News) Memo to File, August 18, 1973 (CBS News) USA Today Killian documents (USA Today, six memos in one.pdf file) Bush documents from the TexANG archives Page 31 is a 3 November 1970 memo from the office of Lt Col Killian on promotion of Lt Bush: Bush enlistment documents (USA Today) 60 Minutes II, September 8 transcript Transcript of CBS segment Dan Rather interviews Marion Carr Knox - September 15, 2004 YouTube Statements of the CBS document examiners Marcel B. Matley, September 14, 2004 James J. Pierce, September 14, 2004 Bill Glennon, September 13, 2004 Richard Katz, September 13, 2004 Thornburgh–Boccardi report [Link to site supposedly containing the exhibits and appendices, but links from that site don't work] Document analysis A Pentagon memo next to one of CBS's Killian memos — The Washington Post, September 14, 2004 The Paper Trail: A Comparison of Documents The Washington Post, September 18, 2004 Graphic comparison of all the CBS memos with officially released Killian memos The Washington Post, September 19, 2004 "Blog-gate" Columbia Journalism Review "CJR Fallacies", response by Joseph Newcomer "Are the Bush Documents Fakes?", analysis by Richard Polt Overview Timeline at USA Today "Scoops and skepticism: How the story unfolded" — timeline from USA Today — September 21, 2004 Further reading Truth and Duty: The Press, the President, and the Privilege of Power (), by Mary Mapes, November 2005, St. Martin's Press, In other media Truth, 2015 film starring Cate Blanchett and Robert Redford, whose story is based on the Mapes book above about this controversy. "Dan Rather interviews Marion Carr Knox - September 15, 2004" YouTube 60 Minutes Memoranda 2004 controversies 2004 in American politics Political forgery Journalistic scandals Mass media-related controversies in the United States 2004 United States presidential election Political controversies in the United States 2000s controversies in the United States
60789181
https://en.wikipedia.org/wiki/Nintendo%20of%20America%2C%20Inc.%20vs.%20Blockbuster%20LLC.
Nintendo of America, Inc. vs. Blockbuster LLC.
On August 4, 1989, Nintendo of America filed a copyright lawsuit against Blockbuster LLC in the United States District Court in Newark, New Jersey, seeking injunctive relief and damages. The dispute arose after multiple Blockbuster locations across the United States were accused of photocopying and reproducing Nintendo's video game manuals when renting out Nintendo titles. The game company claimed this infringed its intellectual property, and it also hoped to change the law on video game rentals. After a year of dispute, the matter was resolved in Blockbuster's favor, as the House and Senate determined that the law on video game rentals would remain unchanged. Background Nintendo in 1989 In 1989 Nintendo prioritised the release of the Game Boy handheld system, which appeared on the Japanese market on April 21, 1989 alongside the title Super Mario Land. The system was later released on July 31, 1989 in North America, with the president of Nintendo of America, Minoru Arakawa, securing a deal to bundle the Game Boy with the third-party title Tetris; the console launched to instant success. In Japan, Nintendo released new titles such as Mother and the Zelda Game & Watch. In North America, Nintendo of America introduced the Dragon Warrior franchise, a series of games that had previously been released in Japan on the NES. It is estimated that in 1989 Nintendo sold $2.7 billion in video game hardware and software, accounting for 80% of the market. This number does not include the toys, apparel and miscellaneous merchandise that Nintendo was beginning to introduce into the market. Peter Main, the vice president of Nintendo of America at the time, was named "Marketer of the Year" by Adweek's Marketing Week magazine, and it was estimated that 15 million homes in America were in possession of a Nintendo product, rising to 20 million when the post-Christmas season is taken into account. Rick Anguila, editor of Toy and Business World, said of Nintendo and its products in America: "The kids of America are saying 'This is great, we've got to have one.' For boys in this country between the ages of 8 and 15, not having a Nintendo is like not having a baseball bat." One of every four houses in America had a Nintendo product. Nintendo's Previous Legal History Nintendo of Japan and Nintendo of America have been recognised for having a 'complicated' legal history, involving cases both brought by the company and brought against it. In 1984, Nintendo was sued by Universal Studios, which claimed that the game Donkey Kong infringed its trademark in the movie King Kong, a film released over 50 years prior to the suit. The United States District Court for the Southern District of New York ruled in favor of Nintendo, after it was shown that Universal had itself argued in a prior legal case that the characters and plot of the film were in the public domain and not wholly owned by Universal. In 1989, Nintendo had a legal battle with the company Tengen over the Tetris game copyrights, which it won, forcing Tengen to recall all unlicensed copies of the game. Nintendo sued Tengen again in November over a number of unlicensed Nintendo games, and won again. Blockbuster in 1989 Founded by David Cook only some four years earlier, Blockbuster was slowly growing, with 19 stores established across the United States.
John Melk, an associate of Wayne Huizenga of Waste Management, saw the success of the business and was attracted to its family-friendly image and business model. He pushed Huizenga to invest, and in 1987 Huizenga agreed to buy into the company, despite his reservations about entering the video rental industry. Combining the techniques they had learned at Waste Management with Ray Kroc's business expansion model, Huizenga and Melk worked to expand Blockbuster substantially across America. In 1989, it was estimated that a new store was opening approximately every 24 hours. This included the first UK store, which opened in March 1989 on Walworth Road, London. Blockbuster's estimated revenue in 1989 was over $600 million, cementing the brand as the 'king' of the video rental industry; its closest rival, West Coast Video, turned $180 million in profit.

Copyright Law and Video Games

For Nintendo, the security of its products had always been highly valued. In 1984, the Recording Industry Association of Japan successfully had the Rental Right, or Right of Lending, written into Japanese copyright law, allowing the makers of a product or brand to specify how their products could be reproduced or used by rental stores and services, under their own terms and conditions. In line with this law, Nintendo did not allow any rental stores in Japan to rent its products, and it maintained a strict 'no copy' policy covering all of its products, brands and associations. In the United States in 1989, however, renting video games was treated differently, as the Copyright Law of the United States had no clause specific to video game rentals. The music industry was protected by the Record Rental Amendment of 1984, and music rentals were completely banned. Video, television and movies had created an extremely successful rental business, with 1988 annual revenue estimated at over $5 billion, surpassing box office revenue of $4.5 billion. Video rental stores sought to profit from the success of video games, and created rental sections to rent out software and titles. Opinion about video game rental was divided, especially between general consumers and those in the industry. Howard Lincoln, then the vice president of Nintendo of America, went on record in David Sheff's book Game Over: How Nintendo Zapped an American Industry, Captured Your Dollars, & Enslaved Your Children and called video game rental "nothing less than commercial rape." General consumers saw many benefits to renting a video game or its software: a full game cost a fair amount of money, and renting gave people the opportunity to try a game before investing in it. The computer software industry was facing a similar issue with rentals, and sought protection through the Computer Software Rental Amendments Act. In theory, this bill was to cover video game software as well, but the Video Software Dealers Association (VSDA) had video games excluded from the final draft. The VSDA was so intent on keeping video games out of the bill that it was prepared to work to defeat the bill should video games be included. The exclusion was justified by the belief that Nintendo cartridges were extremely difficult to copy and reproduce, unlike ordinary computer software that could be replicated with hard drives and discs.
Lawsuit

On July 31, 1989, Nintendo of America sent a letter of request to Blockbuster LLC after learning that some of its stores had been photocopying and reproducing Nintendo's video game manuals to pair with game rentals. On August 4, 1989, Nintendo, claiming that the request had been ignored, filed a formal lawsuit against Blockbuster in New Jersey federal court, alleging that its copyrights had been infringed. The suit charged at least one company-owned store and three franchises within New Jersey with having photocopied game manuals and rented them out to consumers with their respective games. Nintendo's general counsel Lynn Hvalsoe said in a prepared statement, "The photocopying of Nintendo's game instruction manuals by video rental outlets infringes Nintendo's registered copyrights... We intend to stop this illegal practice". Blockbuster reacted negatively to the suit, calling it a "reflection of the frustration [Nintendo] feel" after video games had been excluded from the Computer Software Rental Amendments Act. Blockbuster then sought alternatives to photocopying the manuals, as it still needed replacements when renters returned games with the booklets lost or damaged. Robert A. Guerin, Blockbuster's vice president of national development, stated, "If need be, we might even consider writing our own [manuals]." The company did not pursue this course of action, deeming it a waste of resources when video games made up only 3% of annual profits. Instead, it replaced the original Nintendo manuals with generic copies made by third-party companies.

Outcome

On August 9, Blockbuster filed court papers agreeing to Nintendo's request to cease photocopying manuals, and released a statement claiming it had already contacted Blockbuster store managers to stop copying the manuals when the original letter of request was received on July 31. Both companies settled the matter out of court for an undisclosed amount, but Nintendo still hoped to have video games included in the Computer Software Rental Amendments Act. In September 1990, when the legislation was revisited, the House and Senate sided with Blockbuster, agreeing that the rental of video games would remain widely allowed. In November, both houses of Congress passed the Computer Software Rental Amendments Act; video game cartridges remained excluded from its rental restrictions. Nintendo and Blockbuster repaired the business relationship damaged during the lawsuit and collaborated throughout the 1990s. This included Nintendo allowing Blockbuster to rent and sell a number of games made exclusive to the rental company's stores, and the 1994 Blockbuster Video World Game Championship making use of the Super NES console throughout the competition's preliminary, regional and final rounds. For many years, Blockbuster continued to rent video game consoles and software, Nintendo having finally backed down from its battle to ban all rentals. However, owing to competition from mail-order services such as Netflix, video-on-demand services and Redbox automated kiosks, Blockbuster suffered heavy losses in the 2000s, and on September 23, 2010, it filed for bankruptcy protection. Nintendo continues to grow as a leading video game corporation, with over 5,500 employees working internationally as of 2017 and net sales of $9.95 billion in 2018. The company also continues to enforce copyright law strictly, with a large number of cases brought after 1989 with varied success. The most notable include Lewis Galoob Toys, Inc. v. Nintendo of America, Inc.,
in which Nintendo pursued the Canadian gaming company Camerica Ltd. over the Game Genie accessory for the NES. Nintendo sued Camerica and its U.S. counterpart Galoob several times, including in courts in Canada, New York and California, and lost every suit it filed. The case against NTDEC is an example of one of Nintendo's greatest successes in court, with the company awarded $24 million in damages. The case stemmed from the arrest of two NTDEC employees in 1991, who were convicted of large-scale piracy for reproducing NES games and using the Nintendo trademark. In 1993, the case was settled and Nintendo was awarded over $24 million, subsequently driving NTDEC out of business.

See also
Copyright and video games
Lewis Galoob Toys, Inc. v. Nintendo of America, Inc.
Universal City Studios, Inc. v. Nintendo Co., Ltd.
NTDEC

References

Blockbuster LLC Copyright case law Nintendo Video game copyright law United States District Court for the District of New Jersey cases Copyright infringement of software 1990 in United States case law 1990 in video gaming
48590779
https://en.wikipedia.org/wiki/Mobile%20procurement
Mobile procurement
Mobile procurement is mobile business software that helps organizations streamline their procurement process from a mobile device. Features of mobile procurement software include mobile purchase order creation, on-demand notifications, and real-time analytics. Mobile procurement works by leveraging server-side software to move data between mobile devices and back-end systems. The key benefit for organizations using mobile procurement systems is the ability to track business operations from any ordinary mobile device. Mobile procurement software is generally delivered as custom applications provided as an add-on feature of a larger enterprise resource planning software solution. Because the goal of a mobile procurement system is to complement existing information systems, the needs of procurement professionals should typically be gathered before implementation, and ensuring that necessary features are prioritized over sheer feature count is key to successful user adoption. Mobile procurement also simplifies order management by removing the confusion and disorder often seen with paper-based procurement. It provides a clearer view of the steps behind procurement and gives companies the insight needed to consistently secure the best possible total cost of ownership.

Mobile Enterprise Applications

Mobile enterprise application software is the use of office software applications on a mobile platform in a way that adapts to different devices and networks. Many organizations use these applications to increase employee productivity and streamline business operations. The first common mobile enterprise application was email; 53% of emails are now opened on a mobile device, a 45% increase in three years. The next wave of popular mobile enterprise application software was CRM software, which allows salespeople to stay up to date with the business and manage, from anywhere, the customer relationships crucial to success. Another enterprise software application being leveraged on a mobile platform is procurement. This trend is led by the consumer shopping experience: over 36% of online sales on Black Friday in 2015 came from mobile shopping, according to IBM. As more shoppers migrate to mobile applications, procurement departments need to keep up with user preferences. A mobile platform provides the capabilities to complete the procurement process from start to finish. This includes searching for items and services, comparing vendors and prices, submitting purchase requests, approving requests, electronic signing, purchase orders and invoicing. Being able to accomplish all of this in one place from any device significantly streamlines business procurement and operations. Users leverage mobile procurement for various reasons, and the tool can benefit businesses in several ways.

User Convenience

The clear advantage of mobile procurement is the ability to use it anywhere from any device. This eliminates interruptions in the process and removes the need to complete a request from a single device. Users can search, compare and request items from anywhere, and suppliers can receive and sign purchase orders electronically. Checking the inventory levels of physical locations is another strong use of mobile procurement.

Save Time

Mobile procurement saves time by reducing approval cycle time. A manager can make approvals from anywhere, not just from the office, and users can request items when they need them, instead of purchasing first and seeking approval after the fact in breach of corporate policy. Procurement then fills idle time rather than being a major item on a to-do list, keeping employees and the business moving forward; a fast procurement turnaround improves outcomes.
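The following Python sketch illustrates, in a simplified and purely hypothetical form, how a mobile procurement back end might model the request-and-approval flow described above. The class, field and threshold names are illustrative assumptions, not any particular vendor's API.

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    SUBMITTED = "submitted"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class PurchaseRequest:
    requester: str
    item: str
    quantity: int
    unit_price: float
    status: Status = Status.DRAFT
    history: list = field(default_factory=list)

    def submit(self):
        # Triggered from the requester's mobile device.
        self.status = Status.SUBMITTED
        self.history.append((datetime.utcnow(), "submitted"))

    def approve(self, manager: str, limit: float = 5000.0):
        # An on-demand notification lets the manager approve from anywhere;
        # requests above the manager's limit stay pending for escalation.
        if self.status is not Status.SUBMITTED:
            raise ValueError("only submitted requests can be approved")
        if self.quantity * self.unit_price > limit:
            self.history.append((datetime.utcnow(), f"escalated past {manager}"))
            return
        self.status = Status.APPROVED
        self.history.append((datetime.utcnow(), f"approved by {manager}"))

# Example: a request raised and approved entirely from mobile devices.
req = PurchaseRequest("j.doe", "laptop docking station", 4, 120.0)
req.submit()
req.approve("a.manager")
print(req.status, req.history)

In a real deployment, the status change would also push a notification to the approver's device and synchronise with the ERP system of record.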
Visibility

Full visibility of suppliers and their information helps businesses make informed decisions, leading to better results and quicker processes. User analytics also help companies shop smarter in the long run.

Native Apps vs. HTML5

Mobile interaction now exceeds desktop Web interaction by 9%. Now that mobile is the dominant form of online activity, mobile procurement is a natural step into today’s digital landscape. Mobile procurement platforms can be approached in two ways: native apps or Web apps using HTML5.

Native Apps

A native app is a mobile application developed specifically for use on mobile devices and launched directly from the home screen. In the United States, use of mobile apps greatly exceeds use of mobile Web browsing, but that trend is not worldwide. Using a native app requires development for every operating system and brand: a fully operational native app needs to be designed for iOS, Android and Windows devices, across their various generations and at different resolutions and orientations. For an app to succeed, users will expect it to work on everything from a high-resolution tablet down to an Apple Watch. The other factor to consider with native apps is the way they access back-end data, which is often more complex than the simple read requests made through a Web browser. Apps are developed, depending on their purpose and functionality, to access other capabilities such as the camera or local storage. Native apps can be the preferred form of mobile procurement in environments with either no IT access or highly secure networks: financial institutions value privacy and security when considering mobile tools operating on their networks, while oil and gas environments are often remote and require apps that can work with limited Internet access. A downside to native apps is the memory they occupy on a device; because of this, many businesses steer away from apps that require a great deal of storage.

Web Apps and HTML5

A Web app is run by a browser and is essentially a responsive Web site. Responsive sites change and adapt to any digital environment, including operating system, screen size and orientation. This has several advantages for mobile procurement platforms. Responsive design can adjust to any browser on any device: images, text and the user experience adapt to the size and resolution of the platform. Adaptive web design, by contrast, is based on predetermined parameters for each platform. Both are suitable for any business that plans to interact with customers on any type of mobile device. HTML5, JavaScript and CSS3 integrate back-end systems with browsers and user interfaces, creating an easy-to-use experience for all users on any device. Although some designs are scaled down, which can limit functionality, responsive and adaptive web design is easily accessible for any customer, and functionality is all about priority: if a scaled-down site still performs the necessary functions, users will still benefit from faster turnarounds and time savings. The widespread availability of Wi-Fi increases the availability of HTML5 sites, reducing concerns about accessibility, so mobile procurement can happen anywhere on a responsive site.
Elegantly designed responsive and adaptive sites are an ideal solution for mobile procurement because of the simplicity of creating and implementing them. If the user experience is smooth and consistent, users will have no problem accessing a mobile procurement platform from any device. With the advancement of tools and technology it is now possible to give web-based mobile products a native feel and a minimal set of native features; one example is AngularJS, which allows for rapid web app development while preserving quality.

References

Mobile Procurement Company E-commerce Procurement Mobile technology
46607954
https://en.wikipedia.org/wiki/YaDICs
YaDICs
YaDICs is a program written to perform digital image correlation on 2D and 3D tomographic images. The program was designed to be both modular, through its plugin strategy, and efficient, through its multithreading strategy. It incorporates different transformations (global, elastic, local), optimization strategies (Gauss-Newton, steepest descent) and global and/or local shape functions (rigid-body motions, homogeneous dilatations, flexural and Brazilian test models).

Theoretical background

Context

In solid mechanics, digital image correlation is a tool that identifies the displacement field that registers a reference image (called herein the fixed image) to images taken during an experiment (the moving image). For example, it is possible to observe the face of a specimen with a painted speckle pattern on it in order to determine its displacement field during a tensile test. Before the appearance of such methods, researchers usually used strain gauges to measure the mechanical state of the material, but strain gauges only measure the strain at a point and cannot capture the behaviour of heterogeneous materials. A full in-plane strain tensor can be obtained by differentiating the displacement field. Many methods are based upon the optical flow. In fluid mechanics a similar method is used, called particle image velocimetry (PIV); the algorithms are similar to those of DIC, but it is impossible to ensure that the optical flow is conserved, so the vast majority of the software uses the normalized cross-correlation metric. In mechanics the displacement or velocity field is the only concern; registering the images is just a side effect. There is another process called image registration that uses the same algorithms (on monomodal images) but where the goal is to register the images, and identifying the displacement field is just a side effect. YaDICs uses the general principles of image registration with particular attention to the displacement field basis.

Image registration principle

YaDICs can be explained using the classical image registration framework: the common idea of image registration and digital image correlation is to find the transformation between a fixed image and a moving one, for a given metric, using an optimization scheme. While there are many methods to achieve such a goal, YaDICs focuses on registering images with the same modality. The idea behind the creation of this software was to be able to process data coming from a µ-tomograph, i.e. data cubes of 1000³ voxels or more. With such a size it is not possible to use the naive approaches usually applied in a two-dimensional context. In order to obtain sufficient performance, OpenMP parallelism is used and data are not stored globally in memory. An extensive description of the different algorithms is given in the references.

Sampling

Contrary to image registration, digital image correlation targets the transformation: one wants to extract the most accurate transformation from the two images rather than merely match the images. YaDICs uses the whole image as a sampling grid; it is thus a total sampling.

Interpolator

It is possible to choose between bilinear interpolation and bicubic interpolation for the grey-level evaluation at non-integer coordinates. Bicubic interpolation is the recommended one.
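As an illustration of the grey-level evaluation at non-integer coordinates mentioned above, the following Python sketch implements plain bilinear interpolation on a 2D image stored as a NumPy array. It is a minimal example for exposition only, not YaDICs code (YaDICs itself is written in C++ and recommends bicubic interpolation).

import numpy as np

def bilinear(image, y, x):
    """Grey level at the non-integer position (y, x) by bilinear interpolation."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, image.shape[0] - 1), min(x0 + 1, image.shape[1] - 1)
    dy, dx = y - y0, x - x0
    # Weighted average of the four neighbouring pixels.
    return ((1 - dy) * (1 - dx) * image[y0, x0] +
            (1 - dy) * dx       * image[y0, x1] +
            dy       * (1 - dx) * image[y1, x0] +
            dy       * dx       * image[y1, x1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))  # 15.0, the mean of the four corner pixels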
Metrics

Sum of squared differences (SSD)

The SSD is also known as the mean squared error. The equation below defines the SSD metric:

SSD(\mu) = \frac{1}{\mathrm{card}(\Omega)} \sum_{x \in \Omega} \left( F(x) - M(T_\mu(x)) \right)^2

where F is the fixed image, M the moving one, \Omega the integration area, \mathrm{card}(\Omega) the number of pi(vo)xels (cardinal) and T_\mu the transformation parametrized by \mu. The transformation can be written as

T_\mu(x) = x + u_\mu(x),

where u_\mu is the displacement field associated with the parameters \mu. This metric is the main one used in YaDICs, as it works well with images of the same modality. One has to find the minimum of this metric.

Normalized cross-correlation

The normalized cross-correlation (NCC) is used when one cannot assume conservation of the optical flow; this happens in case of a change of lighting, or when particles disappear from the scene, as can occur in particle image velocimetry (PIV). The NCC is defined by

NCC = \frac{\sum_{x \in \Omega} \left( F(x) - \bar F \right) \left( M(T_\mu(x)) - \bar M \right)}{\sqrt{\sum_{x \in \Omega} \left( F(x) - \bar F \right)^2 \; \sum_{x \in \Omega} \left( M(T_\mu(x)) - \bar M \right)^2}}

where \bar F and \bar M are the mean values of the fixed and mobile images. This metric is only used to find local translations in YaDICs. With a translation transform, it can be evaluated using cross-correlation methods, which are non-iterative and can be accelerated using the Fast Fourier Transform.

Classification of transformations

There are three categories of parametrization: elastic, global and local transformations. The elastic transformations respect the partition of unity: no holes are created and no surfaces are counted several times. This is commonly achieved in image registration by the use of B-spline functions, and in solid mechanics with finite element bases. The global transformations are defined on the whole picture using rigid-body or affine transformations (the latter being equivalent to homogeneous strain transformations). More complex transformations can be defined, such as mechanically based ones; such transformations have been used for stress intensity factor identification and for rod strain. The local transformations can be considered as the same global transformation defined on several zones of interest (ZOI) of the fixed image.

Global

Several global transforms have been implemented:
Rigid and homogeneous (Tx, Ty, Rz in 2D; Tx, Ty, Tz, Rx, Ry, Rz, Exx, Eyy, Ezz, Eyz, Exz, Exy in 3D)
Brazilian (only in 2D)
Dynamic flexion

Elastic

First-order quadrangular finite elements Q4P1 are used in YaDICs.

Local

Every global transform can be used on a local mesh.

Optimization

The YaDICs optimization process follows a gradient descent scheme. The first step is to compute the gradient of the metric with respect to the transform parameters.

Gradient method

Once the metric gradient has been computed, one has to choose an optimization strategy. The principle of the gradient method is to update the parameters iteratively,

\mu_{k+1} = \mu_k + \alpha_k \, d_k,

where the gradient step \alpha_k can be constant or updated at every iteration, and the choice of the descent direction d_k allows one to choose between the following methods: steepest descent, Gauss-Newton. Many different methods exist (e.g. BFGS, conjugate gradient, stochastic gradient), but as steepest descent and Gauss-Newton are the only ones implemented in YaDICs these methods are not discussed here. The Gauss-Newton method is a very efficient method that requires solving a linear system [M]{U}={F}. On a 1000³-voxel µ-tomographic image the number of degrees of freedom can reach 1e6 (i.e. on a 12×12×12 mesh); dealing with such a problem is more a matter for numerical scientists and requires specific developments (using libraries such as PETSc or MUMPS), so Gauss-Newton methods are not used to solve such problems. Instead, a specific steepest gradient algorithm was developed, with a specific tuning of the scalar parameter \alpha_k at each iteration. The Gauss-Newton method can be used for small problems in 2D.
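To make the SSD metric and the steepest-descent update concrete, here is a small, self-contained Python sketch that registers a 1D translation between two signals by minimizing the SSD with a constant-step gradient descent. It is purely illustrative: YaDICs itself is a C++ code working on 2D/3D images with far richer transformations, and the per-iteration step-size tuning described above is not reproduced here.

import numpy as np

def ssd(fixed, moving, t):
    """Mean squared difference between fixed(i) and moving(i + t) (linear interpolation)."""
    idx = np.arange(fixed.size)
    warped = np.interp(idx + t, idx, moving)
    return np.mean((fixed - warped) ** 2)

def ssd_gradient(fixed, moving, t, eps=1e-3):
    # Central finite difference of the metric with respect to the translation t.
    return (ssd(fixed, moving, t + eps) - ssd(fixed, moving, t - eps)) / (2.0 * eps)

# Synthetic test: the moving signal is the fixed one shifted by 2 samples,
# so the translation that registers the two signals is close to t = 2.0.
idx = np.arange(200)
fixed = np.sin(2.0 * np.pi * 3.0 * idx / 200.0)
moving = np.interp(idx - 2.0, idx, fixed)

t, step = 0.0, 20.0          # initial guess and constant gradient step
for _ in range(100):         # steepest-descent iterations
    t -= step * ssd_gradient(fixed, moving, t)
print(f"recovered shift: {t:.2f}")  # close to 2.0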
Pyramidal filter

None of these optimization methods can succeed directly if applied at the finest scale, as the gradient methods are sensitive to the initial guess. In order to find a global optimum, one has to evaluate the transformation on a filtered image first: the transformation is identified on a coarse, filtered version of the images and then used as the initial guess at the next, finer scale, down to the original resolution. This pyramidal (multi-scale) process is the one used in YaDICs (and in ITK).

Regularization

The metric is often called the image energy; an energy derived from mechanical assumptions, such as the Laplacian of the displacement (a special case of Tikhonov regularization) or even a finite element problem, is usually added to it. As the Gauss-Newton problem is not solved in most cases, this solution is far from CPU-efficient. Cachier et al. demonstrated that the problem of minimizing the image and mechanical energies together can be reformulated as minimizing the image energy alone and then applying a Gaussian filter to the displacement field at each iteration. YaDICs uses this strategy and adds a median filter, as it is massively used in PIV; the median filter helps avoid local minima while preserving discontinuities.

See also
Image registration
Optical flow
Displacement vector
Particle Image Velocimetry

References

External links

Graphics libraries Free software programmed in C++ Command-line software Graphics software Free graphics software Free raster graphics editors Image processing Multidimensional signal processing Computer vision software Image segmentation
42546621
https://en.wikipedia.org/wiki/Meizu%20MX3
Meizu MX3
The Meizu MX3 is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It is an earlier phablet model of the MX series, succeeding the Meizu MX2 and preceding the Meizu MX4. It was the first smartphone with 128 GB of internal storage. It was unveiled on September 2, 2013 in Beijing.

History

Rumors about Meizu releasing a new flagship device appeared after renders of the upcoming device had been leaked. According to these leaks, the MX3 was supposed to have a single circular “Home” key for navigation and a Full HD 5.1-inch display. In August 2013, Meizu sent out invitations for a launch event on September 2, 2013 in Beijing.

Release

As announced, the Meizu MX3 was released at the launch event in Beijing on September 2, 2013.

Features

Flyme

The Meizu MX3 was released with an updated version of Flyme OS, a modified operating system based on Android Jelly Bean. It features an alternative, flat design and improved one-handed usability.

Hardware and design

The Meizu MX3 features a Samsung Exynos 5410 Octa system-on-a-chip with an array of four ARM Cortex-A15 and four Cortex-A7 CPU cores, a PowerVR SGX544MP3 GPU and 2 GB of RAM. The MX3 reaches a score of 24,780 points on the AnTuTu benchmark and is therefore almost 104% faster than its predecessor, the Meizu MX2. The MX3 is available in five different colors (white, blue, pink, orange and green) and comes with 16 GB, 32 GB, 64 GB or 128 GB of internal storage. It was the first smartphone to feature 128 GB of internal storage at the time of its release. The body of the MX3 measures x x and weighs . It has a slate form factor, being rectangular with rounded corners. The MX3 uses a single circular halo button on the front for navigation. The MX3 features a 5.1-inch OGS multi-touch capacitive touchscreen display with a resolution of 1080 by 1800 pixels. The pixel density of the display is 412 ppi. In addition to the touchscreen input and the front key, the device has volume/zoom control buttons and the power/lock button on the right side, a 3.5mm TRS audio jack on the top and a microUSB (Micro-B type) port on the bottom for charging and connectivity. The Meizu MX3 has two cameras. The rear camera has a resolution of 8 MP, a ƒ/2.0 aperture, autofocus and an LED flash. The front camera has a resolution of 2 MP and a ƒ/2.0 aperture, and like the rear camera it is capable of recording video at resolutions of up to 1080p30.

Reception

The MX3 received positive reviews. Android Authority reviewed the MX3 and praised its good specifications, sharp display and attractive price. GSMArena stated that “Meizu has delivered a solid smartphone and an interesting alternative to the mainstream Android flagships” and praised the performance of the MX3. Android Headlines also reviewed the device and concluded that “if you’re looking for a somewhat cheaper flagship device and you have a way to get one of these, consider it, because it really is worth taking a look.” Furthermore, Android Headlines praised the sleek body and powerful specifications of the device.

See also

Meizu
Meizu MX2
Meizu MX4
Comparison of smartphones

References

External links

Official product page (archived)

Meizu Android (operating system) devices Mobile phones introduced in 2013 Meizu smartphones Discontinued smartphones
33667887
https://en.wikipedia.org/wiki/Nook%20Tablet
Nook Tablet
The Nook Tablet (sometimes styled NOOK Tablet) is a tablet e-reader/media player that was produced and marketed by Barnes & Noble. It followed the Nook Color and was intended to compete with both e-book readers and tablet computers. Barnes & Noble announced the Nook Tablet 16 GB version on November 7, 2011; the device became available on November 17 for US$249. Barnes & Noble released the Nook Tablet 8 GB on February 21, 2012. Both versions have a 7-inch (18 cm) screen, a microSDHC slot compatible with cards up to 32 GB in size, 8 or 16 GB of internal storage, a 1 GHz dual-core processor, and a FAT32 file system. Additionally, the 16 GB model has 1 GB of RAM, 16 GB of eMMC storage, and 11 GB of storage capacity; the 8 GB model has 512 MB of RAM and 8 GB of eMMC storage. The Nook Tablet models were discontinued shortly after the release of the Nook HD and Nook HD+. According to estimates by Forrester Research, about 5 million units were sold by mid-October 2012, making the Nook Tablet the third best-selling tablet after Apple's iPad and Amazon's Kindle Fire in 2012.

Design

The device is based on the Nook Color design by Yves Béhar of fuseproject. Its frame is gray in color, with an angled lower corner intended to evoke a turned page. The textured back is designed to make holding the device comfortable.

Supported file types

E-books: EPUB (including Adobe DRM or DRM-free), PDF files, and CBZ files
Other documents: XLS, DOC, PPT, TXT, DOCM, XLSM, PPTM, PPSX, PPSM, DOCX, XLSX, PPTX
Audio: MP3, MP4, AAC, AMR, WAV, Ogg (audio codecs: MP3, AAC, AMR, LPCM, Vorbis)
Images: JPEG, GIF (animated GIF is not supported), PNG, BMP
Videos: MP4, Adobe Flash, 3GP, 3G2, MKV, WebM (video codecs: H.264, MPEG-4, H.263, VP8)

Comparison

16 GB version

The 16 GB version was announced on November 7, 2011, and became available on November 17 for US$249. Of the 16 GB of internal storage, 13 GB is available for content, with only 1 GB available for sideloaded, non-Barnes & Noble content. Barnes & Noble announced that from March 12, 2012, users could bring their Nook Tablet 16 GB into stores for repartitioning to increase the internal storage available for sideloaded content. On August 12, 2012, Barnes & Noble lowered the price to US$199 to compete with the Kindle Fire. On November 4, 2012, the price was further reduced to US$179.

8 GB version

On February 22, 2012, Barnes & Noble released the Nook Tablet 8 GB at US$199 to compete with the Kindle Fire. The differences from the 16 GB model are 512 MB of RAM and 8 GB of internal storage, of which 5 GB is available for user content and 1 GB is reserved for NOOK Store content. On August 12, 2012, Barnes & Noble lowered the price to US$179. On November 4, 2012, the price was further reduced to US$159.

Modifying the Nook Tablet

Rooting

Developers have found means to root the device, which provides access to hidden files and settings, making it possible to run apps that require deep access to the file system or that make changes to the device. For instance, apps such as Titanium Backup can then be used to back up or restore all of the apps on the device. Numerous websites offer downloadable software and step-by-step directions for do-it-yourselfers.

Third-party apps and firmware update 1.4.1

When the Nook Tablet was first offered, users could install third-party apps.
However, days before Christmas 2011, the forced over-the-air "firmware update from Barnes & Noble for the Nook Tablet and Nook Color — 1.4.1 — close[d] the loophole that allowed users to sideload any Android app and also [broke] root for those who’[d] gone that extra step to customize the device."

Alternative operating systems

In addition to the stock firmware provided by Barnes & Noble, the Nook Tablet can run free, third-party, alternative Android operating systems such as CyanogenMod. These replacement distributions typically include advanced tablet features such as overclocking, a regular Android tablet interface, and access to competing app and content stores such as Google Play and the Amazon Appstore. Alternative operating systems may be run either from the eMMC or via a microSD card, which allows multi-booting: when a card is in the slot, the Nook Tablet will start from the operating system on the SD card; otherwise, it will boot from the eMMC. While much of the replacement firmware for the Nook Tablet is available via free downloads, and instructions are readily available for installing it to either external microSD cards or internal storage, pre-installed versions on microSD cards are also available for sale from vendors who have tested and developed error-free versions and instructions for the free software, in some cases along with customer service and user forums.

References

Barnes & Noble Android (operating system) devices Products introduced in 2011 Tablet computers Touchscreen portable media players
952891
https://en.wikipedia.org/wiki/Open%20and%20Free%20Technology%20Community
Open and Free Technology Community
The Open and Free Technology Community (OFTC) is an IRC network that provides collaboration services to members of the free software community in any part of the world. OFTC is an associated project of Software in the Public Interest, a non-profit organization which was founded to help organizations develop and distribute open hardware and software. The network's servers are accessible via round-robin DNS at the hostname irc.oftc.net. As of October 2019, OFTC has 31 volunteer staff members and 16 sponsors.

History

OFTC was founded at the end of 2001 by a group of experienced members of the open source and free software communities aiming to provide these communities with better communication, development, and support infrastructure. OFTC is governed by a written constitution, and the staff elect the officers from among themselves using a voting mechanism. OFTC became a member project of Software in the Public Interest (SPI) in July 2002, and SPI became the legal owner of the project's domain names. The ability for all users to connect using Transport Layer Security was added in April 2016 with the use of SSL certificates from the Let's Encrypt certificate authority. In 2016, OFTC was bridged to the Matrix instant messaging network.

Projects

OFTC currently develops three projects for its purposes: "oftc-hybrid" (a fork of the Hybrid IRC daemon), "oftc-ircservices" (the IRC services suite), and "oftc-geodns" (a GeoIP DNS responder that distributes users across the servers). OFTC uses GitHub repositories to host its codebase and issue tracker. Prospective users of the software can find tarball releases at https://www.oftc.net/releases/, named according to semantic versioning. Developers contributing to the code base should read and be familiar with Subversion.

References

Further reading

External links

Internet Relay Chat networks Free software culture and documents
15043
https://en.wikipedia.org/wiki/International%20Space%20Station
International Space Station
The International Space Station (ISS) is a modular space station (habitable artificial satellite) in low Earth orbit. It is a multinational collaborative project involving five participating space agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada). The ownership and use of the space station is established by intergovernmental treaties and agreements. The station serves as a microgravity and space environment research laboratory in which scientific research is conducted in astrobiology, astronomy, meteorology, physics, and other fields. The ISS is suited for testing the spacecraft systems and equipment required for possible future long-duration missions to the Moon and Mars. The ISS programme evolved from the Space Station Freedom, an American proposal which was conceived in 1984 to construct a permanently manned Earth-orbiting station, and the contemporaneous Soviet/Russian Mir-2 proposal from 1976 with similar aims. The ISS is the ninth space station to be inhabited by crews, following the Soviet and later Russian Salyut, Almaz, and Mir stations and the American Skylab. It is the largest artificial object in space and the largest satellite in low Earth orbit, regularly visible to the naked eye from Earth's surface. It maintains an orbit with an average altitude of by means of reboost manoeuvres using the engines of the Zvezda Service Module or visiting spacecraft. The ISS circles the Earth in roughly 93 minutes, completing orbits per day. The station is divided into two sections: the Russian Orbital Segment (ROS) is operated by Russia, while the United States Orbital Segment (USOS) is run by the United States as well as by the other states. The Russian segment includes six modules. The US segment includes ten modules, whose support services are distributed 76.6% for NASA, 12.8% for JAXA, 8.3% for ESA and 2.3% for CSA. Roscosmos has endorsed the continued operation of ROS through 2024, having previously proposed using elements of the segment to construct a new Russian space station called OPSEK. The first ISS component was launched in 1998, and the first long-term residents arrived on 2 November 2000 after being launched from the Baikonur Cosmodrome on 31 October 2000. The station has since been continuously occupied for , the longest continuous human presence in low Earth orbit, having surpassed the previous record of held by the Mir space station. The latest major pressurised module, Nauka, was fitted in 2021, a little over ten years after the previous major addition, Leonardo in 2011. Development and assembly of the station continues, with an experimental inflatable space habitat added in 2016, and several major new Russian elements scheduled for launch starting in 2021. In January 2022, the station's operation authorization was extended to 2030, with funding secured through that year. There have been calls to privatize ISS operations after that point to pursue future Moon and Mars missions, with former NASA Administrator Jim Bridenstine stating: "given our current budget constraints, if we want to go to the moon and we want to go to Mars, we need to commercialize low Earth orbit and go on to the next step." The ISS consists of pressurised habitation modules, structural trusses, photovoltaic solar arrays, thermal radiators, docking ports, experiment bays and robotic arms. Major ISS modules have been launched by Russian Proton and Soyuz rockets and US Space Shuttles. 
The station is serviced by a variety of visiting spacecraft: the Russian Soyuz and Progress, the SpaceX Dragon 2, and the Northrop Grumman Space Systems Cygnus, and formerly the European Automated Transfer Vehicle (ATV), the Japanese H-II Transfer Vehicle, and SpaceX Dragon 1. The Dragon spacecraft allows the return of pressurised cargo to Earth, which is used, for example, to repatriate scientific experiments for further analysis. , 251 astronauts, cosmonauts, and space tourists from 19 different nations have visited the space station, many of them multiple times; this includes 155 Americans, 52 Russians, 11 Japanese, 8 Canadians, 5 Italians, 4 French, 4 Germans, 1 Belgian, 1 Dutch, 1 Swede, 1 Brazilian, 1 Dane, 1 Kazakhstani, 1 Spaniard, 1 Briton, 1 Malaysian, 1 South African, 1 South Korean and 1 Emirati. History Purpose The ISS was originally intended to be a laboratory, observatory, and factory while providing transportation, maintenance, and a low Earth orbit staging base for possible future missions to the Moon, Mars, and asteroids. However, not all of the uses envisioned in the initial memorandum of understanding between NASA and Roscosmos have been realised. In the 2010 United States National Space Policy, the ISS was given additional roles of serving commercial, diplomatic, and educational purposes. Scientific research The ISS provides a platform to conduct scientific research, with power, data, cooling, and crew available to support experiments. Small uncrewed spacecraft can also provide platforms for experiments, especially those involving zero gravity and exposure to space, but space stations offer a long-term environment where studies can be performed potentially for decades, combined with ready access by human researchers. The ISS simplifies individual experiments by allowing groups of experiments to share the same launches and crew time. Research is conducted in a wide variety of fields, including astrobiology, astronomy, physical sciences, materials science, space weather, meteorology, and human research including space medicine and the life sciences. Scientists on Earth have timely access to the data and can suggest experimental modifications to the crew. If follow-on experiments are necessary, the routinely scheduled launches of resupply craft allows new hardware to be launched with relative ease. Crews fly expeditions of several months' duration, providing approximately 160 person-hours per week of labour with a crew of six. However, a considerable amount of crew time is taken up by station maintenance. Perhaps the most notable ISS experiment is the Alpha Magnetic Spectrometer (AMS), which is intended to detect dark matter and answer other fundamental questions about our universe and is as important as the Hubble Space Telescope according to NASA. Currently docked on station, it could not have been easily accommodated on a free flying satellite platform because of its power and bandwidth needs. On 3 April 2013, scientists reported that hints of dark matter may have been detected by the AMS. According to the scientists, "The first results from the space-borne Alpha Magnetic Spectrometer confirm an unexplained excess of high-energy positrons in Earth-bound cosmic rays". The space environment is hostile to life. Unprotected presence in space is characterised by an intense radiation field (consisting primarily of protons and other subatomic charged particles from the solar wind, in addition to cosmic rays), high vacuum, extreme temperatures, and microgravity. 
Some simple forms of life called extremophiles, as well as small invertebrates called tardigrades can survive in this environment in an extremely dry state through desiccation. Medical research improves knowledge about the effects of long-term space exposure on the human body, including muscle atrophy, bone loss, and fluid shift. These data will be used to determine whether high duration human spaceflight and space colonisation are feasible. In 2006, data on bone loss and muscular atrophy suggested that there would be a significant risk of fractures and movement problems if astronauts landed on a planet after a lengthy interplanetary cruise, such as the six-month interval required to travel to Mars. Medical studies are conducted aboard the ISS on behalf of the National Space Biomedical Research Institute (NSBRI). Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity study in which astronauts perform ultrasound scans under the guidance of remote experts. The study considers the diagnosis and treatment of medical conditions in space. Usually, there is no physician on board the ISS and diagnosis of medical conditions is a challenge. It is anticipated that remotely guided ultrasound scans will have application on Earth in emergency and rural care situations where access to a trained physician is difficult. In August 2020, scientists reported that bacteria from Earth, particularly Deinococcus radiodurans bacteria, which is highly resistant to environmental hazards, were found to survive for three years in outer space, based on studies conducted on the International Space Station. These findings supported the notion of panspermia, the hypothesis that life exists throughout the Universe, distributed in various ways, including space dust, meteoroids, asteroids, comets, planetoids or contaminated spacecraft. Remote sensing of the Earth, astronomy, and deep space research on the ISS have dramatically increased during the 2010s after the completion of the US Orbital Segment in 2011. Throughout the more than 20 years of the ISS program researchers aboard the ISS and on the ground have examined aerosols, ozone, lightning, and oxides in Earth's atmosphere, as well as the Sun, cosmic rays, cosmic dust, antimatter, and dark matter in the universe. Examples of Earth-viewing remote sensing experiments that have flown on the ISS are the Orbiting Carbon Observatory 3, ISS-RapidScat, ECOSTRESS, the Global Ecosystem Dynamics Investigation, and the Cloud Aerosol Transport System. ISS-based astronomy telescopes and experiments include SOLAR, the Neutron Star Interior Composition Explorer, the Calorimetric Electron Telescope, the Monitor of All-sky X-ray Image (MAXI), and the Alpha Magnetic Spectrometer. Freefall Gravity at the altitude of the ISS is approximately 90% as strong as at Earth's surface, but objects in orbit are in a continuous state of freefall, resulting in an apparent state of weightlessness. This perceived weightlessness is disturbed by five effects: Drag from the residual atmosphere. Vibration from the movements of mechanical systems and the crew. Actuation of the on-board attitude control moment gyroscopes. Thruster firings for attitude or orbital changes. Gravity-gradient effects, also known as tidal effects. Items at different locations within the ISS would, if not attached to the station, follow slightly different orbits. Being mechanically connected these items experience small forces that keep the station moving as a rigid body. 
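The roughly 90% figure can be checked with a back-of-the-envelope calculation. Assuming a mean Earth radius of about 6,371 km and a typical ISS altitude of roughly 400 km (values assumed here for illustration), the inverse-square law gives

g(h) = g_0 \left( \frac{R_E}{R_E + h} \right)^2 \approx 9.81 \times \left( \frac{6371}{6371 + 400} \right)^2 \approx 8.7\ \mathrm{m/s^2},

about 89% of the surface value, consistent with the figure quoted above.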
Researchers are investigating the effect of the station's near-weightless environment on the evolution, development, growth and internal processes of plants and animals. In response to some of the data, NASA wants to investigate microgravity's effects on the growth of three-dimensional, human-like tissues and the unusual protein crystals that can be formed in space. Investigating the physics of fluids in microgravity will provide better models of the behaviour of fluids. Because fluids can be almost completely combined in microgravity, physicists investigate fluids that do not mix well on Earth. Examining reactions that are slowed by low gravity and low temperatures will improve our understanding of superconductivity. The study of materials science is an important ISS research activity, with the objective of reaping economic benefits through the improvement of techniques used on the ground. Other areas of interest include the effect of low gravity on combustion, through the study of the efficiency of burning and control of emissions and pollutants. These findings may improve knowledge about energy production and lead to economic and environmental benefits. Exploration The ISS provides a location in the relative safety of low Earth orbit to test spacecraft systems that will be required for long-duration missions to the Moon and Mars. This provides experience in operations, maintenance as well as repair and replacement activities on-orbit. This will help develop essential skills in operating spacecraft farther from Earth, reduce mission risks, and advance the capabilities of interplanetary spacecraft. Referring to the MARS-500 experiment, a crew isolation experiment conducted on Earth, ESA states that "Whereas the ISS is essential for answering questions concerning the possible impact of weightlessness, radiation and other space-specific factors, aspects such as the effect of long-term isolation and confinement can be more appropriately addressed via ground-based simulations". Sergey Krasnov, the head of human space flight programmes for Russia's space agency, Roscosmos, in 2011 suggested a "shorter version" of MARS-500 may be carried out on the ISS. In 2009, noting the value of the partnership framework itself, Sergey Krasnov wrote, "When compared with partners acting separately, partners developing complementary abilities and resources could give us much more assurance of the success and safety of space exploration. The ISS is helping further advance near-Earth space exploration and realisation of prospective programmes of research and exploration of the Solar system, including the Moon and Mars." A crewed mission to Mars may be a multinational effort involving space agencies and countries outside the current ISS partnership. In 2010, ESA Director-General Jean-Jacques Dordain stated his agency was ready to propose to the other four partners that China, India and South Korea be invited to join the ISS partnership. NASA chief Charles Bolden stated in February 2011, "Any mission to Mars is likely to be a global effort". Currently, US federal legislation prevents NASA co-operation with China on space projects. Education and cultural outreach The ISS crew provides opportunities for students on Earth by running student-developed experiments, making educational demonstrations, allowing for student participation in classroom versions of ISS experiments, and directly engaging students using radio, videolink, and email. 
ESA offers a wide range of free teaching materials that can be downloaded for use in classrooms. In one lesson, students can navigate a 3D model of the interior and exterior of the ISS, and face spontaneous challenges to solve in real time. The Japanese Aerospace Exploration Agency (JAXA) aims to inspire children to "pursue craftsmanship" and to heighten their "awareness of the importance of life and their responsibilities in society". Through a series of education guides, students develop a deeper understanding of the past and near-term future of crewed space flight, as well as that of Earth and life. In the JAXA "Seeds in Space" experiments, the mutation effects of spaceflight on plant seeds aboard the ISS are explored by growing sunflower seeds that have flown on the ISS for about nine months. In the first phase of Kibō utilisation from 2008 to mid-2010, researchers from more than a dozen Japanese universities conducted experiments in diverse fields. Cultural activities are another major objective of the ISS programme. Tetsuo Tanaka, the director of JAXA's Space Environment and Utilization Center, has said: "There is something about space that touches even people who are not interested in science." Amateur Radio on the ISS (ARISS) is a volunteer programme that encourages students worldwide to pursue careers in science, technology, engineering, and mathematics, through amateur radio communications opportunities with the ISS crew. ARISS is an international working group, consisting of delegations from nine countries including several in Europe, as well as Japan, Russia, Canada, and the United States. In areas where radio equipment cannot be used, speakerphones connect students to ground stations which then connect the calls to the space station. First Orbit is a 2011 feature-length documentary film about Vostok 1, the first crewed space flight around the Earth. By matching the orbit of the ISS to that of Vostok 1 as closely as possible, in terms of ground path and time of day, documentary filmmaker Christopher Riley and ESA astronaut Paolo Nespoli were able to film the view that Yuri Gagarin saw on his pioneering orbital space flight. This new footage was cut together with the original Vostok 1 mission audio recordings sourced from the Russian State Archive. Nespoli is credited as the director of photography for this documentary film, as he recorded the majority of the footage himself during Expedition 26/27. The film was streamed in a global YouTube premiere in 2011 under a free licence through the website firstorbit.org. In May 2013, commander Chris Hadfield shot a music video of David Bowie's "Space Oddity" on board the station, which was released on YouTube. It was the first music video ever to be filmed in space. In November 2017, while participating in Expedition 52/53 on the ISS, Paolo Nespoli made two recordings of his spoken voice (one in English and the other in his native Italian), for use on Wikipedia articles. These were the first content made in space specifically for Wikipedia. In November 2021, a virtual reality exhibit called The Infinite featuring life aboard the ISS was announced. Construction Manufacturing Since the International Space Station is a multi-national collaborative project, the components for in-orbit assembly were manufactured in various countries around the world. Beginning in the mid 1990s, the U.S. 
components Destiny, Unity, the Integrated Truss Structure, and the solar arrays were fabricated at the Marshall Space Flight Center and the Michoud Assembly Facility. These modules were delivered to the Operations and Checkout Building and the Space Station Processing Facility (SSPF) for final assembly and processing for launch. The Russian modules, including Zarya and Zvezda, were manufactured at the Khrunichev State Research and Production Space Center in Moscow. Zvezda was initially manufactured in 1985 as a component for Mir-2, but was never launched and instead became the ISS Service Module. The European Space Agency (ESA) Columbus module was manufactured at the EADS Astrium Space Transportation facilities in Bremen, Germany, along with many other contractors throughout Europe. The other ESA-built modulesHarmony, Tranquility, the Leonardo MPLM, and the Cupolawere initially manufactured at the Thales Alenia Space factory in Turin, Italy. The structural steel hulls of the modules were transported by aircraft to the Kennedy Space Center SSPF for launch processing. The Japanese Experiment Module Kibō, was fabricated in various technology manufacturing facilities in Japan, at the NASDA (now JAXA) Tsukuba Space Center, and the Institute of Space and Astronautical Science. The Kibo module was transported by ship and flown by aircraft to the SSPF. The Mobile Servicing System, consisting of the Canadarm2 and the Dextre grapple fixture, was manufactured at various factories in Canada (such as the David Florida Laboratory) and the United States, under contract by the Canadian Space Agency. The mobile base system, a connecting framework for Canadarm2 mounted on rails, was built by Northrop Grumman. Assembly The assembly of the International Space Station, a major endeavour in space architecture, began in November 1998. Russian modules launched and docked robotically, with the exception of Rassvet. All other modules were delivered by the Space Shuttle, which required installation by ISS and Shuttle crewmembers using the Canadarm2 (SSRMS) and extra-vehicular activities (EVAs); by 5 June 2011, they had added 159 components during more than 1,000 hours of EVA. 127 of these spacewalks originated from the station, and the remaining 32 were launched from the airlocks of docked Space Shuttles. The beta angle of the station had to be considered at all times during construction. The first module of the ISS, Zarya, was launched on 20 November 1998 on an autonomous Russian Proton rocket. It provided propulsion, attitude control, communications, and electrical power, but lacked long-term life support functions. A passive NASA module, Unity, was launched two weeks later aboard Space Shuttle flight STS-88 and attached to Zarya by astronauts during EVAs. The Unity module has two Pressurised Mating Adapters (PMAs): one connects permanently to Zarya and the other allowed the Space Shuttle to dock to the space station. At that time, the Russian (Soviet) station Mir was still inhabited, and the ISS remained uncrewed for two years. On 12 July 2000, the Zvezda module was launched into orbit. Onboard preprogrammed commands deployed its solar arrays and communications antenna. Zvezda then became the passive target for a rendezvous with Zarya and Unity, maintaining a station-keeping orbit while the Zarya–Unity vehicle performed the rendezvous and docking via ground control and the Russian automated rendezvous and docking system. Zarya computer transferred control of the station to Zvezda computer soon after docking. 
Zvezda added sleeping quarters, a toilet, kitchen, CO2 scrubbers, dehumidifier, oxygen generators, and exercise equipment, plus data, voice and television communications with mission control, enabling permanent habitation of the station. The first resident crew, Expedition 1, arrived in November 2000 on Soyuz TM-31. At the end of the first day on the station, astronaut Bill Shepherd requested the use of the radio call sign "Alpha", which he and cosmonaut Sergei Krikalev preferred to the more cumbersome "International Space Station". The name "Alpha" had previously been used for the station in the early 1990s, and its use was authorised for the whole of Expedition 1. Shepherd had been advocating the use of a new name to project managers for some time. Referencing a naval tradition in a pre-launch news conference he had said: "For thousands of years, humans have been going to sea in ships. People have designed and built these vessels, launched them with a good feeling that a name will bring good fortune to the crew and success to their voyage." Yuri Semenov, the President of Russian Space Corporation Energia at the time, disapproved of the name "Alpha" as he felt that Mir was the first modular space station, so the names "Beta" or "Mir 2" for the ISS would have been more fitting. Expedition 1 arrived midway between the Space Shuttle flights of missions STS-92 and STS-97. These two flights each added segments of the station's Integrated Truss Structure, which provided the station with Ku-band communication for US television, additional attitude support needed for the additional mass of the USOS, and substantial solar arrays to supplement the station's four existing arrays. Over the next two years, the station continued to expand. A Soyuz-U rocket delivered the Pirs docking compartment. The Space Shuttles Discovery, Atlantis, and Endeavour delivered the Destiny laboratory and Quest airlock, in addition to the station's main robot arm, the Canadarm2, and several more segments of the Integrated Truss Structure. The expansion schedule was interrupted in 2003 by the Space Shuttle Columbia disaster and a resulting hiatus in flights. The Space Shuttle was grounded until 2005 with STS-114 flown by Discovery. Assembly resumed in 2006 with the arrival of STS-115 with Atlantis, which delivered the station's second set of solar arrays. Several more truss segments and a third set of arrays were delivered on STS-116, STS-117, and STS-118. As a result of the major expansion of the station's power-generating capabilities, more pressurised modules could be accommodated, and the Harmony node and Columbus European laboratory were added. These were soon followed by the first two components of Kibō. In March 2009, STS-119 completed the Integrated Truss Structure with the installation of the fourth and final set of solar arrays. The final section of Kibō was delivered in July 2009 on STS-127, followed by the Russian Poisk module. The third node, Tranquility, was delivered in February 2010 during STS-130 by the Space Shuttle Endeavour, alongside the Cupola, followed by the penultimate Russian module, Rassvet, in May 2010. Rassvet was delivered by Space Shuttle Atlantis on STS-132 in exchange for the Russian Proton delivery of the US-funded Zarya module in 1998. The last pressurised module of the USOS, Leonardo, was brought to the station in February 2011 on the final flight of Discovery, STS-133. The Alpha Magnetic Spectrometer was delivered by Endeavour on STS-134 the same year. 
By June 2011, the station consisted of 15 pressurised modules and the Integrated Truss Structure. Three modules are still to be launched: the Prichal module and two power modules, NEM-1 and NEM-2. Russia's latest primary research module, Nauka, docked in July 2021, along with the European Robotic Arm, which will be able to relocate itself to different parts of the Russian modules of the station. The gross mass of the station changes over time. The total launch mass of the modules on orbit is about (). The mass of experiments, spare parts, personal effects, crew, foodstuff, clothing, propellants, water supplies, gas supplies, docked spacecraft, and other items adds to the total mass of the station. Hydrogen gas is constantly vented overboard by the oxygen generators. Structure The ISS is a modular space station. Modular stations allow modules to be added to or removed from the existing structure, giving greater flexibility. Below is a diagram of major station components. The blue areas are pressurised sections accessible by the crew without using spacesuits. The station's unpressurised superstructure is indicated in red. Planned components are shown in white and former ones in gray. Other unpressurised components are yellow. The Unity node joins directly to the Destiny laboratory. For clarity, they are shown apart. Similar cases are also seen in other parts of the structure. Pressurised modules Zarya Zarya (Заря), also known as the Functional Cargo Block or FGB (from the Russian Функционально-грузовой блок, or ФГБ), is the first module of the ISS to have been launched. The FGB provided electrical power, storage, propulsion, and guidance to the ISS during the initial stage of assembly. With the launch and assembly in orbit of other modules with more specialized functionality, Zarya is now primarily used for storage, both inside the pressurized section and in the externally mounted fuel tanks. Zarya is a descendant of the TKS spacecraft designed for the Russian Salyut program. The name Zarya ("Dawn") was given to the FGB because it signified the dawn of a new era of international cooperation in space. Although it was built by a Russian company, it is owned by the United States. Unity The Unity connecting module, also known as Node 1, is the first U.S.-built component of the ISS. It connects the Russian and U.S. segments of the station, and is where the crew eat meals together. The module is cylindrical in shape, with six berthing locations (forward, aft, port, starboard, zenith, and nadir) facilitating connections to other modules. Unity measures in diameter, is long, is made of steel, and was built for NASA by Boeing in a manufacturing facility at the Marshall Space Flight Center in Huntsville, Alabama. Unity is the first of the three connecting modules; the other two are Harmony and Tranquility. Zvezda Zvezda (Звезда, meaning "star"), Salyut DOS-8, also known as the Zvezda Service Module, is a module of the ISS. It was the third module launched to the station, and provides all of the station's life support systems, some of which are supplemented in the USOS, as well as living quarters for two crew members. It is the structural and functional center of the Russian Orbital Segment, which is the Russian part of the ISS. Crew assemble here to deal with emergencies on the station. The module was manufactured by RKK Energia, with major sub-contracting work by GKNPTs Khrunichev. Zvezda was launched on a Proton rocket on 12 July 2000, and docked with the Zarya module on 26 July 2000. 
Destiny The Destiny module, also known as the U.S. Lab, is the primary operating facility for U.S. research payloads aboard the ISS. It was berthed to the Unity module and activated over a period of five days in February 2001. Destiny is NASA's first permanent operating orbital research station since Skylab was vacated in February 1974. The Boeing Company began construction of the research laboratory in 1995 at the Michoud Assembly Facility and then the Marshall Space Flight Center in Huntsville, Alabama. Destiny was shipped to the Kennedy Space Center in Florida in 1998, and was turned over to NASA for pre-launch preparations in August 2000. It launched on 7 February 2001, aboard the Space Shuttle Atlantis on STS-98. Astronauts work inside the pressurized facility to conduct research in numerous scientific fields. Scientists throughout the world use the results to enhance their studies in medicine, engineering, biotechnology, physics, materials science, and Earth science. Quest The Joint Airlock (also known as "Quest") is provided by the U.S. and provides the capability for ISS-based Extravehicular Activity (EVA) using either a U.S. Extravehicular Mobility Unit (EMU) or Russian Orlan EVA suits. Before the launch of this airlock, EVAs were performed from either the U.S. Space Shuttle (while docked) or from the Transfer Chamber on the Service Module. Due to a variety of system and design differences, only U.S. space suits could be used from the Shuttle and only Russian suits could be used from the Service Module. The Joint Airlock alleviates this short-term problem by allowing either (or both) spacesuit systems to be used. The Joint Airlock was launched on ISS-7A / STS-104 in July 2001 and was attached to the right-hand docking port of Node 1. The Joint Airlock is 20 ft. long, 13 ft. in diameter, and weighs 6.5 tons. The Joint Airlock was built by Boeing at Marshall Space Flight Center. The Joint Airlock was launched with the High Pressure Gas Assembly. The High Pressure Gas Assembly was mounted on the external surface of the Joint Airlock; it supports EVA operations with breathing gases and augments the Service Module's gas resupply system. The Joint Airlock has two main components: a crew airlock from which astronauts and cosmonauts exit the ISS, and an equipment airlock designed for storing EVA gear and for so-called overnight "campouts", wherein nitrogen is purged from astronauts' bodies overnight as pressure is dropped in preparation for spacewalks the following day. This reduces the risk of decompression sickness (the bends) when the suit pressure is lowered for the EVA. The crew airlock was derived from the Space Shuttle's external airlock. It is equipped with lighting, external handrails, and an Umbilical Interface Assembly (UIA). The UIA is located on one wall of the crew airlock and provides a water supply line, a wastewater return line, and an oxygen supply line. The UIA also provides communication gear and spacesuit power interfaces and can support two spacesuits simultaneously. This can be either two American EMU spacesuits, two Russian ORLAN spacesuits, or one of each design. Poisk Poisk (Поиск) was launched on 10 November 2009 attached to a modified Progress spacecraft, called Progress M-MIM2, on a Soyuz-U rocket from Launch Pad 1 at the Baikonur Cosmodrome in Kazakhstan. Poisk is used as the Russian airlock module, containing two identical EVA hatches. 
An outward-opening hatch on the Mir space station failed after it swung open too fast after unlatching, because of a small amount of air pressure remaining in the airlock. All EVA hatches on the ISS open inwards and are pressure-sealing. Poisk is used to store, service, and refurbish Russian Orlan suits and provides contingency entry for crew using the slightly bulkier American suits. The outermost docking port on the module allows docking of Soyuz and Progress spacecraft, and the automatic transfer of propellants to and from storage on the ROS. Since the departure of the identical Pirs module on 26 July 2021, Poisk has served as the only airlock on the ROS. Harmony Harmony, also known as Node 2, is the "utility hub" of the ISS. It connects the laboratory modules of the United States, Europe and Japan, as well as providing electrical power and electronic data. Sleeping cabins for four of the crew are housed here. Harmony was successfully launched into space aboard Space Shuttle flight STS-120 on 23 October 2007. After temporarily being attached to the port side of the Unity node, it was moved to its permanent location on the forward end of the Destiny laboratory on 14 November 2007. Harmony added to the station's living volume, an increase of almost 20 percent. Its successful installation meant that from NASA's perspective, the station was considered to be "U.S. Core Complete". Tranquility Tranquility, also known as Node 3, is a module of the ISS. It contains environmental control systems, life support systems, a toilet, exercise equipment, and an observation cupola. The European Space Agency and the Italian Space Agency had Tranquility manufactured by Thales Alenia Space. A ceremony on 20 November 2009 transferred ownership of the module to NASA. On 8 February 2010, NASA launched the module on the Space Shuttle's STS-130 mission. Columbus Columbus is a science laboratory that is part of the ISS and is the largest single contribution to the station made by the European Space Agency. Like the Harmony and Tranquility modules, the Columbus laboratory was constructed in Turin, Italy, by Thales Alenia Space. The functional equipment and software of the lab were designed by EADS in Bremen, Germany. It was also integrated in Bremen before being flown to the Kennedy Space Center in Florida in an Airbus Beluga. It was launched aboard Space Shuttle Atlantis on 7 February 2008, on flight STS-122. It is designed for ten years of operation. The module is controlled by the Columbus Control Centre, located at the German Space Operations Center, part of the German Aerospace Center in Oberpfaffenhofen near Munich, Germany. The European Space Agency has spent €1.4 billion (about US$2 billion) on building Columbus, including the experiments it carries and the ground control infrastructure necessary to operate them. Kibō The Japanese Experiment Module (JEM), nicknamed Kibō, is a Japanese science module for the International Space Station (ISS) developed by JAXA. It is the largest single ISS module, and is attached to the Harmony module. The first two pieces of the module were launched on Space Shuttle missions STS-123 and STS-124. The third and final components were launched on STS-127. Cupola The Cupola is an ESA-built observatory module of the ISS. Its name derives from the Italian word cupola, which means "dome". Its seven windows are used to conduct experiments, dockings and observations of Earth. 
It was launched aboard Space Shuttle mission STS-130 on 8 February 2010 and attached to the Tranquility (Node 3) module. With the Cupola attached, ISS assembly reached 85 percent completion. The Cupola central window has a diameter of . Rassvet Rassvet (Рассвет, lit. "dawn"), also known as the Mini-Research Module 1 (MRM-1) and formerly known as the Docking Cargo Module (DCM), is a component of the International Space Station (ISS). The module's design is similar to the Mir Docking Module launched on STS-74 in 1995. Rassvet is primarily used for cargo storage and as a docking port for visiting spacecraft. It was flown to the ISS aboard Space Shuttle Atlantis on the STS-132 mission on 14 May 2010, and was connected to the ISS on 18 May 2010. The hatch connecting Rassvet with the ISS was first opened on 20 May 2010. On 28 June 2010, the Soyuz TMA-19 spacecraft performed the first docking with the module. MLM outfittings In May 2010, equipment for Nauka was launched on STS-132 (as part of an agreement with NASA) and delivered by Space Shuttle Atlantis. Weighing 1.4 metric tons, the equipment was attached to the outside of Rassvet (MRM-1). It included a spare elbow joint for the European Robotic Arm (ERA) (which was launched with Nauka) and an ERA-portable workpost used during EVAs, as well as a heat radiator, internal hardware and an experiment airlock for launching CubeSats to be positioned on the modified passive forward port near the nadir end of the Nauka module. The deployable radiator will be used to add additional cooling capability to Nauka, which will enable the module to host more scientific experiments. The airlock will be used only to pass experiments inside and outside the module, with the aid of the ERA, very similar to the Japanese airlock and the Nanoracks Bishop Airlock on the U.S. segment of the station. The ERA will be used to remove the radiator and airlock from Rassvet and transfer them over to Nauka. This process is expected to take several months. A portable work platform will also be transferred over, which can attach to the end of the ERA to allow cosmonauts to "ride" on the end of the arm during spacewalks. Leonardo The Leonardo Permanent Multipurpose Module (PMM) is a module of the International Space Station. It was flown into space aboard the Space Shuttle on STS-133 on 24 February 2011 and installed on 1 March. Leonardo is primarily used for storage of spares, supplies and waste on the ISS, which were until then stored in many different places within the space station. It is also the personal hygiene area for the astronauts who live in the US Orbital Segment. The Leonardo PMM was a Multi-Purpose Logistics Module (MPLM) before 2011, but was modified into its current configuration. It was formerly one of two MPLMs used for bringing cargo to and from the ISS with the Space Shuttle. The module was named for Italian polymath Leonardo da Vinci. Bigelow Expandable Activity Module The Bigelow Expandable Activity Module (BEAM) is an experimental expandable space station module developed by Bigelow Aerospace, under contract to NASA, for testing as a temporary module on the International Space Station (ISS) from 2016 to at least 2020. It arrived at the ISS on 10 April 2016, was berthed to the Tranquility node (Node 3) on 16 April, and was expanded and pressurized on 28 May 2016. International Docking Adapters The International Docking Adapter (IDA) is a spacecraft docking system adapter developed to convert APAS-95 to the NASA Docking System (NDS). 
An IDA is placed on each of the ISS's two open Pressurized Mating Adapters (PMAs), both of which are connected to the Harmony module. Two International Docking Adapters are currently installed aboard the Station. Originally, IDA-1 was planned to be installed on PMA-2, located at Harmony's forward port, and IDA-2 would be installed on PMA-3 at Harmony's zenith. After IDA-1 was destroyed in a launch incident, IDA-2 was installed on PMA-2 on 19 August 2016, while IDA-3 was later installed on PMA-3 on 21 August 2019. Bishop Airlock Module The NanoRacks Bishop Airlock Module is a commercially funded airlock module launched to the ISS on SpaceX CRS-21 on 6 December 2020. The module was built by NanoRacks, Thales Alenia Space, and Boeing. It will be used to deploy CubeSats, small satellites, and other external payloads for NASA, CASIS, and other commercial and governmental customers. Nauka Nauka (Наука), also known as the Multipurpose Laboratory Module-Upgrade (MLM-U) (Russian: Многоцелевой лабораторный модуль, усоверше́нствованный, or МЛМ-У), is a Roscosmos-funded component of the ISS that was launched on 21 July 2021 at 14:58 UTC. In the original ISS plans, Nauka was to use the location of the Docking and Stowage Module (DSM), but the DSM was later replaced by the Rassvet module and moved to Zarya's nadir port. Nauka was successfully docked to Zvezda's nadir port on 29 July 2021 at 13:29 UTC, replacing the Pirs module. It had a temporary docking adapter on its nadir port for crewed and uncrewed missions until the arrival of Prichal; shortly before Prichal arrived, the adapter was removed by a departing Progress spacecraft. Prichal Prichal, also known as the Uzlovoy Module or UM (Узловой модуль), is a ball-shaped module that will provide the Russian segment with additional docking ports to receive Soyuz MS and Progress MS spacecraft. UM was launched in November 2021. It was integrated with a special version of the Progress cargo spacecraft and launched by a standard Soyuz rocket, docking to the nadir port of the Nauka module. One port is equipped with an active hybrid docking port, which enables docking with the MLM module. The remaining five ports are passive hybrids, enabling docking of Soyuz and Progress vehicles, as well as heavier modules and future spacecraft with modified docking systems. The node module was intended to serve as the only permanent element of the cancelled Orbital Piloted Assembly and Experiment Complex (OPSEK). Unpressurised elements The ISS has a large number of external components that do not require pressurisation. The largest of these is the Integrated Truss Structure (ITS), to which the station's main solar arrays and thermal radiators are mounted. The ITS consists of ten separate segments forming a structure long. The station was intended to have several smaller external components, such as six robotic arms, three External Stowage Platforms (ESPs) and four ExPRESS Logistics Carriers (ELCs). While these platforms allow experiments (including MISSE, the STP-H3 and the Robotic Refueling Mission) to be deployed and conducted in the vacuum of space by providing electricity and processing experimental data locally, their primary function is to store spare Orbital Replacement Units (ORUs). ORUs are parts that can be replaced when they fail or pass their design life, including pumps, storage tanks, antennas, and battery units. Such units are replaced either by astronauts during EVA or by robotic arms. Several shuttle missions were dedicated to the delivery of ORUs, including STS-129, STS-133 and STS-134. 
Only one other mode of transportation of ORUs had been utilised: the Japanese cargo vessel HTV-2, which delivered an FHRC and CTC-2 via its Exposed Pallet (EP). There are also smaller exposure facilities mounted directly to laboratory modules; the Kibō Exposed Facility serves as an external "porch" for the Kibō complex, and a facility on the European Columbus laboratory provides power and data connections for experiments such as the European Technology Exposure Facility and the Atomic Clock Ensemble in Space. A remote sensing instrument, SAGE III-ISS, was delivered to the station in February 2017 aboard CRS-10, and the NICER experiment was delivered aboard CRS-11 in June 2017. The largest scientific payload externally mounted to the ISS is the Alpha Magnetic Spectrometer (AMS), a particle physics experiment launched on STS-134 in May 2011, and mounted externally on the ITS. The AMS measures cosmic rays to look for evidence of dark matter and antimatter. The commercial Bartolomeo External Payload Hosting Platform, manufactured by Airbus, was launched on 6 March 2020 aboard CRS-20 and attached to the European Columbus module. It will provide an additional 12 external payload slots, supplementing the eight on the ExPRESS Logistics Carriers, ten on Kibō, and four on Columbus. The system is designed to be robotically serviced and will require no astronaut intervention. It is named after Christopher Columbus's younger brother. Robotic arms and cargo cranes The Integrated Truss Structure serves as a base for the station's primary remote manipulator system, the Mobile Servicing System (MSS), which is composed of three main components: Canadarm2, the largest robotic arm on the ISS, has a mass of and is used to dock and manipulate spacecraft and modules on the USOS, to hold crew members and equipment in place during EVAs, and to move Dextre around to perform tasks. Dextre is a robotic manipulator that has two arms and a rotating torso, with power tools, lights, and video for replacing orbital replacement units (ORUs) and performing other tasks requiring fine control. The Mobile Base System (MBS) is a platform that rides on rails along the length of the station's main truss, which serves as a mobile base for Canadarm2 and Dextre, allowing the robotic arms to reach all parts of the USOS. A grapple fixture was added to Zarya on STS-134 to enable Canadarm2 to inchworm itself onto the Russian Orbital Segment. Also installed during STS-134 was the Orbiter Boom Sensor System (OBSS), which had been used to inspect heat shield tiles on Space Shuttle missions and which can be used on the station to increase the reach of the MSS. Staff on Earth or the ISS can operate the MSS components using remote control, performing work outside the station without the need for spacewalks. Japan's Remote Manipulator System, which services the Kibō Exposed Facility, was launched on STS-124 and is attached to the Kibō Pressurised Module. The arm is similar to the Space Shuttle arm as it is permanently attached at one end and has a latching end effector for standard grapple fixtures at the other. The European Robotic Arm, which will service the Russian Orbital Segment, was launched alongside the Nauka module. The ROS does not require spacecraft or modules to be manipulated, as all spacecraft and modules dock automatically and may be discarded the same way. Crew use the two Strela (Стрела) cargo cranes during EVAs for moving crew and equipment around the ROS. Each Strela crane has a mass of . Former module Pirs Pirs (Russian: Пирс, lit. 
'Pier') was launched on 14 September 2001, as ISS Assembly Mission 4R, on a Russian Soyuz-U rocket, using a modified Progress spacecraft, Progress M-SO1, as an upper stage. Pirs was undocked by Progress MS-16 on 26 July 2021, 10:56 UTC, and deorbited on the same day at 14:51 UTC to make room for Nauka module to be attached to the space station. Prior to its departure, Pirs served as the primary Russian airlock on the station, being used to store and refurbish the Russian Orlan spacesuits. Planned components Axiom segment In January 2020, NASA awarded Axiom Space a contract to build a commercial module for the ISS with a launch date of 2024. The contract is under the NextSTEP2 program. NASA negotiated with Axiom on a firm fixed-price contract basis to build and deliver the module, which will attach to the forward port of the space station's Harmony (Node 2) module. Although NASA has only commissioned one module, Axiom plans to build an entire segment consisting of five modules, including a node module, an orbital research and manufacturing facility, a crew habitat, and a "large-windowed Earth observatory". The Axiom segment is expected to greatly increase the capabilities and value of the space station, allowing for larger crews and private spaceflight by other organisations. Axiom plans to convert the segment into a stand-alone space station once the ISS is decommissioned, with the intention that this would act as a successor to the ISS. Canadarm 2 will also help to berth the Axiom Space Station modules to the ISS and will continue its operations on the Axiom Space Station after the retirement of ISS in late 2020s. Proposed components Xbase Made by Bigelow Aerospace. In August 2016 Bigelow negotiated an agreement with NASA to develop a full-sized ground prototype Deep Space Habitation based on the B330 under the second phase of Next Space Technologies for Exploration Partnerships. The module is called the Expandable Bigelow Advanced Station Enhancement (XBASE), as Bigelow hopes to test the module by attaching it to the International Space Station. Independence-1 Nanoracks, after finalizing its contract with NASA, and after winning NextSTEPs Phase II award, is now developing its concept Independence-1 (previously known as Ixion), which would turn spent rocket tanks into a habitable living area to be tested in space. In Spring 2018, Nanoracks announced that Ixion is now known as the Independence-1, the first 'outpost' in Nanoracks' Space Outpost Program. Nautilus-X Centrifuge Demonstration If produced, this centrifuge will be the first in-space demonstration of sufficient scale centrifuge for artificial partial-g effects. It will be designed to become a sleep module for the ISS crew. Cancelled components Several modules planned for the station were cancelled over the course of the ISS programme. Reasons include budgetary constraints, the modules becoming unnecessary, and station redesigns after the 2003 Columbia disaster. The US Centrifuge Accommodations Module would have hosted science experiments in varying levels of artificial gravity. The US Habitation Module would have served as the station's living quarters. Instead, the living quarters are now spread throughout the station. The US Interim Control Module and ISS Propulsion Module would have replaced the functions of Zvezda in case of a launch failure. Two Russian Research Modules were planned for scientific research. They would have docked to a Russian Universal Docking Module. 
The Russian Science Power Platform would have supplied power to the Russian Orbital Segment independent of the ITS solar arrays. Science Power Modules 1 and 2 (Repurposed Components) Science Power Module 1 (SPM-1, also known as NEM-1) and Science Power Module 2 (SPM-2, also known as NEM-2) are modules that were originally planned to arrive at the ISS no earlier than 2024, and dock to the Prichal module, which is currently docked to the Nauka module. In April 2021, Roscosmos announced that NEM-1 would be repurposed to function as the core module of the proposed Russian Orbital Service Station (ROSS), launching no earlier than 2025 and docking to the free-flying Nauka module either before or after the ISS has been deorbited. NEM-2 may be converted into another core "base" module, which would be launched in 2028. Onboard systems Life support The critical systems are the atmosphere control system, the water supply system, the food supply facilities, the sanitation and hygiene equipment, and fire detection and suppression equipment. The Russian Orbital Segment's life support systems are contained in the Zvezda service module. Some of these systems are supplemented by equipment in the USOS. The Nauka laboratory has a complete set of life support systems. Atmospheric control systems The atmosphere on board the ISS is similar to that of Earth. Normal air pressure on the ISS is the same as at sea level on Earth. An Earth-like atmosphere offers benefits for crew comfort, and is much safer than a pure oxygen atmosphere, because of the increased risk of a fire such as that responsible for the deaths of the Apollo 1 crew. Earth-like atmospheric conditions have been maintained on all Russian and Soviet spacecraft. The Elektron system aboard Zvezda and a similar system in Destiny generate oxygen aboard the station. The crew has a backup option in the form of bottled oxygen and Solid Fuel Oxygen Generation (SFOG) canisters, a chemical oxygen generator system. Carbon dioxide is removed from the air by the Vozdukh system in Zvezda. Other by-products of human metabolism, such as methane from the intestines and ammonia from sweat, are removed by activated charcoal filters. Part of the ROS atmosphere control system is the oxygen supply. Triple-redundancy is provided by the Elektron unit, solid fuel generators, and stored oxygen. The primary supply of oxygen is the Elektron unit, which produces oxygen and hydrogen by the electrolysis of water and vents the hydrogen overboard. The system uses approximately one litre of water per crew member per day. This water is either brought from Earth or recycled from other systems. Mir was the first spacecraft to use recycled water for oxygen production. The secondary oxygen supply is provided by burning oxygen-producing Vika cartridges (see also ISS ECLSS). Each 'candle' takes 5–20 minutes to decompose, producing oxygen. This unit is manually operated. The US Orbital Segment has redundant supplies of oxygen, from a pressurised storage tank on the Quest airlock module delivered in 2001, supplemented ten years later by the ESA-built Advanced Closed-Loop System (ACLS) in the Tranquility module (Node 3), which produces oxygen by electrolysis. The hydrogen produced is combined with carbon dioxide from the cabin atmosphere and converted to water and methane. Power and thermal control Double-sided solar arrays provide electrical power to the ISS. 
These bifacial cells collect direct sunlight on one side and light reflected off the Earth on the other, and are more efficient and operate at a lower temperature than single-sided cells commonly used on Earth. The Russian segment of the station, like most spacecraft, uses 28 V low voltage DC from two rotating solar arrays mounted on Zvezda. The USOS uses 130–180 V DC from the USOS PV arrays; power is stabilised and distributed at 160 V DC and converted to the user-required 124 V DC. The higher distribution voltage allows smaller, lighter conductors, at the expense of crew safety. The two station segments share power with converters. The USOS solar arrays are arranged as four wing pairs, for a total production of 75 to 90 kilowatts. These arrays normally track the Sun to maximise power generation. Each array is about in area and long. In the complete configuration, the solar arrays track the Sun by rotating the alpha gimbal once per orbit; the beta gimbal follows slower changes in the angle of the Sun to the orbital plane. The Night Glider mode aligns the solar arrays parallel to the ground at night to reduce the significant aerodynamic drag at the station's relatively low orbital altitude. The station originally used rechargeable nickel–hydrogen batteries (NiH2) for continuous power during the 45 minutes of every 90-minute orbit that it is eclipsed by the Earth. The batteries are recharged on the day side of the orbit. They had a 6.5-year lifetime (over 37,000 charge/discharge cycles) and were regularly replaced over the anticipated 20-year life of the station. Starting in 2016, the nickel–hydrogen batteries were replaced by lithium-ion batteries, which are expected to last until the end of the ISS program. The station's large solar panels generate a high potential voltage difference between the station and the ionosphere. This could cause arcing through insulating surfaces and sputtering of conductive surfaces as ions are accelerated by the spacecraft plasma sheath. To mitigate this, plasma contactor units create current paths between the station and the ambient space plasma. The station's systems and experiments consume a large amount of electrical power, almost all of which is converted to heat. To keep the internal temperature within workable limits, a passive thermal control system (PTCS) is made of external surface materials, insulation such as MLI, and heat pipes. If the PTCS cannot keep up with the heat load, an External Active Thermal Control System (EATCS) maintains the temperature. The EATCS consists of an internal, non-toxic, water coolant loop used to cool and dehumidify the atmosphere, which transfers collected heat into an external liquid ammonia loop. From the heat exchangers, ammonia is pumped into external radiators that emit heat as infrared radiation, then back to the station. The EATCS provides cooling for all the US pressurised modules, including Kibō and Columbus, as well as the main power distribution electronics of the S0, S1 and P1 trusses. It can reject up to 70 kW. This is much more than the 14 kW of the Early External Active Thermal Control System (EEATCS) via the Early Ammonia Servicer (EAS), which was launched on STS-105 and installed onto the P6 Truss. Communications and computers Radio communications provide telemetry and scientific data links between the station and mission control centres. Radio links are also used during rendezvous and docking procedures and for audio and video communication between crew members, flight controllers and family members. 
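The eclipse energy balance described in the power paragraphs above can be illustrated with a rough back-of-the-envelope sketch. The figures for the 90-minute orbit, the 45-minute eclipse, the 75 to 90 kW array output and the 6.5-year battery lifetime come from this section; the average electrical load is an assumed, purely illustrative value (charge and conversion losses are ignored), so this is a sketch under stated assumptions rather than an official power budget.

```python
# Rough eclipse energy-balance sketch for the ISS power system.
# Orbit and eclipse durations and the quoted array output range come from the text above;
# the average station load is an assumed figure used only for illustration.

ORBIT_MIN = 90            # minutes per orbit (from the text)
ECLIPSE_MIN = 45          # minutes spent in Earth's shadow each orbit (from the text)
SUNLIT_MIN = ORBIT_MIN - ECLIPSE_MIN

assumed_load_kw = 40      # hypothetical average electrical load (illustrative only)

orbits_per_day = 24 * 60 / ORBIT_MIN                         # ~16 orbits per day
eclipse_energy_kwh = assumed_load_kw * ECLIPSE_MIN / 60      # energy the batteries must supply per eclipse
recharge_power_kw = eclipse_energy_kwh / (SUNLIT_MIN / 60)   # extra power needed to recharge during sunlight
required_generation_kw = assumed_load_kw + recharge_power_kw # total sunlit-side generation needed
cycles_6_5_years = orbits_per_day * 365 * 6.5                # one charge/discharge cycle per orbit

print(f"Battery energy per eclipse:           {eclipse_energy_kwh:.0f} kWh")
print(f"Sunlit-side generation needed:        {required_generation_kw:.0f} kW (arrays produce 75-90 kW)")
print(f"Charge/discharge cycles in 6.5 years: {cycles_6_5_years:,.0f}")
```

Under these assumptions the batteries supply about 30 kWh per eclipse, the arrays must generate roughly 80 kW on the sunlit side of each orbit to recharge them, which sits within the quoted 75 to 90 kW range, and the roughly 16 cycles per day accumulate to about 38,000 cycles in 6.5 years, consistent with the figure of over 37,000 charge/discharge cycles given above for the original nickel–hydrogen batteries.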
To serve these needs, the ISS is equipped with internal and external communication systems used for different purposes. The Russian Orbital Segment communicates directly with the ground via the Lira antenna mounted to Zvezda. The Lira antenna also has the capability to use the Luch data relay satellite system. This system fell into disrepair during the 1990s, and so was not used during the early years of the ISS, although two new Luch satellites, Luch-5A and Luch-5B, were launched in 2011 and 2012 respectively to restore the operational capability of the system. Another Russian communications system is the Voskhod-M, which enables internal telephone communications between Zvezda, Zarya, Pirs, Poisk, and the USOS and provides a VHF radio link to ground control centres via antennas on Zvezda's exterior. The US Orbital Segment (USOS) makes use of two separate radio links: S band (audio, telemetry, commanding – located on the P1/S1 truss) and Ku band (audio, video and data – located on the Z1 truss) systems. These transmissions are routed via the United States Tracking and Data Relay Satellite System (TDRSS) in geostationary orbit, allowing for almost continuous real-time communications with Christopher C. Kraft Jr. Mission Control Center (MCC-H) in Houston. Data channels for the Canadarm2, European Columbus laboratory and Japanese Kibō modules were originally also routed via the S band and Ku band systems, with the European Data Relay System and a similar Japanese system intended to eventually complement the TDRSS in this role. Communications between modules are carried on an internal wireless network. UHF radio is used by astronauts and cosmonauts conducting EVAs, and by other spacecraft that dock to or undock from the station. Automated spacecraft are fitted with their own communications equipment; the ATV uses a laser attached to the spacecraft and the Proximity Communications Equipment attached to Zvezda to accurately dock with the station. The ISS is equipped with about 100 IBM/Lenovo ThinkPad and HP ZBook 15 laptop computers. The laptops have run Windows 95, Windows 2000, Windows XP, Windows 7, Windows 10 and Linux operating systems. Each computer is a commercial off-the-shelf purchase which is then modified for safety and operation, including updates to connectors, cooling and power to accommodate the station's 28V DC power system and weightless environment. Heat generated by the laptops does not rise but stagnates around the laptop, so additional forced ventilation is required. Portable Computer System (PCS) laptops connect to the Primary Command & Control computer (C&C MDM) as remote terminals via a USB to 1553 adapter. Station Support Computer (SSC) laptops aboard the ISS are connected to the station's wireless LAN via Wi-Fi and Ethernet, which connects to the ground via Ku band. While originally this provided speeds of 10 Mbit/s download and 3 Mbit/s upload from the station, NASA upgraded the system in late August 2019 and increased the speeds to 600 Mbit/s. Laptop hard drives occasionally fail and must be replaced. Other computer hardware failures include instances in 2001, 2007 and 2017; some of these failures have required EVAs to replace computer modules in externally mounted devices. The operating system used for key station functions is the Debian Linux distribution. The migration from Microsoft Windows to Linux was made in May 2013 for reasons of reliability, stability and flexibility. In 2017, an SG100 Cloud Computer was launched to the ISS as part of the OA-7 mission. 
It was manufactured by NCSIST of Taiwan and designed in collaboration with Academia Sinica and National Central University under contract for NASA. Operations Expeditions Each permanent crew is given an expedition number. Expeditions run up to six months, from launch until undocking; an 'increment' covers the same time period but also includes cargo spacecraft and all activities. Expeditions 1 to 6 consisted of three-person crews. Expeditions 7 to 12 were reduced to the safe minimum of two following the destruction of the NASA Shuttle Columbia. From Expedition 13 onwards, the crew gradually increased to six by around 2010. With the arrival of crew on US commercial vehicles beginning in 2020, NASA has indicated that expedition size may be increased to seven crew members, the number the ISS was originally designed for. Gennady Padalka, member of Expeditions 9, 19/20, 31/32, and 43/44, and Commander of Expedition 11, has spent more time in space than anyone else, a total of 878 days, 11 hours, and 29 minutes. Peggy Whitson has spent the most time in space of any American, totalling 665 days, 22 hours, and 22 minutes during her time on Expeditions 5, 16, and 50/51/52. Private flights Travellers who pay for their own passage into space are termed spaceflight participants by Roscosmos and NASA, and are sometimes referred to as "space tourists", a term they generally dislike. Seven space tourists have visited the ISS; all seven were transported to the ISS on Russian Soyuz spacecraft. When professional crews change over in numbers not divisible by the three seats in a Soyuz, and a short-stay crewmember is not sent, the spare seat is sold by MirCorp through Space Adventures. Space tourism was halted in 2011 when the Space Shuttle was retired and the station's crew size was reduced to six, as the partners relied on Russian transport seats for access to the station. Soyuz flight schedules increased after 2013, allowing five Soyuz flights (15 seats) with only two expeditions (12 seats) required. The remaining seats were to be sold for around US$40 million to members of the public who could pass a medical exam. ESA and NASA criticised private spaceflight at the beginning of the ISS, and NASA initially resisted training Dennis Tito, the first person to pay for his own passage to the ISS. Anousheh Ansari became the first self-funded woman to fly to the ISS as well as the first Iranian in space. Officials reported that her education and experience made her much more than a tourist, and her performance in training had been "excellent." She did Russian and European studies involving medicine and microbiology during her 10-day stay. The 2009 documentary Space Tourists follows her journey to the station, where she fulfilled "an age-old dream of man: to leave our planet as a 'normal person' and travel into outer space." In 2008, spaceflight participant Richard Garriott placed a geocache aboard the ISS during his flight. This is currently the only non-terrestrial geocache in existence. At the same time, the Immortality Drive, an electronic record of eight digitised human DNA sequences, was placed aboard the ISS. Fleet operations A wide variety of crewed and uncrewed spacecraft have supported the station's activities. Flights to the ISS include 37 Space Shuttle missions, 75 Progress resupply spacecraft (including the modified M-MIM2 and M-SO1 module transports), 59 crewed Soyuz spacecraft, 5 European ATVs, 9 Japanese HTVs, 22 SpaceX Dragon and 16 Cygnus missions. 
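The per-vehicle flight counts quoted above can be totalled directly; the short sketch below simply sums the figures given in this section to show the overall traffic and each vehicle family's share. It uses only the counts quoted in the text, which reflect the snapshot described here rather than an updated manifest.

```python
# Tally of visiting-spacecraft flights to the ISS, using the counts quoted above.
flights = {
    "Space Shuttle": 37,
    "Progress": 75,
    "Soyuz (crewed)": 59,
    "ATV": 5,
    "HTV": 9,
    "SpaceX Dragon": 22,
    "Cygnus": 16,
}

total = sum(flights.values())                 # 223 flights in total
print(f"Total flights listed: {total}")
for vehicle, count in sorted(flights.items(), key=lambda item: item[1], reverse=True):
    print(f"  {vehicle:<15} {count:>3}  ({count / total:.1%})")
```

Of the 223 flights listed, Progress resupply missions account for roughly a third of all visits, with crewed Soyuz flights the next largest share.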
There are currently twelve available docking ports for visiting spacecraft: Harmony forward (with PMA-2 / IDA-2), Harmony zenith (with PMA-3 / IDA-3), Harmony nadir, Unity nadir, Prichal nadir, Prichal aft, Prichal forward, Prichal starboard, Prichal port, Nauka forward, Poisk zenith, Rassvet nadir, and Zvezda aft. Crewed A total of 251 people from 19 countries had visited the space station, many of them multiple times. The United States sent 155 people, Russia sent 52, 11 were Japanese, eight were Canadian, five were Italian, four were French, four were German, and there was one each from Belgium, Brazil, Denmark, Great Britain, Kazakhstan, Malaysia, the Netherlands, South Africa, South Korea, Spain, Sweden and the United Arab Emirates. Uncrewed Uncrewed spaceflights to the ISS are made primarily to deliver cargo; however, several Russian modules have also docked to the outpost following uncrewed launches. Resupply missions typically use the Russian Progress spacecraft, European ATVs, Japanese Kounotori vehicles, and the American Dragon and Cygnus spacecraft. The primary docking system for Progress spacecraft is the automated Kurs system, with the manual TORU system as a backup. ATVs also use Kurs; however, they are not equipped with TORU. Progress and ATV can remain docked for up to six months. The other spacecraft (the Japanese HTV, the SpaceX Dragon under CRS phase 1, and the Northrop Grumman Cygnus) rendezvous with the station before being grappled using Canadarm2 and berthed at the nadir port of the Harmony or Unity module for one to two months. Under CRS phase 2, Cargo Dragon docks autonomously at IDA-2 or IDA-3. Progress spacecraft have flown most of the uncrewed missions to the ISS. Forward ports are at the front of the station according to its normal direction of travel and orientation (attitude). Aft is at the rear of the station, used by spacecraft boosting the station's orbit. Nadir is closest to the Earth; zenith is on top. Port is to the left if pointing one's feet towards the Earth and looking in the direction of travel; starboard is to the right. Docking All Russian spacecraft and self-propelled modules are able to rendezvous and dock to the space station without human intervention using the Kurs radar docking system from over 200 kilometres away. The European ATV uses star sensors and GPS to determine its intercept course. When it catches up, it uses laser equipment to optically recognise Zvezda, along with the Kurs system for redundancy. Crew supervise these craft, but do not intervene except to send abort commands in emergencies. Progress and ATV supply craft can remain at the ISS for six months, allowing great flexibility in crew time for loading and unloading of supplies and trash. From the initial station programs, the Russians pursued an automated docking methodology that used the crew in override or monitoring roles. Although the initial development costs were high, the system has become very reliable with standardisations that provide significant cost benefits in repetitive operations. Soyuz spacecraft used for crew rotation also serve as lifeboats for emergency evacuation; they are replaced every six months and were used after the Columbia disaster to return stranded crew from the ISS. The average expedition requires of supplies, and by 9 March 2011, crews had consumed a total of around . 
Soyuz crew rotation flights and Progress resupply flights visit the station on average two and three times respectively each year. Other vehicles berth instead of docking. The Japanese H-II Transfer Vehicle parked itself in progressively closer orbits to the station, and then awaited 'approach' commands from the crew, until it was close enough for a robotic arm to grapple and berth the vehicle to the USOS. Berthed craft can transfer International Standard Payload Racks. Japanese spacecraft berth for one to two months. The berthing Cygnus and SpaceX Dragon were contracted to fly cargo to the station under phase 1 of the Commercial Resupply Services program. From 26 February 2011 to 7 March 2011 four of the governmental partners (United States, ESA, Japan and Russia) had their spacecraft (NASA Shuttle, ATV, HTV, Progress and Soyuz) docked at the ISS, the only time this has happened to date. On 25 May 2012, SpaceX delivered the first commercial cargo with a Dragon spacecraft. Launch and docking windows Prior to a spacecraft's docking to the ISS, navigation and attitude control (GNC) is handed over to the ground control of the spacecraft's country of origin. GNC is set to allow the station to drift in space, rather than fire its thrusters or turn using gyroscopes. The solar panels of the station are turned edge-on to the incoming spacecraft, so residue from its thrusters does not damage the cells. Before its retirement, Shuttle launches were often given priority over Soyuz, with occasional priority given to Soyuz arrivals carrying crew and time-critical cargoes, such as biological experiment materials. Repairs Orbital Replacement Units (ORUs) are spare parts that can be readily replaced when a unit either passes its design life or fails. Examples of ORUs are pumps, storage tanks, controller boxes, antennas, and battery units. Some units can be replaced using robotic arms. Most are stored outside the station, either on small pallets called ExPRESS Logistics Carriers (ELCs) or share larger platforms called External Stowage Platforms which also hold science experiments. Both kinds of pallets provide electricity for many parts that could be damaged by the cold of space and require heating. The larger logistics carriers also have local area network (LAN) connections for telemetry to connect experiments. A heavy emphasis on stocking the USOS with ORU's occurred around 2011, before the end of the NASA shuttle programme, as its commercial replacements, Cygnus and Dragon, carry one tenth to one quarter the payload. Unexpected problems and failures have impacted the station's assembly time-line and work schedules leading to periods of reduced capabilities and, in some cases, could have forced abandonment of the station for safety reasons. Serious problems include an air leak from the USOS in 2004, the venting of fumes from an Elektron oxygen generator in 2006, and the failure of the computers in the ROS in 2007 during STS-117 that left the station without thruster, Elektron, Vozdukh and other environmental control system operations. In the latter case, the root cause was found to be condensation inside electrical connectors leading to a short circuit. During STS-120 in 2007 and following the relocation of the P6 truss and solar arrays, it was noted during unfurling that the solar array had torn and was not deploying properly. An EVA was carried out by Scott Parazynski, assisted by Douglas Wheelock. 
Extra precautions were taken to reduce the risk of electric shock, as the repairs were carried out with the solar array exposed to sunlight. The issues with the array were followed in the same year by problems with the starboard Solar Alpha Rotary Joint (SARJ), which rotates the arrays on the starboard side of the station. Excessive vibration and high-current spikes in the array drive motor were noted, resulting in a decision to substantially curtail motion of the starboard SARJ until the cause was understood. Inspections during EVAs on STS-120 and STS-123 showed extensive contamination from metallic shavings and debris in the large drive gear and confirmed damage to the large metallic bearing surfaces, so the joint was locked to prevent further damage. Repairs to the joints were carried out during STS-126 with lubrication and the replacement of 11 out of 12 trundle bearings on the joint. In September 2008, damage to the S1 radiator was first noticed in Soyuz imagery. The problem was initially not thought to be serious. The imagery showed that the surface of one sub-panel has peeled back from the underlying central structure, possibly because of micro-meteoroid or debris impact. On 15 May 2009 the damaged radiator panel's ammonia tubing was mechanically shut off from the rest of the cooling system by the computer-controlled closure of a valve. The same valve was then used to vent the ammonia from the damaged panel, eliminating the possibility of an ammonia leak. It is also known that a Service Module thruster cover struck the S1 radiator after being jettisoned during an EVA in 2008, but its effect, if any, has not been determined. In the early hours of 1 August 2010, a failure in cooling Loop A (starboard side), one of two external cooling loops, left the station with only half of its normal cooling capacity and zero redundancy in some systems. The problem appeared to be in the ammonia pump module that circulates the ammonia cooling fluid. Several subsystems, including two of the four CMGs, were shut down. Planned operations on the ISS were interrupted through a series of EVAs to address the cooling system issue. A first EVA on 7 August 2010, to replace the failed pump module, was not fully completed because of an ammonia leak in one of four quick-disconnects. A second EVA on 11 August successfully removed the failed pump module. A third EVA was required to restore Loop A to normal functionality. The USOS's cooling system is largely built by the US company Boeing, which is also the manufacturer of the failed pump. The four Main Bus Switching Units (MBSUs, located in the S0 truss), control the routing of power from the four solar array wings to the rest of the ISS. Each MBSU has two power channels that feed 160V DC from the arrays to two DC-to-DC power converters (DDCUs) that supply the 124V power used in the station. In late 2011 MBSU-1 ceased responding to commands or sending data confirming its health. While still routing power correctly, it was scheduled to be swapped out at the next available EVA. A spare MBSU was already on board, but a 30 August 2012 EVA failed to be completed when a bolt being tightened to finish installation of the spare unit jammed before the electrical connection was secured. The loss of MBSU-1 limited the station to 75% of its normal power capacity, requiring minor limitations in normal operations until the problem could be addressed. 
On 5 September 2012, in a second six-hour EVA, astronauts Sunita Williams and Akihiko Hoshide successfully replaced MBSU-1 and restored the ISS to 100% power. On 24 December 2013, astronauts installed a new ammonia pump for the station's cooling system. The faulty cooling system had failed earlier in the month, halting many of the station's science experiments. Astronauts had to brave a "mini blizzard" of ammonia while installing the new pump. It was only the second Christmas Eve spacewalk in NASA history. Mission control centres The components of the ISS are operated and monitored by their respective space agencies at mission control centres across the globe, including RKA Mission Control Center, ATV Control Centre, JEM Control Center and HTV Control Center at Tsukuba Space Center, Christopher C. Kraft Jr. Mission Control Center, Payload Operations and Integration Center, Columbus Control Center and Mobile Servicing System Control. Life aboard Crew activities A typical day for the crew begins with a wake-up at 06:00, followed by post-sleep activities and a morning inspection of the station. The crew then eats breakfast and takes part in a daily planning conference with Mission Control before starting work at around 08:10. The first scheduled exercise of the day follows, after which the crew continues work until 13:05. Following a one-hour lunch break, the afternoon consists of more exercise and work before the crew carries out its pre-sleep activities beginning at 19:30, including dinner and a crew conference. The scheduled sleep period begins at 21:30. In general, the crew works ten hours per day on a weekday, and five hours on Saturdays, with the rest of the time their own for relaxation or work catch-up. The time zone used aboard the ISS is Coordinated Universal Time (UTC). The windows are covered during night hours to give the impression of darkness because the station experiences 16 sunrises and sunsets per day. During visiting Space Shuttle missions, the ISS crew mostly followed the shuttle's Mission Elapsed Time (MET), which was a flexible time zone based on the launch time of the Space Shuttle mission. The station provides crew quarters for each member of the expedition's crew, with two "sleep stations" in the Zvezda, one in Nauka and four more installed in Harmony. The USOS quarters are private, approximately person-sized soundproof booths. The ROS crew quarters in Zvezda include a small window, but provide less ventilation and sound proofing. A crew member can sleep in a crew quarter in a tethered sleeping bag, listen to music, use a laptop, and store personal items in a large drawer or in nets attached to the module's walls. The module also provides a reading lamp, a shelf and a desktop. Visiting crews have no allocated sleep module, and attach a sleeping bag to an available space on a wall. It is possible to sleep floating freely through the station, but this is generally avoided because of the possibility of bumping into sensitive equipment. It is important that crew accommodations be well ventilated; otherwise, astronauts can wake up oxygen-deprived and gasping for air, because a bubble of their own exhaled carbon dioxide has formed around their heads. During various station activities and crew rest times, the lights in the ISS can be dimmed, switched off, and colour temperatures adjusted. Food and personal hygiene On the USOS, most of the food aboard is vacuum sealed in plastic bags; cans are rare because they are heavy and expensive to transport. 
Preserved food is not highly regarded by the crew and taste is reduced in microgravity, so efforts are taken to make the food more palatable, including using more spices than in regular cooking. The crew looks forward to the arrival of any spacecraft from Earth as they bring fresh fruit and vegetables. Care is taken that foods do not create crumbs, and liquid condiments are preferred over solid to avoid contaminating station equipment. Each crew member has individual food packages and cooks them using the on-board galley. The galley features two food warmers, a refrigerator (added in November 2008), and a water dispenser that provides both heated and unheated water. Drinks are provided as dehydrated powder that is mixed with water before consumption. Drinks and soups are sipped from plastic bags with straws, while solid food is eaten with a knife and fork attached to a tray with magnets to prevent them from floating away. Any food that floats away, including crumbs, must be collected to prevent it from clogging the station's air filters and other equipment. Showers on space stations were introduced in the early 1970s on Skylab and Salyut 3. By Salyut 6, in the early 1980s, the crew complained of the complexity of showering in space, which was a monthly activity. The ISS does not feature a shower; instead, crewmembers wash using a water jet and wet wipes, with soap dispensed from a toothpaste tube-like container. Crews are also provided with rinseless shampoo and edible toothpaste to save water. There are two space toilets on the ISS, both of Russian design, located in Zvezda and Tranquility. These Waste and Hygiene Compartments use a fan-driven suction system similar to the Space Shuttle Waste Collection System. Astronauts first fasten themselves to the toilet seat, which is equipped with spring-loaded restraining bars to ensure a good seal. A lever operates a powerful fan and a suction hole slides open: the air stream carries the waste away. Solid waste is collected in individual bags which are stored in an aluminium container. Full containers are transferred to Progress spacecraft for disposal. Liquid waste is evacuated by a hose connected to the front of the toilet, with anatomically correct "urine funnel adapters" attached to the tube so that men and women can use the same toilet. The diverted urine is collected and transferred to the Water Recovery System, where it is recycled into drinking water. In 2021, the arrival of the Nauka module also brought a third toilet to the ISS. Crew health and safety Overall On 12 April 2019, NASA reported medical results from the Astronaut Twin Study. Astronaut Scott Kelly spent a year in space on the ISS, while his twin spent the year on Earth. Several long-lasting changes were observed, including those related to alterations in DNA and cognition, when one twin was compared with the other. In November 2019, researchers reported that astronauts experienced serious blood flow and clot problems while on board the ISS, based on a six-month study of 11 healthy astronauts. The results may influence long-term spaceflight, including a mission to the planet Mars, according to the researchers. Radiation The ISS is partially protected from the space environment by Earth's magnetic field. From an average distance of about from the Earth's surface, depending on Solar activity, the magnetosphere begins to deflect solar wind around Earth and the space station. Solar flares are still a hazard to the crew, who may receive only a few minutes warning. 
In 2005, during the initial "proton storm" of an X-3 class solar flare, the crew of Expedition 10 took shelter in a more heavily shielded part of the ROS designed for this purpose. Subatomic charged particles, primarily protons from cosmic rays and solar wind, are normally absorbed by Earth's atmosphere. When they interact in sufficient quantity, their effect is visible to the naked eye in a phenomenon called an aurora. Outside Earth's atmosphere, ISS crews are exposed to approximately one millisievert each day (about a year's worth of natural exposure on Earth), resulting in a higher risk of cancer. Radiation can penetrate living tissue and damage the DNA and chromosomes of lymphocytes; being central to the immune system, any damage to these cells could contribute to the lower immunity experienced by astronauts. Radiation has also been linked to a higher incidence of cataracts in astronauts. Protective shielding and medications may lower the risks to an acceptable level. Radiation levels on the ISS are about five times greater than those experienced by airline passengers and crew, as Earth's electromagnetic field provides almost the same level of protection against solar and other types of radiation in low Earth orbit as in the stratosphere. For example, on a 12-hour flight, an airline passenger would experience 0.1 millisieverts of radiation, or a rate of 0.2 millisieverts per day; this is only one fifth the rate experienced by an astronaut in LEO. Additionally, airline passengers experience this level of radiation for a few hours of flight, while the ISS crew are exposed for their whole stay on board the station. Stress There is considerable evidence that psychosocial stressors are among the most important impediments to optimal crew morale and performance. Cosmonaut Valery Ryumin wrote in his journal during a particularly difficult period on board the Salyut 6 space station: "All the conditions necessary for murder are met if you shut two men in a cabin measuring 18 feet by 20 [5.5 m × 6 m] and leave them together for two months." NASA's interest in psychological stress caused by space travel, initially studied when their crewed missions began, was rekindled when astronauts joined cosmonauts on the Russian space station Mir. Common sources of stress in early US missions included maintaining high performance under public scrutiny and isolation from peers and family. The latter is still often a cause of stress on the ISS, such as when the mother of NASA astronaut Daniel Tani died in a car accident, and when Michael Fincke was forced to miss the birth of his second child. A study of the longest spaceflight concluded that the first three weeks are a critical period where attention is adversely affected because of the demand to adjust to the extreme change of environment. ISS crew flights typically last about five to six months. The ISS working environment includes further stress caused by living and working in cramped conditions with people from very different cultures who speak a different language. First-generation space stations had crews who spoke a single language; second- and third-generation stations have crew from many cultures who speak many languages. Astronauts must speak English and Russian, and knowing additional languages is even better. Due to the lack of gravity, confusion often occurs. Even though there is no up and down in space, some crew members feel like they are oriented upside down. They may also have difficulty measuring distances. 
This can cause problems like getting lost inside the space station, pulling switches in the wrong direction or misjudging the speed of an approaching vehicle during docking. Medical The physiological effects of long-term weightlessness include muscle atrophy, deterioration of the skeleton (osteopenia), fluid redistribution, a slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, and puffiness of the face. Sleep is regularly disturbed on the ISS because of mission demands, such as incoming or departing spacecraft. Sound levels in the station are unavoidably high. The atmosphere is unable to thermosiphon naturally, so fans are required at all times to process the air which would stagnate in the freefall (zero-G) environment. To prevent some of the adverse effects on the body, the station is equipped with: two TVIS treadmills (including the COLBERT); the ARED (Advanced Resistive Exercise Device), which enables various weightlifting exercises that add muscle without raising (or compensating for) the astronauts' reduced bone density; and a stationary bicycle. Each astronaut spends at least two hours per day exercising on the equipment. Astronauts use bungee cords to strap themselves to the treadmill. Microbiological environmental hazards Hazardous moulds that can foul air and water filters may develop aboard space stations. They can produce acids that degrade metal, glass, and rubber. They can also be harmful to the crew's health. Microbiological hazards have led to a development of the LOCAD-PTS which identifies common bacteria and moulds faster than standard methods of culturing, which may require a sample to be sent back to Earth. Researchers in 2018 reported, after detecting the presence of five Enterobacter bugandensis bacterial strains on the ISS (none of which are pathogenic to humans), that microorganisms on the ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts. Contamination on space stations can be prevented by reduced humidity, and by using paint that contains mould-killing chemicals, as well as the use of antiseptic solutions. All materials used in the ISS are tested for resistance against fungi. In April 2019, NASA reported that a comprehensive study had been conducted into the microorganisms and fungi present on the ISS. The results may be useful in improving the health and safety conditions for astronauts. Noise Space flight is not inherently quiet, with noise levels exceeding acoustic standards as far back as the Apollo missions. For this reason, NASA and the International Space Station international partners have developed noise control and hearing loss prevention goals as part of the health program for crew members. Specifically, these goals have been the primary focus of the ISS Multilateral Medical Operations Panel (MMOP) Acoustics Subgroup since the first days of ISS assembly and operations. The effort includes contributions from acoustical engineers, audiologists, industrial hygienists, and physicians who comprise the subgroup's membership from NASA, Roscosmos, the European Space Agency (ESA), the Japanese Aerospace Exploration Agency (JAXA), and the Canadian Space Agency (CSA). 
When compared to terrestrial environments, the noise levels incurred by astronauts and cosmonauts on the ISS may seem insignificant and typically occur at levels that would not be of major concern to the Occupational Safety and Health Administration – rarely reaching 85 dBA. But crew members are exposed to these levels 24 hours a day, seven days a week, with current missions averaging six months in duration. These noise levels also pose risks to crew health and performance in the form of sleep interference, impaired communication and reduced alarm audibility. Over the ISS's 19-plus-year history, significant efforts have been made to limit and reduce its noise levels. During design and pre-flight activities, members of the Acoustic Subgroup have written acoustic limits and verification requirements, consulted on the design and selection of the quietest available payloads, and then conducted acoustic verification tests prior to launch. During spaceflights, the Acoustics Subgroup has assessed each ISS module's in-flight sound levels, produced by a large number of vehicle and science experiment noise sources, to assure compliance with strict acoustic standards. The acoustic environment on the ISS changed as additional modules were added during its construction, and as additional spacecraft arrive at the ISS. The Acoustics Subgroup has responded to this dynamic operations schedule by successfully designing and employing acoustic covers, absorptive materials, noise barriers, and vibration isolators to reduce noise levels. Moreover, when pumps, fans, and ventilation systems age and show increased noise levels, the Acoustics Subgroup has guided ISS managers to replace the older, noisier instruments with quiet fan and pump technologies, significantly reducing ambient noise levels. NASA has adopted the most conservative damage-risk criteria (based on recommendations from the National Institute for Occupational Safety and Health and the World Health Organization) in order to protect all crew members. The MMOP Acoustics Subgroup has adjusted its approach to managing noise risks in this unique environment by applying, or modifying, terrestrial approaches for hearing loss prevention to set these conservative limits. One innovative approach has been NASA's Noise Exposure Estimation Tool (NEET), in which noise exposures are calculated in a task-based approach to determine the need for hearing protection devices (HPDs). Guidance for the use of HPDs, whether mandatory or recommended, is then documented in the Noise Hazard Inventory and posted for crew reference during their missions. The Acoustics Subgroup also tracks spacecraft noise exceedances, applies engineering controls, and recommends hearing protective devices to reduce crew noise exposures. Finally, hearing thresholds are monitored on orbit during missions. There have been no persistent mission-related hearing threshold shifts among US Orbital Segment crewmembers (JAXA, CSA, ESA, NASA) during what is approaching 20 years of ISS mission operations, or nearly 175,000 work hours. In 2020, the MMOP Acoustics Subgroup received the Safe-In-Sound Award for Innovation for its combined efforts to mitigate the health effects of noise. Fire and toxic gases An onboard fire or a toxic gas leak is another potential hazard. Ammonia is used in the external radiators of the station and could potentially leak into the pressurised modules. 
Orbit Altitude and orbital inclination The ISS is currently maintained in a nearly circular orbit with a minimum mean altitude of and a maximum of , in the centre of the thermosphere, at an inclination of 51.6 degrees to Earth's equator with an eccentricity of 0.007. This orbit was selected because it is the lowest inclination that can be directly reached by Russian Soyuz and Progress spacecraft launched from Baikonur Cosmodrome at 46° N latitude without overflying China or dropping spent rocket stages in inhabited areas. It travels at an average speed of , and completes orbits per day (93 minutes per orbit). The station's altitude was allowed to fall around the time of each NASA shuttle flight to permit heavier loads to be transferred to the station. After the retirement of the shuttle, the nominal orbit of the space station was raised in altitude (from about 350 km to about 400 km). Other, more frequent supply spacecraft do not require this adjustment as they are substantially higher-performance vehicles. Atmospheric drag reduces the altitude by about 2 km a month on average. Orbital boosting can be performed by the station's two main engines on the Zvezda service module, or by Russian or European spacecraft docked to Zvezda's aft port. The Automated Transfer Vehicle was constructed with the possibility of adding a second docking port to its aft end, allowing other craft to dock and boost the station. It takes approximately two orbits (three hours) for a boost to a higher altitude to be completed. Maintaining ISS altitude uses about 7.5 tonnes of chemical fuel per annum at an annual cost of about $210 million. The Russian Orbital Segment contains the Data Management System, which handles Guidance, Navigation and Control (ROS GNC) for the entire station. Initially Zarya, the station's first module, controlled the station until a short time after the Russian service module Zvezda docked, at which point control was transferred to Zvezda. Zvezda contains the ESA-built DMS-R Data Management System. Using two fault-tolerant computers (FTCs), Zvezda computes the station's position and orbital trajectory using redundant Earth horizon sensors, solar horizon sensors, and Sun and star trackers. The FTCs each contain three identical processing units working in parallel and provide advanced fault-masking by majority voting. Orientation Zvezda uses gyroscopes (reaction wheels) and thrusters to turn itself around. Gyroscopes do not require propellant; instead they use electricity to 'store' momentum in flywheels by turning in the opposite direction to the station's movement. The USOS has its own computer-controlled gyroscopes to handle its extra mass. When gyroscopes 'saturate', thrusters are used to cancel out the stored momentum. In February 2005, during Expedition 10, an incorrect command was sent to the station's computer, using about 14 kilograms of propellant before the fault was noticed and fixed. When attitude control computers in the ROS and USOS fail to communicate properly, this can result in a rare 'force fight' where the ROS GNC computer must ignore the USOS counterpart, which itself has no thrusters. Docked spacecraft can also be used to maintain station attitude, such as for troubleshooting or during the installation of the S3/S4 truss, which provides electrical power and data interfaces for the station's electronics. 
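To put the reboost and drag figures above in perspective, the altitude gained from a small prograde burn on a near-circular orbit can be estimated from the orbit's energy. The following Python sketch is a rough back-of-the-envelope illustration rather than an operational tool; the 400 km altitude, the standard gravitational parameter and the 1 m/s burn are assumed round numbers, not values taken from ISS flight data.

# Rough estimate of how much a small prograde burn raises a near-circular orbit.
# Assumptions: two-body dynamics, a circular orbit near 400 km, standard Earth constants.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def altitude_gain(altitude_m, delta_v):
    """Approximate change in orbital radius (metres) for a small tangential burn."""
    a = R_EARTH + altitude_m          # semi-major axis of the circular orbit
    v = math.sqrt(MU_EARTH / a)       # circular orbital speed
    # From the orbital energy E = -mu/(2a): dE = v*dv, so da = 2*a^2*v*dv/mu
    return 2.0 * a**2 * v * delta_v / MU_EARTH

gain = altitude_gain(400_000.0, 1.0)
print(f"A 1 m/s prograde burn at 400 km raises the orbit by about {gain / 1000:.1f} km")
# With drag removing roughly 2 km of altitude per month, offsetting it takes
# on the order of this much delta-v each month:
print(f"Offsetting ~2 km/month of decay needs about {2000.0 / gain:.1f} m/s per month")

The result, roughly 1.8 km of altitude per 1 m/s of delta-v, is consistent with the figure quoted below for debris-avoidance manoeuvres, where a burn of the order of 1 m/s raises the orbit by one to two kilometres.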
Orbital debris threats The low altitudes at which the ISS orbits are also home to a variety of space debris, including spent rocket stages, defunct satellites, explosion fragments (including materials from anti-satellite weapon tests), paint flakes, slag from solid rocket motors, and coolant released by US-A nuclear-powered satellites. These objects, in addition to natural micrometeoroids, are a significant threat. Objects large enough to destroy the station can be tracked, and are not as dangerous as smaller debris. Objects too small to be detected by optical and radar instruments, from approximately 1 cm down to microscopic size, number in the trillions. Despite their small size, some of these objects are a threat because of their kinetic energy and direction in relation to the station. Spacewalking crew in spacesuits are also at risk of suit damage and consequent exposure to vacuum. Ballistic panels, also called micrometeorite shielding, are incorporated into the station to protect pressurised sections and critical systems. The type and thickness of these panels depend on their predicted exposure to damage. The station's shields and structure have different designs on the ROS and the USOS. On the USOS, Whipple shields are used. The US segment modules consist of an inner layer made from aluminium, intermediate layers of Kevlar and Nextel (a ceramic fabric), and an outer layer of stainless steel, which causes objects to shatter into a cloud before hitting the hull, thereby spreading the energy of impact. On the ROS, a carbon-fibre-reinforced polymer honeycomb screen is spaced from the hull, an aluminium honeycomb screen is spaced from that, with a screen-vacuum thermal insulation covering, and glass cloth over the top. Space debris is tracked remotely from the ground, and the station crew can be notified. If necessary, thrusters on the Russian Orbital Segment can alter the station's orbital altitude, avoiding the debris. These Debris Avoidance Manoeuvres (DAMs) are not uncommon, taking place if computational models show the debris will approach within a certain threat distance. Ten DAMs had been performed by the end of 2009. Usually, an increase in orbital velocity of the order of 1 m/s is used to raise the orbit by one or two kilometres. If necessary, the altitude can also be lowered, although such a manoeuvre wastes propellant. If a threat from orbital debris is identified too late for a DAM to be safely conducted, the station crew close all the hatches aboard the station and retreat into their Soyuz spacecraft in order to be able to evacuate in the event that the station is seriously damaged by the debris. This partial station evacuation has occurred four times: on 13 March 2009, 28 June 2011, 24 March 2012 and 16 June 2015. In November 2021, a debris cloud from the destruction of Kosmos 1408 by an anti-satellite weapons test threatened the ISS, prompting a yellow alert and leading the crew to shelter in their crew capsules. A couple of weeks later, the station had to perform an unscheduled manoeuvre that lowered it by 310 metres to avoid a collision with hazardous space debris. Sightings from Earth Naked-eye visibility The ISS is visible to the naked eye as a slow-moving, bright white dot because of reflected sunlight, and can be seen in the hours after sunset and before sunrise, when the station remains sunlit but the ground and sky are dark. 
The ISS takes about 10 minutes to pass from one horizon to another, and is only visible for part of that time because it moves into or out of the Earth's shadow. Because of the size of its reflective surface area, the ISS is the brightest artificial object in the sky (excluding other satellite flares), with an approximate maximum magnitude of −4 when overhead (similar to Venus). The ISS, like many satellites including the Iridium constellation, can also produce flares of up to 16 times the brightness of Venus as sunlight glints off reflective surfaces. The ISS is also visible in broad daylight, albeit with a great deal more difficulty. Tools are provided by a number of websites such as Heavens-Above (see Live viewing below) as well as smartphone applications that use orbital data and the observer's longitude and latitude to indicate when the ISS will be visible (weather permitting), where the station will appear to rise, the altitude above the horizon it will reach and the duration of the pass before the station disappears either by setting below the horizon or entering into Earth's shadow (a short scripted sketch of such a prediction appears below). In November 2012 NASA launched its "Spot the Station" service, which sends people text and email alerts when the station is due to fly above their town. The station is visible from 95% of the inhabited land on Earth, but is not visible from extreme northern or southern latitudes. Under specific conditions, the ISS can be observed at night on five consecutive orbits. Those conditions are: 1) a mid-latitude observer location; 2) a time near the solstice; and 3) the ISS passing in the direction of the pole from the observer near midnight local time. A sequence of three photographs taken on 5–6 June 2014 captured the first, middle and last of five such consecutive passes. Astrophotography Using a telescope-mounted camera to photograph the station is a popular hobby for astronomers, while using a mounted camera to photograph the Earth and stars is a popular hobby for crew. The use of a telescope or binoculars allows viewing of the ISS during daylight hours. Some amateur astronomers also use telescopic lenses to photograph the ISS while it transits the Sun, sometimes doing so during an eclipse (so that the Sun, Moon, and ISS are all positioned approximately in a single line). One example occurred during the 21 August 2017 solar eclipse, when images of the ISS were captured from a location in Wyoming during the eclipse. Similar images were captured by NASA from a location in Washington. Parisian engineer and astrophotographer Thierry Legault, known for his photos of spacecraft transiting the Sun, travelled to Oman in 2011 to photograph the Sun, Moon and space station all lined up. Legault, who received the Marius Jacquemetton award from the Société astronomique de France in 1999, and other hobbyists use websites that predict when the ISS will transit the Sun or Moon and from what location those passes will be visible. International co-operation Involving five space programs and fifteen countries, the International Space Station is the most politically and legally complex space exploration programme in history. The 1998 Space Station Intergovernmental Agreement sets forth the primary framework for international cooperation among the parties. A series of subsequent agreements govern other aspects of the station, ranging from jurisdictional issues to a code of conduct among visiting astronauts. 
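The pass predictions offered by sites such as Heavens-Above and the smartphone applications mentioned under Naked-eye visibility can be reproduced from published orbital elements. The sketch below is a minimal illustration using the third-party Skyfield library; the TLE source URL, the observer coordinates and the 10-degree elevation cut-off are illustrative assumptions rather than values taken from any particular service.

# Sketch of an ISS pass prediction from published orbital elements (TLEs).
# Requires the third-party Skyfield package and network access to fetch the TLE file.
from datetime import timedelta
from skyfield.api import load, wgs84

STATIONS_URL = "https://celestrak.org/NORAD/elements/stations.txt"  # assumed TLE source

def next_passes(latitude, longitude, hours=24.0):
    ts = load.timescale()
    satellites = load.tle_file(STATIONS_URL)
    iss = {sat.name: sat for sat in satellites}["ISS (ZARYA)"]
    observer = wgs84.latlon(latitude, longitude)
    t0 = ts.now()
    t1 = ts.utc(t0.utc_datetime() + timedelta(hours=hours))
    # Events: 0 = rises above 10 degrees, 1 = culminates, 2 = sets below 10 degrees.
    times, events = iss.find_events(observer, t0, t1, altitude_degrees=10.0)
    labels = ("rise", "culminate", "set")
    for t, event in zip(times, events):
        print(t.utc_strftime("%Y-%m-%d %H:%M:%S UTC"), labels[event])

# Example observer near Greenwich (assumed coordinates).
next_passes(51.48, 0.0)

A full predictor would additionally check that the station is sunlit while the observer's sky is dark, which is what restricts naked-eye passes to the hours around twilight.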
On 24 February 2022, following the Russian invasion of Ukraine, continued cooperation between Russia and other countries on the International Space Station was called into question. United Kingdom Prime Minister Boris Johnson commented on the current status of cooperation, saying "I have been broadly in favor of continuing artistic and scientific collaboration, but in the current circumstances it's hard to see how even those can continue as normal." On the same day, Roscosmos Director General Dmitry Rogozin insinuated that Russia could cause the International Space Station to de-orbit, writing in a series of tweets, "If you block co-operation with us, who will save the ISS from an uncontrolled deorbit and fall into the United States or Europe? There is also the option of dropping a 500-ton structure to India and China. Do you want to threaten them with such a prospect?" Participating countries (1997–2007) European Space Agency End of mission According to the Outer Space Treaty, the United States and Russia are legally responsible for all modules they have launched. Several possible disposal options were considered: natural orbital decay with random reentry (as with Skylab), boosting the station to a higher altitude (which would delay reentry), and a controlled targeted de-orbit to a remote ocean area. In late 2010, the preferred plan was to use a slightly modified Progress spacecraft to de-orbit the ISS. This plan was seen as the simplest and cheapest option. OPSEK was previously intended to be constructed of modules from the Russian Orbital Segment after the ISS is decommissioned. The modules under consideration for removal from the current ISS included the Multipurpose Laboratory Module (Nauka), launched in July 2021, and the other new Russian modules that are proposed to be attached to Nauka. These newly launched modules would still be well within their useful lives in 2024. At the end of 2011, the Exploration Gateway Platform concept also proposed using leftover USOS hardware and Zvezda 2 as a refuelling depot and service station located at one of the Earth-Moon Lagrange points. However, the entire USOS was not designed for disassembly and will be discarded. In February 2015, Roscosmos announced that it would remain a part of the ISS programme until 2024. Nine months earlier, in response to US sanctions against Russia over the annexation of Crimea, Russian Deputy Prime Minister Dmitry Rogozin had stated that Russia would reject a US request to prolong the orbiting station's use beyond 2020, and would only supply rocket engines to the US for non-military satellite launches. On 28 March 2015, Russian sources announced that Roscosmos and NASA had agreed to collaborate on the development of a replacement for the current ISS. Igor Komarov, the head of Russia's Roscosmos, made the announcement with NASA administrator Charles Bolden at his side. In a statement provided to SpaceNews on 28 March, NASA spokesman David Weaver said the agency appreciated the Russian commitment to extending the ISS, but did not confirm any plans for a future space station. On 30 September 2015, Boeing's contract with NASA as prime contractor for the ISS was extended to 30 September 2020. Part of Boeing's services under the contract relate to extending the station's primary structural hardware past 2020 to the end of 2028. There have also been suggestions that the station could be converted to commercial operations after it is retired by government entities. 
In July 2018, the Space Frontier Act of 2018 was intended to extend operations of the ISS to 2030. This bill was unanimously approved in the Senate, but failed to pass in the U.S. House. In September 2018, the Leading Human Spaceflight Act was introduced with the intent to extend operations of the ISS to 2030, and was confirmed in December 2018. In January 2022, NASA announced a planned date of January 2031 to de-orbit the ISS and direct any remnants into a remote area of the South Pacific Ocean. Cost The ISS has been described as the most expensive single item ever constructed. As of 2010, the total cost was US$150 billion. This includes NASA's budget of $58.7 billion ($89.73 billion in 2021 dollars) for the station from 1985 to 2015, Russia's $12 billion, Europe's $5 billion, Japan's $5 billion, Canada's $2 billion, and the cost of 36 shuttle flights to build the station, estimated at $1.4 billion each, or $50.4 billion in total. Assuming 20,000 person-days of use from 2000 to 2015 by two- to six-person crews, each person-day would cost $7.5 million, less than half the inflation-adjusted $19.6 million ($5.5 million before inflation) per person-day of Skylab. See also A Beautiful Planet – 2016 IMAX documentary film showing scenes of Earth, as well as astronaut life aboard the ISS Center for the Advancement of Science in Space – operates the US National Laboratory on the ISS List of commanders of the International Space Station List of space stations List of spacecraft deployed from the International Space Station Science diplomacy Space Station 3D – 2002 Canadian documentary Notes References Further reading O'Sullivan, John. European Missions to the International Space Station: 2013 to 2019 (Springer Nature, 2020). Ruttley, Tara M., Julie A. Robinson, and William H. Gerstenmaier. "The International Space Station: Collaboration, Utilization, and Commercialization." Social Science Quarterly 98.4 (2017): 1160–1174. online External links ISS Location Agency ISS websites  Canadian Space Agency  European Space Agency  Centre national d'études spatiales (National Centre for Space Studies)  German Aerospace Center  Italian Space Agency  Japan Aerospace Exploration Agency  S.P. Korolev Rocket and Space Corporation Energia  Russian Federal Space Agency  National Aeronautics and Space Administration Research NASA: Daily ISS Reports NASA: Station Science ESA: Columbus RSC Energia: Science Research on ISS Russian Segment Live viewing Live ISS webcam by NASA at uStream.tv Live HD ISS webcams by NASA HDEV at uStream.tv Sighting opportunities at NASA.gov Real-time position at Heavens-above.com Real-time tracking and position at uphere.space Multimedia Johnson Space Center image gallery at Flickr.com ISS tour with Sunita Williams by NASA at YouTube.com Journey to the ISS by ESA at YouTube.com The Future of Hope, Kibō module documentary by JAXA at YouTube.com Seán Doran's compiled videos of orbital photography from the ISS: Orbit – Remastered, Orbit: Uncut; The Four Seasons, Nocturne – Earth at Night, Earthbound, The Pearl (see Flickr album for more) Satellites in low Earth orbit Populated places established in 1998 Spacecraft launched in 1998 Articles containing video clips International science experiments Science diplomacy Canada–United States relations Japan–United States relations Russia–United States relations
6730079
https://en.wikipedia.org/wiki/Software%20Engineering%20Process%20Group
Software Engineering Process Group
A Software Engineering Process Group (SEPG) is an organization's focal point for software process improvement activities. These individuals perform assessments of organizational capability, develop plans to implement needed improvements, coordinate the implementation of those plans, and measure the effectiveness of these efforts. Successful SEPGs require specialized skills and knowledge of many areas outside traditional software engineering. Following are ongoing activities of the process group: Obtains and maintains the support of all levels of management. Facilitates software process assessments. Works with line managers whose projects are affected by changes in software engineering practice, providing a broad perspective of the improvement effort and helping them set expectations. Maintains collaborative working relationships with software engineers, especially to obtain, plan for, and install new practices and technologies. Arranges for any training or continuing education related to process improvements. Tracks, monitors, and reports on the status of particular improvement efforts. Facilitates the creation and maintenance of process definitions, in collaboration with managers and engineering staff. Maintains a process database. Provides process consultation to development projects and management. Sorts of SEPGs Every SEPG has a different approach and mission. Some of the flavors include: "Working" SEPGs that actually develop and deploy process as a type of internal consulting team. "Oversight" SEPGs that oversee the process architecture, approve it, manage changes, and prioritize it (sort of a process CCB) "Deliberative" SEPGs that debate the process approach and develop strategy for a process architecture and deployment "Virtual" SEPGs that are made up of representatives from throughout the organization that dedicate a certain amount of time to the effort and are responsible for deploying and training everyone else in the organization See also Capability Maturity Model Integration References External links Software Engineering Institute European SEPG Software development process Quality
51010708
https://en.wikipedia.org/wiki/List%20of%20genetic%20engineering%20software
List of genetic engineering software
This article provides a list of genetic engineering software. Cloud-based freemium software Varstation NGS variants processing and analysis tool BaseSpace Variant Interpreter by Illumina Closed-source software BlueTractorSoftware DNADynamo Agilent Technologies RFLP Decoder Software, Fish Species Applied Biosystems GeneMapper Joint BioEnergy Institute j5 CLC bio CLC DNA Workbench Software CLC bio CLC Free Workbench Software CLC bio CLC Sequence Viewer CLC bio Protein Workbench Software DNASTAR Lasergene Geneious LabVantage Solutions Inc. LabVantage Sapphire LabVantage Solutions Inc. LV LIMS SnapGene The GeneRecommender Open-source software Autodesk Genetic Constructor (suspended) BIOFAB Clotho BIOFAB Edition BIOFAB BIOFAB Studio EGF Codons and EGF CUBA (Collection of Useful Biological Apps) by the Edinburgh Genome Foundry Integrative Genomics Viewer (part of Google Genomics) Mengqvist's DNApy See also Geppetto (3D engine), an open-source 3D engine for genetic engineering-related functions; also used in the OpenWorm project Bioinformatics software Software E Genetic
24918238
https://en.wikipedia.org/wiki/Quirky%20%28disambiguation%29
Quirky (disambiguation)
Quirky is another name for eccentric behavior or something out of the ordinary in general. It may also refer to: Quirky (company), an industrial design company Quirky, a 2011 reality television show set at the company that aired on the Sundance Channel Quirky subject, a linguistic phenomenon Quirky!, an abstract strategy board game; see List of board games Quirky Linux, an experimental Linux distribution related to Puppy Linux Quirky (book), a 2018 book by Melissa Schilling See also Quirk (disambiguation)
4452046
https://en.wikipedia.org/wiki/Woodland%20Conference
Woodland Conference
The Woodland Conference is a high school athletics conference in Southeastern Wisconsin. It is overseen by the Wisconsin Interscholastic Athletic Association (WIAA). Members of the conference are: Brown Deer, Cudahy, Greendale, Greenfield, New Berlin Eisenhower, New Berlin West, Pewaukee, Shorewood, South Milwaukee, Pius XI, Milwaukee Lutheran, Wisconsin Lutheran, West Allis Central and Whitnall. The commissioner is Paul Feldhausen. Membership history 1993-1997 Brookfield Central Brookfield East Cudahy Franklin Greendale Greenfield South Milwaukee Wauwatosa East Wauwatosa West 1997-2006 Cudahy Greendale Greenfield Milwaukee Thomas More New Berlin Eisenhower New Berlin West Wauwatosa West Whitnall Divisions (2006-2008) With the addition of four new members for 2006-07 academic year from the now-defunct Parkland Conference, the Woodland Conference split into two six-team divisions: North Division Brown Deer Falcons Pewaukee Pirates New Berlin Eisenhower Lions New Berlin West Vikings Shorewood Greyhounds Wauwatosa West Trojans South Division Cudahy Packers Greendale Panthers Greenfield Hustlin' Hawks St. Francis Mariners St. Thomas More Cavaliers Whitnall Falcons Divisions (2009-2011) South Milwaukee joined the conference for the 2009-2010 academic year. The Woodland Conference rearranged into Black and Blue Divisions, based on school enrollment. Black Division Greendale Panthers Greenfield Hustlin' Hawks New Berlin Eisenhower Lions South Milwaukee Rockets Wauwatosa West Trojans Whitnall Falcons Blue Division Brown Deer Falcons Cudahy Packers New Berlin West Vikings Pewaukee Pirates St. Francis Mariners St. Thomas More Cavaliers Shorewood Greyhounds (independent for football only) Divisions (2012-2016 ) Milwaukee Pius XI joined the Woodland Conference, leaving the Classic Eight. Thomas More leaves the Woodland to join a new conference, the Metro Classic. St. Francis leaves the Woodland to join a new conference, the Midwest Classic. Woodland East Brown Deer Falcons Cudahy Packers Greenfield Hustlin' Hawks South Milwaukee Rockets Shorewood Greyhounds Whitnall Falcons Woodland West Greendale Panthers New Berlin Eisenhower Lions New Berlin West Vikings Pewaukee Pirates Pius XI Popes Wauwatosa West Trojans Divisions (2017- ) Milwaukee Lutheran, Wisconsin Lutheran, and West Allis Central all joined the Woodland in the 2017-18 academic year. Wauwatosa West left for the Greater Metro Conference. 
Woodland East Brown Deer Falcons Cudahy Packers Greenfield Hustlin' Hawks Milwaukee Lutheran Red Knights South Milwaukee Rockets Shorewood Greyhounds Whitnall Falcons Woodland West Greendale Panthers New Berlin Eisenhower Lions New Berlin West Vikings Pewaukee Pirates Pius XI Popes West Allis Central Bulldogs Wisconsin Lutheran Vikings Conference champions Football New Woodland Conference Football Championship (2013- ) Legend Boys' basketball Girls' basketball Baseball Wrestling Boys' soccer Girls' soccer State tournament appearances 1993-1994: Cudahy, Football Division 2 Champion 1993-1994: Brookfield East, Boys' Soccer Division 1 Champion 1993-1994: Brookfield Central, Girls' Tennis Runner Up 1994-1995: New Berlin Eisenhower, Division 3 Champion 1994-1995: Greendale, Summer Baseball Champion 1994-1995: Brookfield Central, Boys' Swimming Division 2 Champion 1994-1995: Greendale, Girls' Tennis Division 2 Champion 1994-1995: Brookfield East, Boys' Swimming Division 2 Runner Up 1994-1995: Greendale, Girls' Basketball Division 2 Semi-Final 1994-1995: Greendale, Boys' Basketball Division 2 -Final (loss) 1994-1995: Brookfield Central, Boys' Soccer Division 1 Quarterfinal 1995-1996: New Berlin Eisenhower, Division 3 Champion 1995-1996: Brookfield East, Boys' Soccer Division 1 Champion 1995-1996: Wauwatosa East, Girls' Soccer Division 1 Champion 1995-1996: Brookfield Central, Boys' Swimming Division 2 Champion 1995-1996: Brookfield East, Boys' Swimming Division 2 Runner Up 1995-1996: Greendale, Boys' Soccer Division 2 Runner Up 1996-1997: Wauwatosa East, Girls' Soccer Division 1 Champion 1996-1997: Brookfield East, Boys' Swimming Division 2 Champion 1996-1997: Wauwatosa East, Boys' Basketball Division 1 Quarterfinal 1996-1997: Wauwatosa East, Summer Baseball Quarterfinal 1996-1997: Greendale, Girls' Tennis Division 2 Runner Up 1996-1997: Brookfield Central, Boys' Soccer Division 1 Semifinal 1996-1997: Brookfield Central, Girls' Tennis Division 1 Runner Up 1997-1998: Greendale, Girls' Tennis Division 2 Champion 1997-1998: New Berlin Eisenhower, Boys' Soccer Division 2 Runner Up 1997-1998: Greendale, Fastpitch Softball Division 2 Runner Up 1998-1999: Greenfield, Fastpitch Softball Division 1 Quarterfinal 1998-1999: New Berlin Eisenhower, Girls' Soccer Division 2 Runner Up 1999-2000: New Berlin West, Boys' Basketball Division 2 Champion 1999-2000: Greendale, Girls' Tennis Division 2 Champion 1999-2000: Whitnall, Girls' Swimming Division 2 Champion 1999-2000: New Berlin Eisenhower, Fastpitch Softball Division 2 Semifinal 2000-2001: Milw. 
Thomas More, Girls' Volleyball Division 2 Champion 2000-2001: Whitnall, Girls' Swimming Division 2 Champion 2000-2001: Whitnall, Summer Baseball Quarterfinal 2000-2001: New Berlin Eisenhower, Fastpitch Softball Division 1 Quarterfinal 2001-2002: Whitnall, Girls' Swimming Division 2 Runner Up 2002-2003: Cudahy, Girls' Track & Field Division 2 Runner Up 2003-2004: Whitnall, Boys' Basketball Division 1 Quarterfinal 2003-2004: New Berlin Eisenhower, Fastpitch Softball Division 1 Quarterfinal 2003-2004: Greenfield, Boys' Volleyball Quarterfinal 2004-2005: Whitnall, Girls' Basketball Division 1 Quarterfinal 2004-2005: Greendale, Fastpitch Softball Division 2 Semi-Final 2004-2005: Brown Deer, Boys' Track and Field Division 2 Runner Up 2005-2006: Greendale, Fastpitch Softball Division 2 Champion 2005-2006: Brown Deer, Boys' Track and Field Division 2 Runner Up 2005-2006: Shorewood, Boys' Cross Country Division 2 Champion 2005-2006: Cudahy, Summer Baseball Semi-Final 2005-2006: Greenfield, Fastpitch Softball Division 1 Quarterfinal 2005-2006: Shorewood, Boys' Volleyball Semi-Final 2006-2007: Greendale, Football State Final 2006-2007: Brown Deer, Girls' Track & Field Division 2 Champion 2006-2007: Shorewood, Boys' Cross Country Division 2 Champion 2006-2007: Milw. Thomas More, Boys' Soccer Division 3 Champion 2006-2007: Greendale, Girls' Basketball Division 2 Semi-Final 2006-2007: Greenfield, Boys' Volleyball Quarterfinal 2006-2007: Brown Deer, Girls' Track and Field Division 2 Champions 2007-2008: Wauwatosa West, Boys' Soccer Division 1 Semi-Final 2007-2008: Shorewood, Boys' Cross Country Division 2 Runner Up 2007-2008: New Berlin Eisenhower, Boys' Basketball Division 2 Champions 2007-2008: Pewaukee, Wrestling Division 2 Semi-Final 2007-2008: New Berlin Eisenhower, Fastpitch Softball Division 2 Champions 2007-2008: Brown Deer, Girls' Track and Field Division 2 Runner Up 2007-2008: Brown Deer, Boys' Track and Field Division 2 Champions 2007-2008: Brown Deer, Summer Baseball Semi-Final 2008-2009: Pewaukee, Boys' Soccer Division 2 Semi-Final 2008-2009: Pewaukee, Wrestling Division 2 Semi-Final 2009-2010: Pewaukee, Wrestling Division 2 Finals 2012-2013: New Berlin West, Baseball State Champs 2012-2013: New Berlin West, Girls' Softball Semi-Final 2015-2016: New Berlin West, Girls' Softball State Champions See also List of high school athletic conferences in Wisconsin References Conference Champions. Woodland Conference. Retrieved on 2008-05-26. External links Woodland Conference Wisconsin Interscholastic Athletic Association Brown Deer High School Cudahy High School Greendale High School Greenfield High School New Berlin Eisenhower High School New Berlin West High School Pewaukee High School Shorewood High School St. Francis High School South Milwaukee High School St. Thomas More High School Wauwatosa West High School Whitnall High School Wisconsin high school sports conferences High school sports conferences and leagues in the United States
34577155
https://en.wikipedia.org/wiki/Netrunner%20%28operating%20system%29
Netrunner (operating system)
Netrunner is a free operating system for desktop computers, laptops or netbooks, and ARM-based devices such as the Odroid C1 microcomputer or the Pinebook. It comes in two versions: Netrunner and Netrunner Core, which are both based on Debian Stable. The Core version features KDE Plasma plus a minimal selection of applications, multimedia codecs and some Firefox browser plugins. Overview Netrunner is only available as a 64-bit desktop operating system that uses the Calamares graphical installer. Since the 20 August 2019 release, Netrunner has been based on Debian Stable. Conceptualized for everyday use, the desktop environment is based on KDE's Plasma Desktop and targets new users as well as users experienced with Linux. Netrunner is aimed at users who want an operating system that works "out of the box", reducing the time needed to add essential programs, multimedia codecs, firmware and other enhancements manually after installation. Netrunner Core is a desktop version with a minimal set of applications. Core versions also feature both Pinebook and Odroid C1 ARM images. Default software Among the default software selection of Netrunner Desktop are many applications such as: KDE Plasma Desktop Mozilla Firefox (including Plasma integration) Mozilla Thunderbird (including Plasma integration) VLC media player LibreOffice GIMP Krita Gwenview Kdenlive Inkscape Samba Mounter (easy NAS setup) Steam VirtualBox Release history The following is the release history for Netrunner Core and Desktop: The following is the release history for the previously Kubuntu-based Netrunner versions (discontinued): The following is the release history for Netrunner Rolling, which has been discontinued in favor of Manjaro collaboration efforts: Reception Jack M. Germain reviewed Netrunner 19.01. Hectic Geek reviewed Netrunner 14. LinuxInsider published a review of Netrunner 13. Jesse Smith of DistroWatch Weekly wrote a review of Netrunner 4.2, and Dedoimedo also reviewed Netrunner 4.2 "Dryland". References External links 2010 software KDE
40412424
https://en.wikipedia.org/wiki/Bernhard%20Rumpe
Bernhard Rumpe
Bernhard Rumpe (born 1967) is a German computer scientist, professor of computer science and head of the Software Engineering Department at RWTH Aachen University. His research focusses on "technologies, methods, tools ... necessary to create software in the necessary quality that is as efficient and sustainable as possible." Biography Born and raised in Abensberg, Germany, Rumpe attended the Aventinus Primary School Abensberg from 1973 to 1977 and the Donau Gymnasium Kelheim from 1977 to 1986. From 1987 to 1992 he studied computer science and mathematics at the Technical University of Munich (TUM). In 1992 he became a research assistant at the Chair for Software and Systems Engineering at the Technical University of Munich, where in 1996 he received his PhD and in 2003 his habilitation in computer science. From 2003 to 2008 Rumpe headed the Institute for Software Systems Engineering at the Braunschweig University of Technology (TUBS). There, in 2007, he headed the university's participation in the DARPA Urban Challenge. Since early 2009 he has been head of the Software Engineering Department at RWTH Aachen University. In 2001 he founded the Springer international journal Software and Systems Modeling together with his colleague Robert France and serves as one of its editors-in-chief. Rumpe has contributed to the semantics and use of modeling languages in software development (requirements, architecture, code generation, system configuration, quality management), building on his group's MontiCore language workbench. Selected publications 1996. Formale Methodik des Entwurfs verteilter objektorientierter Systeme, München: Utz, Wiss. 2000. Software Engineering: Schlüssel zu Prozessbeherrschung und Informationsmanagement, TCW, 2001. Übungen zur Einführung in die Informatik., Manfred Broy, Bernhard Rumpe. Springer, 2001. The UML Profile for Framework Architectures, Marcus Fontoura, Wolfgang Pree, Bernhard Rumpe. Addison-Wesley. 2004. Modellierung mit UML, Berlin: Springer, 2004. Agile Modellierung mit UML, Berlin: Springer Berlin, 2011. Modellierung mit UML, 2nd edition, Berlin: Springer, 2012. Agile Modellierung mit UML: Codegenerierung, Testfälle, Refactoring, 2nd edition, Berlin: Springer Berlin, 2014. Architecture and Behavior Modeling of Cyber-Physical Systems with MontiArcAutomaton, J. O. Ringert, B. Rumpe, A. Wortmann. Aachener Informatik-Berichte, Software Engineering Band 20. Shaker Verlag, 2016. Modeling with UML: Language, Concepts, Methods. Springer International. 2016: Engineering Modeling Languages: Turning Domain Knowledge into Tools., B. Combemale, R. France, J. Jézéquel, B. Rumpe, J. Steel, D. Vojtisek. Chapman & Hall/CRC Innovations in Software Engineering and Software Development Series. 2017. Agile Modeling with UML: Code Generation, Testing, Refactoring, Springer International. 2017. Towards a Sustainable Artifact Model, T. Greifenberg, S. Hillemacher, B. Rumpe. Aachener Informatik-Berichte, Software Engineering Band 30. Shaker Verlag, 2020. Towards an Isabelle Theory for distributed, interactive systems - the untimed case, J. C. Bürger, H. Kausch, D. Raco, J. O. Ringert, B. Rumpe, S. Stüber, M. Wiartalla. Shaker Verlag, 2021. Model-Based Engineering of Collaborative Embedded Systems, W. Böhm, M. Broy, C. Klein, K. Pohl, B. Rumpe, S. Schröck (Eds.). Springer, 2021: MontiCore Language Workbench and Library Handbook: Edition 2021, K. Hölldobler, O. Kautz, B. Rumpe. Aachener Informatik-Berichte, Software Engineering Band 48. 
Shaker Verlag, References External links Homepage at RWTH Aachen University Literature and Research Topics MontiCore Language Workbench Bernhard Rumpe Google Scholar profile 1967 births Living people German computer scientists Technical University of Braunschweig faculty RWTH Aachen University faculty
30277546
https://en.wikipedia.org/wiki/Zap%20Pow
Zap Pow
Zap Pow is a Jamaican reggae band, founded by singer/bassist Michael Williams aka Mikey Zappow and guitarist Dwight Pinkney. Members also included singer Beres Hammond, trumpeter David Madden, saxman Glen DaCosta, and drummer Cornell Marshall. They originally existed from 1969 to 1979. They re-formed in 2016. History The band was formed in 1969 by musicians Michael Williams (bass, guitar, vocals, songwriter, former drummer of Bobby Aitken's Caribbeats) and Dwight Pinkney (guitar, vocals, formerly of The Sharks and guitarist on a 1966 session by The Wailers), Max Edwards (drums), Glen DaCosta (tenor saxophone, vocals, flute, a former pupil at Alpha Boys School), Joe McCormack (trombone), and David Madden (trumpet, vocals, another former pupil at Alpha Boys School, who had previously recorded with Cedric Brooks under the name 'Im and Dave'). Pinkney and Williams had previously played together in the band Winston Turner & the Untouchables. The band's name came from a comic book that Williams had read. Several singles were released in 1970-71, including the hit "This is Reggae Music", and in 1971 their debut album, Revolutionary Zap Pow, was released on the Harry J label. In 1975, Beres Hammond joined as lead singer (other singers with the band included Winston "King" Cole, Milton "Prilly" Hamilton, Bunny Rugs and Jacob Miller), and their Tommy Cowan-produced 1976 album, Zap Pow Now, topped the reggae chart in the UK. Trojan Records issued Revolution in the same year. Edwards left in 1977, to be replaced by Cornell Marshall. The band split up in 1979, with Hammond going on to a successful solo career. Pinkney went on to play with Roots Radics, and Edwards also pursued a solo career. Williams recorded and performed solo as Mikey Zappow. The horn section of DaCosta, McCormack and Madden was regularly used in recording sessions for other artists including Bob Marley & the Wailers, and they also recorded prolifically as individual session musicians, often being used by Lee "Scratch" Perry for sessions at his Black Ark studio. Katz, David (2000) People Funny Boy: The Genius of Lee "Scratch" Perry, Payback Press, p. 294. Madden went on to release solo albums, as did DaCosta. Williams died in 2005, aged 61. In 2007 the band were honoured at the Prime Minister's Gala on Jamaican independence day. Pinkney and DaCosta re-formed Zap Pow in 2016, and by 2017 the band also included Lebert "Gibby" Morrison (bass), Richard "T Bird" Johnson (keyboards), Lando Bolt (drums), Everol Wray (trumpet), and singers Geoffrey Forrest and Fiona. They recorded a new album, Zap Pow Again, released in October 2017. Discography Albums Revolutionary Zap Pow (1971), Harry J Zap Pow Now (1976), Vulcan Revolution (1976), Trojan Zap Pow (1978), Island Zap Pow Again (2017), VP Compilation albums Beres Hammond Meets Zappow in Jamaica, Rhino Jungle Beat, Lagoon Love Hits, LMS Reggae Rules, Rhino Revolution (the best of) (2007), Trojan References External links Zap Pow at Roots Archives Jamaican reggae musical groups Trojan Records artists Island Records artists
270233
https://en.wikipedia.org/wiki/GPE%20Palmtop%20Environment
GPE Palmtop Environment
GPE (a recursive acronym for GPE Palmtop Environment) is a graphical user interface environment for handheld computers, such as palmtops and personal digital assistants (PDAs), running some Linux kernel-based operating system. GPE is a complete environment of software components and applications which makes it possible to use a Linux handheld for tasks such as personal information management (PIM), audio playback, email, and web browsing. GPE is free and open-source software, subject to the terms of the GNU General Public License (GPL) or the GNU Lesser General Public License (LGPL). Supported devices GPE is bundled with embedded Linux distributions targeting the following platforms: Sharp Zaurus Hewlett-Packard iPAQ Hewlett-Packard Jornada 72x Siemens AG SIMpad SL4 In addition, GPE maintainers and the open source community are developing ports for additional devices: GamePark Holdings GP2x Nokia 770 Nokia N800 Palm TX Palm Treo 650 HTC Universal HTC Typhoon HTC Tornado HTC Wizard HTC Apache On February 5, 2007, The GPE project announced GPE Phone Edition, a new variant of GPE developed for mobile phones. Software components GPE does not have any of the GNOME Core Applications, but instead software was written from scratch, tailored to the embedded environment. GPE is based on GTK+, and because GTK+ did not gain support for Wayland until versions 3.10, GPE uses X11 as its windowing system, e.g. with the combination X.Org Server/Matchbox. The project provides an infrastructure for easy and powerful application development by providing core software such as shared libraries, database schemata, and building on available technology including SQLite, D-BUS, GStreamer and several of the more common standards defined by freedesktop.org. One of the major goals of the GPE project is to encourage people to work on free software for mobile devices and to experiment with writing a GUI for embedded devices. Some of the applications already developed for GPE include: GPE-Contacts - A contacts manager GPE-Calendar - The calendar application GPE-Edit - A simple text editor GPE-Filemanager - A file manager with MIME type and remote access support GPE-Gallery - Small and easy to use image viewer GPE-Games - A small collection of tiny games GPE-Mini-Browser - A CSS and JavaScript compatible compact web browser GPE-Sketchbook - Create notes and sketches GPE-Soundbite - A voice memo tool GPE-ToDo - A task list manager GPE-Timesheet - Track time spend on tasks Starling - A GStreamer based audio player GPE's PIM applications (GPE-Contacts, GPE-Calendar, GPE-ToDo) can be synchronized with their desktop and web counterparts (such as Novell Evolution, Mozilla Sunbird and Google Calendar) through the use of GPE-Syncd and the OpenSync framework. GPE also contains a number of GUI utilities for configuring 802.11 Wireless LAN, Bluetooth, IrDA, Firewall, ALSA, Package Management, among others. A mobile push e-mail client based on the Tinymail framework is in development. Linux distributions GPE can be found as a primary environment in the following embedded Linux distributions: OpenEmbedded (ex OpenZaurus) Ångström Familiar Linux Though it may not be as highly supported as the distributions listed above, GPE is also available through package management utilities in the following distributions: Ubuntu Debian Internet Tablet OS Controversy There are ongoing controversies surrounding the GPE project regarding a change of hosting service, ownership of an IRC channel, and a trademark dispute. 
Web hosting Serious issues first began developing over a proposed change of hosting service. GPE had been hosted at Handhelds.org since April 2002. Some of GPE's developers suggested, and later followed through with, a move to Linuxtogo.org by October 2006. Handhelds.org responded by removing the user accounts of the departing developers, and any links or reference to the new GPE Linuxtogo.org location on the original GPE Handhelds.org site. Trademark George France, has filed for trademark registration with the USPTO for GPE, in addition to OPIE and Ipkg as of March 6, 2007. On June 25, 2007, the USPTO declined to accept a screenshot of the Handhelds.org GPE website as proof of Handhelds.org's ownership, and in addition requested a better specimen for a “GPE product”. Handhelds.org, and OSI board member Russ Nelson, assert that the GPE project was given over to Handhelds.org for public development. The GPE developers working at Linuxtogo.org maintain that they represent the active GPE project, and Handhelds.org was only a hosting provider. Furthermore, they point out that the GPE project existed before it was hosted on Handhelds.org. The USPTO issued a final rejection regarding the GPE trademark on February 27, 2008. George France amended the application (removing references to GNU and Linux). The GPE trademark was officially published for opposition June 3, 2008. Despite George France's impending personal GPE trademark, the core GPE development team at Linuxtogo.org has abandoned much of the Handhelds.org GPE infrastructure. Linuxtogo.org developers have switched GPE to a new bootloader and replaced IPKG with OPKG, and made major changes to the GPE gui applications. The Trademark of GPE was registered to George France on Aug 19, 2008 by the USPTO. See also Palm OS Pocket PC Qtopia Windows Mobile References External links GPE web site at LinuxToGo GPE web site at Handhelds.org Desktop environments based on GTK Embedded Linux Graphical user interfaces X Window System
34966718
https://en.wikipedia.org/wiki/PlayStation%202%20accessories
PlayStation 2 accessories
Various accessories for the PlayStation 2 video game console have been produced by Sony, as well as third parties. These include controllers, audio and video input devices like microphones and video cameras, and cables for better sound and picture quality. Game controllers DualShock 2 The DualShock 2 Analog Controller (SCPH-10010) is the standard controller for the PlayStation 2 and is almost identical to the original DualShock controller for the original PlayStation console, with only minor changes. All the buttons other than L3, R3 and "Analog" feature analog pressure sensitivity; the connecting cable is slightly longer than the original DualShock's and is black rather than grey; the connector is squarer; "DualShock 2" is printed on the top of the controller; and it features two more levels of vibration feedback. Logitech Cordless Action Controller The Logitech Cordless Action Controller is an officially licensed wireless controller for the PlayStation 2 made by Logitech. It features all of the inputs found on the standard DualShock 2 controller, i.e. ten analog (pressure-sensitive) buttons (the four shape buttons, L1, R1, L2, R2, Start and Select), three digital buttons (L3, R3 and the analog mode button) and two analog sticks. As its buttons are pressure-sensitive, the controller is compatible with games which require a DualShock 2. The controller also features two vibration motors for haptic feedback, which are compatible with DualShock/DualShock 2 enabled games. As a power-saving measure, the vibration may be turned on or off by the user by way of a button on the controller's face. It is powered by two AA batteries. It communicates with the console using a proprietary 2.4 GHz wireless RF protocol by way of a dongle which connects to the PS2's controller port, in a similar manner to Nintendo's WaveBird wireless controller. Logitech Cordless Controller Like the Logitech Cordless Action Controller, the Logitech Cordless Controller is an officially licensed wireless PlayStation 2 controller made by Logitech. It features all of the buttons (including analog functionality) of the standard DualShock 2 controller and is compatible with games requiring a DualShock 2. It is powered by two AA batteries, and as a power-saving measure, the vibration function can be turned off. It communicates with the console via a wireless dongle which connects to the PS2's controller port and uses a proprietary 2.4 GHz RF technology. Sega Saturn PS2 Controller The Sega Saturn PS2 Controller is a controller for the PS2 based on the Sega Saturn type-2/Japanese-style controller. The controller is officially licensed by both Sony and Sega, and the first version was released in black exclusively in Japan in 2005. A second version was produced in purple as part of a joint venture between Sega and Capcom to coincide with the launch of Vampire: Darkstalkers Collection in Japan. Other than the connector, it is almost identical to the original Saturn controller, with a few minor changes. In place of the original Saturn start button are indented PlayStation-style start and select buttons. Additionally, the reset, stop, play/pause, rewind and fast-forward labels above the X, Y, Z, L and R buttons have been removed, and labels of the corresponding PlayStation buttons have been added. 
Logitech Driving Force GT http://uk.playstation.com/ps2/peripherals/detail/item285752/Driving-Force%E2%84%A2-GT/ Arcade sticks Rhythm game controllers Microphones Various microphones are available for use on the PlayStation 2 with rhythm games such as Sony's own Singstar karaoke games and Harmonix's Rock Band series. Singstar microphones are available in both wired and wireless varieties; both connect to the console via USB. Dance mats/pads Certain PS2 games are dance pad compatible, allowing the player to perform the on-screen actions by stepping on the pad; such pads are usually used with games like Dance Dance Revolution. Buzz! Buzzer The Buzz! Buzzer is a special controller designed specifically for the Buzz! quiz game series. The controllers feature large red buzzer buttons and four smaller coloured buttons for answer selection. Both wired and wireless versions are available and come bundled with Buzz! games. A four-buzzer set acts as a single USB device and connects to a USB port on the PlayStation 2. Wireless versions connect via a USB dongle, with each dongle able to support up to 4 wireless buzzers at a time. A second dongle is required for additional buzzers (for 8-player games). Both the wired and wireless versions of the buzzers are compatible with both PlayStation 2 and PlayStation 3. The "big button controllers" available for the Xbox 360 heavily resemble buzzers in many respects, and fulfil the same function. DVD Remote Control The DVD Remote Control is an infrared remote control for the PlayStation 2 designed to allow easier control of DVD movies. The first remote, SCPH-10150, came bundled (as SCPH-10170) with an infrared receiver dongle, SCPH-10160, which attached to one of the PlayStation 2's controller ports; this dongle is not needed on later PS2 models (beginning from SCPH-500xx) and slimline PS2 models (SCPH-700xx to SCPH-900xx) as they feature an integrated IR port. Two different models of the DVD remote control were released, with only minor differences. The first released was the SCPH-10150. The second, SCPH-10420, is functionally and visually identical apart from the addition of eject and reset/power buttons. However, the eject button will only work on PS2 models SCPH-100xx to -500xx, as the slimline PlayStation 2 models had no motorized disc tray to eject. Both versions of the remote feature all the standard PS2 buttons in addition to DVD playback controls. A/V Cables Various A/V cables have been made available for the PlayStation 2, which offer varying levels of picture quality. Additionally, the PS2 features a TOSLINK port, which facilitates the output of digital S/PDIF audio - 2-channel LPCM, 5.1-channel Dolby Digital and 5.1-channel DTS (the latter two are only available during DVD playback when encoded on the disc). The PS2 is compatible with all PlayStation and PlayStation 3 cables which use the AV-multi port. RFU adapter The RFU Adapter (SCPH-1122) is an RF modulator and cable that carries mono audio and video at 576i/50 Hz (PAL) or 480i/60 Hz (NTSC) via an RF signal and connects using a TV aerial plug. It is similar to the RFU adapter cable available for the PlayStation. A/V (Composite) cable The AV cable (SCPH-10500) is included with the PS2 and carries dual-channel (stereo) audio and composite video at 576i/50 Hz (PAL) or 480i/60 Hz (NTSC). It is identical to the composite cables available for the PlayStation and PlayStation 3. 
Consoles in PAL territories also come bundled with a composite/stereo SCART adapter block to facilitate connection to SCART enabled TVs. This is merely an adapter and provides no quality improvement over a direct composite connection. S-Video Cable The S-Video cable (SCPH-10060U/97030) carries dual-channel (stereo) audio and s-video at 576i/50 Hz (PAL) or 480i/60 Hz (NTSC), which provides a clearer picture than the standard A/V cable. A/V Adaptor The AV Adaptor with S Video Out Connector (SCPH-10130) is a break-out box which provides an additional AV-Multi out port, as well as composite, s-video and stereo audio connectors to allow connection to an A/V receiver or similar device. EURO A/V (RGB SCART) cable The EURO A/V Cable (SCPH-10142) is a SCART cable capable of carrying 576i/50 Hz or 480i/60 Hz using the RGB standard, as well as standard stereo audio and composite video. It provides a clearer picture than either s-video or composite signals. The Euro A/V Cable can also carry 480p and 1080i signals, but to do so it switches off RGBs (RGB Sync) signals and switches to RGsB (RGB sync on green). This can lead to compatibility issues with certain monitors and even SCART to HDMI upscalers. To use the EURO A/V cable, the PS2 must be set to RGB mode in the options. Component A/V cable The Component A/V Cable (SCPH-10490) is a cable capable of carrying 576i/50 Hz or 480i/60 Hz using the YPBPR and RGB standards, as well as standard stereo audio, via RCA connectors. It provides a clearer picture than either s-video or composite signals. It is also required for games which support other video modes such as "progressive scan" (480p) or 1080i. Most PS1 games output at 240p through the cable, which may cause compatibility issues with some newer TV's. To use the Component A/V cable, the PS2 must be set to YPBPR mode in the options. D-Terminal cable The D-Terminal cable is identical to the component cable other than its connector. It was sold only in Japan and uses the Japanese D-Terminal standard. VGA Cable The PlayStation 2 VGA cable carries RGBHV video via a VGA connector. It is only compatible with progressive scan games and PS2 Linux. Since the PS2 does not output separate sync, sync on green must be used instead, which may be incompatible with some monitors. Other accessories Memory Card The Memory Card (8 MB) (SCPH-10020) Magic Gate is used to store settings, EyeToy video messages and savegames. Official Sony memory cards are only available at a size of 8 MB. Memory cards came in black, satin silver, pink, crimson red, ocean blue and emerald in PAL and NTSC territories, with more exclusive variants in Japan. Later, Sony partnered with a third-party accessories company Katana to make Memory Cards that came in 16 MB and 32 MB. These Memory Cards were officially licensed products and have the PlayStation 2 logo, and say Magic Gate on them. Third party memory cards are available up to 128 MB. The Memory Card (8 MB) is the earliest known commercial product to use ferroelectric RAM (FeRAM). The Memory Card's microcontroller (MCU) contains 32kb (4kB) embedded FeRAM manufactured by Toshiba. It was fabricated using a 500 nm complementary metal-oxide-semiconductor (CMOS) process. Multitap The Multitap for PlayStation 2 allows up to four controllers and four memory cards to be attached to a single controller port and memory card slot. Up to 8 controllers and memory cards may be attached to the console at any one time by using two multitaps simultaneously. 
Certain Multitaps will not work with specific PS2 models due to slight differences in slot placement. SCPH-10090 was designed to fit the original consoles, while SCPH-70120 was instead designed for the slim consoles. EyeToy The EyeToy is a digital camera device, similar to a webcam, for the PlayStation 2. Originally, EyeToys were manufactured by Logitech (known as "Logicool" in Japan), while later models were manufactured by Namtai. The EyeToy is mainly used for playing specifically designed EyeToy games, but can also be used to capture images and videos. It is also compatible with the PlayStation 3. Headset The PS2 headset connects via USB 1.1 on the front of the console. The headset is most commonly used in online multiplayer games; however, it can also be used in some karaoke-style games, for voice control, and to enhance the immersive experience of some single-player games. Headphone Splitter A 3.5 mm stereo audio Y-splitter can send the console's audio to two output devices at once, such as headphones, a headset or speakers. HDD Network Adapter The PlayStation 2 network adapter is an optional accessory required by some internet multiplayer games. The PS2 Slim has networking built in, but for the original ("fat") PS2 the network adapter had to be purchased separately. Keyboard and Mouse An official Sony PlayStation 2 USB keyboard and mouse came bundled as part of Linux for PlayStation 2, which turns any non-slim PS2 into a Linux computer. Any standard USB keyboard and mouse will work. In addition to the Linux kit, there were a handful of games that used a keyboard and mouse, or just a mouse or trackball. Vertical Stand The Vertical Stand is attached to the PlayStation 2 console to allow it to stand vertically. Three different versions are available: SCPH-10040 for original (large) consoles, SCPH-70110 for slimline SCPH-700xx consoles and SCPH-90110 for slimline SCPH-900xx consoles. Horizontal Stand The horizontal stand is attached to the base of original "fat" PlayStation 2 consoles to add height and style. External links Official European peripherals page Official North American accessories page References Video game controllers
3366503
https://en.wikipedia.org/wiki/Andromache%20%28play%29
Andromache (play)
Andromache () is an Athenian tragedy by Euripides. It dramatises Andromache's life as a slave, years after the events of the Trojan War, and her conflict with her master's new wife, Hermione. The date of its first performance is unknown. Some scholars place the date sometime between 428 and 425 BC. Müller places it between 420 and 417 BC. A Byzantine scholion to the play suggests that its first production was staged outside Athens, though modern scholarship regards this claim as dubious. Background During the Trojan War, Achilles killed Andromache's husband Hector. Homer describes Andromache's lament, after Hector's death, that their young son Astyanax will suffer poverty growing up without a father. Instead, the conquering Greeks threw Astyanax to his death from the Trojan walls, for fear that he would grow up to avenge his father and city. Andromache was made a slave of Achilles' son Neoptolemus. Years pass and Andromache has a child with Neoptolemus. Neoptolemus weds Hermione, daughter of Menelaus and Helen. Even though Andromache is still devoted to her dead husband Hector, Hermione is deeply jealous and plots her revenge. Fearing for her life and the life of her child, Andromache hides the child and seeks refuge in the temple of Thetis (who was the mother of Achilles). Plot synopsis Clinging to the altar of the sea-goddess Thetis for sanctuary, Andromache delivers the play's prologue, in which she mourns her misfortune (the destruction of Troy, the deaths of her husband Hector and their child Astyanax, and her enslavement to Neoptolemos) and her persecution at the hands of Neoptolemos' new wife Hermione and her father Menelaus, King of Sparta. She reveals that Neoptolemos has left for the oracle at Delphi and that she has hidden the son she bore him (whose name is Molossos) for fear that Menelaus will try to kill him as well as her. A Maid arrives to warn her that Menelaus knows the location of her son and is on his way to capture him. Andromache persuades her to risk seeking the help of the king, Peleus (husband of Thetis, Achilles' father, and Neoptolemos' grandfather). Andromache laments her misfortunes again and weeps at the feet of the statue of Thetis. The párodos of the chorus follows, in which they express their desire to help Andromache and try to persuade her to leave the sanctuary. Just at the moment that they express their fearfulness of discovery by Hermione, she arrives, boasting of her wealth, status, and liberty. Hermione engages in an extended agôn with Andromache, in which they exchange a long rhetorical speech initially, each accusing the other. Hermione accuses Andromache of practising oriental witchcraft to make her barren and attempting to turn her husband against her and to displace her. "Learn your new-found place," she demands. She condemns the Trojans as barbarians who practise incest and polygamy. Their agon continues in a series of rapid stichomythic exchanges. When Menelaus arrives and reveals that he has found her son, Andromache allows herself to be led away. The intervention of the aged Peleus (the grandfather of Neoptolemus) saves them. Orestes, who has contrived the murder of Neoptolemus at Delphi and who arrives unexpectedly, carries off Hermione, to whom he had been betrothed before Neoptolemus had claimed her. The murder of Neoptolemus by Orestes and men of Delphi is described in detail by the Messenger to Peleus. The goddess Thetis appears as a deus ex machina and divines the future for Neoptolemus' corpse, Peleus, Andromache and Molossus. 
Context The odious character which Euripides attributes to Menelaus has been seen as according with the feeling against Sparta that prevailed at the time at Athens. He is portrayed as an arrogant tyrant and a physical coward, and his daughter Hermione is portrayed as excessively concerned with her husband's faithfulness, and capable of plotting to kill an innocent child (of Andromache) in order to clear the household of rival sons for the throne; she is also portrayed as wealthy, with her own money, and this is said by some of the characters (notably Andromache and Peleus) to make her high-handed. Peleus curses Sparta several times during the play. Translations Edward P. Coleridge, 1891 – prose, full text at Gilbert Murray, 1901 – prose, 1912 verse Arthur S. Way, 1912 – verse Hugh O. Meredith, 1937 – verse Van L. Johnson, 1955 – prose John Frederick Nims, 1956 – verse: available for digital loan David Kovacs, 1987 – prose, full text at Robert Cannon, 1997 – verse George Theodoridis, 2001 – prose, full text at Bruce Vandeventer, 2012 – verse References Sources Cannon, Robert, trans. 1997. Andromache. In Plays: V. By Euripides. Ed. J. Michael Walton. Classical Greek Dramatists ser. London: Methuen. 1–62. . Ley, Graham. 2007. The Theatricality of Greek Tragedy: Playing Space and Chorus. Chicago and London: U of Chicago P. . Walton, J. Michael. 1997. Introduction. In Plays: V. By Euripides. Ed. J. Michael Walton. Classical Greek Dramatists ser. London: Methuen. vii–xxiii. . Plays by Euripides Trojan War literature Slavery in ancient Greece Peloponnesian War Thessalian mythology Epirotic mythology Plays about slavery Plays set in ancient Greece Delphi in fiction
292012
https://en.wikipedia.org/wiki/GNU%20Mailman
GNU Mailman
GNU Mailman is a computer software application from the GNU Project for managing electronic mailing lists. Mailman is coded primarily in Python and currently maintained by Abhilash Raj. Mailman is free software, licensed under the GNU General Public License. History A very early version of Mailman was written by John Viega while a graduate student, who then lost his copy of the source in a hard drive crash sometime around 1998. Ken Manheimer at CNRI, who was looking for a replacement for Majordomo, then took over development. When Manheimer left CNRI, Barry Warsaw took over. Mailman 3—the first major new version in over a decade—was released in April 2015. Features Mailman runs on most Unix-like systems, including Linux. Since Mailman 3.0 it has required python-3.4 or newer. It works with Unix-style mail servers such as Exim, Postfix, Sendmail and qmail. Features include: A customizable, publicly accessible web page for each mailing list. Web application for list administration, archiving of messages, spam filtering, etc. Separate interfaces are available for users (for self-administration), moderators (to accept/reject list posts), and administrators. Support for multiple administrators and moderators for each list. Per-list privacy features, such as closed subscriptions, private archives, private membership rosters, and sender-based posting rules. Integrated bounce detection and automatic handling of bouncing addresses. Integrated spam filters. Majordomo-style email-based commands. Support for virtual domains. List archiving. The default archiver provided with Mailman 2 is Pipermail, although other archivers can be used instead. The archiver for Mailman 3 is HyperKitty. See also List of mailing list software Electronic mailing list References Further reading Reviews Mailing List Management Made Easy Other resources List Administrator's Guide "Mailman – An Extensible Mailing List Manager Using Python"; Ken Manheimer, Barry Warsaw, John Viega; presented at the 7th International Python Conference, Nov 10-13, 1998 "Mailman: The GNU Mailing List Manager"; John Viega, Barry Warsaw, Ken Manheimer; presented at the 12th Usenix Systems Administration Conference (LISA '98), Dec 9, 1998 GNU Mailman chapter in The Architecture of Open Source Applications Volume 2 Barry Warsaw presentation on Mailman 3 at PyCon US 2012 External links Mailman Documentation Mailman support mailing lists GNU Project software Free software programmed in Python Free mailing list software Mailing list software for Linux 1999 software
7635674
https://en.wikipedia.org/wiki/ISPConfig
ISPConfig
ISPConfig is a widely used open-source hosting control panel for Linux, licensed under the BSD license and developed by the company ISPConfig UG. The ISPConfig project was started in autumn 2005 by Till Brehm from the German company projektfarm GmbH. Overview ISPConfig allows administrators to manage websites, email addresses, MySQL and MariaDB databases, FTP accounts, shell accounts and DNS records through a web-based interface. The software has four login levels: administrator, reseller, client, and email-user. ISPConfig supports the Linux-based operating systems CentOS, Debian, Fedora, OpenSUSE and Ubuntu. Operating Systems ISPConfig can be used with these Linux distributions: CentOS Debian Ubuntu Features The following services and features are supported: Manage single or multiple servers from one control panel. Web server management for Apache HTTP Server and Nginx. Mail server management (with virtual mail users) with spam and antivirus filtering using Postfix and Dovecot. DNS server management (BIND, PowerDNS). Configuration mirroring and clusters. Administrator, reseller, client and mail-user login. Virtual server management for OpenVZ servers. Website statistics with Webalizer and AWStats. See feature list reference See also Web hosting control panel Comparison of web hosting control panels References External links ISPConfig Developer Resource Homepage SourceForge Project Homepage ISPConfig GIT Server ISPConfig Installation Tutorials Internet hosting Web applications Web hosting Website management Web server management software Software using the BSD license
37426
https://en.wikipedia.org/wiki/Squeak
Squeak
Squeak is an object-oriented, class-based, and reflective programming language. It was derived from Smalltalk-80 by a group that included some of Smalltalk-80's original developers, initially at Apple Computer, then at Walt Disney Imagineering, where it was intended for use in internal Disney projects. The group would later go on to be supported by HP labs, SAP, and most recently, Y Combinator. Squeak runs on a virtual machine (VM), allowing for a high degree of portability. The Squeak system includes code for generating a new version of the VM on which it runs, along with a VM simulator written in Squeak. Developers Dan Ingalls, an important contributor to the Squeak project, wrote the paper upon which Squeak is built, and constructed the architecture for five generations of the Smalltalk language. Alan Kay is an important contributor to the Squeak project, and Squeak incorporates many elements of his proposed Dynabook concept. User interface frameworks Squeak includes four user interface frameworks: An implementation of Morphic, Self's graphical direct manipulation interface framework. This is Squeak's main interface. Tile-based, limited visual programming scripting in Etoys, based on Morphic. A novel, experimental interface called Tweak. In 2001 it became clear that the Etoy architecture in Squeak had reached its limits in what the Morphic interface infrastructure could do. Hewlett-Packard researcher Andreas Raab proposed defining a "script process" and providing a default scheduling-mechanism that avoids several more general problems. This resulted in a new user interface, proposed to replace the Squeak Morphic user interface in the future. Tweak added mechanisms of islands, asynchronous messaging, players and costumes, language extensions, projects, and tile scripting. Its underlying object system is class-based, but to users, during programming (scripting), it acts like it is prototype-based. Tweak objects are created and run in Tweak project windows. A model–view–controller (MVC) interface was the primary UI in Squeak versions 3.8 and earlier. It derived from the original Smalltalk-80 user interface framework which first introduced and popularized the MVC architectural pattern. MVC takes its name from the three core classes of the framework. Thus, the term "MVC" in the context of Squeak refers to both one of the available user interface frameworks and the pattern the framework follows. MVC is still provided for programmers who wished to use this older type of interface. Uses Many Squeak contributors collaborate on Open Cobalt, a free and open source virtual world browser and construction toolkit built on Squeak. The first version of Scratch was implemented in Squeak. OpenQwaq, a virtual conferencing and collaboration system, is based on Squeak. Squeak is also used in the Nintendo ES operating system License Squeak 4.0 and later may be downloaded at no cost, including source code, as a prebuilt virtual machine image licensed under the MIT License, with the exception of some of the original Apple code, which is governed by the Apache License. Squeak was originally released by Apple under its own Squeak License. While source code was available and modification permitted, the Squeak License contained an indemnity clause that prevented it from qualifying as true free and open-source software. In 2006, Apple relicensed Squeak twice. 
First, in May, Apple used its own Apple Public Source License, which satisfies the Free Software Foundation's concept of a Free Software License and has attained official approval from the Open Source Initiative as an Open Source License. However, The Apple Public Source License fails to conform to the Debian Free Software Guidelines. To enable inclusion of Etoys in the One Laptop Per Child project, a second relicensing was undertaken using the Apache License. At this point, an effort was also made to address the issue of code contributed by members of the Squeak community, which it was not in Apple's power to unilaterally relicense. For each contribution made under the Squeak License since 1996, a relicensing statement was obtained authorizing distribution under the MIT license, and finally in March 2010, the end result was released as Squeak 4.0, now under combined MIT and Apache licenses. Squeak virtual machine The Squeak virtual machine is a family of virtual machines (VMs) used in Smalltalk programming language implementations. They are an essential part of any Smalltalk implementation. All are open-source software. The current VM is a high performance dynamic translation system. The relevant code is maintained in the OpenSmalltalk/opensmalltalk-vm repository on GitHub. Other Squeak virtual machines CogVM RoarVM SqueakJS Stack interpreter VM RSqueak/VM TruffleSqueak See also List of open-source programming languages Alice (software) Croquet Project Pharo Seaside (software) References External links Programming languages Apple Inc. software Class-based programming languages Disney technology Dynamic programming languages Dynamically typed programming languages Educational programming languages Free educational software Programming languages created by women Smalltalk programming language family Software using the MIT license Visual programming languages High-level programming languages Multi-paradigm programming languages Cross-platform free software Programming languages created in 1996 1996 software
2181361
https://en.wikipedia.org/wiki/Comparison%20of%20vector%20graphics%20editors
Comparison of vector graphics editors
A number of vector graphics editors exist for various platforms. Potential users of these editors will make a comparison of vector graphics editors based on factors such as the availability for the user's platform, the software license, the feature set, the merits of the user interface (UI) and the focus of the program. Some programs are more suitable for artistic work while others are better for technical drawings. Another important factor is the application's support of various vector and bitmap image formats for import and export. The tables in this article compare general and technical information for a number of vector graphics editors. See the article on each editor for further information. This article is neither all-inclusive nor necessarily up-to-date. Some editors in detail Adobe Fireworks (formerly Macromedia Fireworks) is a vector editor with bitmap editing capabilities with its main purpose being the creation of graphics for Web and screen. Fireworks supports RGB color scheme and has no CMYK support. This means it is mostly used for screen design. The native Fireworks file format is editable PNG (FWPNG or PNG). Adobe Fireworks has a competitive price, but its features can seem limited in comparison with other products. It is easier to learn than other products and can produce complex vector artwork. The Fireworks editable PNG file format is not supported by other Adobe products. Fireworks can manage the PSD and AI file formats which enables it to be integrated with other Adobe apps. Fireworks can also open FWPNG/PNG, PSD, AI, EPS, JPG, GIF, BMP, TIFF file formats, and save/export to FWPNG/PNG, PSD, AI (v.8), FXG (v.2.0), JPG, GIF, PDF, SWF and some others. Some support for exporting to SVG is available via a free Export extension. On May 6, 2013, Adobe announced that Fireworks would be phased out. Adobe Flash (formerly a Macromedia product) has straightforward vector editing tools that make it easier for designers and illustrators to use. The most important of these tools are vector lines and fills with bitmap-like selectable areas, simple modification of curves via the "selection" or the control points/handles through "direct selection" tools. Flash uses Actionscript for OOP, and has full XML functionality through E4X support. Adobe FreeHand (formerly Macromedia Freehand and Aldus Freehand) is mainly used by professional graphic designers. The functionality of FreeHand includes flexibility of the application in the wide design environment, catering to the output needs of both traditional image reproduction methods and to contemporary print and digital media with its page-layout capabilities and text attribute controls. Specific functions of FreeHand include a superior image-tracing operation for vector editing, page layout features within multiple-page documents, and embedding custom print-settings (such as variable halftone-screen specifications within a single graphic, etc.) to each document independent of auxiliary printer-drivers. User-operation is considered to be more suited for designers with an artistic background compared to designers with a technical background. When being marketed, FreeHand lacked the promotional backing, development and PR support in comparison to other similar products. FreeHand was transferred to the classic print group after Macromedia was purchased by Adobe in 2005. On May 16, 2007, Adobe announced that no further updates to Freehand would be developed but continues to sell FreeHand MX as a Macromedia product. 
FreeHand continues to run on Mac OS X Snow Leopard (using an Adobe fix) and on Windows 7. For macOS, Affinity Designer is able to open version 10 & MX Freehand files. Adobe Illustrator is a commonly-used editor because of Adobe's market dominance, but is more expensive than other similar products. It is primarily developed consistently in line with other Adobe products and such is best integrated with Adobe's Creative Suite packages. The ai file format is proprietary, but some vector editors can open and save in that format. Illustrator imports over two dozen formats, including PSD, PDF and SVG, and exports AI, PDF, SVG, SVGZ, GIF, JPG, PNG, WBMP, and SWF. However, the user must be aware of unchecking the "Preserve Illustrator Editing Capabilities" option if he or she desires to generate interoperable SVG files. Affinity Designer by Serif Europe (the successor to their previous product, DrawPlus) is non-subscription-based software that is often described as an alternative to Adobe Illustrator. The application can open Portable Document Format (PDF), Adobe Photoshop, and Adobe Illustrator files, as well as export to those formats and to the Scalable Vector Graphics (SVG) and Encapsulated PostScript (EPS) formats. It also supports import from some Adobe Freehand files (specifically versions 10 & MX). Apache OpenOffice Draw is the vector graphics editor of the Apache OpenOffice open source office suite. It supports many import and export file formats and is available for multiple desktop operating systems. Boxy SVG is a chromium-based vector graphics editor for creating illustrations, as well as logos, icons, and other elements of graphic design. It is primarily focused on editing drawings in the SVG file format. The program is available as both a web app and a desktop application for Windows, macOS, Chrome OS, and Linux-based operating systems. Collabora Online Draw is the vector graphics editor of the Collabora Online open source office suite. It supports many import and export file formats and is accessible via any modern web browser, it also supports desktop editing features, Collabora Office is available for desktop and mobile operating systems, it is the enterprise ready version of LibreOffice. ConceptDraw PRO is a business diagramming tool and vector graphics editor available for both Windows and macOS. It supports multi-page documents, and includes an integrated presentation mode. ConceptDraw PRO supports imports and exports several formats, including Microsoft Visio and Microsoft PowerPoint. Corel Designer (originally Micrografx Designer) is one of the earliest vector-based graphics editors for the Microsoft Windows platform. The product is mainly used for the creation of engineering drawings and is shipped with extensive libraries for the needs of engineers. It is also flexible enough for most vector graphics design applications. CorelDRAW is an editor used in the graphic design, sign making and fashion design industries. CorelDRAW is capable of limited interoperation by reading file formats from Adobe Illustrator. CorelDRAW has over 50 import and export filters, on-screen and dialog box editing and the ability to create multi-page documents. It can also generate TrueType and Type 1 fonts, although refined typographic control is better suited to a more specific application. Some other features of CorelDRAW include the creation and execution of VBA macros, viewing of colour separations in print preview mode and integrated professional imposing options. 
Dia is a free and open-source diagramming and vector graphics editor available for Windows, Linux and other Unix-based computer operating systems. Dia has a modular design and several shape packages for flowcharting, network diagrams and circuit diagrams. Its design was inspired by Microsoft Visio, although it uses a Single Document Interface similar to other GNOME software (such as GIMP). DrawPlus, first built for the Windows platform in 1993, has matured into a full featured vector graphics editor for home and professional users. Also available as a feature-limited free 'starter edition': DrawPlus SE. DrawPlus developers, Serif Europe, have now ceased its development in order to focus on its successor, Affinity Designer. Edraw Max is a cross-platform diagram software and vector graphics editor available for Windows, Mac and Linux. It supports kinds of diagram types. It supports imports and exports SVG, PDF, HTML, Multiple page TIFF, Microsoft Visio and Microsoft PowerPoint. Embroidermodder is a free machine embroidery software tool that supports a variety of formats and allows the user to add custom modifications to their embroidery designs. Fatpaint is a free, light-weight, browser-based graphic design application with built-in vector drawing tools. It can be accessed through any browser with Flash 9 installed. Its integration with Zazzle makes it particularly suitable for people who want to create graphics for custom printed products such as T-shirts, mugs, iPhone cases, flyers and other promotional products. Figma is a collaborative web-based online vector graphics editor, used primarily for UX design and prototyping. GIMP, which works mainly with raster images, offers a limited set of features to create and record SVG files. It can also load and handle SVG files created with other software like Inkscape. Inkscape is a free and open-source vector editor with the primary native format being SVG. Inkscape is available for Linux, Windows, Mac OS X, and other Unix-based systems. Inkscape can import SVG, SVGZ, AI, PDF, JPEG, PNG, GIF (and other raster graphics formats), WMF, CDR (CorelDRAW), VSD (Visio) file formats and export SVG, SVGZ, PNG, PDF, PostScript, EPS, EPSi, LaTeX, HPGL, SIF (Synfig Animation Studio), HTML5 Canvas, FXG (Flash XML Graphics) and POVRay file formats. Some formats have additional support through Inkscape extensions, including PDF, EPS, Adobe Illustrator, Dia, Xfig, CGM, sK1 and Sketch. The predecessor of Inkscape was Sodipodi. Ipe lets users draw geometric objects such as polylines, arcs and spline curves and text. Ipe supports use of layers and multiple pages. It can paste bitmap images from clipboard or import from JPEG or BMP, and also through a conversion software it can import PDF figures generated by other software. It differentiates itself from similar programs by including advanced snapping tools and the ability to directly include LaTeX text and equations. Ipe is extensible by use of ipelets, which are plugins written in C++ or Lua. LibreOffice Draw is the vector graphics editor of the LibreOffice open source office suite. It supports many import and export file formats and is available for multiple desktop operating systems. The Document Foundation with the help of others is currently developing Android and online versions of the LibreOffice office suite, including Draw. Microsoft Expression Design is a commercial vector and bitmap graphics editor based on Creature House Expression, which was acquired by Microsoft in 2003. 
It was part of the Microsoft Expression Studio suite. Expression Design is discontinued, and is no longer available for download from Microsoft. It runs on Windows XP, Vista, Windows 7 and 8, as well as on Windows 8.1 and 10, which were released after it was discontinued. Microsoft Visio is a diagramming, flow chart, floor plan and vector graphics editor available for Windows. It is commonly used by small and medium-sized businesses, and by Microsoft in their corporate documentation. OmniGraffle, by The Omni Group, is a vector graphics editor available for Macintosh. It is principally used for creating flow charts and other diagrams. OmniGraffle imports and exports several formats, including Microsoft Visio, SVG, and PDF. PhotoLine is mainly a raster graphics editor but also offers a comprehensive set of vector drawing tools including multiple paths per layer, layer groups, color management and full color space support including CMYK and Lab color spaces, and multipage documents. PhotoLine can import and export PDF and SVG files as well as all major bitmap formats. sK1 is a free and open-source cross-platform vector editor for Linux and Windows which is oriented toward "prepress ready" PostScript & PDF output. The major sK1 features are CMYK colorspace support; CMYK support in PostScript; Cairo-based engine; color management; multiple document interface; Pango-based text engine; universal CDR importer (7-X4 versions); native wxWidgets-based user interface. sK1 can import PostScript-based AI files and the CDR, CDT, CCX, CDRX, CMX, XAR, PS, EPS, CGM, WMF, XFIG, SVG, SK, SK1, AFF, PLT, CPL, ASE, ACO, JCW, GPL, SOC and SKP file formats. It can export the AI, SVG, SK, SK1, CGM, WMF, PDF, PS, PLT, CPL, ASE, ACO, JCW, GPL, SOC and SKP file formats. SaviDraw, by Silicon Beach Software, is a modern vector drawing program for Windows 10. It is available only from the Microsoft app store. It is designed to work well with touch screens; no functions require keyboard modifiers. It features a new way to draw vector curves (very different from the traditional Pen tool) and has voice-command shortcuts. Sketch is a commercial vector graphics application for macOS. SketchUp is a free vector graphics program with a paid pro version. SketchUp is focused primarily on 3D sketching, with many features specifically designed to simplify architectural sketching. Google integrated an online model-sharing database called 3D Warehouse to allow sharing of 3D sketches. SketchUp was purchased by Trimble on 1 June 2012. SVG-edit is a FOSS web-based, JavaScript-driven SVG editor that works in any modern browser. Synfig Studio (also known as Synfig) is a free and open-source 2D vector graphics and timeline-based computer animation program created by Robert Quattlebaum. Synfig is available for Linux, Windows and macOS. Synfig stores its animations in its own XML file format, SIF (uncompressed) or SIFZ (compressed), and can import SVG. VectorStyler by Numeric Path is a professional vector graphics app, currently in advanced beta, available for both macOS and Windows 10 systems. It offers a comprehensive set of vector drawing tools, vector-based brushes, shape and image effects, corner shapes, mesh and shape-based gradients, collision snapping, multi-page documents, and full color space support including CMYK. The application can open Portable Document Format (PDF), Scalable Vector Graphics (SVG), Adobe Illustrator, EPS and also Adobe Photoshop files, as well as export to those formats. WinFIG is a shareware cross-platform editor. It uses the Xfig file format. 
Xara Photo & Graphic Designer and Designer Pro (formerly Xara Xtreme and Xtreme Pro) are vector graphics editors for Windows developed by Xara. Xara Photo & Graphic Designer has high usability compared to other similar products and has very fast rendering. Xara Photo & Graphic Designer (and earlier product ArtWorks) was the first vector graphics software product to provide fully antialiased display, advanced gradient fill and transparency tools. The current version supports multi-page documents, and includes a capable integrated photo tool making it an option for any sort of DTP work. The Pro version includes extra features such as Pantone and color separation support, as well as comprehensive web page design features. Xara Xtreme LX is a partially open source version of Xara Photo & Graphic Designer for Linux. Xfig is an Xlib, open source editor started by Supoj Sutanthavibul in 1985, and maintained by various people. It has a technical library. General information This table gives basic general information about the different vector graphics editors: Operating system support This table lists the operating systems that different editors can run on without emulation: Basic features Notes File format support Import Notes See also Comparison of 3D computer graphics software Comparison of graphics file formats Raster graphics Comparison of raster-to-vector conversion software Comparison of raster graphics editors Vector graphics Notes Vector graphics editors Comparison
66127617
https://en.wikipedia.org/wiki/Hardik%20Gohel
Hardik Gohel
Hardik Gohel is a computer scientist, academic, and researcher in artificial intelligence, digital healthcare, cybersecurity, and advanced computing. He is a faculty member and director of the Applied Artificial Intelligence (AAI) laboratory at the University of Houston–Victoria (UHV). Gohel is also a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE) and executive faculty advisor of an international science group at UHV. Gohel worked as a task leader on federal research projects at the applied research center at Florida International University (FIU). Gohel also served as a postdoctoral advisory board member at FIU. Education Gohel received his Ph.D. in computer science from the University of Hertfordshire, England, in 2015. He received his bachelor's and master's degrees in computer science from Saurashtra University and Sardar Patel University in India. Research Gohel has extensive research experience in artificial intelligence and cyber test automation and monitoring, smart bandages for wound monitoring, big data for security intelligence, trustworthy cyberspace for security and privacy of social media, predictive maintenance for nuclear infrastructure, and database and mobile forensics infrastructure. He has 70 publications in journals and in the proceedings of national and international conferences. In December 2020, his collaborative research on COVID-19 was recognized and listed in the global literature by the World Health Organization (WHO). Grants In January 2019, Gohel received a postdoctoral travel grant from FIU-OPSS to participate in and present research at a top-tier cybersecurity conference. In February 2020, Gohel's graduate student also received an Internet Society travel grant to attend the same top-tier conference. In April 2020, Gohel received a junior faculty summer research grant, and in November 2020 he received a multiyear collaborative federal STEM grant to develop a student workforce in data science, cybersecurity, artificial intelligence and other advanced technical research areas and to prepare students for careers in the US defense system. In April 2021, Gohel received an internal research grant award. Community outreach In March 2021, Gohel joined the University of Houston's Hewlett Packard Enterprise Data Science Institute as an AI/ML faculty researcher, to promote research, education, services, operations and outreach in data science and artificial intelligence across the Houston, Katy and Victoria, Texas areas. In December 2020, Gohel formed a UHV student branch of the Institute of Electrical and Electronics Engineers (IEEE), the world's largest technical professional organization. Gohel also engages with international tech communities through virtual webinars and data science bootcamps for high school students. Books and book chapters Data Visualization: Trends and Challenges Toward Multidisciplinary Perception Human Brain Computer Interface (H-BCI) Introduction to Network & Cyber Security Applied ICT - Beyond Oceans & Spaces Developing Security Intelligence in Big Data Awards and honors Gohel received academic and research excellence awards from the Computer Society of India in 2015 and 2018. The Institute of Electrical and Electronics Engineers (IEEE) awarded him Senior Membership in 2020. 
References Living people Fellows of the American Association for the Advancement of Science Senior Members of the IEEE Indian computer scientists American computer scientists University of Houston System Indian emigrants to the United States 1986 births Gujarati people Indian Hindus Members of the United States National Academy of Engineering University of Houston faculty
350752
https://en.wikipedia.org/wiki/Video%20game%20music
Video game music
Video game music (VGM) is the soundtrack that accompanies video games. Early video game music was limited to the sounds of early programmable sound generator (PSG) chips. These limitations led to the style of music known as chiptune, which became the sound of the first video games. With advances in technology, video game music has grown to include a larger variety of sounds. Music in video games can be heard on a game's title screen, in menus and during gameplay. Game soundtracks can also change depending on a player's actions or situation, such as indicating missed actions in rhythm games, informing the player they are in a dangerous situation or rewarding them for certain achievements. Video game music can be one of two kinds: original or licensed. The popularity of video game music has created education and job opportunities, generated awards, and led to video game soundtracks being commercially sold and performed in concerts. History Early video game technology and computer chip music At the time video games emerged as a popular form of entertainment in the late 1970s, music was stored on physical media in analog waveforms, such as compact cassettes and phonograph records. Such components were expensive and prone to breakage under heavy use, making them less than ideal for use in an arcade cabinet, though in rare cases such as Journey, they were used. A more affordable method of having music in a video game was to use digital means, where a specific computer chip would change electrical impulses from computer code into analog sound waves on the fly for output on a speaker. Sound effects for the games were also generated in this fashion. An early example of such an approach to video game music was the opening chiptune in Tomohiro Nishikado's Gun Fight (1975). While this allowed for the inclusion of music in early arcade video games, it was usually monophonic, looped or used sparingly between stages or at the start of a new game, such as in the Namco titles Pac-Man (1980), composed by Toshio Kai, and Pole Position (1982), composed by Nobuyuki Ohnogi. The first game to use a continuous background soundtrack was Tomohiro Nishikado's Space Invaders, released by Taito in 1978. It had four descending chromatic bass notes repeating in a loop, though it was dynamic and interacted with the player, increasing pace as the enemies descended on the player. The first video game to feature continuous, melodic background music was Rally-X, released by Namco in 1980, featuring a simple tune that repeats continuously during gameplay. The decision to include any music in a video game meant that at some point it would have to be transcribed into computer code. Some music was original, some was public domain music such as folk songs. Sound capabilities were limited; the popular Atari 2600 home system, for example, was capable of generating only two tones at a time. As advances were made in silicon technology and costs fell, a distinctly new generation of arcade machines and home consoles allowed for great changes in accompanying music. In arcades, machines based on the Motorola 68000 CPU and accompanying various Yamaha YM programmable sound generator sound chips allowed for several more tones or "channels" of sound, sometimes eight or more. The earliest known example of this was Sega's 1980 arcade game Carnival, which used an AY-3-8910 chip to create an electronic rendition of the classical 1889 composition "Over The Waves" by Juventino Rosas. 
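To make the idea of chip-style tone generation concrete, here is a minimal, purely illustrative Python sketch (not code from any actual game or sound chip): it renders a short looping two-note square-wave figure, of the kind a single PSG channel could produce, to a WAV file. The note frequencies, tempo and file name are arbitrary choices for the example.

    import struct, wave

    RATE = 44100

    def square(freq, seconds, volume=0.3):
        # one PSG-style channel: a plain square wave at the given frequency
        period = RATE / freq
        samples = []
        for n in range(int(RATE * seconds)):
            value = volume if (n % period) < period / 2 else -volume
            samples.append(int(value * 32767))
        return samples

    # a tiny two-note figure (roughly A4 and E4), looped a few times as early games did
    data = []
    for _ in range(4):
        for freq, dur in [(440.0, 0.25), (329.6, 0.25)]:
            data.extend(square(freq, dur))

    with wave.open("chiptune_demo.wav", "wb") as out:
        out.setnchannels(1)          # early PSG output was typically mono
        out.setsampwidth(2)          # 16-bit samples, for playback convenience
        out.setframerate(RATE)
        out.writeframes(struct.pack("<%dh" % len(data), *data))

A real sound chip produced such waveforms directly in hardware from a few register values, which is why early soundtracks could be stored as a handful of bytes of code and data rather than as recorded audio.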
Konami's 1981 arcade game Frogger introduced a dynamic approach to video game music, using at least eleven different gameplay tracks, in addition to level-starting and game over themes, which change according to the player's actions. This was further improved upon by Namco's 1982 arcade game Dig Dug, where the music stopped when the player stopped moving. Dig Dug was composed by Yuriko Keino, who also composed the music for other Namco games such as Xevious (1982) and Phozon (1983). Sega's 1982 arcade game Super Locomotive featured a chiptune rendition of Yellow Magic Orchestra's "Rydeen" (1979); several later computer games also covered the song, such as Trooper Truck (1983) by Rabbit Software as well as Daley Thompson's Decathlon (1984) and Stryker's Run (1986) composed by Martin Galway. Home console systems also had a comparable upgrade in sound ability beginning with the ColecoVision in 1982 capable of four channels. However, more notable was the Japanese release of the Famicom in 1983 which was later released in the US as the Nintendo Entertainment System in 1985. It was capable of five channels, one being capable of simple PCM sampled sound. The home computer Commodore 64 released in 1982 was capable of early forms of filtering effects, different types of waveforms and eventually the undocumented ability to play 4-bit samples on a pseudo fourth sound channel. Its comparatively low cost made it a popular alternative to other home computers, as well as its ability to use a TV for an affordable display monitor. Approach to game music development in this time period usually involved using simple tone generation and/or frequency modulation synthesis to simulate instruments for melodies, and use of a "noise channel" for simulating percussive noises. Early use of PCM samples in this era was limited to short sound bites (Monopoly), or as an alternate for percussion sounds (Super Mario Bros. 3). The music on home consoles often had to share the available channels with other sound effects. For example, if a laser beam was fired by a spaceship, and the laser used a 1400 Hz square wave, then the square wave channel that was in use by music would stop playing music and start playing the sound effect. The mid-to-late 1980s software releases for these platforms had music developed by more people with greater musical experience than before. Quality of composition improved noticeably, and evidence of the popularity of music of this time period remains even today. Composers who made a name for themselves with their software include Koichi Sugiyama (Dragon Quest), Nobuo Uematsu (Final Fantasy), Rob Hubbard (Monty On the Run, International Karate), Koji Kondo (Super Mario Bros., The Legend of Zelda), Miki Higashino (Gradius, Yie-Ar Kung Fu, Teenage Mutant Ninja Turtles), Hiroshi Kawaguchi (Space Harrier, Hang-On, Out Run), Hirokazu Tanaka (Metroid, Kid Icarus, EarthBound), Martin Galway (Daley Thompson's Decathlon, Stryker's Run, Times of Lore), David Wise (Donkey Kong Country), Yuzo Koshiro (Dragon Slayer, Ys, Shinobi, ActRaiser, Streets of Rage), Mieko Ishikawa (Dragon Slayer, Ys), and Ryu Umemoto (visual novels, shoot 'em ups). By the late 1980s, video game music was being sold as cassette tape soundtracks in Japan, inspiring American companies such as Sierra, Cinemaware and Interplay to give more serious attention to video game music by 1988. The Golden Joystick Awards introduced a category for Best Soundtrack of the Year in 1986, won by Sanxion. 
Some games for cartridge systems have been sold with extra audio hardware on board, including Pitfall II for the Atari 2600 and several late Famicom titles. These chips add to the existing sound capabilities. Early digital synthesis and sampling From around 1980, some arcade games began taking steps toward digitized, or sampled, sounds. Namco's 1980 arcade game Rally-X was the first known game to use a digital-to-analog converter (DAC) to produce sampled tones instead of a tone generator. That same year, the first known video game to feature speech synthesis was also released: Sunsoft's shoot 'em up game Stratovox. Around the same time, the introduction of frequency modulation synthesis (FM synthesis), first commercially released by Yamaha for their digital synthesizers and FM sound chips, allowed the tones to be manipulated to have different sound characteristics, where before the tone generated by the chip was limited to the design of the chip itself. Konami's 1983 arcade game Gyruss used five synthesis sound chips along with a DAC, which were used to create an electronic version of J. S. Bach's Toccata and Fugue in D minor. Beyond arcade games, significant improvements to personal computer game music were made possible with the introduction of digital FM synth boards, which Yamaha released for Japanese computers such as the NEC PC-8801 and PC-9801 in the early 1980s; by the mid-1980s, the PC-8801 and FM-7 had built-in FM sound. The sound these FM synth boards produced has been described as "warm and pleasant". Musicians such as Yuzo Koshiro and Takeshi Abo used this hardware to produce music that is still highly regarded within the chiptune community. The widespread adoption of FM synthesis by consoles would later be one of the major advances of the 16-bit era, by which time 16-bit arcade machines were using multiple FM synthesis chips. One of the earliest home computers to make use of digital signal processing in the form of sampling was the Commodore Amiga in 1985. The computer's sound chip featured four independent 8-bit digital-to-analog converters. Developers could use this platform to take samples of a music performance, sometimes just a single note long, and play them back through the computer's sound chip from memory. This differed from Rally-X in that its hardware DAC was used to play back simple waveform samples, and a sampled sound allowed for the complexity and authenticity of a real instrument that an FM simulation could not offer. As one of the first affordable machines with these capabilities, the Amiga would remain a staple tool of early sequenced music composing, especially in Europe. The Amiga offered these features before most other competing home computer platforms, though the Macintosh, introduced a year earlier, had similar capabilities. The Amiga's main rival, the Atari ST, sourced the Yamaha YM2149 Programmable Sound Generator (PSG). Compared to the in-house designed Amiga sound engine, the PSG could only handle one channel of sampled sound, and needed the computer's CPU to process the data for it. This made it impractical for game development use until 1989, with the release of the Atari STE, which used DMA techniques to play back PCM samples at up to 50 kHz. The ST, however, remained relevant as it was equipped with a MIDI controller and external ports. It became the choice of many professional musicians as a MIDI programming device. 
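As a rough illustration of the principle behind FM synthesis (a sketch only, not an emulation of any Yamaha chip), the following Python snippet uses one sine-wave "modulator" to vary the phase of a sine-wave "carrier"; changing the frequency ratio and modulation depth changes the timbre, which is how such chips produced a range of instrument-like tones from a handful of parameters rather than stored samples. The frequencies, ratio, modulation index and envelope here are arbitrary examples.

    import math, struct, wave

    RATE = 44100

    def fm_note(carrier_hz, ratio, mod_index, seconds):
        out = []
        for n in range(int(RATE * seconds)):
            t = n / RATE
            envelope = max(0.0, 1.0 - t / seconds)            # simple linear decay
            modulator = math.sin(2 * math.pi * carrier_hz * ratio * t)
            sample = math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)
            out.append(int(0.4 * envelope * sample * 32767))
        return out

    # a short three-note arpeggio; operator settings are illustrative only
    data = []
    for freq in (220.0, 277.2, 329.6):
        data.extend(fm_note(freq, ratio=2.0, mod_index=3.0, seconds=0.4))

    with wave.open("fm_demo.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(struct.pack("<%dh" % len(data), *data))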
IBM PC clones in 1985 would not see any significant development in multimedia abilities for a few more years, and sampling would not become popular in other video game systems for several years. Though sampling had the potential to produce much more realistic sounds, each sample required much more data in memory. This was at a time when all memory, solid-state (ROM cartridge), magnetic (floppy disk) or otherwise was still very costly per kilobyte. Sequenced soundchip-generated music, on the other hand, was generated with a few lines of comparatively simple code and took up far less precious memory. Arcade systems pushed game music forward in 1984 with the introduction of FM (Frequency Modulation) synthesis, providing more organic sounds than previous PSGs. The first such game, Marble Madness used the Yamaha YM2151 FM synthesis chip. As home consoles moved into the fourth generation, or 16-bit era, the hybrid approach (sampled and tone) to music composing continued to be used. The Sega Genesis offered advanced graphics over the NES and improved sound synthesis features (also using a Yamaha chip, the YM2612), but largely held the same approach to sound design. Ten channels in total for tone generation with one for PCM samples were available in stereo instead of the NES's five channels in mono, one for PCM. As before, it was often used for percussion samples. The Genesis did not support 16-bit sampled sounds. Despite the additional tone channels, writing music still posed a challenge to traditional composers and it forced much more imaginative use of the FM synthesizer to create an enjoyable listening experience. The composer Yuzo Koshiro utilized the Genesis hardware effectively to produce "progressive, catchy, techno-style compositions far more advanced than what players were used to" for games such as The Revenge of Shinobi (1989) and the Streets of Rage series, setting a "new high watermark for what music in games could sound like." The soundtrack for Streets of Rage 2 (1992) in particular is considered "revolutionary" and "ahead of its time" for its blend of house music with "dirty" electro basslines and "trancey electronic textures" that "would feel as comfortable in a nightclub as a video game." Another important FM synth composer was the late Ryu Umemoto, who composed music for many visual novels and shoot 'em ups during the 1990s. As the cost of magnetic memory declined in the form of diskettes, the evolution of video game music on the Amiga, and some years later game music development in general, shifted to sampling in some form. It took some years before Amiga game designers learned to wholly use digitized sound effects in music (an early exception case was the title music of text adventure game The Pawn, 1986). By this time, computer and game music had already begun to form its own identity, and thus many music makers intentionally tried to produce music that sounded like that heard on the Commodore 64 and NES, which resulted in the chiptune genre. The release of a freely-distributed Amiga program named Soundtracker by Karsten Obarski in 1987 started the era of MOD-format which made it easy for anyone to produce music based on digitized samples. Module files were made with programs called "trackers" after Obarski's Soundtracker. This MOD/tracker tradition continued with PC computers in the 1990s. 
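The tracker approach described above can be sketched in a few lines of Python: a single short stored waveform is stepped through at different rates to yield different pitches, and channels are mixed by simple addition. This is illustrative only; the "sample" below is synthesized noise rather than a real recording, and the code is not an implementation of the MOD format itself.

    import math, random, struct, wave

    RATE = 22050

    def pluck():
        # stand-in for a sampled instrument: one second of decaying noise
        return [random.uniform(-1, 1) * math.exp(-6 * n / RATE) for n in range(RATE)]

    SAMPLE = pluck()

    def play(sample, pitch, seconds):
        # step through the stored waveform at a pitch-dependent rate, wrapping around
        out, pos = [], 0.0
        for _ in range(int(RATE * seconds)):
            out.append(sample[int(pos) % len(sample)])
            pos += pitch
        return out

    channel_a = play(SAMPLE, 1.0, 2.0)     # original pitch
    channel_b = play(SAMPLE, 1.5, 2.0)     # a fifth higher, from the same sample
    mixed = [int((a + b) * 0.5 * 32767) for a, b in zip(channel_a, channel_b)]

    with wave.open("tracker_demo.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(struct.pack("<%dh" % len(mixed), *mixed))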
Examples of Amiga games using digitized instrument samples include David Whittaker's soundtrack for Shadow of the Beast, Chris Hülsbeck's soundtrack for Turrican 2 and Matt Furniss's tunes for Laser Squad. Richard Joseph also composed some theme songs featuring vocals and lyrics for games by Sensible Software most famous being Cannon Fodder (1993) with a song "War Has Never Been So Much Fun" and Sensible World of Soccer (1994) with a song "Goal Scoring Superstar Hero". These songs used long vocal samples. A similar approach to sound and music developments had become common in the arcades by this time and had been used in many arcade system boards since the mid-1980s. This was further popularized in the early 1990s by games like Street Fighter II (1991) on the CPS-1, which used voice samples extensively along with sampled sound effects and percussion. Neo Geo's MVS system also carried powerful sound development which often included surround sound. The evolution also carried into home console video games, such as the release of the Super Famicom in 1990, and its US/EU version Super NES in 1991. It sported a specialized custom Sony chip for both the sound generation and for special hardware DSP. It was capable of eight channels of sampled sounds at up to 16-bit resolution, had a wide selection of DSP effects, including a type of ADSR usually seen in high-end synthesizers of the time, and full stereo sound. This allowed experimentation with applied acoustics in video games, such as musical acoustics (early games like Super Castlevania IV, F-Zero, Final Fantasy IV, Gradius III, and later games like Chrono Trigger), directional (Star Fox) and spatial acoustics (Dolby Pro Logic was used in some games, like King Arthur's World and Jurassic Park), as well as environmental and architectural acoustics (A Link to the Past, Secret of Evermore). Many games also made heavy use of the high-quality sample playback capabilities (Super Star Wars, Tales of Phantasia). The only real limitation to this powerful setup was the still-costly solid state memory. Other consoles of the generation could boast similar abilities yet did not have the same circulation levels as the Super NES. The Neo-Geo home system was capable of the same powerful sample processing as its arcade counterpart but was several times the cost of a Super NES. The Sega CD (the Mega CD outside North America) hardware upgrade to the Mega Drive (Genesis in the US) offered multiple PCM channels, but they were often passed over instead to use its capabilities with the CD-ROM itself. The popularity of the Super NES and its software remained limited to regions where NTSC television was the broadcast standard. Partly because of the difference in frame rates of PAL broadcast equipment, many titles released were never redesigned to play appropriately and ran much slower than had been intended, or were never released. This showed a divergence in popular video game music between PAL and NTSC countries that still shows to this day. This divergence would be lessened as the fifth generation of home consoles launched globally, and as Commodore began to take a back seat to general-purpose PCs and Macs for developing and gaming. Though the Mega CD/Sega CD, and to a greater extent the PC Engine in Japan, would give gamers a preview of the direction video game music would take in streaming music, the use of both sampled and sequenced music continues in game consoles even today. 
The huge data storage benefit of optical media would be coupled with progressively more powerful audio generation hardware and higher quality samples in the fifth generation. In 1994, the CD-ROM-equipped PlayStation supported 24 channels of 16-bit samples at up to a 44.1 kHz sample rate, equal to CD audio in quality. It also sported a few hardware DSP effects like reverb. Many Square titles continued to use sequenced music, such as Final Fantasy VII, Legend of Mana, and Final Fantasy Tactics. The Sega Saturn, also equipped with a CD drive, supported 32 channels of PCM at the same resolution as the PlayStation. In 1996, the Nintendo 64, still using a solid-state cartridge, supported an integrated and scalable sound system that was potentially capable of 100 channels of PCM and an improved sample rate of 48 kHz. Because of the cost of solid-state memory, however, N64 games typically had samples of lesser quality than those of the other two consoles, and music tended to be simpler in construction. The more dominant approach for games based on CDs, however, was shifting toward streaming audio.
MIDI on the PC
In the same timeframe of the late 1980s to mid-1990s, IBM PC clones using the x86 architecture became more ubiquitous, yet had a very different path in sound design than other PCs and consoles. Early PC gaming was limited to the PC speaker and some proprietary standards such as the IBM PCjr 3-voice chip. While sampled sound could be achieved on the PC speaker using pulse width modulation, doing so required a significant proportion of the available processor power, rendering its use in games rare. With the increase of x86 PCs in the market, there was a vacuum in sound performance in home computing that expansion cards attempted to fill. The first two recognizable standards were the Roland MT-32 and the AdLib sound card. Roland's solution was driven by MIDI sequencing using advanced LA synthesizers. This made it the first choice for game developers to produce for, but its higher cost as an end-user solution made it prohibitive. The AdLib used a low-cost FM synthesis chip from Yamaha, and many boards could operate compatibly using the MIDI standard. The AdLib card was usurped in 1989 by Creative's Sound Blaster, which used the same Yamaha FM chip as the AdLib for compatibility, but also added 8-bit 22.05 kHz (later 44.1 kHz) digital audio recording and playback of a single stereo channel. As an affordable end-user product, the Sound Blaster constituted the core sound technology of the early 1990s: a combination of a simple FM engine that supported MIDI and a DAC engine for one or more streams. Only a minority of developers ever used Amiga-style tracker formats in commercial PC games (Unreal being one example), most preferring to target the MT-32 or AdLib/SB-compatible devices. As general-purpose PCs using x86 became more ubiquitous than the other PC platforms, developers drew their focus towards that platform. The last major development before streaming music came in 1992: Roland Corporation released the first General MIDI card, the sample-based SCC-1, an add-in card version of the SC-55 desktop MIDI module. The comparative quality of the samples spurred similar offerings from Creative's Sound Blaster line, but costs for both products were still high. Both companies offered 'daughterboards' with sample-based synthesizers that could later be added to a less expensive soundcard (which only had a DAC and a MIDI controller) to give it the features of a fully integrated card.
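The General MIDI approach described above stores only performance instructions (which instrument, which note, when) rather than audio, which is why a single sequence could drive very different-sounding hardware. A minimal sketch using the third-party Python library mido (an assumption for illustration, not anything these cards shipped with) shows how compact such data is:

from mido import Message, MidiFile, MidiTrack

mid = MidiFile()          # default resolution of 480 ticks per beat
track = MidiTrack()
mid.tracks.append(track)

# Select a General MIDI instrument on channel 0 (program 40 is Violin in the
# GM instrument map) and play a short two-note phrase, one beat per note.
track.append(Message('program_change', channel=0, program=40, time=0))
track.append(Message('note_on', channel=0, note=60, velocity=96, time=0))
track.append(Message('note_off', channel=0, note=60, velocity=0, time=480))
track.append(Message('note_on', channel=0, note=64, velocity=96, time=0))
track.append(Message('note_off', channel=0, note=64, velocity=0, time=480))

mid.save('phrase.mid')    # a file of a few dozen bytes, versus kilobytes of sampled audio

How this phrase actually sounds depends entirely on the playback device, which is exactly the inconsistency between FM chips and sample-based daughterboards discussed next.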
Unlike the standards of Amiga or Atari, a PC using x86 even then could be using a broad mix of hardware. Developers increasingly used MIDI sequences: instead of writing soundtrack data for each type of soundcard, they generally wrote a fully featured data set for the Roland hardware that would remain compatible with lesser-featured equipment so long as it had a MIDI controller to run the sequence. However, different products attached different sounds to their MIDI controllers: some relied on the Yamaha FM chip to simulate instruments, while sample-based daughterboards had very different sound qualities, meaning that no single sequence would sound the same on every General MIDI device. All of these compromises reflected the high cost of memory storage, which declined rapidly with the optical CD format.
Pre-recorded and streaming music
Using entirely pre-recorded music had many advantages over sequencing in terms of sound quality. Music could be produced freely with any kind and number of instruments, allowing developers to simply record one track to be played back during the game. Quality was only limited by the effort put into mastering the track itself. Memory costs, previously a concern, were somewhat addressed as optical media became the dominant medium for game software. CD-quality audio allowed for music and voice with the potential to be truly indistinguishable from any other source or genre of music. In fourth-generation home video games and PCs, this was limited to playing a Mixed Mode CD audio track from the disc while the game was in play (such as Sonic CD). The earliest examples of Mixed Mode CD audio in video games include the TurboGrafx-CD RPG franchises Tengai Makyō, with music composed by Ryuichi Sakamoto from 1989, and the Ys series, with music composed by Yuzo Koshiro in 1989. The Ys soundtracks, particularly Ys I & II (1989), are still regarded as some of the most influential video game music ever composed. However, regular CD audio had several disadvantages. Optical drive technology was still limited in spindle speed, so playing an audio track from the game CD meant that the system could not access data again until it stopped the track from playing. Looping, the most common form of game music, was also a problem: when the laser reached the end of a track, it had to move back to the beginning to start reading again, causing an audible gap in playback. To address these drawbacks, some PC game developers designed their own container formats in house, in some cases for each application, to stream compressed audio. This cut back on the disc space used for music, allowed much lower latency and seek time when finding and starting to play music, and also allowed much smoother looping because the data could be buffered. A minor drawback was that compressed audio had to be decompressed, which put load on the system's CPU. As computing power increased, this load became minimal, and in some cases dedicated chips in a computer (such as a sound card) would handle all the decompressing. Fifth-generation home console systems also developed specialised streaming formats and containers for compressed audio playback. Games would take full advantage of this ability, sometimes with highly praised results (Castlevania: Symphony of the Night).
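A rough sketch of the buffered streaming-and-looping approach described above, with decode_block() and audio_out standing in for whatever in-house codec and mixer interface a given developer used (both are assumptions for illustration), could look like this:

BLOCK_FRAMES = 4096  # PCM frames decoded per iteration

def stream_music(source, loop_start, audio_out, decode_block, playing):
    """Decode small blocks of a compressed music track and queue them for playback.

    Instead of stopping at the end of the track, decoding jumps back to
    loop_start, so the mixer's buffer never runs dry and the loop seam is
    inaudible, unlike seeking a spinning CD back to the start of a track.
    """
    position = 0
    while playing():
        pcm = decode_block(source, position, BLOCK_FRAMES)
        position += len(pcm)
        if len(pcm) < BLOCK_FRAMES:
            # Reached the end of the track: immediately top the block up
            # with frames decoded from the loop point.
            tail = decode_block(source, loop_start, BLOCK_FRAMES - len(pcm))
            position = loop_start + len(tail)
            pcm = pcm + tail
        audio_out.write(pcm)  # queued ahead of real-time playback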
Games ported from arcade machines, which continued to use FM synthesis, often saw superior pre-recorded music streams on their home console counterparts (Street Fighter Alpha 2). Even though the game systems were capable of "CD quality" sound, these compressed audio tracks were not true "CD quality." Many of them had lower sampling rates, though not so much lower that most consumers would notice. Using a compressed stream allowed game designers to play back streamed music and still be able to access other data on the disc without interruption of the music, at the cost of CPU power used to render the audio stream. Manipulating the stream any further would have required far more CPU power than was available in the fifth generation. Some games, such as the Wipeout series, continued to use full Mixed Mode CD audio for their soundtracks. This overall freedom gave video game music composers the equal footing with other popular music that they had previously lacked. A musician could now, with no need to learn about programming or the game architecture itself, independently produce the music to their satisfaction. This flexibility was exercised as popular mainstream musicians began lending their talents to video games specifically. An early example is Way of the Warrior on the 3DO, with music by White Zombie. A better-known example is Trent Reznor's score for Quake. An alternative approach, as with the TMNT arcade game, was to take pre-existing music not written exclusively for the game and use it in the game. The game Star Wars: X-Wing vs. TIE Fighter and subsequent Star Wars games took music composed by John Williams for the Star Wars films of the 1970s and 1980s and used it for the game soundtracks. Both new music streams made specifically for the game and previously released or recorded music remain common approaches for developing soundtracks to this day. It is common for extreme-sports video games to come with popular artists' recent releases (SSX, Tony Hawk, Initial D), as well as any game with a strong cultural or demographic theme that ties in to music (Need for Speed: Underground, Gran Turismo, and Grand Theft Auto). Sometimes a hybrid of the two is used, such as in Dance Dance Revolution. Sequenced samples continue to be used in modern gaming where fully recorded audio is not viable. Until the mid-2000s, many larger games on home consoles used sequenced audio to save space. Additionally, most games on the Game Boy Advance and Nintendo DS used sequenced music due to storage limitations. Sometimes a cross between sequenced samples and streamed music is used. Games such as Republic: The Revolution (music composed by James Hannigan) and Command & Conquer: Generals (music composed by Bill Brown) have utilised sophisticated systems governing the flow of incidental music by stringing together short phrases based on the action on screen and the player's most recent choices (see dynamic music). Other games dynamically mix the game's sound based on cues from the game environment. As processing power increased dramatically in the 6th generation of home consoles, it became possible to apply special effects in realtime to streamed audio. In the SSX series, if a snowboarder takes to the air after jumping from a ramp, the music softens or muffles a bit, and the ambient noise of wind becomes louder to emphasize being airborne. When the snowboarder lands, the music resumes regular playback until its next "cue".
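A much-simplified sketch of this kind of adaptive audio, with the phrase library, game-state fields and stream methods all invented for illustration, might choose the next short phrase and adjust the mix roughly like this:

import random

# Hypothetical library of short, pre-authored musical phrases grouped by mood.
PHRASES = {
    "calm":  ["calm_a.ogg", "calm_b.ogg"],
    "tense": ["tense_a.ogg", "tense_b.ogg"],
}

def pick_next_phrase(state):
    """String phrases together based on what is happening on screen."""
    mood = "tense" if state.danger_level > 0.5 else "calm"
    return random.choice(PHRASES[mood])

def update_mix(state, music, ambience):
    """Duck and filter the streams in real time, in the manner described for SSX."""
    if state.airborne:
        music.set_volume(0.4)       # soften the music while in the air
        music.set_lowpass(2000)     # hypothetical muffling filter cutoff, in Hz
        ambience.set_volume(1.0)    # wind noise comes forward
    else:
        music.set_volume(1.0)
        music.set_lowpass(None)
        ambience.set_volume(0.3)

With a sequenced soundtrack the same idea can go further, since individual instrument parts can be muted or swapped the moment the game state changes.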
LucasArts pioneered this interactive music technique with its iMUSE system, used in its early adventure games and the Star Wars flight simulators Star Wars: X-Wing and Star Wars: TIE Fighter. In action games such as these, the music changes dynamically to match the amount of danger. Stealth-based games sometimes rely on such music, either by handling streams differently or by dynamically changing the composition of a sequenced soundtrack.
Personalized soundtracks
In the past, being able to play one's own music during a game usually meant turning down the game audio and using an alternative music player. Some early exceptions were possible in PC/Windows gaming, where game audio could be adjusted independently while a separate music program ran in the background. Some PC games, such as Quake, play music from the CD while retrieving game data exclusively from the hard disk, thereby allowing the game CD to be swapped for any music CD. The first PC game to introduce in-game support for custom soundtracks was Lionhead Studios' Black & White. The 2001 game included an in-game interface for Winamp that enabled players to play audio tracks from their own playlists. In addition, this would sometimes trigger various reactions from the player's Creature, like dancing or laughing. Some PlayStation games supported this by swapping the game CD with a music CD, although when the game needed data, players had to swap the CDs again. One of the earliest such games, Ridge Racer, was loaded entirely into RAM, letting the player insert a music CD to provide a soundtrack throughout the entirety of the gameplay. In Vib Ribbon, this became a gameplay feature, with the game generating levels based entirely on the music on whatever CD the player inserted. Microsoft's Xbox allowed music to be copied from a CD onto its internal hard drive, to be used as a "Custom Soundtrack", if enabled by the game developer. The feature carried over into the Xbox 360, where it was supported by the system software and could be enabled at any point. The Wii is also able to play custom soundtracks if enabled by the game (Excite Truck, Endless Ocean). The PlayStation Portable can, in games like Need for Speed Carbon: Own the City and FIFA 08, play music from a Memory Stick. The PlayStation 3 has the ability to utilize custom soundtracks in games using music saved on the hard drive; however, few game developers used this function. MLB 08: The Show, released in 2008, has a My MLB soundtrack feature that allows the user to play music tracks of their choice saved on the hard drive of their PS3, rather than the preprogrammed tracks incorporated into the game by the developer. An update to Wipeout HD, released on the PlayStation Network, was made to also incorporate this feature. In the video game Audiosurf, custom soundtracks are the main aspect of the game: users pick a music file to be analyzed, the game generates a race track based on the tempo, pitch and complexity of the sound, and the user then races on this track, synchronized with the music (a sketch of this idea appears below). Games in the Grand Theft Auto series have supported custom soundtracks, using them as a separate in-game radio station. The feature was primarily found in the PC versions and was adopted only to a limited degree on console platforms. On a PC, custom music is added to the stations by placing music files into a designated folder; for the Xbox version, music from a CD must be copied to the console's hard drive.
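The music-driven track generation referred to above for Audiosurf boils down to analyzing the chosen audio and mapping its features onto level geometry. A much-simplified sketch, assuming Python with numpy and a decoded mono PCM array as input (the decoding step and the feature-to-geometry mapping are invented for illustration), might look like this:

import numpy as np

RATE = 44100          # sample rate of the decoded audio, in Hz
WINDOW = RATE // 4    # analyze the song in quarter-second windows

def build_track(samples):
    """Turn a decoded mono song into a list of track segments.

    Loud, busy windows become steep downhill segments (fast, intense);
    quiet windows become shallow uphill segments (slow, calm).
    """
    segments = []
    for start in range(0, len(samples) - WINDOW, WINDOW):
        window = samples[start:start + WINDOW]
        energy = float(np.sqrt(np.mean(window ** 2)))        # loudness (RMS)
        busyness = float(np.mean(np.abs(np.diff(window))))   # rough measure of activity
        slope = -(energy * 0.7 + busyness * 0.3)             # louder/busier -> steeper
        segments.append({"slope": slope, "intensity": energy})
    return segments

# Example with a synthetic 10-second "song" in place of a decoded music file.
t = np.arange(RATE * 10) / RATE
fake_song = np.sin(2 * np.pi * 220 * t) * np.linspace(0.1, 1.0, t.size)
track = build_track(fake_song)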
For the iPhone version of Grand Theft Auto: Chinatown Wars, players create an iTunes playlist which is then played by the game. Forza Horizon 3 used a similar custom-soundtrack technology with the help of Groove Music.
Developments in the 2000s
The Xbox 360 supports Dolby Digital, a 16-bit, 48 kHz internal sampling and playback rate (with 24-bit hardware D/A converters), hardware codec streaming, and potentially 256 simultaneous audio channels. While powerful and flexible, none of these features represent any major change in how game music is made from the last generation of console systems. PCs continue to rely on third-party devices for in-game sound reproduction, and Sound Blaster is largely the only major player in the entertainment audio expansion card business. The PlayStation 3 handles multiple types of surround sound technology, including Dolby TrueHD and DTS-HD Master Audio, with up to 7.1 channels and sampling rates of up to 192 kHz. Nintendo's Wii console shares many audio components with the Nintendo GameCube from the previous generation, including Dolby Pro Logic II. These features are extensions of technology already in use. Today's game developer has many choices in how to develop music. Changes in video game music creation are more likely to have very little to do with technology and more to do with other factors of game development as a business. Video game music has diversified to the point where scores for games can be presented with a full orchestra or as simple 8/16-bit chiptunes. This degree of freedom has made the creative possibilities of video game music limitless to developers. As sales of video game music diverged from the game itself in the West (compared to Japan, where game music CDs had been selling for years), business elements have also come to wield a new level of influence. Composers and pop artists from outside the game developer's immediate employment have been contracted to produce game music just as they would for a theatrical movie. Many other factors have growing influence, such as editing for content, politics on some level of the development, and executive input.
Impact and importance
Many video game players believe that music can enhance gameplay, and outlets such as Popular Science have stated that it is designed to "simultaneously stimulate your senses and blend into the background of your brain, because that's the point of the soundtrack. It has to engage you, the player, in a task without distracting from it. In fact, the best music would actually direct the listener to the task." Sound effects within gameplay are also believed to impact game performance. Ambient sounds such as those present in Resident Evil are seen to enhance the tension felt by players, something that GameSpot stated was also used in cinema. Speeding up the sound effects and music in games such as Space Invaders is also said to have a strong impact on the gaming experience when done properly; it can help create realism within the virtual world and alert players to important scenes and information. Music and sound effects can become memorable, enabling people to instantly recognize them and to hum or mimic the tune or sound effect. Polygon has stated that despite the popularity of video game music, people may not always know the name of the composer.
Licensing
Using licensed music for video games became more popular as the medium used to distribute games grew large enough to accommodate songs alongside a game's other assets. Additionally, with the large growth of the video game market in the 2000s, song licensing became a lucrative route for music rights holders to gain part of that revenue. Games like those in the Grand Theft Auto series became showcases of licensed music. Music licensing is generally complicated due to various copyright laws, typically with at least two separate copyrights to consider: the songwriters' and the performers' contributions. Most large video game developers and publishers who use licensed music typically have staff proficient in licensing to clear songs for use in video games with the various music labels and other creative parties. Games with licensed music can have problems well past release if perpetual rights to the music are not secured for the game. Early games released before the onset of digital distribution would have perpetual rights to the music, since there was no practical way to update the game following its retail release to deal with curtailed rights. However, digital distribution platforms like Steam, Xbox Live, and PlayStation Network keep games up to date automatically. Music licenses for games sold through digital distribution may have limited terms, requiring the publisher to renegotiate rights with the music's owner; otherwise the music must be removed from the game through these updates. Notably, Alan Wake by Remedy Entertainment, first released in 2010, had to be pulled from digital sale in 2017 due to expiring music rights. However, with Microsoft's help, Remedy was able to re-secure these rights a year later and returned the game for sale. Alpha Protocol by Obsidian Entertainment was also pulled from sale in 2019 due to expiring music license rights, though it is not known whether publisher Sega plans to renew them. Licensed music in video games has also affected video game streaming, such as Let's Play videos. Due to the Digital Millennium Copyright Act (DMCA), most popular video sharing and streaming sites implement automated detection of copyrighted music from most music labels and flag or block user videos that employ that music, such as YouTube's Content ID system. These actions apply equally to videos of people playing video games, flagging videos because of the licensed music in the game. To avoid this, games using licensed music may offer a "stream-safe" music option, either disabling the music playback or replacing the licensed music with copyright-free or royalty-free music.
Game music as a genre
Many games for the Nintendo Entertainment System and other early game consoles feature a similar style of musical composition that is sometimes described as the "video game genre." Some aspects of this style continue to influence certain music today, though gamers do not associate many modern game soundtracks with the older style. The genre's compositional elements largely developed due to technological constraints, while also being influenced by electronic music bands, particularly Yellow Magic Orchestra (YMO), who were popular from the late 1970s to the 1980s. YMO sampled sounds from several classic arcade games in their early albums, most notably Space Invaders in the 1978 hit song "Computer Game". In turn, the band would have a major influence on much of the video game music produced during the 8-bit and 16-bit eras.
Features of the video game music genre include:
Pieces designed to repeat indefinitely rather than having an arranged ending or fading out; such loops sustain an atmosphere, especially in important scenes, and can leave players reflecting on their next action.
Pieces lacking lyrics and playing over gameplay sounds.
Limited polyphony: only three notes can be played simultaneously on the Nintendo Entertainment System, and a great deal of compositional effort went into creating the illusion of more notes playing at once.
Although the tones featured in NES music can be thought of as emulating a traditional four-piece rock band (triangle wave used as a bass, two pulse waves analogous to two guitars, and a white noise channel used for drums), composers would often go out of their way to compose complex and rapid sequences of notes, in part due to the restrictions mentioned above. This is similar to music composition during the Baroque period, when composers, particularly when creating solo pieces, focused on musical embellishments to compensate for instruments such as the harpsichord that do not allow for expressive dynamics. For the same reason, many early compositions also feature a distinct jazz influence. These would overlap with later influences from heavy metal and J-pop music, resulting in an equally distinct compositional style in the 16-bit era. On an unrelated but parallel course in the European and North American developer scenes, similar limitations were driving the musical style of home computer games. Module file format music, particularly MOD, used similar techniques but was more heavily influenced by the electronic music scene as it developed, and resulted in another very distinct subgenre. Demos and the developing demoscene played a big part in the early years, and still influence video game music today. As technological limitations gradually lifted, composers were given more freedom and, with the advent of CD-ROM, pre-recorded soundtracks came to dominate, resulting in a noticeable shift in composition and voicing style. Popular early CD-ROM titles were released with high-resolution graphics and recorded music. Since the audio was not reliant on a sound card's synthesis, CD-ROM technology ensured that composers and sound designers could know what audio would sound like on most consumer configurations and could also record sound effects, live instruments, vocals, and in-game dialogue. As the divisions between movies and video games have blurred, so have divisions between film scores and video game scores. Adventure and fantasy movies have similar needs to adventure and fantasy games, i.e. fanfare, traveling, hero's theme and so on. Some composers have written scores in both genres. One noted example is U.S. composer Michael Giacchino, who composed the soundtrack for the game Medal of Honor, later composed for the television series Lost, and wrote scores for movies such as The Incredibles (2004) and Star Trek (2009).
Outside video games
Appreciation for video game music is strong among fans and composers, particularly for music from the third and fourth generations of home video game consoles, and sometimes newer generations. This appreciation has been shown outside the context of a video game, in the form of CDs, sheet music, public performances, art installations, and popular music.
CDs and sheet music Selling video game soundtracks separately as CDs has become increasingly popular in the industry. Interpretive albums, remixes, and live performance albums were also common variations to original soundtracks (OSTs). Koichi Sugiyama was an early figure in this practice, and following the release of the first Dragon Quest game in 1986, a live performance CD of his compositions was released and performed by the London Philharmonic Orchestra (then later by other groups including the Tokyo Philharmonic Orchestra, and NHK Symphony). By 1987, Sega were selling 50,000 to 100,000 game soundtrack CDs annually. Yuzo Koshiro, another early figure, released a live performance of the Actraiser soundtrack. Both Koshiro's and fellow Falcom composer Mieko Ishikawa's contributions to Ys music would have such long-lasting impact that there were more albums released of Ys music than of almost all other game-type music. Like anime soundtracks, these soundtracks and even sheet music books were usually marketed exclusively in Japan. Therefore, interested non-Japanese gamers had to import the soundtracks and/or sheet music books through on or offline firms specifically dedicated to video game soundtrack imports. This has been somewhat less of an issue more recently as domestic publishers of anime and video games have been producing western equivalent versions of the OSTs for sale in UK and US, though these are often for more popular titles. Video game music companies like Materia Collective have pursued and produced published book editions of video game music. The sale of video game soundtracks has created a growing symbiotic relationship between the music industry and the games industry. Commonly, games are being used to promote and sell licensed music, rather than just original score, and recording artists are being used to market and sell games. Music marketing agency Electric Artists conducted a study that revealed a number of interesting statistics surrounding ‘‘hard-core gamers’’ and their music habits: 40% of hard-core gamers bought the CD after hearing a song they liked in a video game, 73% of gamers said soundtracks within games help sell more CDs, and 40% of respondents said a game introduced them to a new band or song, then 27% of them went out and bought what they heard. Some game soundtracks have become so popular they have reached platinum status, such as NBA Live 2003. Public performance Many original composers have publicly exhibited their music through symphonic concert performances. Once again, Koichi Sugiyama was the first to execute this practice in 1987 with his "Family Classic Concert" and has continued these concert performances almost annually. In 1991, he also formed a series called Orchestral Game Music Concerts, notable for featuring other talented game composers such as Yoko Kanno (Nobunaga's Ambition, Romance of the Three Kingdoms, Uncharted Waters), Nobuo Uematsu (Final Fantasy), Keiichi Suzuki (Mother/Earthbound), and Kentaro Haneda (Wizardry). Following suit, compositions by Nobuo Uematsu on Final Fantasy IV were arranged into Final Fantasy IV: Celtic Moon, a live performance by string musicians with strong Celtic influence recorded in Ireland. The Love Theme from the same game has been used as an instructional piece of music in Japanese schools. 
With the success of Square's 1990s games Final Fantasy VI, Final Fantasy VII and Final Fantasy VIII by Nobuo Uematsu, and Chrono Trigger, Xenogears and Chrono Cross by Yasunori Mitsuda, public performance began to gain international popularity. On August 20, 2003, music written for video games such as Final Fantasy and The Legend of Zelda was performed for the first time outside Japan, by the Czech National Symphony Orchestra in a Symphonic Game Music Concert in Leipzig, Germany at the Gewandhaus concert hall. This event was held as the official opening ceremony of Europe's biggest trading fair for video games, the GC Games Convention and repeated in 2004, 2005, 2006 and 2007. On November 17, 2003, Square Enix launched the Final Fantasy Radio on America Online. The radio station has initially featured complete tracks from Final Fantasy XI and Final Fantasy XI: Rise of Zilart and samplings from Final Fantasy VII through Final Fantasy X. The first officially sanctioned Final Fantasy concert in the United States was performed by the Los Angeles Philharmonic Orchestra at Walt Disney Concert Hall in Los Angeles, California, on May 10, 2004. All seats at the concert were sold out in a single day. "Dear Friends: Music from Final Fantasy" followed and was performed at various cities across the United States. Nobuo Uematsu has also performed a variety of Final Fantasy compositions live with his rock band, The Black Mages. On July 6, 2005, the Los Angeles Philharmonic Orchestra also held a Video Games Live concert at the Hollywood Bowl, an event founded by video game music composers Tommy Tallarico and Jack Wall. This concert featured a variety of video game music, ranging from Pong to Halo 2. It also incorporated real-time video feeds that were in sync with the music, as well as laser and light special effects. Media outside the video game industry, such as NPR and The New York Times, have covered their subsequent world tours. On August 20, 2006, the Malmö Symphonic Orchestra with host Orvar Säfström performed the outdoor game music concert Joystick in Malmö, Sweden before an audience of 17,000, holding the current record of attendance for a game music concert. Säfström has since continued to produce game music concerts around Europe under the names Joystick and Score. From April 20–27, 2007, Eminence Symphony Orchestra, an orchestra dedicated to video game and anime music, performed the first part of their annual tour, the "A Night in Fantasia" concert series in Australia. Whilst Eminence had performed video game music as part of their concerts since their inception, the 2007 concert marked the first time ever that the entire setlist was pieces from video games. Up to seven of the world's most famous game composers were also in attendance as special guests. Music performed included Red Alert 3 Theme: Soviet March by James Hannigan and Shadow of the Colossus by Kow Otani. Since 2010, video games-themed "pops" concerts have become a major proportion of the revenue in many United States concert halls, as traditional classical music performances decline in popularity. On March 16, 2012 the Smithsonian American Art Museum's "The Art of Video Games" exhibit opened featuring a chipmusic soundtrack at the entrance by artists 8 Bit Weapon & ComputeHer. 8 Bit Weapon also created a track called "The art of Video Games Anthem" for the exhibit as well. In popular music In the popular music industry, video game music and sounds have appeared in songs by various popular artists. 
Arcade game sounds had a particularly strong influence on the hip hop, pop music (particularly synthpop) and electro music genres during the golden age of arcade video games in the early 1980s. Arcade game sounds had an influence on synthpop pioneers Yellow Magic Orchestra, who sampled Space Invaders sounds in their influential 1978 debut album, particularly the hit song "Computer Game". In turn, the band would have a major influence on much of the video game music produced during the 8-bit and 16-bit eras. Other pop songs based on Space Invaders soon followed, including "Disco Space Invaders" (1979) by Funny Stuff, "Space Invaders" (1980) by Playback, and the hit songs "Space Invader" (1980) by The Pretenders and "Space Invaders" (1980) by Uncle Vic. Buckner & Garcia produced a successful album dedicated to video game music in 1982, Pac-Man Fever. Former YMO member Haruomi Hosono also released a 1984 album produced entirely from Namco arcade game samples entitled Video Game Music, an early example of a chiptune record and the first video game music album. Warp's record "Testone" (1990) by Sweet Exorcist sampled video game sounds from YMO's "Computer Game" and defined Sheffield's bleep techno scene in the early 1990s. More recently, "video game beats" have appeared in popular songs such as Kesha's "Tik Tok", the best-selling single of 2010, as well as "U Should Know Better" by Robyn featuring Snoop Dogg, and "Hellbound" by Eminem. The influence of video game music can also be seen in contemporary electronica music by artists such as Dizzee Rascal and Kieran Hebden. Grime music in particular samples sawtooth wave sounds from video games which were popular in East London. English power metal band DragonForce is also known for their "retro video game influenced" sound. Video game music education Video game music has become part of the curriculum at the degree, undergraduate, and graduate levels in many traditional colleges and universities. According to the Entertainment Software Association, there are over 400 schools offering courses and degrees in video game design in the United States, many of which include sound and music design. Berklee College of Music, Yale University, New York University, and the New England Conservatory have all introduced game music into their music programs. These programs offer immersive education in music composition, orchestration, editing and production. Other post-secondary schools have more games-focused programs, such as DigiPen Institute of Technology, Columbia College Chicago, and Academy of Art University, who all offer programs in Music and Sound Design. These programs include courses in sound effect creation, interactive sound design, and scripting music. Similar programs have gained popularity in Europe. The Utrecht School of the Arts (Faculty of Art, Media and Technology) has offered a Game Sound and Music Design program since 2003. The University of Hertfordshire has a program in Music Composition and Technology for Film and Games, Leeds Beckett University offers Sound and Music for Interactive Games, and dBs Music Bristol teaches Sound for Games and Apps. More informal institutions, like the training seminars at GameSoundCon also feature classes in how to compose video game music. Extracurricular organizations devoted to the performance of video game music have also been implemented in tandem with these new curriculum programs. 
The Gamer Symphony Orchestra at the University of Maryland performs self-arranged video game music and the Video Game Orchestra is a semiprofessional outgrowth of students from the Berklee College of Music and other Boston-area schools. According to the National Association for Music Education, video game music is now being taught at elementary and secondary school levels to aid in the understanding of music composition. Students at Magruder High School in Montgomery County, Maryland have even started a student-run gamer orchestra, and many high school bands perform game music. Academic study Academic research on video game music began in the late 1990s, and developed through the mid 2000s. Early research on the topic often involved historical studies of game music, or comparative studies of video game music and film music (see, for instance, Zach Whalen's article "Play Along – An Approach to Videogame Music" which includes both). The study of video game music is also known by some as "ludomusicology" — a portmanteau of "ludology" (the study of games and gameplay) and "musicology" (the study and analysis of music) — a term coined independently by Guillaume Laroche and Roger Moseley. A prominent figure in early video game music and audio research is Karen Collins, who is associate professor at the University of Waterloo and Canada Research Chair in Interactive Audio at the University of Waterloo Games Institute. Her monograph Game Sound: An Introduction to the History, Theory and Practice of Video Game Music and Sound Design (MIT Press 2008) is considered a seminal work in the field, and was influential in the subsequent development of video game music studies. The Ludomusicology Research Group is an inter-university research organisation focusing on the study of music in games, music games and music in video game culture, composed of four researchers: Michiel Kamp, Tim Summers, Melanie Fritsch, and Mark Sweeney. Together they organise an annual international conference held in the UK or Europe (at the time of writing, the most recent was the Ludo2017 conference held at Bath Spa University). The group was founded by Kamp, Summers and Sweeney in August 2011, who have also edited a collection of essays based around the study of game sound entitled Ludomusicology: Approaches to Video Game Music, published in July 2016. They also edited a double special issue of The Soundtrack and initiated a new book series for the Study in Game Sound and Music in 2017. In September 2016, Tim Summers' book 'Understanding Video Game Music' was published by Cambridge University Press. Fritsch officially joined the group in 2016. She had edited the 2nd issue of the online journal ACT – Zeitschrift für Musik und Performance, published in July 2011, which included ludomusicological contributions written by Tim Summers, Steven B. Reale and Jason Brame. She had been a regular at the conferences since 2012 and published several book chapters on the topic. Whereas Kamp, Summers and Sweeney have a background in musicology, Fritsch's background is in performance studies. The North American Conference on Video Game Music (NACVGM) is an international conference on video game music held annually in North America since 2014. It is organised by Neil Lerner, Steven Beverburg Reale and William Gibbons. 
In late 2016 the Society for the Study of Sound and Music in Games (SSSMG) was launched by the Ludomusicology Research Group in conjunction with the organisers of the North American Conference on Video Game Music and the Audio Mostly conference. The SSSMG has the aim of bringing together both practitioners and researchers from across the globe in order to develop the field's understanding of sound and video game music and audio. Its focus is the use of its website as a "hub" for communication and resource centralisation, including a video game music research bibliography (a project initially begun by the Ludomusicology Research Group). The Ludomusicology Society of Australia was launched by Barnabas Smith in April 2017, during the Ludo2017 conference in Bath, UK; it aims to "offer a centralised and local professional body nurturing game music studies for academics, people in industry and game music fans alike in the Australasian region." Composers Creating and producing video game music requires strong teams and coordination among the different divisions of game development. As the market has expanded, so have the types of jobs in game music. The process often starts with the game designer, who will have a specific musical theme or genre in mind for the game. Their options include contracting original composers or licensing existing music, both of which require other music experts. During the arcade and early console era (1983 to the mid 1990s), most game music was composed by full-time employees of the particular game company producing the game. This was largely due to the very specialized nature of video game music, where each system had its own technology and tool sets. It was not uncommon for a game company like Capcom or Konami to have a room full of composers, each at their own workstation with headphones writing music. Once the CD-era hit and studio recorded music became more ubiquitous in games, it became increasingly common for game music to be composed by independent contractors, hired by the game developer on a per-project basis. Most bigger budget games such as Call of Duty, Mass Effect, Ghost Recon, or Lost Planet hire composers in this fashion. Approximately 50% of game composers are freelance, the remaining being employees of a game company. Original score and soundtrack may require the hiring of a Music Director, who will help create the game music as well as help book the resources needed for performing and recording the music. Some music directors may work with a game's Sound Designer to create a dynamic score. Notable exceptions include composer Koji Kondo, who remains an employee at Nintendo, and Martin O'Donnell, who worked at Bungie until early 2014. The growth of casual, mobile and social games has greatly increased opportunities for game music composers, with job growth in the US market increasing more than 150% over five years. Independently developed games are a frequent place where beginning game composers gain experience composing for video games. Game composers, particularly for smaller games, are likely to provide other services such as sound design (76% of game composers also do some sound design), integration (47% of game composers also integrate their music into audio middleware), or even computer coding or scripting (15%). With the rising use of licensed popular music in video games, job opportunities in game music have also come to include the role of a music supervisor. 
Music supervisors work on behalf of a game developer or game publisher to source pre-existing music from artists and music publishers. These supervisors can be hired on a per-project basis or can work in-house, like the Music Group for Electronic Arts (EA) that has a team of music supervisors. A music supervisor is needed to not only help select music that will suit the game, but to also ensure the music is fully licensed in order to avoid lawsuits or conflicts. Music supervisors may also help negotiate payment, which for artists and songwriters is often a one-time buy-out fee, because games do not generate music royalties when they are sold. A growing trend is to contract artists to write original songs for games, to add to their value and exclusivity, and once again supervisors can be a part of that process. Awards Current Defunct Other In 2011, video game music made its first appearance at the Grammy Awards when "Baba Yetu", a song from Civilization IV, won the 53rd annual music awards' Best Instrumental Arrangement Accompanying Vocalists, making it the first video game music to be nominated for (or to win) a Grammy. The song won for its placement on Christopher Tin’s album Calling All Dawns, but had been used in the game six years prior. Other video game awards include the International Film Music Critics Association (IFMCA) award for Best Original Score for Interactive Media and Machinima.com's Inside Gaming Awards for Best Original Score and Best Sound Design. In addition to recognizing the composers of original score, the Guild of Music Supervisors offer a GMS Award to the music supervisors that select and coordinate licensed music for video games. Fan culture Video game fans have created their own fan sites "dedicated to the appreciation and promotion of video game music", including OverClocked ReMix and Rainwave. Fans also make their own song remixes and compilations, like insaneintherainmusic, and have built online remixing communities through the ease of internet distribution. There are over 50 podcasts dedicated to the topic of video game music. Most notable among these are the Super Marcato Bros., Rhythm and Pixels, and Game That Tune. Japanese dōjin music scene is notable for producing albums of arranged videogame music which derived from popular retro franchises such as Mega Man, Chrono Trigger or Final Fantasy, from dōjin games, such as Touhou Project, studio Key visual novels and When They Cry series, from popular franchises on Comiket, such as Type-Moon Fate series or Kantai Collection. There have been over six thousand dōjin albums of Touhou Project music released. See also Circuit bending Game rip (audio) IEZA Framework List of video game musicians List of video game soundtracks released on vinyl List of video game soundtracks on music streaming platforms MAGFest Music video game OverClocked ReMix Rainwave VGMusic.com Video Games Live References External links VGMdb Video Game Music and Anime Soundtrack Database | VGMdb GamesSound.com Academic articles on video game sound and music Early Video Game Soundtracks 2001 article on video game music, orig. published in In Magazine High Score: The New Era of Video Game Music at Tracksounds "The Evolution of Video Game Music", All Things Considered, April 12, 2008 List of games with non-original music at uvlist.net Pretty Ugly Gamesound Study Website studying pretty and ugly game music and sound. CaptivatingSound.com Resources for design of game sound and music. Audio and Immersion PhD thesis about game audio and immersion. 
Diggin' in the Carts: A Documentary Series About Japanese Video Game Music, Red Bull Music Academy. Video game composer, Chris Shutt (Composer). Video game music Music
56912320
https://en.wikipedia.org/wiki/Gaming%20Innovation%20Group
Gaming Innovation Group
Gaming Innovation Group Inc. (GIG) is an iGaming company offering cloud-based product and platform services and performance marketing to its B2B partners. The company is listed on the Oslo Stock Exchange under the ticker symbol "GIG" and on Nasdaq Stockholm under the ticker symbol "GIGSEK". The company is a full-service iGaming software services provider and media partner for online gambling operators. Over the last couple of years, the company has expanded into most segments of the supplier value chain in iGaming: performance-based/affiliate marketing (GiG Media, formerly known as Innovation Labs Ltd.), software platform services (GiG Core, formerly known as iGaming Cloud), and sportsbook (GiG Sports), supported by GiG's fully managed services to offer a full turnkey solution. GiG offers a toolbox of software and services that operators can purchase in full or as standalone products. The company's platform of services is licensed by the MGA and UKGC and offered under a transactional waiver (TW) issued by the DGE in New Jersey. It is certified in Sweden, Spain, Iowa (USA), Croatia and Latvia, and is also compliant with the GLI33 and GLI16 platform standards and ISO27001 for GiG Core and GiG Data. The company previously operated several consumer brands including Rizk.com and Guts.com before divesting them to Betsson in April 2020.
History
Gaming Innovation Group Ltd. was incorporated as Donkr International Ltd. in 2008 in Malta. It is the holding company of Innovation Labs Ltd., the trading company operating the online poker forum Donkr.com. In 2012, Frode Fagerli and Robin Reed became the owners of the company and named it Gaming Innovation Group Ltd. In 2012, Guts Gaming Ltd. (now MT SecureTrade Limited) was incorporated as a fully owned subsidiary of Gaming Innovation Group Ltd. In May 2013 the company launched Guts.com, a website offering sports betting and casino games. In early 2015, iGamingCloud Ltd., a B2B platform service for the iGaming industry (GiG Core), was launched. The product went live in February 2015. In the same month, Gaming Innovation Group and Nio Inc. signed an agreement. In May 2015 Optimizer Invest acquired 10% of iGaming Cloud Ltd for a consideration of €1m. In addition, there was a Share Exchange Agreement to exchange the entire issued share capital of Gaming Innovation Group PLC for shares in Nio Inc. Subsequently, Nio Inc. adopted the name Gaming Innovation Group Inc. and appointed Robin Reed as its new CEO. The following month, Gaming Innovation Group was listed on the Oslo Stock Exchange. The company expanded into the affiliate industry in iGaming through a series of acquisitions. GiG subsidiary Innovation Labs (GiG Media) acquired the affiliate network Spaseeba for 27m GiG shares. GiG Media also acquired affiliate networks in Finland and Estonia. In January 2016 the company launched the B2C casino brand Rizk.com. Later, in March, the company acquired the sports betting company OddsModel AS for a consideration of 21.74 million new shares. In June 2016 Gaming Innovation Group acquired Betit Holdings, adding the three iGaming operators Kaboo.com, SuperLenny.com, and Thrills.com to its portfolio of B2C-focused brands (notably Rizk.com and Guts.com before the Betit acquisition). As part of the deal, Optimizer Invest increased its ownership of GiG shares.
The company also continued its expansion within the media business by acquiring the Dutch affiliate network Delta Markets B.V. as well as the Swedish affiliate network Magenti Media. The expansion in the media business continued in 2017 with the acquisition of the technology-driven performance marketing company Rebel Penguin ApS for a total amount of €13 million. This acquisition completed GiG Media's establishment as a multi-channel affiliate, operating both a publishing unit (organic rankings on search engines) and a paid marketing unit. Gaming Innovation Group also launched a new B2C brand called Highroller. Highroller was later divested in 2019 to Ellmount Gaming. Finally, by the end of the year, the company announced the opening of a new casino game studio, GiG Games. Further investments in GiG Games were halted in 2019. By the end of 2017, GiG Core had 34 clients on its iGaming platform. In 2018, the most notable event was the signing of a turnkey deal with Hard Rock Casino, with an initial launch in Atlantic City, New Jersey. The company also launched its marketing compliance tool targeted at the online gambling segment, GiG Comply. An in-house sportsbook on Rizk.com was launched just ahead of the 2018 FIFA World Cup. During 2018, GiG obtained a betting license in the German state of Schleswig-Holstein, which is part of the DACH region, a key growth region for the organisation and its Rizk brand. E-sports hub GLHF.GG was signed on GiG Core, as was SIA Mr. Green Latvia, for the provision of platform license, online casino, sportsbook, and front-end services. In January 2019, GiG launched Hard Rock's omnichannel offline and online sports solutions in New Jersey. On 26 March 2019, Gaming Innovation Group began trading on Nasdaq Stockholm under the ticker symbol 'GIGSEK'. GiG Core also signed Metalcasino to its sportsbook offering. An agreement to provide MegaLotto with GiG's technical platform and casino games was also signed. This meant that GiG had entered the lottery segment for the first time. In August 2019, Gaming Innovation Group sold its 'HighRoller' brand to Ellmount Gaming, stating that its core focus would primarily be on growing Rizk as a casino brand and Guts as a sports betting brand. The HighRoller brand was sold for approx. €7m (3.5 PV). In 2019, Kindred Group was signed on the new GiG Comply product. Gaming Innovation Group also partnered with SkyCity, a market leader in New Zealand, for a turnkey online casino solution. In September 2019, GiG launched its digital and retail sportsbook with Hard Rock Cafe International in Iowa. In April 2020, Gaming Innovation Group divested its B2C operations to Betsson for a total of €31 million, with a premium fee on the platform service. The upfront payment was used to repay a SEK 300 million bond. From this point, GiG would be purely B2B, focused on its platform services (GiG Core) as well as its media/affiliate services (GiG Media). In June 2020, Gaming Innovation Group signed a long-term agreement with GS Technologies Limited for the provision of GiG's platform and front-end development to a new casino brand under its license from the Maltese Gaming Authority. In the following month, Gaming Innovation Group signed a heads of terms agreement with K.A.K. DOO Skopje, a hospitality, tourism, and services company (K.A.K.), for the provision of GiG's platform, front-end development, and managed media services to launch its digital operation in the regulated North Macedonian market.
In the same month, GiG subsidiary iGaming Cloud Inc. was granted a permanent casino service industry enterprise license (CSIE) from the Division of Gaming Enforcement (DGE) in New Jersey, USA. The application was filed in June 2018, with GiG supporting Hard Rock International's casino operations in New Jersey under a so-called transactional waiver until the final approval was awarded. In July 2020, GiG signed a long-term agreement with Mill of Magic Ltd, a subsidiary of Casumo, for the provision of GiG's platform, data platform, and GiG Logic. Mill of Magic will use GiG's products to power its new Pay N Play casino offering under its license from the Maltese Gaming Authority. In August 2020, GiG signed a heads of terms agreement with Grupo Slots LOTBA S.E., Argentina's premier gaming and entertainment group. In September, GiG and Betgenius signed a deal to join forces on a global sportsbook platform.
Organizational structure
Gaming Innovation Group Inc. (GIG) is a publicly traded US corporation, incorporated in the state of Delaware with its registered offices at 10700 Stringfellow Rd. 10, Bokeelia, FL 33922, Florida, USA, where accounting is located. GiG has been registered in the Norwegian company registry as a "Norwegian Registered Foreign Entity" (NUF) with the organization number 988 015 849. It has also been registered as a branch in the Malta Business Registry, under Registration No: 2309086. GIG is a holding company, with all commercial activities being carried out by its subsidiaries. GIG has subsidiaries located in Malta, Denmark, Gibraltar and Spain. Some of the subsidiaries based in Malta hold remote operator licences in the UK, Malta and Sweden. Furthermore, the two main segments of the business (GIG Media and GIG Core) are operated as follows: GIG Media is operated through Innovation Labs Ltd (a Malta-based subsidiary) and iGamingCloud Inc. (a US-based subsidiary), and GIG Core is mainly operated through iGamingCloud Limited (a Malta-based subsidiary).
Business units
Gaming Innovation Group operates two overall business units, both focusing on the B2B (business-to-business) market in iGaming. The media division (GiG Media) focuses on generating leads for operators in iGaming. GiG Media operates several media websites and traffic-generating channels engaging the end user, turning users into leads for operators. Notable websites that GiG Media operates are superlenny.com, casinotopsonline.com, and wsn.com. The platform division (GiG Core) delivers platform services to iGaming operators with an online B2C (business-to-consumer) brand within casino and sports, offering products and services such as GiG Core (PAM), GiG Data (a real-time data platform), GiG Sports, GiG Logic (a real-time rules-based engine) and fully managed services including CRM, media, operations and sports trading.
World Sports Network (WSN.com)
Through its sports website, WSN.com, Gaming Innovation Group entered the regulated US sports betting market in January 2019. It was first granted a license to operate in New Jersey, followed by Indiana, Pennsylvania, Colorado and West Virginia. In June 2020, WSN.com started an online gambling podcast with sports betting personality Bill Krackomberger.
Casinotopsonline.com
GiG acquired Casinotopsonline.com in March 2017 for €11.5 million; at the time it was deemed one of Europe's largest online casino review portals. The site focuses on news and casino reviews and operates on a worldwide scale.
SuperLenny.com The website SuperLenny.com was part of the Betit Holdings acquisition performed by GiG in June 2016. SuperLenny.com started as a casino brand with a Scandinavian focus but was later changed into an affiliate focused website driving traffic and leads to operators. Sportnco In Dec 2021, GiG confirmed its acquisition of Sportnco, with the aim of strengthening its status as a platform and media provider to the betting and gaming industry. Licences and certificates GiG's software platform has been certified in Latvia, Croatia, Spain, Sweden and Iowa. By virtue of partnerships GiG has entered into with operators, GiG provides its software platform services in these regulated markets. Financial data See also List of companies listed on the Oslo Stock Exchange References Online poker companies Online gambling companies of Malta Companies listed on the Oslo Stock Exchange
2452943
https://en.wikipedia.org/wiki/Tunnel%20broker
Tunnel broker
In the context of computer networking, a tunnel broker is a service which provides a network tunnel. These tunnels can provide encapsulated connectivity over existing infrastructure to another infrastructure. There are a variety of tunnel brokers, including IPv4 tunnel brokers, though most commonly the term is used to refer to an IPv6 tunnel broker as defined in RFC 3053. IPv6 tunnel brokers typically provide IPv6 to sites or end users over IPv4. In general, IPv6 tunnel brokers offer so-called 'protocol 41' or proto-41 tunnels. These are tunnels where IPv6 is tunneled directly inside IPv4 packets by having the protocol field set to '41' (IPv6) in the IPv4 packet. In the case of IPv4 tunnel brokers, IPv4 tunnels are provided to users by encapsulating IPv4 inside IPv6, as defined in RFC 2473.
Automated configuration
Configuration of IPv6 tunnels is usually done using the Tunnel Setup Protocol (TSP) or the Tunnel Information and Control protocol (TIC). A client capable of this is AICCU (Automatic IPv6 Connectivity Client Utility). In addition to IPv6 tunnels, TSP can also be used to set up IPv4 tunnels.
NAT issues
Proto-41 tunnels (direct IPv6 in IPv4) may not operate well when situated behind NATs. One way around this is to configure the actual endpoint of the tunnel to be the DMZ host on the NAT-utilizing equipment. Another method is to use either AYIYA or TSP, both of which send IPv6 inside UDP packets, which are able to cross most NAT setups and even firewalls. A problem that might still occur is the timing out of state in the NAT device. As a NAT remembers that a packet went out to the Internet, it allows another packet to come back in from the Internet that is related to the initial proto-41 packet. When this state expires, no other packets from the Internet will be accepted. This therefore breaks the connectivity of the tunnel until the user's host again sends out a packet to the tunnel broker.
Dynamic endpoints
When the endpoint isn't a static IP address, the user, or a program, has to instruct the tunnel broker to update the endpoint address. This can be done using the tunnel broker's web site or using an automated protocol like TSP or Heartbeat, as used by AICCU. In the case of a tunnel broker using TSP, the client automatically restarting the tunnel will cause the endpoint address and port to be updated.
Implementations
The first implementation of an IPv6 tunnel broker was at the Italian CSELT S.p.A. by Ivano Guardini, the author of RFC 3053. There are a variety of tunnel brokers that provide their own custom implementations based on different goals. Listed here are the common implementations as used by the listed IPv6 tunnel brokers.
Gogo6
gogoSERVER
gogoSERVER (formerly Gateway6) was used by the Freenet6 service, which was the second IPv6 tunnel broker service, going into production in 1999. It started as a project of Viagenie; Hexago was then spun off as a commercial company selling Gateway6, which powered Freenet6, as its flagship product. In June 2009, Hexago became gogo6 through a management buyout, and the Freenet6 service became part of gogoNET, a social network for IPv6 professionals. On 23 March 2016 all services of Freenet6/Gogo6 were halted.
SixXS
sixxsd
SixXS's sixxsd is what powered all the SixXS PoPs. It is custom-built software for the purpose of tunneling at high performance with low latency. Development of sixxsd started in 2002 and evolved into the v4 version of the software. The software was made available to ISPs who provided and ran SixXS PoPs.
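The proto-41 encapsulation described above simply carries a whole IPv6 packet as the payload of an IPv4 packet whose protocol field is 41, and the NAT time-out problem is commonly worked around by periodically sending something through the tunnel. A minimal sketch of both ideas, assuming Python raw sockets on Linux with root privileges and documentation-range addresses (real deployments use the operating system's tunnel interfaces rather than hand-built packets), might look like this:

import socket
import struct
import time

BROKER_V4 = "192.0.2.1"   # example IPv4 address of the tunnel broker's PoP
SRC_V6 = "2001:db8::2"    # example client side of the tunnel
DST_V6 = "2001:db8::1"    # example broker side of the tunnel

def build_ipv6_packet(src, dst, next_header=59, hop_limit=64):
    """Build a minimal IPv6 packet; next header 59 means 'no next header'."""
    ver_tc_flow = struct.pack("!I", 6 << 28)  # version 6, zero traffic class/flow label
    return (ver_tc_flow
            + struct.pack("!HBB", 0, next_header, hop_limit)  # payload length 0
            + socket.inet_pton(socket.AF_INET6, src)
            + socket.inet_pton(socket.AF_INET6, dst))

# A raw IPv4 socket created with protocol number 41: the kernel wraps whatever
# is sent in an IPv4 header whose protocol field is 41, i.e. a proto-41 packet.
sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, 41)

while True:
    # Periodically pushing a packet out through the tunnel refreshes the NAT
    # mapping, so return traffic from the broker keeps being let back in.
    sock.sendto(build_ipv6_packet(SRC_V6, DST_V6), (BROKER_V4, 0))
    time.sleep(30)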
Originally, in 2000, SixXS used bash shell scripts. Due to scalability issues and other problems, sixxsd was designed and developed. After 17 years, SixXS sunset its tunnels on 2017-06-06.
CITC ddtb
CITC Tunnel Broker, run by the Saudi Arabia IPv6 Task Force, uses its own implementation of the TSP RFC named 'ddtb'.
See also
List of IPv6 tunnel brokers
6in4
References
Broker IPv6
1807848
https://en.wikipedia.org/wiki/Karl%20Dorrell
Karl Dorrell
Karl James Dorrell (born December 18, 1963) is an American football coach who is the head coach at the University of Colorado. Dorrell most notably served as the head football coach of the University of California, Los Angeles (UCLA) from 2003 to 2007, compiling a record of 35–27. He led the UCLA Bruins to five bowl appearance in five seasons, but did not coach in the fifth after he was fired in December 2007. Dorrell was the first and only African American head football coach in UCLA's history to date. In 2020, Dorrell was hired as the new head coach of the Colorado Buffaloes. In his first season at CU, he was named Pac-12 Coach of the Year. Early life and playing career Karl attended Helix High School in La Mesa, California, where he played football. He was a two-time all-league selection and an honorable mention All-American as a senior. He led Helix to the CIF San Diego Section second place in 1981. Karl went on to play football at UCLA, earning four varsity letters in football. He was one of the most successful wide receivers at UCLA with 1,517 receiving yards on 108 receptions. He suffered a shoulder injury in 1984 and was granted an extra year of eligibility by the NCAA. He played on a team that won the Rose Bowl in 1983, 1984, and 1986, and that won the Freedom Bowl in 1986. During the 1983 season, he was a teammate of quarterback Rick Neuheisel, who would be his eventual successor as UCLA head coach. He caught touchdowns from Neuheisel during the season, including two in the 1984 Rose Bowl. In the 1986 UCLA vs. USC game, Dorrell was on the receiving end of a play that the Los Angeles Times dubbed "Hail Mary, and in your face." On the last play of the first half, UCLA quarterback Matt Stevens faked a kneeldown, then pulled up and threw a Hail Mary pass, which was tipped into the hands of the flanker, Dorrell, to put the Bruins up 31–0 at the half. They would go on to win 45–20. He had a brief career as a player in the NFL with the Dallas Cowboys in the 1987 season, but he was placed on the injured reserve. Coaching career Assistant coach Dorrell's first job as a coach was in 1988, as a graduate assistant for Terry Donahue at UCLA. That season the Bruins finished the season with a record of 10-2 and defeated the Arkansas Razorbacks in the Cotton Bowl Classic. In 1989, he became a wide receivers coach at UCF. In 1990 and 1991 he was the offensive coordinator and receivers coach at Northern Arizona. Under his tutelage, the NAU offense set a school record with 255 first downs in 1991, amassing the second-most total offense (4,539 yards) in a season. From 1992 to 1993, Dorrell coached wide receivers at Colorado. In his first year with the Buffaloes, two of his receivers, Charles Johnson and Michael Westbrook, became just the fourth pair of receivers on the same team in NCAA history to each have over 1,000 receiving yards. He then served as wide receivers coach at Arizona State in 1994 before returning to Colorado when they hired his former UCLA teammate, Rick Neuheisel, as their head coach. This time, he would serve as wide receivers coach and offensive coordinator from 1995 to 1998. When Neuheisel left Colorado for Washington, he brought four assistant coaches with him – including Dorrell, who served as the Huskies' offensive coordinator and receivers coach in 1999. In both 1993 and 1999, Dorrell was a recipient of Denver Broncos Minority Coaching Fellowships, which allowed him to spend time in the Broncos' training camp. 
He would return to the team in 2000 to serve as the receivers coach under head coach Mike Shanahan. He held this position for three years, coaching players like Rod Smith, a two-time selection to the NFL's Pro Bowl, and Ed McCaffrey, a one-time Pro Bowl selection. With the help of Dorrell, Smith and McCaffrey became only the second wide receiver duo to each catch 100 passes in a single season (2000).
UCLA
Karl Dorrell was hired as the head coach at UCLA, replacing Bob Toledo, who was released at the end of the 2002 regular season. Between Toledo and Dorrell, Ed Kezirian, an athletic department official who oversaw academics for the football team, served as interim coach for the 2002 Las Vegas Bowl. Under Kezirian, the Bruins won the bowl game over New Mexico, 27–13. Dorrell's hiring as head coach was announced on December 19, 2002, by UCLA athletic director Dan Guerrero. Ed Kezirian remained on the football staff. Dorrell was brought in at UCLA to clean up a program marred by off-the-field problems in the final years of Bob Toledo's tenure.
2003–2004 seasons
The UCLA Bruins football team under Dorrell recorded a mark of 6–7 in his first season as head coach in 2003, with an appearance in the Silicon Valley Bowl and a loss to Fresno State. In 2004, his second season, the team finished with a record of 6–6 and an appearance in the Las Vegas Bowl, where it lost to Wyoming.
2005 season
In 2005, his third season as head football coach, Dorrell was able to get his first win against a ranked opponent, No. 21 Oklahoma, featuring Adrian Peterson. On October 1, 2005, head coach Tyrone Willingham and his Washington Huskies came to the Rose Bowl for a Pacific-10 Conference game to play UCLA. This was the first time two black head coaches faced each other in a Pac-10 conference game. At the time, Sylvester Croom of Mississippi State was the only other black coach heading an NCAA Division I football program. Dorrell achieved his first win against a top-ten opponent with a 47–40 upset win over No. 10-ranked Washington. Three Bruin wins in the 2005 season set new school records for biggest comebacks, earning the team the nickname "The Cardiac Kids." They came thanks largely to the heroics of quarterback Drew Olson and tailback Maurice Jones-Drew. In the regular season, the Bruins came from 21 points down to win in overtime against both Washington State and Stanford. In the Stanford comeback, the Bruins scored 21 points in the final 7:04 of the fourth quarter. In the 2005 Sun Bowl, the Bruins set the record again by coming back from 22 points down, which also stands as the Sun Bowl record. The Bruins were ranked No. 7 in the nation until a 52–14 blowout loss to a 3–8 Arizona team. The Bruins came into the UCLA–USC rivalry game, the last of the regular season, ranked No. 11. They suffered a 66–19 defeat to the No. 1 2005 USC Trojans football team, the Bruins' largest margin of defeat in the series since it began in 1929 with a 76–0 loss. The Bruins finished third in the Pac-10 standings. On December 30, 2005, his Bruins defeated the Northwestern Wildcats in the 2005 Sun Bowl, 50–38, finishing the season with a 10–2 record. At the end of the 2005 season, Dorrell and fellow UCLA coach Ben Howland received pay bonuses for coaching successful seasons. Dorrell was named Pac-10 co-coach of the year along with USC head coach Pete Carroll.
2006 season
In 2006, Dorrell's fourth season, he guided the Bruins to a 7–6 season (5–4 in conference) and a fourth-place Pac-10 finish.
UCLA played its first game at the University of Notre Dame since the 1960s and was leading 17–13, but the Irish scored a touchdown in the final minute to win. The most notable victory of his coaching career at UCLA was a 13–9 defeat of No. 2-ranked and Bowl Championship Series title-game-bound USC on December 2, 2006. The win kept the Trojans out of the title game and broke a seven-game UCLA losing streak to the Trojans, thereby preserving the Bruins' eight-game win streak over USC from 1991 to 1998 as the longest run in the history of the rivalry. The victory also clinched a winning season for UCLA. The Bruins played in the Emerald Bowl in San Francisco against a Bobby Bowden-coached Florida State team on December 27, 2006, and lost, 44–27.
2007 season
In Dorrell's fifth season at UCLA, with 20 returning starters and a team of his own recruits, hopes were high for the Bruins in 2007. After starting the season with wins over Stanford and BYU and achieving a No. 11 AP Poll ranking, UCLA stumbled against an injured, winless, and unranked Utah Utes team, 44–6. Four weeks later, Dorrell's Bruins fell again, this time 20–6 to an unranked, winless Notre Dame team. The Bruins did, however, post wins against seemingly more difficult Pac-10 opponents, including a No. 10 Cal team. However, the bad taste of losses to teams the Bruins were favored to beat (including an embarrassing 27–7 loss to Washington State) raised questions about Dorrell's play-calling and ability to motivate his players. After the Washington State loss, UCLA athletic director Dan Guerrero addressed UCLA's inconsistent football performances for the first time, stating "I will be very interested to see how we finish the season. And you can use that." Many took this as a hint that Dorrell's job might be in serious jeopardy. The Bruins would go on to lose to Arizona and Arizona State by a combined score of 58–47, but surprisingly shut out an Oregon Ducks team that a week earlier had lost starting quarterback and Heisman Trophy candidate Dennis Dixon to a knee injury. Heading into the final game of the regular season against crosstown rival USC, the Bruins still had an outside chance at a Rose Bowl berth that might have saved Dorrell's job; with a victory over USC and some help from Arizona (with a win over ASU), the Bruins could have been the first-ever five-loss team to play in the Rose Bowl. It wasn't to be, however, and the Bruins finished the 2007 regular season with a miserable offensive performance in a 24–7 loss to USC and a record of 6–6. On December 3, 2007, Dorrell was fired during a meeting with athletic director Dan Guerrero. He was offered the choice to coach in the Las Vegas Bowl but decided not to. Defensive coordinator DeWayne Walker served as interim coach for the game, where UCLA lost to BYU. UCLA eventually selected Rick Neuheisel, a former UCLA teammate of Dorrell, as his successor.
Second stint as assistant coach
Miami Dolphins (2008–2011)
Dorrell interviewed at Duke University and was a finalist along with eventual hire David Cutcliffe for the head coaching position vacated by Ted Roof. He was also mentioned as a candidate for the vacant offensive coordinator position with the Houston Texans. Former Texans offensive coordinator Mike Sherman left for Texas A&M University in November 2007. That position, however, eventually went to Kyle Shanahan.
After rumors that he was a candidate to succeed Mike Heimerdinger as Denver Broncos assistant head coach, Dorrell was eventually hired as wide receivers coach for the Miami Dolphins, after having also interviewed with the Kansas City Chiefs. He was named quarterbacks coach on January 26, 2011.
Houston Texans (2012–2013)
Dorrell was hired as quarterbacks coach for the Houston Texans in 2012, coaching Matt Schaub through a Pro Bowl season as the Texans went 12–4. He left the team after the 2013 season.
Vanderbilt (2014)
Dorrell was reunited with newly hired Vanderbilt University head coach Derek Mason, joining his staff as the offensive coordinator in January 2014. Mason was a player at Northern Arizona while Dorrell was coaching there early in his career. Dorrell's stint with Vanderbilt lasted only one season. Chris Foster of the Los Angeles Times wrote, "Dorrell's West Coast offense did not fare much better in the East than it did in Westwood. The Commodores averaged 17.2 points a game and finished with a 3–9 record."
New York Jets (2015–2018)
On January 23, 2015, Dorrell was named the New York Jets wide receivers coach. In the 2015 New York Jets season, Dorrell established the Jets' elite wide receiver duo of Brandon Marshall and Eric Decker. Both had 1,000-yard seasons (Decker – 1,027; Marshall – 1,502**), combined for an NFL-record 26 touchdowns as a duo (Marshall – 14*; Decker – 12) and totaled 189 combined catches (Marshall – 109**; Decker – 80). Although the 2016 New York Jets season was a disastrous one for the Jets, the wide receivers were a bright spot in their offense. Although Brandon Marshall declined and Eric Decker was injured early in the season, third-string wide receiver Quincy Enunwa had a breakout season for the Jets, with 58 catches for 857 yards and 4 touchdowns. Undrafted rookie wide receiver Robby Anderson also had a good season, with 42 receptions, 587 yards and 2 touchdowns.
** = New York Jets record
* = NFL record
Second stint with Miami Dolphins (2019)
On February 8, 2019, the Miami Dolphins announced they had hired Dorrell as the team's wide receivers coach.
Colorado Buffaloes
On February 23, 2020, Dorrell was named the 27th full-time head coach for the University of Colorado. He signed a 5-year, $18 million contract that would pay him $3.2 million for the first year with $200,000 annual raises in subsequent years. During the 2021 season, following a blowout loss to USC, a frustrated Dorrell shoved a photojournalist's camera while trotting off the field. The next day, Dorrell apologized for the incident in a statement.
Family
Dorrell and his wife, Kim, have two children, Chandler and Lauren.
Head coaching record
Notes
References
External links
Colorado profile
1963 births Living people American football wide receivers Arizona State Sun Devils football coaches Colorado Buffaloes football coaches Dallas Cowboys players Denver Broncos coaches Houston Texans coaches Miami Dolphins coaches New York Jets coaches Northern Arizona Lumberjacks football coaches UCF Knights football coaches UCLA Bruins football coaches UCLA Bruins football players Vanderbilt Commodores football players Washington Huskies football coaches People from Alameda, California People from La Mesa, California Sportspeople from San Diego Players of American football from San Diego African-American coaches of American football African-American players of American football
2289648
https://en.wikipedia.org/wiki/Windows%20Vista
Windows Vista
Windows Vista is a major release of the Windows NT operating system developed by Microsoft. It was the direct successor to Windows XP, released five years earlier, which was at the time the longest span between successive releases of Microsoft Windows desktop operating systems. Development was completed on November 8, 2006, and over the following three months, it was released in stages to computer hardware and software manufacturers, business customers and retail channels. On January 30, 2007, it was released internationally and was made available for purchase and download from the Windows Marketplace; it is the first release of Windows to be made available through a digital distribution platform. New features of Windows Vista include an updated graphical user interface and visual style dubbed Aero, a new search component called Windows Search, redesigned networking, audio, print and display sub-systems, and new multimedia tools such as Windows DVD Maker. Vista aimed to increase the level of communication between machines on a home network, using peer-to-peer technology to simplify sharing files and media between computers and devices. Windows Vista included version 3.0 of the .NET Framework, allowing software developers to write applications without traditional Windows APIs. While these new features and security improvements garnered positive reviews, Vista was also the target of much criticism and negative press. Criticism of Windows Vista includes its high system requirements, its more restrictive licensing terms, lack of compatibility, longer boot time, and excessive authorization prompts from User Account Control. As a result of these and other issues, Windows Vista saw initial adoption and satisfaction rates lower than Windows XP. However, Vista usage surpassed Microsoft's pre-launch two-year-out expectation of 200 million users, reaching an estimated 330 million Internet users by January 2009. On October 22, 2010, Microsoft ceased sales of retail copies of Windows Vista, and original equipment manufacturer sales of Vista ceased a year later. Official mainstream support for Vista ended on April 10, 2012, and extended support ended on April 11, 2017, while Windows Server 2008 mainstream support ended on January 13, 2015, and extended support ended on January 14, 2020. Both versions were succeeded by Windows 7 and Windows Server 2008 R2, respectively. Vista's market share has since declined to 0.23% of Windows' total market share. The server equivalent, Windows Server 2008, received security updates until January 2020, and unofficial methods can apply these updates to retail Windows Vista.
Development
Microsoft began work on Windows Vista, known at the time by its codename "Longhorn", in May 2001, five months before the release of Windows XP. It was originally expected to ship in late 2003 as a minor step between Windows XP and "Blackcomb", which was planned to be the company's next major operating system release. Gradually, "Longhorn" assimilated many of the important new features and technologies slated for Blackcomb, resulting in the release date being pushed back several times over three years. In some builds of Longhorn, the license agreement said "For the Microsoft product codenamed 'Whistler'". Many of Microsoft's developers were also re-tasked to build updates to Windows XP and Windows Server 2003 to strengthen security. Faced with ongoing delays and concerns about feature creep, Microsoft announced on August 27, 2004, that it had revised its plans.
For this reason, Longhorn development was reset: work started on componentizing the Windows Server 2003 Service Pack 1 codebase, and over time the features intended for an actual operating system release were re-incorporated. However, some previously announced features such as WinFS were dropped or postponed, and a new software development methodology called the Security Development Lifecycle was incorporated to address concerns with the security of the Windows codebase, which is programmed in C, C++ and assembly. Longhorn became known as Vista in 2005.
Longhorn
The early development stages of Longhorn were generally characterized by incremental improvements and updates to Windows XP. During this period, Microsoft was fairly quiet about what was being worked on, as its marketing and public relations efforts were more strongly focused on Windows XP and Windows Server 2003, which was released in April 2003. Occasional builds of Longhorn were leaked onto popular file sharing networks such as IRC, BitTorrent, eDonkey and various newsgroups, and so most of what is known about builds before the first sanctioned development release of Longhorn in May 2003 is derived from these builds. After several months of relatively little news or activity from Microsoft with Longhorn, Microsoft released Build 4008, which had made an appearance on the Internet around February 28, 2003. It was also privately handed out to a select group of software developers. As an evolutionary release over build 3683, it contained several small improvements, including a modified blue "Plex" theme and a new, simplified Windows Image-based installer that operates in graphical mode from the outset, and completed an install of the operating system in approximately one third the time of Windows XP on the same hardware. An optional "new taskbar" was introduced that was thinner than in the previous build and displayed the time differently. The most notable visual and functional difference, however, came with Windows Explorer. The incorporation of the Plex theme made blue the dominant color of the entire application. The Windows XP-style task pane was almost completely replaced with a large horizontal pane that appeared under the toolbars. A new search interface allowed for filtering of results, searching for Windows help, and natural-language queries that would be used to integrate with WinFS. The animated search characters were also removed. The "view modes" were also replaced with a single slider that would resize the icons in real time, in list, thumbnail, or details mode, depending on where the slider was. File metadata was also made more visible and more easily editable, with more active encouragement to fill out missing pieces of information. Also of note was the conversion of Windows Explorer into a .NET application. Most builds of Longhorn and Vista were identified by a label that was always displayed in the bottom-right corner of the desktop. A typical build label would look like "Longhorn Build 3683.Lab06_N.020923-1821". Higher build numbers did not automatically mean that the latest features from every development team at Microsoft were included. Typically, a team working on a certain feature or subsystem would generate its own working builds that developers would test with, and when the code was deemed stable, all the changes would be incorporated back into the main development tree at once. At Microsoft, several "Build labs" exist where the compilation of the entirety of Windows can be performed by a team.
The name of the lab in which any given build originated is shown as part of the build label, and the date and time of the build follow that. Some builds (such as Beta 1 and Beta 2) only display the build label in the version information dialog (Winver). The icons used in these builds are from Windows XP. At the Windows Hardware Engineering Conference (WinHEC) in May 2003, Microsoft gave their first public demonstrations of the new Desktop Window Manager and Aero. The demonstrations were done on a revised build 4015 which was never released. Several sessions for developers and hardware engineers at the conference focused on these new features, as well as the Next-Generation Secure Computing Base (previously known as "Palladium"), which at the time was Microsoft's proposed solution for creating a secure computing environment whereby any given component of the system could be deemed "trusted". Also at this conference, Microsoft reiterated their roadmap for delivering Longhorn, pointing to an "early 2005" release date. Development reset By 2004, it had become obvious to the Windows team at Microsoft that they were losing sight of what needed to be done to complete the next version of Windows and ship it to customers. Internally, some Microsoft employees were describing the Longhorn project as "another Cairo" or "Cairo.NET", referring to the Cairo development project that the company embarked on through the first half of the 1990s, which never resulted in a shipping operating system (though nearly all the technologies developed in that time did end up in Windows 95 and Windows NT). Microsoft was shocked in 2005 by Apple's release of Mac OS X Tiger. It offered only a limited subset of features planned for Longhorn, in particular fast file searching and integrated graphics and sound processing, but appeared to have impressive reliability and performance compared to contemporary Longhorn builds. Most Longhorn builds had major Windows Explorer system leaks which prevented the OS from performing well, and added more confusion to the development teams in later builds with more and more code being developed which failed to reach stability. In a September 23, 2005 front-page article in The Wall Street Journal, Microsoft co-president Jim Allchin, who had overall responsibility for the development and delivery of Windows, explained how development of Longhorn had been "crashing into the ground" due in large part to the haphazard methods by which features were introduced and integrated into the core of the operating system, without a clear focus on an end-product. Allchin went on to explain how in December 2003, he enlisted the help of two other senior executives, Brian Valentine and Amitabh Srivastava, the former being experienced with shipping software at Microsoft, most notably Windows Server 2003, and the latter having spent his career at Microsoft researching and developing methods of producing high-quality testing systems. Srivastava employed a team of core architects to visually map out the entirety of the Windows operating system, and to proactively work towards a development process that would enforce high levels of code quality, reduce interdependencies between components, and in general, "not make things worse with Vista". Since Microsoft decided that Longhorn needed to be further componentized, work started on builds (known as the Omega-13 builds) that would componentize existing Windows Server 2003 source code, and over time add back functionality as development progressed. 
Future Longhorn builds would start from Windows Server 2003 Service Pack 1 and continue from there. This change, announced internally to Microsoft employees on August 26, 2004, began in earnest in September, though it would take several more months before the new development process and build methodology would be used by all of the development teams. A number of complaints came from individual developers, and Bill Gates himself, that the new development process was going to be prohibitively difficult to work within. As Windows Vista By approximately November 2004, the company had considered several names for the final release, ranging from simple to fanciful and inventive. In the end, Microsoft chose Windows Vista as confirmed on July 22, 2005, believing it to be a "wonderful intersection of what the product really does, what Windows stands for, and what resonates with customers, and their needs". Group Project Manager Greg Sullivan told Paul Thurrott "You want the PC to adapt to you and help you cut through the clutter to focus on what's important to you. That's what Windows Vista is all about: "bringing clarity to your world" (a reference to the three marketing points of Vista—Clear, Connected, Confident), so you can focus on what matters to you". Microsoft co-president Jim Allchin also loved the name, saying that "Vista creates the right imagery for the new product capabilities and inspires the imagination with all the possibilities of what can be done with Windows—making people's passions come alive." After Longhorn was named Windows Vista in July 2005, an unprecedented beta-test program was started, involving hundreds of thousands of volunteers and companies. In September of that year, Microsoft started releasing regular Community Technology Previews (CTP) to beta testers from July 2005 to February 2006. The first of these was distributed at the 2005 Microsoft Professional Developers Conference, and was subsequently released to beta testers and Microsoft Developer Network subscribers. The builds that followed incorporated most of the planned features for the final product, as well as a number of changes to the user interface, based largely on feedback from beta testers. Windows Vista was deemed feature-complete with the release of the "February CTP", released on February 22, 2006, and much of the remainder of the work between that build and the final release of the product focused on stability, performance, application and driver compatibility, and documentation. Beta 2, released in late May, was the first build to be made available to the general public through Microsoft's Customer Preview Program. It was downloaded over 5 million times. Two release candidates followed in September and October, both of which were made available to a large number of users. At the Intel Developer Forum on March 9, 2006, Microsoft announced a change in their plans to support EFI in Windows Vista. The UEFI 2.0 specification (which replaced EFI 1.10) was not completed until early 2006, and at the time of Microsoft's announcement, no firmware manufacturers had completed a production implementation which could be used for testing. As a result, the decision was made to postpone the introduction of UEFI support to Windows; support for UEFI on 64-bit platforms was postponed until Vista Service Pack 1 and Windows Server 2008 and 32-bit UEFI would not be supported, as Microsoft did not expect many such systems to be built because the market was quickly moving to 64-bit processors. 
While Microsoft had originally hoped to have the consumer versions of the operating system available worldwide in time for the 2006 holiday shopping season, it announced in March 2006 that the release date would be pushed back to January 2007 in order to give the company—and the hardware and software companies that Microsoft depends on for providing device drivers—additional time to prepare. Because a release to manufacturing (RTM) build is the final version of code shipped to retailers and other distributors, the purpose of a pre-RTM build is to eliminate any last "show-stopper" bugs that may prevent the code from responsibly being shipped to customers, as well as anything else that consumers may find annoying. Thus, it is unlikely that any major new features would be introduced; instead, work would focus on Vista's fit and finish. In just a few days, developers had managed to drop Vista's bug count from over 2,470 on September 22 to just over 1,400 by the time RC2 shipped in early October. However, they still had a way to go before Vista was ready to RTM. Microsoft's internal processes required Vista's bug count to drop to 500 or fewer before the product could go into escrow for RTM. For most of the pre-RTM builds, only 32-bit editions were released. On June 14, 2006, Windows developer Philip Su posted a blog entry which decried the development process of Windows Vista, stating that "The code is way too complicated, and that the pace of coding has been tremendously slowed down by overbearing process." The same post also described Windows Vista as having approximately 50 million lines of code, with about 2,000 developers working on the product. During a demonstration of the speech recognition feature new to Windows Vista at Microsoft's Financial Analyst Meeting on July 27, 2006, the software recognized the phrase "Dear mom" as "Dear aunt". After several failed attempts to correct the error, the sentence eventually became "Dear aunt, let's set so double the killer delete select all". A developer with Vista's speech recognition team later explained that there was a bug in that build of Vista that caused the microphone gain level to be set very high, resulting in the audio received by the speech recognition software being "incredibly distorted". Windows Vista build 5824 (October 17, 2006) was supposed to be the RTM release, but a bug, which destroyed any system that was upgraded from Windows XP, prevented this, damaging development and lowering the chance that it would hit its January 2007 deadline. Development of Windows Vista came to an end when Microsoft announced that it had been finalized on November 8, 2006; the announcement was made by the co-president of Windows development, Jim Allchin. The RTM's build number had also jumped to 6000 to reflect Vista's internal version number, NT 6.0. Jumping RTM build numbers is common practice among consumer-oriented Windows versions, like Windows 98 (build 1998), Windows 98 SE (build 2222), Windows Me (build 3000) or Windows XP (build 2600), as compared to the business-oriented versions like Windows 2000 (build 2195) or Server 2003 (build 3790). On November 16, 2006, Microsoft made the final build available to MSDN and TechNet Plus subscribers. A business-oriented Enterprise edition was made available to volume license customers on November 30, 2006. Windows Vista was launched for general customer availability on January 30, 2007. Windows Vista was succeeded by Windows 7, released in October 2009, less than three years later.
New or changed features Windows Vista introduced several features and functionality not present in its predecessors. End-user Windows Aero: The new graphical user interface is named Windows Aero, which Jim Allchin stated is an acronym for Authentic, Energetic, Reflective, and Open. Microsoft intended the new interface to be cleaner and more aesthetically pleasing than those of previous Windows versions, featuring new transparencies, live thumbnails, live icons, and animations, thus providing a new level of eye candy. Laptop users report, however, that enabling Aero shortens battery life and reduces performance. Windows shell: The new Windows shell offers a new range of organization, navigation, and search capabilities: Task panes in Windows Explorer are removed, integrating the relevant task options into the toolbar. A "Favorite links" pane has been added, enabling one-click access to common directories. A search box appears in every Explorer window. The address bar has been replaced with a breadcrumb navigation bar. Icons of certain file types in Windows Explorer are "live" and can be scaled in size up to 256 × 256 pixels. The preview pane allows users to see thumbnails of various files and view the contents of documents. The details pane shows information such as file size and type, and allows viewing and editing of embedded tags in supported file formats. The Start menu has changed as well; incorporating an instant search box, and the All Programs list uses a horizontal scroll bar instead of the cascading flyout menu seen in Windows XP. The word "Start" itself has been removed in favor of a blue orb that bears the Windows logo. Windows Search: A new search component of Windows Vista, it features instant search (also known as search as you type), which provides instant search results, thus finding files more quickly than the search features found in previous versions of Windows and can search the contents of recognized file types. Users can search for certain metadata such as name, extension, size, date or attributes. Windows Sidebar: A transparent panel, anchored to the right side of the screen, wherein a user can place Desktop Gadgets, which are small applets designed for a specialized purpose (such as displaying the weather or sports scores). Gadgets can also be placed on the desktop. Windows Internet Explorer 7: New user interface, tabbed browsing, RSS, a search box, improved printing, Page Zoom, Quick Tabs (thumbnails of all open tabs), Anti-Phishing filter, several new security protection features, Internationalized Domain Name support (IDN), and improved web standards support. IE7 in Windows Vista runs in isolation from other applications in the operating system (protected mode); exploits and malicious software are restricted from writing to any location beyond Temporary Internet Files without explicit user consent. Windows Media Player 11, a major revamp of Microsoft's program for playing and organizing music and video. New features in this version include word wheeling (incremental search or "search as you type"), a new GUI for the media library, photo display and organization, the ability to share music libraries over a network with other Windows Vista machines, Xbox 360 integration, and support for other Media Center Extenders. Windows Defender: An antispyware program with several real-time protection agents. 
It includes a software explorer feature, which provides access to startup programs, and allows one to view currently running software, network-connected applications, and Winsock providers (Winsock LSPs). Backup and Restore Center: Includes a backup and restore application that gives users the ability to schedule periodic backups of files on their computer, as well as recovery from previous backups. Backups are incremental, storing only the changes made each time, minimizing disk usage. It also features Complete PC Backup (available only in the Ultimate, Business, and Enterprise editions), which backs up an entire computer as an image onto a hard disk or DVD. Complete PC Backup can automatically recreate a machine setup onto new hardware or hard disk in case of any hardware failures. Complete PC Restore can be initiated from within Windows Vista or from the Windows Vista installation CD if a PC is so corrupt that it cannot start normally from the hard disk. Windows Mail: A replacement for Outlook Express that includes a new mail store that improves stability, and features integrated instant search. It has a Phishing Filter like Internet Explorer 7 and Junk mail filtering that is enhanced through regular updates via Windows Update. Windows Calendar is a new calendar and task application that integrates with Windows Contacts and Windows Mail. It is compatible with various calendar file types, such as the popular iCalendar. Windows Photo Gallery, a photo and movie library management application. It can import from digital cameras, tag and rate individual items, adjust colors and exposure, create and display slideshows (with pan and fade effects) through Direct3D and burn slideshows to a DVD. Windows DVD Maker, a companion program to Windows Movie Maker that provides the ability to create video DVDs based on a user's content. Users can design a DVD with title, menus, video, soundtrack, pan and zoom motion effects on pictures or slides. Windows Media Center, which was previously exclusively bundled in a separate edition of Windows XP, known as Windows XP Media Center Edition, has been incorporated into the Home Premium and Ultimate editions of Windows Vista. Games: Most of the standard computer games included in previous versions of Windows have been redesigned to showcase Vista's new graphical capabilities. New games available in Windows Vista are Chess Titans (3D Chess game), Mahjong Titans (3D Mahjong game), and Purble Place (a small collection of games, oriented towards younger children, including a matching game, a cake-creator game, and a dress-up puzzle game). Purble Place is the only one of the new games available in the Windows Vista Home Basic edition. InkBall is available for Home Premium (or better) users. Games Explorer: A new special folder called "Games" exposes installed video games and information about them. These metadata may be updated from the Internet. Windows Mobility Center is a control panel that centralizes the most relevant information related to mobile computing (brightness, sound, battery level/power scheme selection, wireless network, screen orientation, presentation settings, etc.). Windows Fax and Scan Allows computers with fax modems to send and receive fax documents, as well as scan documents. It is not available in the Home editions of Windows Vista, but is available in the Business, Enterprise, and Ultimate editions. Windows Meeting Space replaces NetMeeting. 
Users can share applications (or their entire desktop) with other users on the local network, or over the Internet using peer-to-peer technology (editions higher than Starter and Home Basic can take advantage of hosting capabilities; Starter and Home Basic are limited to "join" mode only). Windows HotStart enables compatible computers to start applications directly from operating system startup or resume by the press of a button—this enables what Microsoft has described as appliance-like availability, which allows computers to function in a manner similar to a consumer electronics device such as a DVD player; the feature was also designed to provide the instant-on availability that is traditionally associated with mobile devices. While Microsoft has emphasized multimedia scenarios with Windows HotStart, a user can configure this feature so that a button launches a preferred application. Shadow Copy automatically creates daily backup copies of files and folders. Users can also create "shadow copies" by setting a System Protection Point using the System Protection tab in the System control panel. The user can view multiple versions of a file throughout a limited history and be allowed to restore, delete, or copy those versions. This feature is available only in the Business, Enterprise, and Ultimate editions of Windows Vista and is inherited from Windows Server 2003. Windows Update: Software and security updates have been simplified, now operating solely via a control panel instead of as a web application. Windows Mail's spam filter and Windows Defender's definitions are updated automatically via Windows Update. Users who choose the recommended setting for Automatic Updates will have the latest drivers installed and available when they add a new device. Parental controls: Allows administrators to monitor and restrict user activity, as well as control which websites, programs, and games each Standard user can use and install. This feature is not included in the Business or Enterprise editions of Vista. Windows SideShow: Enables the auxiliary displays on newer laptops or supported Windows Mobile devices. It is meant to be used to display device gadgets while the computer is on or off. Speech recognition is integrated into Vista. It features a redesigned user interface and configurable command-and-control commands. Unlike the Office 2003 version, which works only in Office and WordPad, Speech Recognition in Windows Vista works in any accessible application. In addition, it currently supports several languages: British and American English, Spanish, French, German, Chinese (Traditional and Simplified), and Japanese. New fonts, including several designed for screen reading, and improved Chinese (YaHei, JhengHei), Japanese (Meiryo), and Korean (Malgun) fonts. ClearType has also been enhanced and enabled by default. Improved audio controls allow the system-wide volume or the volume of individual audio devices and even individual applications to be controlled separately. New audio functionalities such as room correction, bass management, speaker fill, and headphone virtualization have also been incorporated. Problem Reports and Solutions, a feature that allows users to check for solutions to problems or view previously sent problems for any solutions or additional information, if available. Windows System Assessment Tool is a tool used to benchmark system performance. Software such as games can retrieve this rating and modify its own behavior at runtime to improve performance.
The benchmark tests CPU, RAM, 2-D and 3-D graphics acceleration, graphics memory and hard disk space. Windows Ultimate Extras: The Ultimate edition of Windows Vista provides, via Windows Update, access to some additional features. These are a collection of additional MUI language packs, Texas Hold 'Em (a Poker game) and Microsoft Tinker (a strategy game where the character is a robot), BitLocker and EFS enhancements that allow users to back up their encryption key online in a Digital Locker, and Windows Dreamscene, which enables the use of videos in MPEG and WMV formats as the desktop background. On April 21, 2008, Microsoft launched two more Ultimate Extras; three new Windows sound schemes, and a content pack for Dreamscene. Various DreamScene Content Packs have been released since the final version of DreamScene was released. Reliability and Performance Monitor includes various tools for tuning and monitoring system performance and resources activities of CPU, disks, network, memory and other resources. It shows the operations on files, the opened connections, etc. Disk Management: The Logical Disk Manager in Windows Vista supports shrinking and expanding volumes on-the-fly. Windows Anytime Upgrade: is a program that allows a user to upgrade their computer running Vista to a higher edition. For example, a computer running Windows Vista Home Basic can be upgraded to Home Premium or better. Anytime Upgrade permits users to upgrade without having their programs and data erased, and is cheaper than replacing the existing installation of Windows. Anytime Upgrade is no longer available for Vista. Digital Locker Assistant: A program that facilitated access to downloads and purchases from the Windows Marketplace distribution platform. Apps purchased from Windows Marketplace are managed by Microsoft Account credentials, which are used to access a user's digital locker that stores the app and its associated information (e.g., licenses) off-site. Core Vista includes technologies such as ReadyBoost and ReadyDrive, which employ fast flash memory (located on USB flash drives and hybrid hard disk drives) to improve system performance by caching commonly used programs and data. This manifests itself in improved battery life on notebook computers as well, since a hybrid drive can be spun down when not in use. Another new technology called SuperFetch utilizes machine learning techniques to analyze usage patterns to allow Windows Vista to make intelligent decisions about what content should be present in system memory at any given time. It uses almost all the extra RAM as disk cache. In conjunction with SuperFetch, an automatic built-in Windows Disk Defragmenter makes sure that those applications are strategically positioned on the hard disk where they can be loaded into memory very quickly with the least physical movement of the hard disk's read-write heads. As part of the redesign of the networking architecture, IPv6 has been fully incorporated into the operating system and a number of performance improvements have been introduced, such as TCP window scaling. Earlier versions of Windows typically needed third-party wireless networking software to work properly, but this is not the case with Vista, which includes more comprehensive wireless networking support. For graphics, Vista introduces a new Windows Display Driver Model and a major revision to Direct3D. 
The new driver model facilitates the new Desktop Window Manager, which provides the tearing-free desktop and special effects that are the cornerstones of Windows Aero. Direct3D 10, developed in conjunction with major graphics card manufacturers, is a new architecture with more advanced shader support, and allows the graphics processing unit to render more complex scenes without assistance from the CPU. It features improved load balancing between CPU and GPU and also optimizes data transfer between them. WDDM also provides video content playback that rivals typical consumer electronics devices. It does this by making it easy to connect to external monitors, providing for protected HD video playback, and increasing overall video playback quality. For the first time in Windows, graphics processing unit (GPU) multitasking is possible, enabling users to run more than one GPU-intensive application simultaneously. At the core of the operating system, many improvements have been made to the memory manager, process scheduler and I/O scheduler. The Heap Manager implements additional features such as integrity checking in order to improve robustness and defend against buffer overflow security exploits, although this comes at the price of breaking backward compatibility with some legacy applications. A Kernel Transaction Manager has been implemented that enables applications to work with the file system and Registry using atomic transaction operations. Security-related Improved security was a primary design goal for Vista. Microsoft's Trustworthy Computing initiative, which aims to improve public trust in its products, has had a direct effect on its development. This effort has resulted in a number of new security and safety features and an Evaluation Assurance Level rating of 4+. User Account Control, or UAC is perhaps the most significant and visible of these changes. UAC is a security technology that makes it possible for users to use their computer with fewer privileges by default, to stop malware from making unauthorized changes to the system. This was often difficult in previous versions of Windows, as the previous "limited" user accounts proved too restrictive and incompatible with a large proportion of application software, and even prevented some basic operations such as looking at the calendar from the notification tray. In Windows Vista, when an action is performed that requires administrative rights (such as installing/uninstalling software or making system-wide configuration changes), the user is first prompted for an administrator name and password; in cases where the user is already an administrator, the user is still prompted to confirm the pending privileged action. Regular use of the computer such as running programs, printing, or surfing the Internet does not trigger UAC prompts. User Account Control asks for credentials in a Secure Desktop mode, in which the entire screen is dimmed, and only the authorization window is active and highlighted. The intent is to stop a malicious program from misleading the user by interfering with the authorization window, and to hint to the user about the importance of the prompt. Testing by Symantec Corporation has proven the effectiveness of UAC. Symantec used over 2,000 active malware samples, consisting of backdoors, keyloggers, rootkits, mass mailers, trojan horses, spyware, adware, and various other samples. Each was executed on a default Windows Vista installation within a standard user account. 
UAC effectively blocked over 50 percent of each threat, excluding rootkits. 5 percent or less of the malware that evaded UAC survived a reboot. Internet Explorer 7's new security and safety features include a phishing filter, IDN with anti-spoofing capabilities, and integration with system-wide parental controls. For added security, ActiveX controls are disabled by default. Also, Internet Explorer operates in a protected mode, which operates with lower permissions than the user and runs in isolation from other applications in the operating system, preventing it from accessing or modifying anything besides the Temporary Internet Files directory. Microsoft's anti-spyware product, Windows Defender, has been incorporated into Windows, protecting against malware and other threats. Changes to various system configuration settings (such as new auto-starting applications) are blocked unless the user gives consent. Whereas prior releases of Windows supported per-file encryption using Encrypting File System, the Enterprise and Ultimate editions of Vista include BitLocker Drive Encryption, which can protect entire volumes, notably the operating system volume. However, BitLocker requires approximately a 1.5-gigabyte partition to be permanently not encrypted and to contain system files for Windows to boot. In normal circumstances, the only time this partition is accessed is when the computer is booting, or when there is a Windows update that changes files in this area, which is a legitimate reason to access this section of the drive. The area can be a potential security issue, because a hexadecimal editor (such as dskprobe.exe), or malicious software running with administrator and/or kernel level privileges would be able to write to this "Ghost Partition" and allow a piece of malicious software to compromise the system, or disable the encryption. BitLocker can work in conjunction with a Trusted Platform Module (TPM) cryptoprocessor (version 1.2) embedded in a computer's motherboard, or with a USB key. However, as with other full disk encryption technologies, BitLocker is vulnerable to a cold boot attack, especially where TPM is used as a key protector without a boot PIN being required too. A variety of other privilege-restriction techniques are also built into Vista. An example is the concept of "integrity levels" in user processes, whereby a process with a lower integrity level cannot interact with processes of a higher integrity level and cannot perform DLL–injection to processes of a higher integrity level. The security restrictions of Windows services are more fine-grained, so that services (especially those listening on the network) cannot interact with parts of the operating system they do not need to. Obfuscation techniques such as address space layout randomization are used to increase the amount of effort required of malware before successful infiltration of a system. Code integrity verifies that system binaries have not been tampered with by malicious code. As part of the redesign of the network stack, Windows Firewall has been upgraded, with new support for filtering both incoming and outgoing traffic. Advanced packet filter rules can be created that can grant or deny communications to specific services. The 64-bit versions of Vista require that all device drivers be digitally signed, so that the creator of the driver can be identified. 
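The UAC behaviour described above (standard privileges by default, with an explicit consent or credential prompt for administrative actions) is something applications can cooperate with programmatically. The sketch below is a minimal illustration, assuming a Windows system with Python installed; it uses the documented IsUserAnAdmin and ShellExecuteW entry points through ctypes, and the helper names are illustrative rather than part of any Vista API.

```python
# Sketch: detect elevation and, if needed, request it via the UAC consent prompt.
# Windows-only; the Win32 calls are reached through ctypes.
import ctypes
import sys

def is_elevated() -> bool:
    """Return True if the current process is running with an administrator token."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except (AttributeError, OSError):
        return False  # not on Windows, or the check itself failed

def relaunch_elevated() -> None:
    """Re-run this script through ShellExecuteW with the 'runas' verb,
    which causes Windows Vista's UAC to show its elevation prompt."""
    params = " ".join(f'"{arg}"' for arg in sys.argv)
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, params, None, 1)

if __name__ == "__main__":
    if is_elevated():
        print("Running elevated: privileged actions (e.g. system-wide configuration) are allowed.")
    else:
        print("Not elevated; asking UAC for consent...")
        relaunch_elevated()
```

Day-to-day operations that do not need administrative rights never reach this path, which is exactly the split between standard and privileged work that UAC is designed to enforce.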
System management
While much of the focus of Vista's new capabilities highlighted the new user interface, security technologies, and improvements to the core operating system, Microsoft also added new deployment and maintenance features: The Windows Imaging Format (WIM) provides the cornerstone of Microsoft's new deployment and packaging system. WIM files, which contain a HAL-independent image of Windows Vista, can be maintained and patched without having to rebuild new images. Windows images can be delivered via Systems Management Server or Business Desktop Deployment technologies. Images can be customized and configured with applications, then deployed to corporate client personal computers using little to no touch by a system administrator. ImageX is the Microsoft tool used to create and customize images (a brief scripted sketch of this workflow appears below). Windows Deployment Services replaces Remote Installation Services for deploying Vista and prior versions of Windows. Approximately 700 new Group Policy settings have been added, covering most aspects of the new features in the operating system, as well as significantly expanding the configurability of wireless networks, removable storage devices, and the user desktop experience. Vista also introduced an XML-based format (ADMX) to display registry-based policy settings, making it easier to manage networks that span geographic locations and different languages. Services for UNIX, renamed "Subsystem for UNIX-based Applications", comes with the Enterprise and Ultimate editions of Vista. Network File System (NFS) client support is also included. Multilingual User Interface: Unlike previous versions of Windows (which required the loading of language packs to provide local-language support), Windows Vista Ultimate and Enterprise editions support the ability to dynamically change languages based on the logged-on user's preference. Wireless Projector support
Developer
Windows Vista includes a large number of new application programming interfaces. Chief among them is the inclusion of version 3.0 of the .NET Framework, which consists of a class library and Common Language Runtime. Version 3.0 includes four new major components: Windows Presentation Foundation is a user interface subsystem and framework based on vector graphics, which makes use of 3D computer graphics hardware and Direct3D technologies. It provides the foundation for building applications and blending application UI, documents, and media content. It is the successor to Windows Forms. Windows Communication Foundation is a service-oriented messaging subsystem that enables applications and systems to interoperate locally or remotely using Web services. Windows Workflow Foundation provides task automation and integrated transactions using workflows. It is the programming model, engine, and tools for building workflow-enabled applications on Windows. Windows CardSpace is a component that securely stores a person's digital identities and provides a unified interface for choosing the identity to use for a particular transaction, such as logging into a website. These technologies are also available for Windows XP and Windows Server 2003 to facilitate their introduction to and usage by developers and end-users.
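Returning to the image-based deployment workflow described under System management above, the capture, patch and apply cycle around WIM files is normally driven by the ImageX command-line tool from the Windows Automated Installation Kit. The following sketch wraps those steps with Python's subprocess module; the image paths, image index and mount directory are placeholders, the script assumes imagex.exe is on the PATH and is run with administrative rights, and the exact options should be checked against the WAIK documentation for a real deployment.

```python
# Sketch of the WIM-based deployment cycle using the ImageX tool
# from the Windows Automated Installation Kit (paths below are placeholders).
import subprocess

def run(cmd):
    """Echo and execute one ImageX invocation, stopping on failure."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Capture a reference installation into a HAL-independent WIM image.
run(["imagex", "/capture", "C:", r"D:\images\vista-base.wim", "Vista reference image"])

# 2. Mount the image read/write so it can be patched without rebuilding it,
#    then commit the changes back into the WIM file.
run(["imagex", "/mountrw", r"D:\images\vista-base.wim", "1", r"C:\mount"])
#    ... copy updated drivers, hotfixes or applications into C:\mount here ...
run(["imagex", "/unmount", "/commit", r"C:\mount"])

# 3. Apply the maintained image to a freshly formatted target volume.
run(["imagex", "/apply", r"D:\images\vista-base.wim", "1", "E:"])
```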
There are also significant new development APIs in the core of the operating system, notably the completely re-designed audio, networking, print, and video interfaces, major changes to the security infrastructure, improvements to the deployment and installation of applications ("ClickOnce" and Windows Installer 4.0), new device driver development model ("Windows Driver Foundation"), Transactional NTFS, mobile computing API advancements (power management, Tablet PC Ink support, SideShow) and major updates to (or complete replacements of) many core subsystems such as Winlogon and CAPI. There are some issues for software developers using some of the graphics APIs in Vista. Games or programs built solely on the Windows Vista-exclusive version of DirectX, version 10, cannot work on prior versions of Windows, as DirectX 10 is not available for previous Windows versions. Also, games that require the features of D3D9Ex, the updated implementation of DirectX 9 in Windows Vista are also incompatible with previous Windows versions. According to a Microsoft blog, there are three choices for OpenGL implementation on Vista. An application can use the default implementation, which translates OpenGL calls into the Direct3D API and is frozen at OpenGL version 1.4, or an application can use an Installable Client Driver (ICD), which comes in two flavors: legacy and Vista-compatible. A legacy ICD disables the Desktop Window Manager, a Vista-compatible ICD takes advantage of a new API, and is fully compatible with the Desktop Window Manager. At least two primary vendors, ATI and NVIDIA provided full Vista-compatible ICDs. However, hardware overlay is not supported, because it is considered as an obsolete feature in Vista. ATI and NVIDIA strongly recommend using compositing desktop/Framebuffer Objects for same functionality. Installation Windows Vista is the first Microsoft operating system: To use DVD-ROM media for installation That can be installed only on a partition formatted with the NTFS file system That provides support for loading drivers for SCSI, SATA and RAID controllers from any source in addition to floppy disks prior to its installation That can be installed on and booted from systems with GPT disks and UEFI firmware Unification of OEM and retail installation Windows Vista unifies the previously separate OEM and retail distributions of Microsoft Windows; a license for the edition purchased determines which version of Windows Vista is eligible for installation, regardless of its originating source. OEM and retail versions of Windows before Windows Vista were maintained separately on optical media—users with a manufacturer-supplied disc could not use a retail license during installation, and users with a retail disc could not use an OEM license during installation. Removed features Some notable Windows XP features and components have been replaced or removed in Windows Vista, including several shell and Windows Explorer features, multimedia features, networking related functionality, Windows Messenger, NTBackup, the network Windows Messenger service, HyperTerminal, MSN Explorer, Active Desktop, and the replacement of NetMeeting with Windows Meeting Space. Windows Vista also does not include the Windows XP "Luna" visual theme, or most of the classic color schemes that have been part of Windows since the Windows 3.x era. 
The "Hardware profiles" startup feature has also been removed, along with support for older motherboard technologies like the EISA bus, APM and game port support (though on the 32-bit version game port support can be enabled by applying an older driver). IP over FireWire (TCP/IP over IEEE 1394) has been removed as well. The IPX/SPX protocol has also been removed, although it can be enabled by a third-party plug-in. Support lifecycle Support for the original release of Windows Vista (without a service pack) ended on April 13, 2010. Windows XP SP2 was retired on July 13, 2010, and Service Pack 1 reached end of support on July 12, 2011, over three years after its general availability. Support for Windows XP, a predecessor of Windows Vista, ended on April 8, 2014, over 12 years after its launch. Mainstream support for Windows Vista officially ended on April 10, 2012. The "Extended Support" phase would last for the next 5 years, until April 11, 2017. Microsoft is no longer offering no-charge incident support, warranty claims, or design fixes for the operating system. For IT pros or users who needed to make specific fixes to the commercial Windows code, Microsoft required an extended hotfix agreement, which provided an additional 90 days from April 10, 2012. As part of the Extended Support phase, Vista users were still able to get security updates, and could still pay for support per incident, per-hour, or in other ways. Microsoft also made Windows Vista product information available through its online Knowledge Base. On April 11, 2017, Microsoft required Windows Vista users to upgrade to Windows 7 in order to continue receiving Microsoft Support. Editions Windows Vista shipped in six different editions. These are roughly divided into two target markets, consumer and business, with editions varying to cater to specific sub-markets. For consumers, there are three editions, with two available for economically more developed countries. Windows Vista Starter edition is aimed at low-powered computers with availability only in emerging markets. Windows Vista Home Basic is intended for budget users. Windows Vista Home Premium covers the majority of the consumer market and contains applications for creating and using multimedia. The home editions cannot join a Windows Server domain. For businesses, there are three editions as well. Windows Vista Business is specifically designed for small and medium-sized enterprises, while Windows Vista Enterprise is only available to customers participating in Microsoft's Software Assurance program. Windows Vista Ultimate contains the complete feature-set of both the Home and Business (combination of both Home Premium and Enterprise) editions, as well as a set of Windows Ultimate Extras, and is aimed at enthusiasts. All editions except Windows Vista Starter support both 32-bit (x86) and 64-bit (x64) processor architectures. In the European Union, Home Basic N and Business N variants are also available. These come without Windows Media Player, due to EU sanctions brought against Microsoft for violating anti-trust laws. Similar sanctions exist in South Korea. Visual styles Windows Vista has four distinct visual styles. Windows Aero Vista's default visual style, Windows Aero, is built on a desktop composition engine called Desktop Window Manager. 
Windows Aero introduces support for translucency effects (Glass), window thumbnails on the taskbar, window animations, and other visual effects (for example Windows Flip 3D), and is intended for mainstream and high-end video cards. To enable these features, the contents of every open window are stored in video memory to facilitate tearing-free movement of windows. As such, Windows Aero has significantly higher hardware requirements than its predecessors: systems running Vista must have video card drivers compatible with the Windows Display Driver Model (WDDM), and the minimum graphics memory required is 128 MB, depending on the resolution used. Windows Aero is not included in the Starter and Home Basic editions. A variant of Windows Aero, dubbed Windows Vista Standard, lacking the glass effects, window animations, and other advanced graphical effects, is included in Home Basic. Windows Vista Basic This visual style does not employ the Desktop Window Manager; as such, it does not feature transparency or translucency, window animation, Windows Flip 3D or any of the functions provided by the DWM. It is the default visual style on Windows Vista Starter and on systems without WDDM-compatible display drivers, and has similar video card requirements to Windows XP. Before Service Pack 1, a machine that failed Windows Genuine Advantage validation would also default to this visual style. Windows Standard The Windows Standard and Windows Classic visual styles reprise the user interface of Windows 9x, Windows 2000 and Microsoft's Windows Server line of operating systems. As with previous versions of Windows, this visual style supports custom color schemes, which are collections of color settings. Windows Vista includes four high-contrast color schemes and the default color schemes from Windows 98 (titled "Windows Classic") and Windows 2000/Windows Me (titled "Windows Standard"). Hardware requirements Computers capable of running Windows Vista are classified as Vista Capable and Vista Premium Ready. A Vista Capable or equivalent PC is capable of running all editions of Windows Vista, although some of the special features and high-end graphics options may require additional or more advanced hardware. A Vista Premium Ready PC can take advantage of Vista's high-end features. Windows Vista's Basic and Classic interfaces work with virtually any graphics hardware that supports Windows XP or 2000; accordingly, most discussion around Vista's graphics requirements centers on those for the Windows Aero interface. As of Windows Vista Beta 2, the NVIDIA GeForce 6 series and later, the ATI Radeon 9500 and later, Intel's GMA 950 and later integrated graphics, and a handful of VIA chipsets and S3 Graphics discrete chips are supported. Although originally supported, the GeForce FX 5 series has been dropped from newer NVIDIA drivers. The last driver from NVIDIA to support the GeForce FX series on Vista was 96.85. Microsoft offered a tool called the Windows Vista Upgrade Advisor to assist Windows XP and Vista users in determining what versions of Windows their machine is capable of running. The required server connections for this utility are no longer available. Although the installation media included in retail packages is a 32-bit DVD, customers needing a CD-ROM or customers who want 64-bit install media can acquire this media through the Windows Vista Alternate Media program. The Ultimate edition includes both 32-bit and 64-bit media. 
The digitally downloaded version of Ultimate includes only one version, either 32-bit or 64-bit, from Windows Marketplace. Physical memory limits The maximum amount of RAM that Windows Vista can support varies, depending on both its edition and its processor architecture, as shown in the table. Processor limits The maximum number of logical processors in a PC that Windows Vista supports is: 32 for 32-bit; 64 for 64-bit. The maximum number of physical processors in a PC that Windows Vista supports is: 2 for Business, Enterprise, and Ultimate, and 1 for Starter, Home Basic, and Home Premium. Updates Microsoft occasionally releases updates such as service packs for its Windows operating systems to fix bugs, improve performance and add new features. Service Pack 1 Windows Vista Service Pack 1 (SP1) was released on February 4, 2008, alongside Windows Server 2008 to OEM partners, after a five-month beta test period. The initial deployment of the service pack caused a number of machines to continually reboot, rendering the machines unusable. This temporarily caused Microsoft to suspend automatic deployment of the service pack until the problem was resolved. The synchronized release date of the two operating systems reflected the merging of the workstation and server kernels back into a single code base for the first time since Windows 2000. MSDN subscribers were able to download SP1 on February 15, 2008. SP1 became available to current Windows Vista users on Windows Update and the Download Center on March 18, 2008. Initially, the service pack only supported five languages – English, French, Spanish, German and Japanese. Support for the remaining 31 languages was released on April 14, 2008. A white paper, published by Microsoft on August 29, 2007, outlined the scope and intent of the service pack, identifying three major areas of improvement: reliability and performance, administration experience, and support for newer hardware and standards. One area of particular note is performance. Areas of improvement include file copy operations, hibernation, logging off on domain-joined machines, JavaScript parsing in Internet Explorer, network file share browsing, Windows Explorer ZIP file handling, and Windows Disk Defragmenter. The ability to choose individual drives to defragment is being reintroduced as well. Service Pack 1 introduced support for some new hardware and software standards, notably the exFAT file system, 802.11n wireless networking, IPv6 over VPN connections, and the Secure Socket Tunneling Protocol. Booting a system using Extensible Firmware Interface on x64 systems was also introduced; this feature had originally been slated for the initial release of Vista but was delayed due to a lack of compatible hardware at the time. Booting from a GUID Partition Table–based hard drive greater than 2.19 TB is supported (x64 only). Two areas have seen changes in SP1 that have come as the result of concerns from software vendors. One of these is desktop search; users will be able to change the default desktop search program to one provided by a third party instead of the Microsoft desktop search program that comes with Windows Vista, and desktop search programs will be able to seamlessly tie in their services into the operating system. These changes come in part due to complaints from Google, whose Google Desktop Search application was hindered by the presence of Vista's built-in desktop search. 
In June 2007, Google claimed that the changes being introduced for SP1 "are a step in the right direction, but they should be improved further to give consumers greater access to alternate desktop search providers". The other area of note is a set of new security APIs being introduced for the benefit of antivirus software that currently relies on the unsupported practice of patching the kernel (see Kernel Patch Protection). An update to DirectX 10, named DirectX 10.1, marked mandatory several features that were previously optional in Direct3D 10 hardware; graphics cards are required to support these features in order to be DirectX 10.1 compliant. SP1 includes a kernel (6001.18000) that matches the version shipped with Windows Server 2008. The Group Policy Management Console (GPMC) was replaced by the Group Policy Object Editor. An updated downloadable version of the Group Policy Management Console was released soon after the service pack. SP1 enables support for hotpatching, a reboot-reduction servicing technology designed to maximize uptime. It works by allowing Windows components to be updated (or "patched") while they are still in use by a running process. Hotpatch-enabled update packages are installed via the same methods as traditional update packages, and will not trigger a system reboot. Service Pack 2 Service Pack 2 for Windows Vista was released to manufacturing on April 28, 2009, and released to Microsoft Download Center and Windows Update on May 26, 2009, one year after the release of Windows Vista SP1, and five months before the release of Windows 7. In addition to a number of security and other fixes, a number of new features have been added. However, it did not include Internet Explorer 8, which was instead included in Windows 7, released five months after Vista SP2. Windows Search 4 (available for SP1 systems as a standalone update) Feature Pack for Wireless adds support for Bluetooth 2.1 Windows Feature Pack for Storage enables data recording onto Blu-ray media Windows Connect Now (WCN) to simplify Wi-Fi configuration Improved support for resuming with active Wi-Fi connections Improved support for eSATA drives The limit of 10 half-open, outgoing TCP connections introduced in Windows XP SP2 was removed Enables the exFAT file system to support UTC timestamps, which allows correct file synchronization across time zones Support for ICCD/CCID smart cards Support for VIA 64-bit CPUs Improved performance and responsiveness with the RSS feeds sidebar Improves audio and video performance for streaming high-definition content Improves Windows Media Center (WMC) content protection for TV Provides an improved power management policy that is approximately 10% more efficient than the original with the default policies Windows Vista and Windows Server 2008 share a single service pack binary, reflecting the fact that their code bases were joined with the release of Server 2008. Service Pack 2 is not a cumulative update, meaning that Service Pack 1 must be installed first. Platform Update The Platform Update for Windows Vista was released on October 27, 2009. It includes major new components that shipped with Windows 7, as well as updated runtime libraries. It requires Service Pack 2 of Windows Vista or Windows Server 2008 and is listed on Windows Update as a Recommended download. The Platform Update allows application developers to target both Windows Vista and Windows 7. 
It consists of the following components: Windows Graphics runtime: Direct2D, DirectWrite, Direct3D 11, DXGI 1.1, and WARP Updates to Windows Imaging Component Updates to XPS Print API, XPS Document API and XPS Rasterization Service Windows Automation API (updates to MSAA and UI Automation) Windows Portable Devices Platform (adds support for MTP over Bluetooth and MTP Device Services) Windows Ribbon API Windows Animation Manager library Some updates are available as separate releases for both Windows XP and Windows Vista: Windows Management Framework: Windows PowerShell 2.0, Windows Remote Management 2.0, BITS 4.0 Remote Desktop Connection 7.0 (RDP7) client Although extensive, the Platform Update does not bring Windows Vista to the level of features and performance offered by Windows 7. For example, even though the Direct3D 11 runtime can run on D3D9-class hardware and WDDM drivers using "feature levels" first introduced in Direct3D 10.1, the Desktop Window Manager has not been updated to use Direct3D 10.1. In July 2011, Microsoft released the Platform Update Supplement for Windows Vista and Windows Server 2008, which contains several bug fixes and performance improvements. Out-of-band patches BlueKeep patch Microsoft has released an update for Windows Vista SP2 to resolve the BlueKeep security vulnerability, which affects the Remote Desktop Protocol in older Windows versions. Subsequent related flaws (collectively known as DejaBlue) do not affect Windows Vista or earlier versions of Windows. The installation of this patch changed the build number of Windows Vista from 6002 to 6003. Unofficial out-of-band patches While Windows Vista support ended on April 11, 2017, it could be unofficially extended by installing Windows Server 2008 updates; this allowed Windows Vista users to install security updates until the Windows Server 2008 end-of-support date of January 14, 2020. Below are out-of-band updates that were released for Windows Server 2008 and newer but can be installed on Windows Vista. Text Services Framework patch The Text Services Framework was compromised by a privilege escalation vulnerability that could allow attackers to use the framework to perform privileged operations, run software, or send messages to privileged processes from unprivileged processes—bypassing security features such as sandboxes or User Account Control. Microsoft remediated issues related to this vulnerability with the release of a patch in August 2019 for Windows Vista SP2, Windows Server 2008 SP2, and later versions of Windows. Malware Protection Engine patch A vulnerability related to Windows Defender that affected the way the Malware Protection Engine operates was reported in May 2017. If Windows Defender scanned a specially crafted file, it would lead to memory corruption, potentially allowing an attacker to control the affected machine or perform arbitrary code execution in the context of LocalSystem; the vulnerability was exacerbated by the default real-time protection settings of Windows Defender, which were configured to automatically initiate malware scans at regular intervals. The first version of the Protection Engine affected by the vulnerability is Version 1.1.13701.0—subsequent versions of the engine are unaffected. Microsoft released a patch to address the issue. Termination of update services Windows Vista support ended on April 11, 2017, and no more updates were released for the OS apart from the rare out-of-band patches. 
Windows Update Services for the OS continued to work to install previously released updates up until July 2020 when SHA-1 Windows Update endpoints were discontinued. However, as of May 2021, the Microsoft Update Catalog is still hosting these updates for download for Windows Vista. Marketing campaign The Mojave Experiment In July 2008, Microsoft introduced a web-based advertising campaign called the "Mojave Experiment", which depicts a group of people who are asked to evaluate the newest operating system from Microsoft, calling it Windows 'Mojave'. Participants are first asked about Vista, if they have used it, and their overall satisfaction with Vista on a scale of 1 to 10. They are then shown a demo of some of the new operating system's features, and asked their opinion and satisfaction with it on the same 1 to 10 scale. After respondents rate "Mojave", they are then told that they were shown a demo of Windows Vista. The object was to test "A theory: If people could see Windows Vista firsthand, they would like it." According to Microsoft, the initial sample of respondents rated Vista an average of 4.4 out of 10, and Mojave received an average of 8.5, with no respondents rating Mojave lower than they originally rated Windows Vista before the demo. The "experiment" has been criticized for deliberate selection of positive statements and not addressing all aspects of Vista. During the launch of Vista, Microsoft also made a lime flavored sparkling water available to campus visitors and developers. Reception Windows Vista received mixed to negative reviews at the time of its release and throughout its lifespan, mainly for its much higher hardware requirements and perceived slowness compared to Windows XP. It received generally positive reviews from PC gamers who praised the advantages brought by DirectX 10, which allowed for better gaming performance and more realistic graphics, as well as support for many new capabilities featured in new GPUs. However, many DirectX 9 games initially ran with lower frame rates compared to when they were run on Windows XP. In mid-2008, benchmarks suggested that the SP1 update improved performance to be on par with (or better than) Windows XP in terms of game performance. Peter Bright of Ars Technica wrote that, despite its delays and feature cuts, Windows Vista is "a huge evolution in the history of the NT platform [...] The fundamental changes to the platform are of a scale not seen since the release of NT [3.1; the first version]." In a continuation of his previous assessment, Bright stated that "Vista is not simply XP with a new skin; core parts of the OS have been radically overhauled, and virtually every area has seen significant refinement. In terms of the magnitude and extent of these changes, Vista represents probably the biggest leap that the NT platform has ever seen. Never before have significant subsystems been gutted and replaced in the way they are in Vista." Many others in the tech industry echoed these sentiments at the time, directing praise towards the massive amount of technical features new to Windows Vista. Windows Vista received the "Best of CES" award at the Consumer Electronics Show in 2007. In its first year of availability, PC World rated it as the biggest tech disappointment of 2007, and it was rated by InfoWorld as No. 2 of Tech's all-time 25 flops. 
Microsoft's then much smaller competitor Apple noted that, despite Vista's far greater sales, its own operating system did not seem to have suffered after Vista's release, and would later invest in advertising mocking Vista's unpopularity with users. Computer manufacturers such as Dell, Lenovo, and Hewlett-Packard released their newest computers with Windows Vista pre-installed; however, after the negative reception of the operating system, they also began selling their computers with Windows XP CDs included because of a drop in sales. Sales A Gartner research report predicted that Vista business adoption in 2008 would overtake that of XP during the same time frame (21.3% vs. 16.9%) while IDC had indicated that the launch of Windows Server 2008 served as a catalyst for the stronger adoption rates. As of January 2009, Forrester Research had indicated that almost one third of North American and European corporations had started deploying Vista. At a May 2009 conference, a Microsoft Vice President said "Adoption and deployment of Windows Vista has been slightly ahead of where we had been with XP" for big businesses. Within its first month, 20 million copies of Vista were sold, double the amount of Windows XP sales within its first month in October 2001, five years earlier. Shortly afterward, however, due to Vista's relatively low adoption rates and continued demand for Windows XP, Microsoft decided to sell Windows XP until June 30, 2008, instead of the previously planned date of January 31, 2008. There were reports of Vista users "downgrading" their operating systems back to XP, as well as reports of businesses planning to skip Vista. A study conducted by ChangeWave in March 2008 showed that the percentage of corporate users who were "very satisfied" with Vista was dramatically lower than other operating systems, with Vista at 8%, compared to the 40% who said they were "very satisfied" with Windows XP. The internet-usage market share for Windows Vista after two years of availability, in January 2009, was 20.61%. This figure, combined with World Internet Users and Population Stats, yielded a user base of roughly 330 million, which exceeded Microsoft's two-year post launch expectations by 130 million. The internet user base reached before the release of its successor (Windows 7) was roughly 400 million according to the same statistical sources. Criticism Windows Vista has received several negative assessments. Criticism targets include protracted development time (5–6 years), more restrictive licensing terms, the inclusion of several technologies aimed at restricting the copying of protected digital media, and the usability of the new User Account Control security technology. Moreover, concerns were raised about whether many PCs could meet the "Vista Premium Ready" hardware requirements, and about Vista's pricing. Hardware requirements While in 2005 Microsoft claimed "nearly all PCs on the market today will run Windows Vista", the higher requirements of some of the "premium" features, such as the Aero interface, affected many upgraders. According to the UK newspaper The Times in May 2006, the full set of features "would be available to less than 5 percent of Britain's PC market"; however, this prediction was made several months before Vista was released. This continuing lack of clarity eventually led to a class action against Microsoft as people found themselves with new computers that were unable to use the new software to its full potential despite the assurance of "Vista Capable" designations. 
The court case has made public internal Microsoft communications that indicate that senior executives also had difficulty with this issue. For example, Mike Nash (Corporate Vice President, Windows Product Management) commented, "I now have a $2,100 e-mail machine", because his laptop's lack of an appropriate graphics chip so hobbled Vista. Licensing Criticism of upgrade licenses pertaining to Windows Vista Starter through Home Premium was expressed by Ars Technica's Ken Fisher, who noted that the new requirement of having a prior operating system already installed was going to irritate users who reinstall Windows regularly. It has been revealed that an Upgrade copy of Windows Vista can be installed clean without first installing a previous version of Windows. On the first install, Windows will refuse to activate. The user must then reinstall that same copy of Vista. Vista will then activate on the reinstall, thus allowing a user to install an Upgrade of Windows Vista without owning a previous operating system. As with Windows XP, separate rules still apply to OEM versions of Vista installed on new PCs: Microsoft asserts that these versions are not legally transferable (although whether this conflicts with the right of first sale has yet to be clearly decided legally). Cost Initially, the cost of Windows Vista was also a source of concern and commentary. A majority of users in a poll said that the prices of various Windows Vista editions posted on the Microsoft Canada website in August 2006 made the product too expensive. A BBC News report on the day of Vista's release suggested that, "there may be a backlash from consumers over its pricing plans—with the cost of Vista versions in the US roughly half the price of equivalent versions in the UK." Since the release of Vista in 2006, Microsoft has reduced the retail and upgrade price points of Vista. Originally, Vista Ultimate was priced at $399, and Vista Home Premium at $239. These prices have since been reduced to $319 and $199 respectively. Digital rights management Windows Vista supports additional forms of DRM restrictions. One aspect of this is the Protected Video Path, which is designed so that "premium content" from HD DVD or Blu-ray Discs may mandate that the connections between PC components be encrypted. Depending on what the content demands, the devices may not pass premium content over non-encrypted outputs, or they must artificially degrade the quality of the signal on such outputs or not display it at all. Drivers for such hardware must be approved by Microsoft; a revocation mechanism is also included, which allows Microsoft to disable drivers of devices in end-user PCs over the Internet. Peter Gutmann, security researcher and author of the open source cryptlib library, claims that these mechanisms violate fundamental rights of the user (such as fair use), unnecessarily increase the cost of hardware, and make systems less reliable (the "tilt bit" being a particular worry; if triggered, the entire graphics subsystem performs a reset) and vulnerable to denial-of-service attacks. However, despite several requests for evidence supporting such claims, Gutmann has never supported his claims with any researched evidence. 
Proponents have claimed that Microsoft had no choice but to follow the demands of the movie studios, and that the technology will not actually be enabled until after 2010; Microsoft also noted that content protection mechanisms have existed in Windows as far back as Windows ME, and that the new protections will not apply to any existing content, only future content. User Account Control Although User Account Control (UAC) is an important part of Vista's security infrastructure as it blocks software from silently gaining administrator privileges without the user's knowledge, it has been widely criticized for generating too many prompts. This has led many Vista users to consider UAC troublesome, with some consequently either turning the feature off or (for Windows Vista Enterprise or Windows Vista Ultimate users) putting it in auto-approval mode. Responding to this criticism, Microsoft altered the implementation to reduce the number of prompts with SP1. Though the changes resulted in some improvement, they did not alleviate the concerns completely. Downgrade rights End-users of licenses of Windows 7 acquired through OEM or volume licensing may downgrade to the equivalent edition of Windows Vista. Downgrade rights are not offered for Starter, Home Basic or Home Premium editions of Windows 7. For Windows 8 licenses acquired through an OEM, a user may also downgrade to the equivalent edition of Windows Vista. Customers licensed for use of Windows 8 Enterprise are generally licensed for Windows 8 Pro, which may be downgraded to Windows Vista Business. See also BlueKeep (security vulnerability) Comparison of Windows Vista and Windows XP Microsoft Security Essentials Notes References External links Windows Vista End of Support Windows Vista Service Pack 2 (SP2) Update 2006 software IA-32 operating systems Products and services discontinued in 2017 Vista X86-64 operating systems
2488085
https://en.wikipedia.org/wiki/MESM
MESM
MESM (МЭСМ, Малая Электронно-Счетная Машина, Small Electronic Calculating Machine) was the first universally programmable electronic computer in the Soviet Union. Some authors have also described it as the first in continental Europe, even though the electromechanical computers Zuse Z4 and the Swedish BARK preceded it. Overview MESM was created by a team of scientists under the direction of Sergei Alekseyevich Lebedev from the Kiev Institute of Electrotechnology in the Soviet Union, at Feofaniya (near Kiev). Initially, MESM was conceived as a layout or model of a Large Electronic Calculating Machine, and the letter "M" in the title meant "model" (prototype). Work on the machine was research in nature, intended to experimentally test the principles of constructing universal digital computers. After the first successes, and in order to meet the extensive governmental need for computer technology, it was decided to complete the layout as a full-fledged machine capable of "solving real problems". MESM became operational in 1950. It had about 6,000 vacuum tubes and consumed 25 kW of power. It could perform approximately 3,000 operations per minute. Creation and operation history The principal computer architecture scheme was ready by the end of 1949, along with schematic diagrams of individual blocks. In 1950 the computer was installed in a two-story building of a former convent hostel in Feofania, where a psychiatric hospital had been located before the Second World War. On November 6, 1950, the team performed the first test run. On January 4, 1951, the first useful calculations were performed: computing the factorial of a number and raising a number to a power. The computer was shown to a special commission of the USSR Academy of Sciences, led by Mstislav Keldysh. On December 25, 1951, official government testing was passed successfully, and the USSR Academy of Sciences under Mstislav Keldysh began regular operation of MESM. It was operated until 1957 and then transferred to the Kyiv Polytechnic Institute for training purposes; in 1959, MESM was dismantled. "The computer was split into pieces, which were used to build a series of stands; in the end, all of them were thrown away," recalled Boris Malinovsky. Many of the electron tubes and other components left from MESM are stored in the Foundation for the History and Development of Computer Science and Technology in the Kiev House of Scientists of the National Academy of Sciences of Ukraine. System specification Arithmetic logic unit: universal, parallel action, flip-flop based. Number representation: binary fixed point, 16 bits per number plus one sign bit. Instructions: 20 binary bits per command; the first 4 bits are the operation code, the next 5 bits the first operand address, another 5 bits the second operand address, and the last 6 bits the operation result address. The following instruction types are supported: addition, addition with carry, subtraction, multiplication, division, binary shifts, comparison taking the sign into account, absolute value comparison, transfer of control, magnetic drum read, and stop. RAM: flip-flop based, with data and code separated; 31 machine words for data and 63 machine words for code. ROM: 31 machine words for data and 63 machine words for code. Clock rate: 5 kHz. Performance: about 3,000 operations per minute (the total time of one cycle is 17.6 ms; a division operation takes from 17.6 to 20.8 ms). The computer was built using 6,000 vacuum tubes, of which about 3,500 were triodes and 2,500 were diodes. The system occupied 60 m² (646 square feet) of space and used about 25 kW of power. 
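The 20-bit, three-address instruction layout described above can be made concrete with a short sketch. The field widths (a 4-bit opcode, two 5-bit operand addresses, and a 6-bit result address) follow the specification; the opcode value and helper names used here are illustrative assumptions, not documented MESM operation codes.

```python
# Minimal sketch of MESM's 20-bit, three-address instruction word:
# 4-bit opcode | 5-bit operand address A | 5-bit operand address B | 6-bit result address.
# Field widths follow the specification above; the example opcode value is an
# illustrative assumption, not a documented MESM operation code.

def encode_instruction(opcode: int, addr_a: int, addr_b: int, addr_result: int) -> int:
    """Pack the four fields into a single 20-bit word, most significant field first."""
    assert 0 <= opcode < 16 and 0 <= addr_a < 32 and 0 <= addr_b < 32 and 0 <= addr_result < 64
    return (opcode << 16) | (addr_a << 11) | (addr_b << 6) | addr_result

def decode_instruction(word: int) -> dict:
    """Unpack a 20-bit word back into its opcode and address fields."""
    assert 0 <= word < (1 << 20)
    return {
        "opcode": (word >> 16) & 0b1111,
        "addr_a": (word >> 11) & 0b11111,
        "addr_b": (word >> 6) & 0b11111,
        "addr_result": word & 0b111111,
    }

if __name__ == "__main__":
    # Hypothetical "add" instruction: opcode 1, operands at addresses 3 and 7, result to address 12.
    word = encode_instruction(1, 3, 7, 12)
    print(f"{word:020b}")            # 20-bit binary representation of the packed word
    print(decode_instruction(word))  # {'opcode': 1, 'addr_a': 3, 'addr_b': 7, 'addr_result': 12}
```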
Data was read from punched cards or typed using a plug switch. In addition, the computer could use a magnetic drum that stored up to 5,000 codes of numbers or commands. An electromechanical printer or photographic device was used for output. See also History of computing in the Soviet Union References Soviet computer systems One-of-a-kind computers Vacuum tube computers 1950s computers 1950 in the Soviet Union
5040777
https://en.wikipedia.org/wiki/Certified%20Information%20Technology%20Professional
Certified Information Technology Professional
Certified Information Technology Professional (CITP) is a professional certification for professionals in the field of Information Technology. The CITP credential recognizes technical expertise across a wide range of business-technology practice areas. The credential is granted to CPAs only, but other organisations offer similar certifications to professionals with varied qualifications. These professionals are expected to be able to assess risk, uncover fraud and perform audits within a business. Most firms that offer these certifications require an unrevoked and valid CPA licence. Qualifications for the CITP In some firms, CITP is offered only to those that have completed a formal exam, which includes the five CITP Body of Knowledge areas: Risk Assessment Fraud Considerations Internal Controls & Information Technology General Controls Evaluate, Test and Report Information Management and Business Intelligence These main topic areas have a larger field of information behind them; those applying for a qualification need a comprehensive understanding of these disciplines. In addition, certain certifications and advanced degrees also apply. To be awarded the CITP credential, a CPA must qualify for 100 total points on the application. On the CITP Credential Application, applicants are asked to sign a Declaration and Intent to comply with all the requirements for CITP recertification. A percentage of CITP applications will be randomly selected for further review each year, and if selected, the applicant agrees to provide detailed documentation (including specifics of Business Experience and Lifelong Learning) to support the assertions of the application. CITP Body of Knowledge Those applying for a CITP require a breadth of knowledge in the fields listed above. Each main topic has a number of specific outcomes, similar to those found in an educational syllabus. The following data comes from The American Institute of CPAs, an issuer of the CITP certification. The five fields listed above are split into seven to make the information easier to follow. Technology Strategic Planning Technology Strategic Planning begins with an understanding of enterprise or business strategy and vision, covering the business's focus and its position in its industry. It also explores the current IT environment of the business processes, as well as assessing the business's IT risk. Envisioning future environments and assessing strategic IT plans for the business are also important parts of this body of knowledge. IT Architecture IT architecture looks at the business's infrastructure, software, people and procedures, as well as the data flow. This also looks at system reliability and management, while an understanding of protocols, standards and enabling technologies is required as well. Application development environments are core to this topic, looking at database design, data definition, and models of dataflow. Business Process Enablement Business Process Enablement looks at more business-oriented requirements, with stakeholder identification and requirements core to this topic. Business functionality and models paired with risk and business strategy add to the complexity of this topic. The impact of IT on the models of the business is examined as well; however, this is more to identify the effects of IT on the traditional business process. 
System Development, Acquisition, Implementation, and Project Management System development leads to the identification of technology and its enablement of business processes. Acquisition of systems for commercial usage as well as understanding a software's System Development Life Cycle looks at requirements, risks and models again. System implementation identifies the effects of the software being implemented, while project management assesses the plans and controls of the program being implemented. Information Systems Management Information System Management looks at assessing the IT organisation, policies and procedures, while also the operations, effectiveness and efficiencies of the business in its industry. This also assesses asset management, change control and problem management. Financial Control over IT resources looks at performance metrics and IT costs. Systems Security, Reliability, Audit and Control Systems Security looks at reliability, controls and evaluation of a particular system in a business. While focusing on cyber security for the software in play in the business, professionals also need to understand privacy issues of the customers of the business. System Audit and Control looks at understanding system controls, testing these controls, and assessing the effectiveness of the controls. IT Governance & Regulation Governance establishes risk thresholds for Critical Information Assets as well as establishing broad IT program principles. It also protects stakeholder interests dependent on IT. This topic also pushes the professional to find and have a broad understanding of federal laws, rules and standards that are present in the business' operation space. CITP Multiple Entry Point System (MEP) To be awarded the CITP Credential, a CPA must accumulate 100 total points. Total points will be earned based on business experience, lifelong learning, and, if required, the results of an examination. Business Experience Requirement To be awarded the CITP credential, the candidate must earn a minimum of 25 points for business experience within the five-year period preceding the date of application. The maximum number of business experience points that can be earned over the preceding five-year period is 60. 40 hours of IT-related business experience equals approximately 1 point. The final number of points earned in this category will vary depending on the hours of experience and the scope of that experience. Eligible business experience must address the seven practice areas that currently comprise the CITP Body of Knowledge. Academics may count their time lecturing and teaching towards the business experience requirement. Life Long Learning (LLL) To be awarded the CITP credential, you must also earn a minimum of 25 points in lifelong learning within the five-year period preceding the date of application. The maximum number of lifelong learning points allowed over a five-year period is 60. 
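As a rough illustration of the 100-point requirement, the sketch below converts IT-related business-experience hours into points at the stated rate of about 40 hours per point and applies the 25-point minimum and 60-point maximum for each category. The exact rounding rule and the way any shortfall is made up by examination points are assumptions for illustration, not AICPA scoring rules.

```python
# Illustrative sketch of the CITP Multiple Entry Point (MEP) arithmetic described above.
# Assumptions (not AICPA rules): hours convert at exactly 40 hours per point, truncated to
# whole points, and any shortfall below 100 is assumed to be made up by exam points.

def experience_points(hours: float) -> int:
    """Business experience: roughly 1 point per 40 hours, capped at 60 points."""
    return min(int(hours // 40), 60)

def assess_application(experience_hours: float, lifelong_learning_points: int) -> dict:
    exp = experience_points(experience_hours)
    lll = min(lifelong_learning_points, 60)      # lifelong learning capped at 60 points
    meets_minimums = exp >= 25 and lll >= 25     # each category requires at least 25 points
    total = exp + lll
    return {
        "experience_points": exp,
        "lifelong_learning_points": lll,
        "meets_category_minimums": meets_minimums,
        "total_points": total,
        "exam_points_needed": max(0, 100 - total) if meets_minimums else None,
    }

if __name__ == "__main__":
    # Example: 2,000 hours of IT-related experience and 40 lifelong-learning points.
    print(assess_application(2000, 40))
    # -> 50 experience points + 40 learning points = 90 total; 10 more points needed (e.g., from the exam).
```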
The objectives of the lifelong learning requirement are twofold, to: Maintain your competency by requiring timely updates of existing technology knowledge and skills Provide a mechanism for monitoring the maintenance of your competency The following types of lifelong learning activities are eligible for points: Continuing Professional Education Approved courses from an accredited university or college Other continuing education courses Trade association conferences Non-traditional learning methods such as self-directed reading Presenting Authoring Other credentials, designations and certifications, advanced degrees, and committee service Oversight The credential is administered by the AICPA through the National Accreditation Commission via a volunteer committee and dedicated AICPA staff. The committee was formed in 2003 and initially chaired, through 2005, by Michael Dickson, CPA, CITP. He was succeeded by the current chair, Gregory LaFollette, CPA, CITP. The committee consists of six members each serving staggered three-year terms. References Information technology qualifications
51245624
https://en.wikipedia.org/wiki/Tagetik
Tagetik
Tagetik develops and sells cloud and on-premises corporate performance management software applications for use by corporate finance teams and their business users. History Tagetik has headquarters in Lucca, Italy and Stamford, Connecticut. In July 2014, the company announced $36 million in outside funding. In June 2015, the company acquired iNovasion, a Netherlands-based company, and opened its Benelux direct operation. In February 2016, Tagetik established Tagetik GmbH, a direct operation for the German, Austrian, and Swiss markets. In September 2016, the company opened a new office in Brussels. In December 2016, Tagetik established Tagetik Nordic AB, a direct operation for the Swedish, Norwegian, Finnish, Danish, and Icelandic markets, and opened a new office in Stockholm. On 6 April 2017 Tagetik was acquired by Wolters Kluwer. Product The company's software monitors and manages financial performance and processes, such as budgeting, consolidation, planning and forecasting, disclosure management, and reporting. The software uses Microsoft Office, using Microsoft Excel, Word and PowerPoint for input and report templates. The software also has connectors to Microsoft Power BI, SharePoint, SQL Server, Dynamics AX and NAV and runs on Microsoft Azure. Tagetik runs on Microsoft SQL Server, PostgreSQL, Oracle or SAP HANA databases. The company also has developed a connector to the Qlik Analytics Platform and to SAP to automate extracting and mapping data and metadata from SAP ECC FI and BW tables. Via cpmVision, a connection between Microsoft Power BI and CCH Tagetik can be established, providing a range of CCH Tagetik functionality within the Power BI reporting instance. Analyst Evaluations Tagetik is routinely evaluated by recognized research and advisory firms. Recent evaluations include: Gartner: 2016 Magic Quadrant for Financial Corporate Performance Management Solutions Gartner: 2016 Magic Quadrant for Strategic Corporate Performance Management Solutions Gartner: 2016 Critical Capabilities for Financial Corporate Performance Management Solutions Gartner: 2016 Critical Capabilities for Strategic Corporate Performance Management Solutions Forrester Research: The Forrester Wave: Enterprise Performance Management, Q4 2016 Customers and Partners As of June 2015, Tagetik had approximately 750 corporate customers around the world. Representative customers include: Fiat Chrysler, Henkel, Webster Bank, Randstad, Carillion, and John Hancock-Manulife. Major consulting partners include Satriun, Accenture, Alper & Schetter Consulting GmbH, Deloitte, Ernst & Young, KPMG, and PwC. Among Tagetik's technology partners are Amazon Web Services, Microsoft, NetSuite, Qlik, and SAP. External links Official website References Software companies based in Connecticut Software companies of Italy Companies based in Lucca Companies based in Stamford, Connecticut Software companies of the United States
41551102
https://en.wikipedia.org/wiki/Eric%20S.%20Roberts
Eric S. Roberts
Eric S. Roberts is an American computer scientist noted for his contributions to computer science education through textbook authorship and his leadership in computing curriculum development. He is a co-chair of the ACM Education Council, former co-chair of the ACM Education Board, and a former member of the SIGCSE Board. He led the Java task force in 1994. He is a professor emeritus at Stanford University. He currently teaches at Willamette University in Salem, Oregon. Education Roberts received an A.B. in Applied Mathematics from Harvard University in 1973. He received an S.M. in Applied Mathematics from Harvard University in June 1974 and a Ph.D. in Applied Mathematics from Harvard University in 1980. Career and research He joined the Department of Computer Science at Wellesley College as an assistant professor in 1980. In 1984–1985 he was a visiting lecturer in Computer Science at Harvard University. In 1990 he became an associate professor at Stanford University and was later promoted to professor (teaching) of Computer Science. In 2018, he joined Reed College as a visiting professor of computer science. In 2020, he joined Willamette University as the Mark and Melody Teppola Presidential Distinguished Visiting Professor. While at Stanford he also held several other positions, such as associate chair and director of undergraduate studies from 1997 to 2002, and senior associate dean for student affairs from 2001 to 2003. Roberts has written several introductory computer science textbooks, including Thinking Recursively, The Art and Science of C, Programming Abstractions in C, Thinking Recursively with Java, and The Art and Science of Java. Awards Roberts has received several notable awards in computer science. SIGCSE Award for Lifetime Service to Computer Science Education ACM Karl V. Karlstrom Outstanding Educator Award in 2012. IEEE Computer Society's 2012 Taylor L. Booth Education Award. Elected ACM Fellow in 2007. References American computer scientists Computer science educators Stanford University Department of Computer Science faculty Fellows of the Association for Computing Machinery Year of birth missing (living people) Living people Harvard School of Engineering and Applied Sciences alumni Wellesley College faculty People from Durham, North Carolina
9339274
https://en.wikipedia.org/wiki/Murray%20Campbell
Murray Campbell
Murray Campbell is a Canadian computer scientist known for being part of the team that created Deep Blue, the first computer to defeat a world chess champion. Biography Campbell was involved in surveillance projects related to petroleum production, disease outbreak, and financial data. In earlier work, Campbell was a member of the teams that developed the chess machine HiTech and the project that culminated in Deep Blue, the latter being the first computer to defeat the reigning world chess champion, Garry Kasparov, in a challenge match, in 1997. Kasparov had won an earlier match the previous year. (Based on text taken from a newsletter by Mike Oettel, of the Shriver Center at UMBC.) Campbell visited UMBC for a speech called "IBM's Deep Blue: Ten Years After" on February 5, 2007. In the University Center building, he presented the background that led up to the decisive match with Kasparov, reviewed the match itself (with Kasparov and similar matches), and explored some of the design decisions that were made when building Deep Blue. Campbell put emphasis on some of the broader implications of Deep Blue's development and victory on the information technology industry and artificial intelligence. He is a Senior Manager in the Business Analytics and Mathematical Sciences Department at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, USA. The mission of the Services Modeling group is to apply technical expertise in areas such as optimization, forecasting, and probabilistic analysis. The focus is in the area of Business Analytics and Workforce Management. Solutions are developed that include services project management, skill analytics, demand forecasting, workplace learning, workforce optimization, and strategic planning. Personal life Campbell himself played chess at near National Master strength in Canada during his student days, but has not played competitively for more than 20 years. His peak Elo rating was around 2200. Honors and awards North American Computer Chess Championship: Member of winning teams in 1985 (HiTech), 1987 (ChipTest), 1988 (Deep Thought), 1989 (HiTech and Deep Thought), 1990 (Deep Thought), 1991 (Deep Thought) and 1994 (Deep Thought). 1989 World Computer Chess Championship, winning team (Deep Thought) Campbell shared the $100,000 Fredkin Prize with Feng-hsiung Hsu and A. Joseph Hoane Jr. in 1997. The prize was awarded for developing the first computer (Deep Blue) to defeat a reigning world chess champion in a match. Campbell received the Allen Newell Research Excellence Medal in 1997, citing his contributions to Deep Blue (first computer to defeat a world chess champion), Deep Thought (first Grandmaster level computer) and HiTech (first Senior Master level computer). Campbell was elected Fellow of the Association for the Advancement of Artificial Intelligence in 2012 for "significant contributions to computer game-playing, especially chess, and the associated improvement in public awareness of the AI endeavor." References External links Home page at IBM Research IBM employees Canadian computer scientists American computer scientists Canadian chess players Computer chess people University of Alberta alumni Carnegie Mellon University alumni Living people Year of birth missing (living people) Fellows of the Association for the Advancement of Artificial Intelligence
24759056
https://en.wikipedia.org/wiki/Linksys%20routers
Linksys routers
Linksys manufactures a series of network routers. Many models are shipped with Linux-based firmware and can run third-party firmware. The first model to support third-party firmware was the very popular Linksys WRT54G series. The Linksys WRT160N/WRT310N series is the successor to the WRT54G series of routers from Linksys. The main difference is the draft 802.11n wireless interface, providing a maximum speed of 270 Mbit/s over the wireless network when used with other 802.11n devices. Specifications and versions BEFW11S4 Linksys' first series of wireless routers. The Linksys BEFW11S4 is a Wi-Fi capable residential gateway from Linksys. The device is capable of sharing Internet connections among several computers via 802.3 Ethernet and 802.11b wireless data links. With only 1 MB of flash storage and 4 MB of RAM, no third party replacement firmware is compatible with it. WRT54G series The Linksys WRT54G and variants WRT54GS, WRT54GL, and WRTSL54GS are Wi-Fi capable residential gateways from Linksys. The device is capable of sharing Internet connections among several computers via 802.3 Ethernet and 802.11b/g wireless data links. The WRT54GL as well as most (but not all) of the other variants in this series, are capable of running Linux-based third-party firmware for added features. Supported software includes Tomato, OpenWrt, and DD-WRT WRT100 802.11g MIMO router with 100 Mbit/s switches WRT110 802.11g MIMO router with 100 Mbit/s switches WRT120N 150 Mbit/s N router, but not as fast as real N speeds, with 100 Mbit/s switches WRT150N 802.11n "draft" MIMO router with 100 Mbit/s switches. WRT160N 802.11n "draft" MIMO router with 100 Mbit/s switches. The E1000 and Cisco Valet M10 replaced this model. WRT160NL 802.11n "draft" MIMO router with 100 Mbit/s switches. Has a Linux-based OS, external antenna, and USB port for network storage. The E2100L replaced this model. WRT300N 802.11n "draft" MIMO router with 100 Mbit/s switches. Base model for all the others listed below. WRT310N Similar to WRT350N with a Gigabit Ethernet switch, hardware crypto acceleration for IPSec, SSL, and WPA/WPA2. The WRT310N has an integrated wireless chipset rather than the external PC Card adapter found on the WRT350N. The Cisco Valet Plus M20 replaced this model. WRT320N 802.11n "draft" MIMO router with a gigabit switch and non-simultaneous dual-band. The E2000 replaced this model. Due to the hardware being very similar, it is possible to upgrade the WRT320N to an E2000 by replacing the CFE. WRT330N Based on a different platform, but also has a Gigabit Ethernet switch according to the product specifications listed on the manufacturers website. WRT350N Similar to WRT300N, but with a Gigabit Ethernet switch, hardware crypto acceleration for IPSec, SSL, and WPA/WPA2, and a USB 2.0 port for connecting a hard drive or flash-based USB storage devices directly to your network to share music, video, or data files. WRT400N A simultaneous dual-band non-gigabit model. WRT600N A simultaneous dual-band gigabit model. It looks like WRT350N including USB 2.0 storage link except that the WRT600N is black. WRT610N A simultaneous dual-band gigabit model. The hardware is more integrated than the WRT600N and has no external antennas. The E3000 replaced this model. A special system menu can be accessed by browsing to http://ip_address_of_wrt610n/System.asp. ″Vista Premium" (ability to turn off 6to4) and EGHN (Entertainment Grade Home Network = Linksys/Cisco UPnP QoS solution) can be configured in this page. 
WRT1200AC The WRT1200AC is a dual-band router inspired by its big brother, the WRT1900AC. WRT1900AC The WRT1900AC is a dual-band router inspired by the original WRT54G's iconic blue/black stackable form factor. The WRT1900AC router is advertised as "Open Source ready", and "Developed for use with OpenWRT." However, no open source firmware existed for the WRT1900AC at the time the product was launched. Linksys/Marvell released updated Wi-Fi drivers in 2015, which allowed OpenWrt to release new open source firmware images. WRT1900ACS The WRT1900ACS was released on 8 October 2015. It looks identical to the WRT1900AC, but has a 1.6 GHz dual core CPU (the same CPU as the WRT1200AC/WRT1900AC v2, but overclocked to 1.6 GHz). Like the WRT1900AC v2, it has 512 MB of RAM. In January 2016, DD-WRT became available for the WRT1900ACS, as well as both versions of the WRT1900AC. WRT3200ACM This is a faster replacement for the WRT1900AC, though the 1900AC model can still be found. It has Tri-Stream 160 technology, which doubles bandwidth and provides the fastest dual-band performance of any router, and MU-MIMO technology that serves multiple devices at the same time and the same speed. It is open-source ready with OpenWrt and DD-WRT, and is compatible with the Linksys Smart Wi-Fi app to manage Wi-Fi from a mobile device. The refreshed specification includes 256 MB of flash and 512 MB of RAM. WRT32X 3200ACM Gaming Router The WRT32X 3200ACM has identical hardware to the WRT3200ACM but includes the Rivet Networks Killer Prioritization Engine, which identifies systems equipped with Killer Network LAN hardware. Powered by a dual-core 1.8 GHz CPU with 256 MB of flash memory and 512 MB of DDR3 memory, it is capable of speeds of up to 600 Mbit/s on the 2.4 GHz band and 2600 Mbit/s on the 5 GHz band. E800 A single-band non-gigabit model. E900 A single-band non-gigabit model. E1000 A single-band non-gigabit model that replaced the WRT160N. The E1000 v1 shares the same hardware as the Cisco Valet M10 v1. E1200 A single-band non-gigabit model. E1500 A single-band non-gigabit model. E1550 A single-band non-gigabit model with USB storage link. E1700 A single-band 4 port gigabit model. E2000 A non-simultaneous dual-band gigabit model that replaced the WRT320N. E2100L A single-band non-gigabit model (with 2 external antennas and USB storage link) that replaced the WRT160NL. E2500 A simultaneous dual-band non-gigabit model. E3000 A simultaneous dual-band gigabit model that replaced the WRT610N. Similar to its predecessor, a special system menu can be accessed by browsing to http://ip_address_of_e3000/System.asp which displays a detailed system status page and allows administrators to disable/enable "Vista Premium" and the "Parental Control Status". E3200 A simultaneous dual-band gigabit model with USB storage link. E4200 A three-stream simultaneous dual-band gigabit model targeted for "high performance wireless entertainment", with a rated maximum throughput of 450 Mbit/s. This model also includes a USB port for storage, UPnP media streaming or a print server. The E4200 also marks the first radical change in the design of the Linksys series since Cisco launched the winged "spaceship" design first seen on the WRT400N. The E4200 features a minimalistic, streamlined design with only a white status LED visible on the top. All traffic activity LEDs and buttons have been relocated to the rear of the device. A special system menu can be accessed by browsing to http://ip_address_of_e4200/System.asp. This menu shows all kinds of system statistics and settings. 
No settings can be changed from this menu. This is only found in the original version, and not available in the v2 model. The E4200V2 has a Marvell 88W8366/88W8063 wireless chipset. In previous Tomato builds (a popular 3rd-party firmware for Linksys routers), only the 2.4 GHz radio was properly supported. However, simultaneous dual-band radio can now be achieved using Tomato RAF, Tomato Shibby and Tomato Toastman's builds. EA2700 A dual-band gigabit model. App enabled with Linksys Smart WiFi. EA3500 A dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA4500 A dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA6100 An 802.11ac dual-band model with USB storage link EA6200 An 802.11ac (advertised as AC900, actually AC1200) dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA6300 An 802.11ac (AC1200) dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA6350 An 802.11ac (AC1200) dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA6400 An 802.11ac (AC1600) dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA6500 An 802.11ac (AC1750) dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA6700 An 802.11ac (AC1750) dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA6900 An 802.11ac (AC1900) dual-band gigabit model with USB storage link. App enabled with Linksys Smart WiFi. EA8300 An 802.11ac (AC2200) MU-MIMO, tri-band, 'Max-Stream' gigabit model with USB 3.0 storage link. Browser-based setup or Linksys App. E8350 An 802.11ac (AC2400) dual-band MU-MIMO gigabit router. EA8500 An 802.11ac (AC2600) dual-band MU-MIMO gigabit router. EA9200 An 802.11ac (AC3200) tri-band MU-MIMO gigabit router. See also Cisco Valet Routers References External links DD-WRT wiki page for Linksys WRT300N (v1.0) OpenWrt wiki page for Linksys WRT300N (v1.0 and v2.0) Hardware routers Linksys
26126491
https://en.wikipedia.org/wiki/Shopify
Shopify
Shopify Inc. is a Canadian multinational e-commerce company headquartered in Ottawa, Ontario. It is also the name of its proprietary e-commerce platform for online stores and retail point-of-sale systems. The Shopify platform offers online retailers a suite of services including payments, marketing, shipping and customer engagement tools. The company reported that it had more than 1,700,000 businesses in approximately 175 countries using its platform as of May 2021. According to BuiltWith, 1.58 million websites run on the Shopify platform as of 2021. The total gross merchandise volume exceeded US$61 billion for calendar 2019. As of 2021, Shopify is the largest publicly traded Canadian company by market capitalization. Total revenue for the full year of 2020 was US$2.929 billion. History Shopify was founded in 2006 by Tobias Lütke and Scott Lake after attempting to open Snowdevil, an online store for snowboarding equipment. Dissatisfied with the existing e-commerce products on the market, Lütke, a computer programmer by trade, instead built his own. Lütke used the open source web application framework Ruby on Rails to build Snowdevil's online store, and launched it after two months of development. The Snowdevil founders launched the platform as Shopify in June 2006. Shopify created an open-source template language called Liquid, which is written in Ruby and used since 2006. In June 2009, Shopify launched an application programming interface (API) platform and App Store. The API allows developers to create applications for Shopify online stores and then sell them on the Shopify App Store. In April 2010, Shopify launched a free mobile app on the Apple App Store. The app lets Shopify store owners view and manage their stores from iOS mobile devices. In 2010, Shopify started its Build-A-Business competition, in which participants create a business using its commerce platform. The winners of the competition receive cash prizes and mentorship from entrepreneurs, such as Richard Branson, Eric Ries and others. Shopify was named Ottawa's Fastest Growing Company by the Ottawa Business Journal in 2010. The company received $7 million from an initial series A round of venture capital financing in December 2010. Its Series B round raised $15 million in October 2011. In February 2012, Shopify acquired Select Start Studios Inc ("S3"), a mobile software developer, along with 20 of the company's mobile engineers and designers. In August 2013, Shopify acquired Jet Cooper, a 25-person design studio based in Toronto. In August 2013, Shopify announced the launch of Shopify Payments, which was rebranded as Shop Pay in February 2020. Shop Pay allowed merchants to accept credit cards without requiring a third party payment gateway. The company also announced the launch of an iPad-centric point of sale system. It uses an iPad to accept payments from debit cards and credit cards. The company received $100 million in Series C funding in December 2013. By 2014, the platform had hosted approximately 120,000 online retailers, and was listed as #3 in Deloitte’s Fast50 in Canada, as well as #7 in Deloitte’s Fast 500 of North America. Shopify earned $105 million in revenue in 2014, twice as much as it raised the previous year. In February 2014, Shopify released "Shopify Plus" for large e-commerce businesses with access to additional features and support. 
On April 14, 2015, Shopify filed for an initial public offering (IPO) on the New York Stock Exchange and Toronto Stock Exchange under the symbols "SHOP" and "SH" respectively. Shopify went public on May 21, 2015, and in its debut on the New York Stock Exchange, started trading at $28, more than 60% higher than its US$17 offering price, with its IPO raising more than $131 million. In September 2015, Amazon.com announced it would be closing its Amazon Webstore service for merchants, and had selected Shopify as the preferred migration provider; Shopify's shares jumped more than 20% upon the news. In April 2016, Shopify announced Shopify Capital, a cash advance product. Shopify Capital was initially piloted to merchants within the US and allows merchants to receive an advance on future earnings processed through their payment gateway. Since its launch in 2016, Shopify Capital has provided over $2 billion in funding to Shopify merchants with a maximum advance of $2 million. On October 3, 2016, Shopify acquired Boltmade. In November 2016, Shopify partnered with Paystack which allowed Nigerian online retailers to accept payments from customers around the world. On November 22, 2016, Shopify launched Frenzy, a mobile app that improves flash sales. On December 5, 2016, Shopify acquired Toronto-based mobile product development studio Tiny Hearts. The Tiny Hearts building has been turned into a Shopify research and development office. In January 2017, Shopify announced integration with Amazon that would allow merchants to sell on Amazon from their Shopify stores. Shopify's stock rose almost 10% upon this announcement. In April 2017, Shopify introduced a Bluetooth enabled debit and credit card reader for brick and mortar retail purchases. The company has since released additional technology for brick and mortar retailers, including a point-of-sale system with a Dock and Retail Stand similar to that offered by Square, and a tappable chip card reader. In September 2018, Shopify announced plans to locate thousands of employees in Toronto's King West neighborhood in 2022 as part of "The Well" complex, jointly owned by Allied Properties REIT and RioCan REIT. In October 2018, Shopify opened their first physical space in Los Angeles. The space offered classes, a "genius bar" for companies that use Shopify software and workshops. Online cannabis sales in Ontario used Shopify's software when the drug was legalized in October 2018. Shopify's software is also used for in-person cannabis sales in Ontario since becoming legal in 2019. In January 2019, Shopify announced the launch of Shopify Studios, a full-service television and film content and production house. On March 22, 2019, Shopify and email marketing platform Mailchimp ended an integration agreement over disputes involving customer privacy and data collection. In April 2019, Shopify announced an integration with Snapchat to allow Shopify merchants to buy and manage Snapchat Story ads directly on the Shopify platform. The company had previously secured similar integration partnerships with Facebook and Google. In May 2019, Shopify acquired Handshake, a business-to-business e-commerce platform for wholesale goods. The Handshake team was integrated into Shopify Plus, and Handshake founder and CEO Glen Coates was made Director of Product for Shopify Plus. In June 2019, Shopify announced that it would launch its own Fulfillment Network. The service promises to handle shipping logistics for merchants and will compete with an established leader, Amazon FBA. 
Shopify Fulfillment Network will at first be available to qualifying U.S. merchants in select states. On August 14, 2019, Shopify launched Shopify Chat, a new native chat function that allows merchants to have real-time conversations with customers visiting Shopify stores online. On September 9, 2019, Shopify announced the acquisition of 6 River Systems, a Massachusetts-based company that makes warehouse robots. The acquisition was finalized in October in a cash-and-share deal worth US$450 million. In 2020, the company announced new hires in Vancouver, Canada, and the effects of the COVID-19 pandemic contributed to lifting stock prices. On February 21, 2020, Shopify announced plans to join the Diem Association, known as the Libra Association at the time. On March 11, 2020, Shopify announced it was going fully remote, with over 5,000 employees to start working from home in response to the rapid spread of coronavirus disease 2019. It was reported that Shopify's valuation would likely rise on the back of options it held in Affirm, which was expected to go public shortly. In November 2020, Shopify announced a partnership with Alipay to support merchants with cross-border payments. As a result of Affirm's January 13, 2021 initial public offering (IPO), Shopify's 8% stake in Affirm was worth $2 billion. About half of Shopify's C-level executives left the company in early 2021. On 11 June 2021, Shopify announced its acquisition of Primer, an AR app on the App Store that allows users to preview home improvement items digitally. Esports In February 2021, Shopify announced that it had formed a new esports organization called Shopify Rebellion, and had put together a professional StarCraft II team to compete in international tournaments. The team members include former 2016 world champion Byun Hyun-woo as well as Sasha Hostyn. Last-mile logistics In April 2021, Shopify made its first entry into last-mile logistics by investing in Swyft, a Toronto-based digital logistics startup. As part of a Series A round of funding, a total of $17.5 million was raised for Swyft, co-led by Inovia Capital and Forerunner Ventures with participation from Shopify. Criticism In 2017, the #DeleteShopify hashtag campaign called for a boycott of Shopify for allowing Breitbart News to host a shop on its platform. Shopify's CEO, Tobias Lütke, responded to the criticism, saying "refusing to do business with the site would constitute a violation of free speech". In October 2017, Citron Research founder and short-seller Andrew Left released a detailed report which described the e-commerce platform as a "get-rich-quick" scheme in contravention of Federal Trade Commission regulations. The day the report was released, the stock plunged more than 11%. The main question he posed was "Outside the roughly 50,000 verifiable merchants working with Shopify, who are the other 450,000 the company says it has?" Third-party marketing tactics were expected to be improved. Left was quoted in 2019 by The Street as saying about Shopify, "I still think they are best in class". Legal affairs In December 2021, a group of publishers including Pearson Education, Inc., Macmillan Learning, Cengage Learning, Inc., Elsevier Inc., and McGraw Hill sued Shopify, claiming that it had failed to remove listings and stores selling pirated copies of their books and learning materials. Data breach In September 2020, Shopify confirmed a data breach in which customer data from fewer than 200 merchants was stolen.
One of those merchants later said over 4,900 of their customers alone had had their information accessed. Shopify claims that the data stolen included names, addresses and order details, but not "complete payment card numbers or other sensitive personal or financial information." The company also claims that no evidence has proven that the data has been used. Shopify identified two "rogue members" of its support team to be responsible. The employees in question have been fired and the matter has been forwarded to the FBI. See also Comparison of shopping cart software References External links 2015 initial public offerings Canadian brands Canadian companies established in 2006 Cannabis shops in Canada Companies based in Ottawa Companies listed on the Toronto Stock Exchange S&P/TSX 60 Internet properties established in 2006 Multinational companies headquartered in Canada Point of sale companies Retail companies established in 2006 Self-publishing companies Self-publishing online stores Web applications Online marketplaces of Canada E-commerce software
267686
https://en.wikipedia.org/wiki/University%20of%20Saskatchewan
University of Saskatchewan
The University of Saskatchewan (U of S, or USask) is a Canadian public research university, founded on March 19, 1907, and located on the east side of the South Saskatchewan River in Saskatoon, Saskatchewan, Canada. An "Act to establish and incorporate a University for the Province of Saskatchewan" was passed by the provincial legislature in 1907. It established the provincial university on March 19, 1907 "for the purpose of providing facilities for higher education in all its branches and enabling all persons without regard to race, creed or religion to take the fullest advantage". The University of Saskatchewan is the largest educational institution in the Canadian province of Saskatchewan. The University of Saskatchewan is one of Canada's top research universities (based on the number of Canada Research Chairs) and is a member of the U15 Group of Canadian Research Universities (the 15 most research-intensive universities in Canada). The university began as an agricultural college in 1907 and established the first Canadian university-based department of extension in 1910. Land was set aside for university buildings, the U of S farm, and agricultural fields, and further land was annexed for the university. The main university campus occupies part of this land, with a further portion allocated for Innovation Place Research Park. The University of Saskatchewan agriculture college still has access to neighbouring urban research lands. The University of Saskatchewan's Vaccine and Infectious Disease Organization (VIDO) facility (2003) develops DNA-enhanced immunization vaccines for both humans and animals. The university is also home to the Canadian Light Source synchrotron, which is considered one of the largest and most innovative investments in Canadian science. Since its origins as an agricultural college, research has played an important role at the university. Discoveries made at the U of S include sulphate-resistant cement and the cobalt-60 cancer therapy unit. The university offers over 200 academic programs. History Beginnings The institution was modelled on the American state university, with an emphasis on extension work and applied research. The University of Saskatchewan, in Saskatoon, was granted a provincial charter on March 19, 1907, by a provincial statute known as the University Act, which provided for a publicly funded, yet independent institution to be created for the citizens of the whole province. The governance was modelled on the provincial University of Toronto Act of 1906, which established a bicameral system of university government consisting of a senate (faculty), responsible for academic policy, and a board of governors (citizens) exercising exclusive control over financial policy and having formal authority in all other matters. The president, appointed by the board, was to provide a link between the two bodies and to perform institutional leadership. The scope of the new institution was to include colleges of arts and science, including art, music and commerce, agriculture with forestry, domestic science, education, engineering, law, medicine, pharmacy, veterinary science and dentistry. Saskatoon was chosen as the site for the university on April 7, 1909 by the board of governors. On October 12, 1912, the first building opened its doors for student admission. It awarded its first degrees in 1912. Duncan P. McColl was appointed as the first registrar, establishing the first convocation from which Chief Justice Edward L. Wetmore was elected as the first chancellor.
Walter Charles Murray became the first president of the university's board of governors. In the early part of the twentieth century, professional education expanded beyond the traditional fields of theology, law and medicine. Graduate training based on the German-inspired American model of specialized course work and the completion of a research thesis was introduced. Battleford, Moose Jaw, Prince Albert, Regina, and Saskatoon all lobbied to be the location of the new university. Walter Murray preferred the provincial capital, Regina. In a politically influenced vote, Saskatoon was chosen on April 7, 1909. Designed by architect David Robertson Brown, the Memorial Gates were erected in 1927 at the corner of College Drive and Hospital Drive in honour of the University of Saskatchewan alumni who served in the First World War. A stone wall bears inscriptions of the names of the sixty-seven university students and faculty who lost their lives while on service during World War I. The hallways of the Old Administrative Building (College Building) at the University of Saskatchewan are decorated with memorial scrolls in honour of the University of Saskatchewan alumni who served in the World Wars. A total of 342 students, faculty, and staff enlisted for World War I. Of these, 67 were killed, 100 were wounded, and 33 were awarded medals of valour. The University of Saskatchewan's Arms were registered with the Canadian Heraldic Authority on February 15, 2001. Campus A location next to the South Saskatchewan River, across from the city centre of Saskatoon, was selected for the campus. David Robertson Brown of Brown & Vallance was the initial architect, producing a campus plan and the first university buildings in Collegiate Gothic style. The Prime Minister of Canada, Sir Wilfrid Laurier, laid the cornerstone of the first building, the College Building, on July 29, 1910. The first building to be started on the new campus, the College Building, was built in 1910–1912 and opened in 1913; in 2001, it was declared a National Historic Site of Canada. Brown & Vallance designed the Administration Building (1910–12) and Saskatchewan Hall Student Residence (1910–12). Brown & Vallance designed the Engineering Building (1910–12) as well as additions in 1913 and 1920, and rebuilt the building after it burned in 1925. Brown & Vallance designed the Barn and Stock Pavilion (1910–12) and Emmanuel College (1910–12). Brown & Vallance built the Faculty Club (1911–12) and rebuilt it after it burned in 1964. Brown & Vallance also constructed the President's Residence (1911–13); Qu'Appelle Hall Student Residence (1914–16); Physics Building (1919–21); Chemistry Building (1922–23); St. Andrew's Presbyterian College (1922–23); Memorial Gates (1927–28); and the Field Husbandry Building (1929). The original buildings were built using native limestone – greystone – which was mined just north of campus. Over the years, this greystone became one of the most recognizable campus signatures. When the local supply of limestone was exhausted, the university turned to Tyndall stone, which is quarried in Manitoba. Saskatchewan's Provincial University and Agricultural College were officially opened on May 1, 1913, by Hon. Walter Scott. The original architectural plan called for the university buildings to be constructed around a green space known as The Bowl. The original university buildings are now connected by skywalks and tunnels. Clockwise from the north: Thorvaldson Building (August 22, 1924) (Spinks addition); Geology, W.P.
Thompson Biology (1960) adjoined to Physics Building (1921); College Building (May 1, 1913) (Administration addition); Saskatchewan Hall conjoined with Athabasca Hall (1964); Qu'Appelle Hall (1916); Marquis Hall adjoined to Place Riel – Qu'Appelle Addition; Murray Memorial Main Library (1956); and Arts (1960), conjoined with Law and adjoined to the Commerce building, complete the initial circle around the perimeter of The Bowl. Francis Henry Portnall and Frank Martin designed the Dairy & Soils Laboratory (1947). Establishment of colleges Roughly adhering to the original plan of 1909, numerous colleges were established: Arts & Science (1909); Agriculture, now called Agriculture and Bioresources (1912); Engineering (1912); Law (1913); Pharmacy, now called Pharmacy & Nutrition (1914); Commerce, now the N. Murray Edwards School of Business (1917); Medicine (1926); Education (1927); Home Economics (1928); Nursing (1938); Graduate Studies and Research (1946); Physical Education, now called Kinesiology (1958); Veterinary Medicine (1964); Dentistry (1965); and the School of Physical Therapy (1976). The U of S also has several graduate programs within these colleges, which lead to master's or doctoral degrees. In 1966, the University of Saskatchewan introduced a master's program in adult education. Diploma and certificate post-secondary courses are also available to aid in professional development. Theological colleges, affiliated with the university, were also established: Emmanuel College (Anglican denomination) (1909), St. Andrew's College (as Presbyterian College, Saskatoon, then United Church of Canada) (1913), Lutheran Theological Seminary (1920), St. Thomas More College (1936), and Central Pentecostal College (1983). Regina College was saved from bankruptcy and became part of the university in 1934, and was given degree-granting privileges in 1959, making it a second University of Saskatchewan campus. By another act of legislation in 1974, Regina College was made an independent institution known as the University of Regina. The policy of university education initiated in the 1960s responded to population pressure and the belief that higher education was a key to social justice and economic productivity for individuals and for society. The single-university policy in the West was changed as existing colleges of the provincial universities gained autonomy as universities. Correspondence courses were established in 1929. Other federated and affiliated colleges include Briercrest Bible College and Biblical Seminary in Caronport, Saskatchewan; Gabriel Dumont College; and St. Peter's Historic Junior College in Muenster, Saskatchewan. Later development In the late 1990s, the U of S launched a major revitalisation program, comprising new capital projects such as an expansion to the Western College of Veterinary Medicine, the building of a new parkade, and a revision of its internal road layout (which has already seen the East Road access being realigned). The Thorvaldson Building, which is home to the departments of chemistry and computer science, hosts a new expansion known as the Spinks addition. The College of Pharmacy and Nutrition has also had a number of renovations. Land holdings Up until the late 1980s, the University of Saskatchewan held an extensive area of land in the northeast quadrant of Saskatoon, stretching far beyond the core campus, east of Preston Avenue and north of the Sutherland and Forest Grove subdivisions.
Much of this land was used for farming, though some areas were intended for future campus and facility development. In the late 1980s, most U of S land beyond Circle Drive was earmarked for residential development; Silverspring was the first of these neighbourhoods to be developed. Another section of land, west of the Preston Avenue/Circle Drive interchange and north of the Canadian Pacific Railway line, was zoned for commercial use, and led to "big box" retail development in the early 2000s called Preston Crossing. Realignment of two major roads in the area around this same time (Preston Avenue and 108th Street) also used up a portion of university land. The U of S obtained a large tract of land immediately east of the Saskatoon city limits after the city annexed the northeastern section of U of S land (this land has since been itself annexed into the city). The U of S leased a site to the Correctional Service of Canada north of Attridge Drive on Central Avenue for the Regional Psychiatric Centre. It has an additional undeveloped parcel of land at Central Avenue and Fedoruk Road. In the 1970s and again in the 1980s, the U of S considered opening up some of its land holdings south of College Drive and north of 14th Street for residential development, but opposition from nearby neighbourhoods that appreciated the "green belt" offered by the university led to these plans being dropped. The city has refrained from indicating any residential development plans for the newer land holdings in the northeast, allowing another green belt to be created separating the new communities of Evergreen and Aspen Ridge from other parts of the city. Academics Ranking Programs The University of Saskatchewan offers a wide variety of programs and courses. Agriculture and Bioresources, Arts and Science, Biotechnology, Edwards School of Business, Dentistry, Education, Engineering, Graduate and Postdoctoral Studies, Kinesiology, Law, Medicine, Nursing, Pharmacy and Nutrition, Physical Therapy and Veterinary Medicine. In addition, the university's affiliated colleges and Centre for Continuing and Distance Education offer degree programs, certificates, and training programs. Many affiliated colleges allow students to complete the first two years of a Bachelor of Science or Bachelor of Arts degree, and some offer full degrees in education, native studies, and theology. Research In 1948, the university built the first betatron facility in Canada. Three years later, the world's first non-commercial cobalt-60 therapy unit was constructed. (The first female chancellor of the university, Sylvia Fedoruk, was a member of the cobalt-60 research team. She also served as Saskatchewan's lieutenant-governor from 1988 to 1994.) The success of these facilities led to the construction of a linear accelerator as part of the Saskatchewan Accelerator Laboratory in 1964 and placed university scientists at the forefront of nuclear physics in Canada. The Plasma Physics Laboratory operates a tokamak on campus. The university used the SCR-270 radar in 1949 to image the Aurora for the first time. Experience gained from years of research and collaboration with global researchers led to the University of Saskatchewan being selected as the site of Canada's national facility for synchrotron light research, the Canadian Light Source. This facility opened October 22, 2004 and is the size of a football field. The university also is home to the Vaccine and Infectious Disease Organization. 
Innovation Place Research Park is an industrial science and technology park that hosts private industry working with the university. Partner universities Beijing Institute of Technology, Beijing, China Xi'an Jiao Tong University, Xi'an, China University of Greifswald, Greifswald, Germany Darmstadt University of Technology, Darmstadt, Germany Vellore Institute of Technology, India University of Oslo, Norway University of Canterbury, New Zealand University of Oxford, Oxford, England Stockholm University, Stockholm, Sweden Administration and governance The University Act provided that the university should provide "facilities for higher education in all its branches and enabling all persons without regard to race, creed or religion to take the fullest advantage". It further stated that "no woman shall by reason of her sex be deprived of any advantage or privilege accorded to the male students of the university." Seventy students began the first classes on September 28, 1909. The first class graduated on May 1, 1912. Of the three students who earned graduation honours, two were women. The University of Saskatchewan has a tricameral governance structure, defined by the University of Saskatchewan Act, consisting of a Board of Governors, University Council, and Senate, as well as the General Academic Assembly. Financial, management, as well as administration affairs are handled by the Board of Governors, which comprises 11 members. The University of Saskatchewan liaison between the public and professional sector is dealt with by the university Senate, a body of 100 representatives. Finally, University Council is made up of a combination of 116 faculty and students. Council is the university's academic governing body, responsible for "overseeing and directing the University's academic affairs." The General Academic Assembly consists of all faculty members and elected students. As of 2006, faculty and staff total 7,000, and student enrolment comprised 15,005 full-time students as well as 3,552 part-time students. The university senior administration consists of the President and Vice-Chancellor Professor, Peter Stoicheff; the Provost and Vice-President Academic, Professor Arini; Vice-President (Finance & Resources), Greg Fowler; Vice-President (Research), Professor Baljit Singh; and the Vice-President (University Relations) Debra Pozega Osburn. Campus life and Facilities The Sheaf, a student publication, was first published in 1912, monthly or less frequently. By 1920, it was published weekly with the aim of becoming a more unifying influence on student life. It has continued to publish. In 1965, a student-run campus radio station, CJUS-FM began broadcasting on a non-commercial basis. In 1983, the station became a limited commercial station. By 1985, however, funding was no longer provided, and the campus radio presence died. In early 2005, CJUS was revived in an internet radio form and continues to broadcast today. The university also maintains a relationship with the independent community radio station CFCR-FM, which actively solicits volunteers on campus. Place Riel Theatre, a campus theatre, was opened in 1975, as was Louis, a campus pub. Place Riel, the existing campus student centre, opened in 1980, and now holds retail outlets, arcade, lounge space, student group meeting areas, and a food court; it is undergoing expansion and renovation, slated for completion in 2012–2013. These facilities were named after Louis Riel. 
In the late 1990s, Place Riel Theatre stopped public showings and it is now used for campus movie features and lectures. The University of Saskatchewan Students' Union is the students' union representing full-time undergraduate students at the University of Saskatchewan. Since 1992, graduate students have been represented by the University of Saskatchewan Graduate Students' Association (GSA-uSask), a not-for-profit student organization that provides services, events, student clubs and advocacy work to the graduate students of the U of S. Since 2007, the GSA-uSask has been located in the Emmanuel and St. Chad Chapel, also called the GSA Commons. Campus sports teams in U Sports use the name Saskatchewan Huskies. The U of S Huskies compete in eight men's sports: Canadian football, basketball, cross country, hockey, soccer, track and field, volleyball and wrestling, and seven women's sports: basketball, cross country, hockey, soccer, track and field, volleyball and wrestling. The Huskies Track and Field team has won the national championships on 12 occasions and is the most successful team on campus. The men's Husky football team has won the Vanier Cup as national champions on three occasions: in 1990, 1996, and 1998. Museums and galleries The Agricultural Displays and Kloppenburg Collection are hosted in the Agriculture & Bioresources College. The agricultural wall displays are located in the walkway connecting the Agriculture Building and the Biology Building. The Kloppenburg Collection is featured on the sixth floor of the College of Agriculture and Bioresources building, which opened in 1991. Twenty-seven works by famous Saskatchewan artists are featured in this donation to the University of Saskatchewan. The Beamish Conservatory and Leo Kristjanson Atrium are also located within the Agriculture & Bioresources College. The Leo Kristjanson Atrium is located in the College of Agriculture and Bioresources building and hosts the conservatory. The Beamish Conservatory is named in honour of the donor May Beamish, the daughter of artist Augustus Kenderdine. The University of Saskatchewan's 75th anniversary in 1984 was the catalyst for the Athletic Wall of Fame, at which time 75 honours were bestowed. The wall of fame celebrates achievements by athletes, by teams securing a regional and/or national championship, and by builders, that is, administrators, coaches, managers, trainers or other major contributors who have provided outstanding and notable support to the Huskie athletic community for a period of at least 10 years. Since 2001, an annual event, the Huskie Salute, has inducted new candidates into the Athletic Wall of Fame. The College Building was officially declared a Canadian National Historic Site by Sheila Copps, Minister of Canadian Heritage, on February 27, 2001. The College Building was the first building constructed at the university, and upon completion was used for agriculture degree classes. The Right Honourable John G. Diefenbaker Centre for the Study of Canada, also known as the Diefenbaker Canada Centre, houses the Diefenbaker paper collection and legacy, changing exhibits, the Centre for the Study of Co-operatives and the Native Law Centre. The grave site of Canadian Prime Minister John Diefenbaker is located near this museum. The Gordon Snelgrove Gallery is a teaching facility and a public gallery that is managed through the Department of Art & Art History. It provides a venue for new work by artists and curators both within the department and the wider community.
It has a full-time director and a number of part-time staff. Additionally, the gallery curates The Department of Art and Art History Collection, consisting of select works from graduating students. Art from the collection is displayed throughout the Murray Building, the university library, a number of sites on campus and the gallery website. The gallery is located at 191 Murray Building on the University of Saskatchewan campus. It is open Monday to Friday, 9:30 am to 4:30 pm, and closed on weekends and holidays. The Kenderdine Art Gallery celebrated its official opening on October 25, 1991. Augustus Frederick Lafosse (Gus) Kenderdine began the University Art Camp at Emma Lake, the precursor to the Emma Lake Kenderdine Campus, in 1936. A bequest donated to the University of Saskatchewan by his daughter, Mrs. May Beamish, initiated the formation of the Kenderdine Art Gallery, which has a permanent collection started by Dr. Murray, as well as ongoing exhibits. The Kenderdine collection consists of archival material and 4,000 works, including paintings, sketches, ceramics, porcelain or pottery, glass, and textiles or tapestries, many by 19th- and 20th-century Saskatchewan, Canadian and international artists. The MacAulay Pharmaceutical Collection is located in the Thorvaldson Building, Room 118A. The collection showcases early 20th-century pharmaceutical paraphernalia, as well as early First Nations remedies such as cherry bark syrup and smartweed. The Memorial Gates were constructed in honour of those U of S students who made the ultimate sacrifice. The gates themselves bear the inscription, "These are they who went forth from this University to the Great War and gave their lives that we might live in freedom." The gates originally straddled the main road entrance to the campus via University Drive (later, this became the access road into Royal University Hospital); when a new road access, Hospital Drive, was constructed to the west in the 1990s, the gates were preserved in their original location. The Museum of Antiquities started its collection in 1974, and opened in 1981 at its new location. The museum celebrates notable artistic and sculptural achievements of various civilizations and eras. The Museum of Natural Sciences in the geology building features a two-storey-high, plant-filled atrium demonstrating the evolution of life on earth. It houses a live gallery of animals, including aquariums, as well as extensive geological and paleontological specimens, including a full-size skeletal replica of a Tyrannosaurus rex. The University of Saskatchewan Observatory offers public viewing hours and school tours, as well as an adopt-a-star program. An adopted star can commemorate a special or significant achievement or person, and the award is given via a certificate, an honourable mention in the registry, maps of the star's location and a fact sheet. The Rugby Chapel, built in 1912 (as a gift from the students of Rugby School) and moved from Prince Albert, has been declared a City of Saskatoon Municipal Heritage Property. Rugby Chapel, the precursor to the College of Emmanuel and St. Chad, was first constructed in 1883 in Prince Albert and designated The University of Saskatchewan (Saskatchewan Provisional District of the North West Territories). The St. Thomas More College Art Gallery was first opened in 1964 and hosts artwork of local and regional artists. The Victoria School House, known also as the Little Stone School House, was built in 1888 as the first school house of the Temperance Colony.
The one-room schoolhouse was originally constructed in Nutana. The location is now known as five corners, at the south or top of the Broadway Bridge. As the population grew, the school yard at one time comprised three schoolhouses. The little stone schoolhouse was preserved and moved onto campus. It was declared a historic site on June 1, 1967. School songs The University of Saskatchewan's fight song, "Saskatchewan, Our University", was written by Russell Hopkins in 1939. Hopkins was notable in the university community at the time, and won a Rhodes Scholarship in 1932. The fight song is commonly played at sporting events. Also composed for the university is an alma mater hymn known as "University Hymn". Neil Harris wrote the hymn in 1949. The hymn is performed at convocation events. Residence life Voyageur Place provides 'room and board' residences on the University of Saskatchewan campus and comprises four separate halls. Saskatchewan Hall was the first student residence of the university and was completed in 1912. Originally called University Hall, it was designed to provide residences for 150 students. Saskatchewan Hall was named for the Saskatchewan River. Qu'Appelle Hall was originally known as Student's Residence No. 2 and officially opened in 1916. The design housed 120 students, and in 1963 an addition for 60 additional student residences was completed. The Qu'Appelle Hall Addition is the fourth residence of Voyageur Place and houses male students. Qu'Appelle Hall was named for the Qu'Appelle River. Athabasca Hall provides 270 residences and was completed in 1964. It is now a co-ed hall. Athabasca Hall was named for the Athabasca River. Voyageur Place has historically been organized on the house system, with each house named after an explorer associated with Saskatchewan's early history. Thus, traditionally there were three male houses: Hearne House (named after Samuel Hearne and consisting of the residents of Saskatchewan Hall); Kelsey (named after Henry Kelsey and consisting of the residents of Qu'Appelle Hall); and Lav (named after Pierre Gaultier de Varennes, sieur de La Vérendrye and consisting of the residents of Qu'Appelle Hall Addition). There were also three female houses (all of which were composed of residents of the all-female Athabasca Hall): Pond (named after Peter Pond), Henday (named after Anthony Henday), and Palliser (named after John Palliser). McEown Park is a residence complex south of the university campus. Opening ceremonies for the complex of four high-rises were held on October 2, 1970. McEown Park was named in honour of a university administrator, A.C. McEown. Souris Hall is an apartment complex for married students with families. Souris Hall, named after the Souris River, is a nine-storey town house comprising 67 two-bedroom apartments. Assiniboine Hall is an eleven-storey apartment house which has 23 two-bedroom and 84 one-bedroom apartments available for married or single students without families. Assiniboine Hall was named for the Assiniboine River. Wollaston Hall was added to the McEown Park complex in 1976, providing 21 two-bedroom and 83 one-bedroom apartments. Seager Wheeler Hall provides housing for single students living in small groups in a fourteen-storey residential house. Seager Wheeler Hall was named in honour of Seager Wheeler, a notable Saskatchewan pioneer wheat breeder. This residence was one of the original three complexes built at McEown Park. Graduate House is the university's newest residence, which opened in 2013 in the College Quarter.
Indigenization, Reconciliation and Decolonization In 2017, the University of Saskatchewan appointed Dr. Jacqueline Ottmann as the Vice Provost Indigenous Engagement. The University of Saskatchewan provides services to Indigenous people in more remote communities. The University of Saskatchewan Summer University Transition Course brings first-year Indigenous students to campus before the start of the school year for some campus orientation. Academic counsellors, tutors and elders are present on campus at the University of Saskatchewan to provide academic and social supports. Science outreach Kamskénow program The Science outreach Kamskénow program, runs out of the College of Arts and Science at the University of Saskatchewan. PotashCorp Kamskénow is a science outreach program that provides hands-on learning in Saskatoon classrooms based on each of the Division of Science disciplines at the U of S: biology, chemistry, computer science, geological sciences, mathematics and physics. Rather than a one-time school visit, the program offers students 12 weeks of classroom activities culminating in a trip to on-campus labs in week 13. All sessions are led by U of S graduate and undergraduate students. This program has been chosen as the joint winner of the 2014 Science, Technology, Engineering and Math (STEM) Award for the North America region. Additional funding for PotashCorp Kamskénow comes from NSERC, the Community Initiatives Fund, the College of Arts & Science and U of S Community Engagement and Outreach. Students and alumni Between 1907 and 2007 there have been over 132,200 members of the University of Saskatchewan Alumni Association. The alumni feature those who have successfully graduated from a degree, certificate and/or diploma programme at the University of Saskatchewan. Notable faculty and researchers Ken Coates (1956- ), historian, Canada Research Chair in Regional Innovation, Johnson-Shoyama Graduate School of Public Policy and Director of the International Centre for Northern Governance and Development Sylvia Fedoruk, University Chancellor, Professor in Oncology, Associate Member in Physics, and Lieutenant-Governor of Saskatchewan (1988–1994) Paul Finkelman (1949- ), historian and legal scholar, Ariel F. Sallows Visiting Professor of Human Rights Law, College of Law Herbert V. Günther (1917–2006), Buddhist scholar and philosopher Gerhard Herzberg, Nobel Prize in Chemistry, 1970; offered a position in 1935 to flee Nazi Germany, and remained at the university for ten years J.W. Grant MacEwan, Director of the School of Agriculture, Professor of Animal Husbandry, and Lieutenant-Governor of Alberta (1966–1974) Hilda Neatby (1904–1975), historian Elizabeth Quinlan, sociologist William Sarjeant, geologist and novelist Thorbergur Thorvaldson, chemist and first dean of graduate studies at the university Curt Wittlin (1941– ), philologist and expert in medieval literature Notable alumni Marilyn Atkinson, founder and president of Erickson Coaching International Lorne Babiuk, scientist Michael Byers, political scientist at the University of British Columbia and federal NDP candidate in the Vancouver Centre riding Alastair G. W. Cameron, astrophysicist who studied the origin of the chemical elements and the origin of the moon Kim Coates, actor Jonathan Denis, Alberta MLA and Minister of Housing and Urban Affairs (LLB, 2000) John Diefenbaker, 13th Prime Minister of Canada Diefenbaker was also the university's chancellor. 
After he died, he and his wife were buried at the university, near the Diefenbaker Canada Centre. N. Murray Edwards, business owner, co-owner of the Calgary Flames NHL franchise Edith Fowke, Canadian folklorist Sherine Gabriel, President of Rush University (Chicago) Emmett Matthew Hall (1898–1995), Supreme Court judge and a father of the Canadian system of Medicare Lynda Haverstock, Lieutenant-Governor of Saskatchewan (2000–2006), leader of the Saskatchewan Liberal Party (1989–1995) John Hewson, Australian politician Ray Hnatyshyn, 24th Governor General of Canada Andrew David Irvine, playwright and University of British Columbia professor Fredrick W. Johnson, 16th Lieutenant-Governor of Saskatchewan William McIntyre, former justice of the Supreme Court of Canada; authored the dissent in the landmark abortion case R. v. Morgentaler (1988) Permanand Mohan, senior computer science lecturer at the University of the West Indies, St. Augustine Campus, Trinidad and Tobago; Chief Examiner for the Caribbean Examinations Council's CAPE Examinations in Computer Science Carson Morrison, Engineering Institute of Canada Fellow, Canadian Silver Jubilee Medal, Ontario Engineering Society Order of Honour, Canadian Standards Association Jean-Paul Carriere Award Caia Morstad, volleyball player Hilda Neatby (1904–1975), historian George Porteous, 14th Lieutenant Governor of Saskatchewan Alison Redford, 14th Premier of Alberta Ron Robison, commissioner of the Western Hockey League Roy Romanow, 12th Premier of Saskatchewan Lorna Russell, artist Nicole Sarauer, Saskatchewan MLA and former Leader of the Official Opposition Henry Taube, Nobel Prize in Chemistry 1983 Gordon Thiessen, former Governor of the Bank of Canada Guy Vanderhaeghe (1951– ), novelist, winner of the Governor General's Award, officer of the Order of Canada Brad Wall, 14th Premier of Saskatchewan Peter Makaroff, Doukhobor peace activist Rhodes Scholars In all, 69 graduates of the University of Saskatchewan have gone on to receive the Rhodes Scholarship. These include Wilbur Jackett (1933) and Mark Abley (1975). References Further reading Histories of the university Michael Hayden Seeking a Balance: The University of Saskatchewan, 1907–1982 (Vancouver: University of British Columbia Press, 1982) Michael Hayden. "The Fight that Underhill Missed: Government and Academic Freedom at the University of Saskatchewan, 1919–1920." InAcademic Freedom: Harry Crowe Memorial Lectures 1986, edited by Michiel Horn. North York: York University, 1987. Arthur S. Morton, Saskatchewan: The Making of a University (Toronto: University of Toronto Press) Call Number Peake 347.M.08.0 Shirley Spafford No Ordinary Academics: Economics and Political Science at the University of Saskatchewan, 1910–1960 (Toronto: University of Toronto Press, July 1, 2000) James Sutherland Thomson, Yesteryears at the University of Saskatchewan 1937–1949 (Toronto: University of Toronto Press, 1949) Call Number 347.M.10.0 W.P. Thompson, The University of Saskatchewan: A Personal History'' (Toronto: University of Toronto Press) Call Number Peake 365.2.M.01.0 The National Film Board of Canada documentary "Prairie University" (1955) directed by John Feeney explores diverse research activities at the University of Saskatchewan on agriculture, medicine, and ice cream. External links Educational institutions established in 1907 Universities in Canada Tourist attractions in Saskatoon 1907 establishments in Saskatchewan Universities and colleges in Saskatchewan Saskatchewan
3013613
https://en.wikipedia.org/wiki/QuickPlay
QuickPlay
QuickPlay is a technology pioneered by Hewlett-Packard that allows users to directly play multimedia without booting a computer to the main operating system. QuickPlay software, known as QuickPlay or HP QuickPlay, is custom software developed for HP by CyberLink Corp. A media component of HP Pavilion Entertainment laptops, QuickPlay is a feature of the dv1000 series and above, including the Pavilion HDX series of notebooks. QuickPlay is also a feature of many other HP Compaq notebooks. The technology has been emulated by other computer manufacturers such as Dell, Alienware, and Toshiba in various iterations. QuickPlay software revisions up to version 2.3 have two main components. The first component is a "Direct" function that provides instant access upon boot to music CDs, DVD movies, and MP3s stored on the hard drive. It is launched by the external QuickPlay button found on the notebook or on the included IR remote. QuickPlay "Direct" is made possible by software on a separate partition with a custom operating system installed (Linux for QuickPlay 1.0 and Windows XP Embedded for QuickPlay 2.3). The secondary component of QuickPlay software (all versions) is an application run under Windows with identical functions. Newer versions of the Windows-only component (QuickPlay versions above 2.3) have additional gaming and karaoke functions. QuickPlay software versions 3.0 and newer, included in notebooks shipping with Windows Vista, retain only the Windows-only component, as the "Direct" component is no longer implemented due to unresolved compatibility issues. Instead, users must first boot Windows Vista and log into their user accounts before the Windows-only version of the QuickPlay software can be run. This occurs regardless of whether QuickPlay is launched externally (via a notebook button or IR remote button) when the notebook is off, or when Windows Vista is running. QuickPlay software has been replaced by the HP MediaSmart software on HP desktop and notebook PCs. HP QuickPlay should not be confused with the QuickPlay project hosted by sourceforge.net. External links Official HP QuickPlay support page Backup QuickPlay files & Info at Asifism.com HP software Media players
3384122
https://en.wikipedia.org/wiki/Dennis%20Smith%20%28American%20football%29
Dennis Smith (American football)
Dennis Smith (born February 3, 1959) is a retired American football player. He played professionally as a safety in the National Football League (NFL) for the Denver Broncos from 1981 until 1994. Smith played college football at the University of Southern California (USC). High school career Smith played high school football as a wide receiver and defensive back at Santa Monica High School (Samohi). He was the CIF Southern Section Co-Player of the Year in 1976. Smith was a highly regarded prospect who went on to become a Pro Bowl player. At that time, he modelled his play on Dennis Thurman, who had been an outstanding player at Samohi in 1974 and later starred at USC; Smith eventually followed Thurman to USC, where he ran track as a freshman and played football as a safety. Smith also ran on the track & field team at Santa Monica and broke the school's high jump record (which has since been eclipsed) in 1977. He was coached by Tebb Kusserow, who also coached a number of other NFL players such as Dennis Thurman, Junior Thurman, Mel Kaufman, Pat O'Hara, Glyn Milburn, Sam Anno and Damone Johnson, as well as other notables such as actor Dean Cain. Smith has also been inducted into the Santa Monica High School Hall of Fame for athletes and has had his number "symbolically" retired by the school. College career Smith was a consensus All-America choice as a senior at USC in 1980. He played in two Rose Bowls for the USC Trojans. He played in a secondary which included three future NFL All-Pros: the San Francisco 49ers' Ronnie Lott and the Minnesota Vikings' Joey Browner, in addition to himself. Smith lettered four times in football and three times in track. He posted 205 career tackles and 16 interceptions. He was inducted into the USC Ring of Fame in 2001. Smith played at Southern California (1977–1980) for John Robinson in a star-studded defensive backfield and is a member of the USC team that won the national championship in 1978. Professional career Smith established himself as one of the most feared and hardest-hitting safeties in the NFL (a reputation later held by his protégé, Steve Atwater). Smith was voted to play in six Pro Bowls (following the 1985–1986, 1989–1991 and 1993 seasons) and was named All-AFC (American Football Conference) in 1984 and 1988. He played on three Broncos Super Bowl teams (XXI, XXII, XXIV), and was named All-Pro four times. Smith's career totals include 1,171 tackles, 30 interceptions and 15 sacks. He posted a career-high five interceptions in 1991. He ranks fourth all-time among Denver Broncos players in games played with the franchise, with 184. He is the Broncos' leading tackler of all time with 1,152 recorded tackles. Smith was named the Denver Broncos' most inspirational player in 1992. He was inducted into the Denver Broncos Ring of Fame in 2001 and the Colorado Hall of Fame in 2006. References 1959 births Living people American football safeties Denver Broncos players USC Trojans football players USC Trojans men's track and field athletes American Conference Pro Bowl players Sportspeople from Santa Monica, California Players of American football from Santa Monica, California
56176808
https://en.wikipedia.org/wiki/2017%20Sydney%20Seaplanes%20DHC-2%20crash
2017 Sydney Seaplanes DHC-2 crash
On 31 December 2017 at about 3:15 pm AEDT (UTC+11:00), a de Havilland Canada DHC-2 Beaver configured as a floatplane crashed into Jerusalem Bay off Cowan Creek, on the northern outskirts of Sydney, Australia. The aircraft, operated by Sydney Seaplanes, was carrying five passengers and a pilot, all of whom were killed in the crash. It was returning diners from the Cottage Point Inn restaurant to Rose Bay Water Airport. The ATSB believes it probable that the pilot's performance was adversely affected by carbon monoxide poisoning. Post-mortem tests on the bodies of the victims showed raised levels of CO in the blood, and a crack in the exhaust system was seen as the likely source of the gas. Aircraft The aircraft was a 54-year-old de Havilland Canada DHC-2 Beaver, originally built in 1963 and registered in Australia since February 1964; it was powered by a single Pratt & Whitney R-985 Wasp Junior engine. In 1996, the aircraft was destroyed in a crash while working as a crop duster near Armidale, New South Wales, killing the pilot. The aircraft was then completely rebuilt; the Civil Aviation Safety Authority confirmed that it had been repaired according to industry requirements. Victims The Canadian-Australian pilot and five British tourists – Richard Cousins, 58, CEO of British foodservice company Compass Group, his two sons, his fiancée and her daughter – were killed in the crash. Investigation The Australian Transport Safety Bureau (ATSB) opened an investigation into the accident. Most of the wreckage of the aircraft was raised on 4 January 2018. The intermediate report, published on 31 January 2018, indicated that the aircraft had followed a flight path differing from the usual one, and had failed to climb to the altitude needed to fly over the surrounding terrain. It also indicated that a preliminary examination found the aircraft's flight control surfaces and controls to be in sound order and in their expected positions, and that the engine had sounded normal to witnesses on the ground. The aircraft landed inverted and all six on board suffered fatal injuries. On 28 January 2021, the ATSB released its final report on the accident. It concluded that toxicology results identified higher than normal levels of carboxyhaemoglobin in the blood of the pilot and passengers, and that several pre-existing cracks in the exhaust collector ring very likely released exhaust gas into the engine/accessory bay, which then very likely entered the cabin through holes in the main firewall where three bolts were missing from the magneto access panels. Even though the aircraft was fitted with a disposable CO detector, CO detectors of its kind have been widely deemed unreliable. Furthermore, the operator of the aircraft has implemented changes to the aircraft's system of maintenance, which include: The aircraft being fitted with active electronic CO detectors. The check of the serviceability of the CO detectors has been incorporated into the monthly emergency equipment checklist. Directing its new maintenance provider that the removal and installation of the main firewall access panels must be classified as a critical maintenance operation task, and will require certification by a licensed aircraft maintenance engineer and a conformity inspection. Directing its new maintenance provider that, following maintenance activities on the engine exhaust system or use of the main firewall access panels, the test for the presence of CO must be conducted.
An inspection of the magneto access panels and CO testing has been incorporated into the 100-hourly ‘B’ check inspection. See also 2019 English Channel Piper PA-46 crash - another light aircraft crash where carbon monoxide poisoning was a significant cause References Accidents and incidents involving the de Havilland Canada DHC-2 Beaver
374636
https://en.wikipedia.org/wiki/Adobe%20ColdFusion
Adobe ColdFusion
Adobe ColdFusion is a commercial rapid web-application development computing platform created by J. J. Allaire in 1995. (The programming language used with that platform is also commonly called ColdFusion, though is more accurately known as CFML.) ColdFusion was originally designed to make it easier to connect simple HTML pages to a database. By version 2 (1996), it became a full platform that included an IDE in addition to a full scripting language. Overview One of the distinguishing features of ColdFusion is its associated scripting language, ColdFusion Markup Language (CFML). CFML compares to the scripting components of ASP, JSP, and PHP in purpose and features, but its tag syntax more closely resembles HTML, while its script syntax resembles JavaScript. ColdFusion is often used synonymously with CFML, but there are additional CFML application servers besides ColdFusion, and ColdFusion supports programming languages other than CFML, such as server-side Actionscript and embedded scripts that can be written in a JavaScript-like language known as CFScript. Originally a product of Allaire and released on July 2, 1995, ColdFusion was developed by brothers Joseph J. Allaire and Jeremy Allaire. In 2001, Allaire was acquired by Macromedia, which in turn was acquired by Adobe Systems Inc in 2005. ColdFusion is most often used for data-driven websites or intranets, but can also be used to generate remote services such as REST services, WebSockets, SOAP web services or Flash remoting. It is especially well suited as the server-side technology for client-side Ajax. ColdFusion can also handle asynchronous events such as SMS and instant messaging via its gateway interface, available in ColdFusion MX 7 Enterprise Edition. Main features ColdFusion provides a number of additional features out of the box. Main features include: Simplified database access Client and server cache management Client-side code generation, especially for form widgets and validation Conversion from HTML to PDF Data retrieval from common enterprise systems such as Active Directory, LDAP, SMTP, POP, HTTP, FTP, Microsoft Exchange Server and common data formats such as RSS and Atom File indexing and searching service based on Apache Solr GUI administration Server, application, client, session, and request scopes XML parsing, querying (XPath), validation and transformation (XSLT) Server clustering Task scheduling Graphing and reporting Simplified file manipulation including raster graphics (and CAPTCHA) and zip archives (introduction of video manipulation is planned in a future release) Simplified web service implementation (with automated WSDL generation / transparent SOAP handling for both creating and consuming services - as an example, ASP.NET has no native equivalent for <CFINVOKE WEBSERVICE="http://host/tempconf.cfc?wsdl" METHOD="Celsius2Fahrenheit" TEMP="#tempc#" RETURNVARIABLE="tempf">) Other implementations of CFML offer similar or enhanced functionality, such as running in a .NET environment or image manipulation. The engine was written in C and featured, among other things, a built-in scripting language (CFScript), plugin modules written in Java, and a syntax very similar to HTML. Analogous to an HTML element, a ColdFusion tag begins with the letters "CF" followed by a name indicating the tag's function, e.g. <cfoutput> to begin the output of variables or other content. In addition to CFScript and plugins (as described), CFStudio provided a design platform with a WYSIWYG display.
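As a rough illustration of the tag syntax described above, a minimal CFML page that queries a database and renders the result might look like the following sketch; the datasource "hrDSN" and the table "employees" are hypothetical placeholders rather than names taken from any real application:

<!--- Hypothetical example: run a query against a datasource and output each row --->
<cfquery name="staff" datasource="hrDSN">
    SELECT first_name, last_name FROM employees
</cfquery>
<ul>
    <cfoutput query="staff">
        <li>#first_name# #last_name#</li>
    </cfoutput>
</ul>

Here <cfquery> performs the simplified database access noted in the feature list, and <cfoutput query="..."> loops over the returned rows, with pound signs delimiting the variables to be output.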
In addition to ColdFusion, CFStudio also supports syntax in other languages popular for backend programming, such as Perl. In addition to making backend functionality easily available to the non-programmer, ColdFusion (version 4.0 and forward in particular) integrated easily with the Apache Web Server and with Internet Information Services.
Other features
All versions of ColdFusion prior to 6.0 were written using Microsoft Visual C++. This meant that ColdFusion was largely limited to running on Microsoft Windows, although Allaire did successfully port ColdFusion to Sun Solaris starting with version 3.1. The Allaire company was sold to Macromedia, then Macromedia was sold to Adobe. Earlier versions were not as robust as the versions available from version 4.0 forward. With the release of ColdFusion MX 6.0, the engine had been rewritten in Java and supported its own runtime environment, which was easily replaced through its configuration options with the runtime environment from Sun. Version 6.1 included the ability to code and debug Macromedia Flash.
Versions
Cold Fusion 3
Version 3, released in June 1997, brought custom tags, cfsearch/cfindex/cfcollection based on the Verity search engine, the server scope, and template encoding (then called "encryption"). Version 3.1, released in January 1998, added RDS support as well as a port to the Sun Solaris operating system, while Cold Fusion Studio gained a live page preview and an HTML syntax checker.
ColdFusion 4
Released in November 1998, version 4 is when the name was changed from "Cold Fusion" to "ColdFusion" - possibly to distinguish it from cold fusion theory. The release also added the initial implementation of cfscript, support for locking (cflock), transactions (cftransaction), hierarchical exception handling (cftry/cfcatch), sandbox security, as well as many new tags and functions, including cfstoredproc, cfcache, cfswitch, and more.
ColdFusion 4.5
Version 4.5, released in November 1999, expanded the ability to access external system resources, including COM and CORBA, and added initial support for Java integration (including EJBs, POJOs, servlets, and Java CFXs). It also added the getmetricdata function (to access performance information), additional performance information in page debugging output, enhanced string conversion functions, and optional whitespace removal.
ColdFusion 5
Version 5 was released in June 2001, adding enhanced query support, new reporting and charting features, user-defined functions, and improved admin tools. It was the last release coded natively for a specific platform, and the first release from Macromedia after their acquisition of Allaire Corporation, which had been announced January 16, 2001.
ColdFusion MX 6
Prior to 2000, Edwin Smith, an Allaire architect on JRun and later the Flash Player, initiated a project codenamed "Neo". This project was later revealed as a ColdFusion server rewritten completely in Java. This made portability easier and provided a layer of security on the server, because it ran inside a Java Runtime Environment. In June 2002 Macromedia released the version 6.0 product under a slightly different name, ColdFusion MX, allowing the product to be associated with both the Macromedia brand and its original branding. ColdFusion MX was completely rebuilt from the ground up and was based on the Java EE platform. ColdFusion MX was also designed to integrate well with Macromedia Flash using Flash Remoting. With the release of ColdFusion MX, the CFML language API was released with an OOP interface.
ColdFusion MX 7
With the release of ColdFusion 7.0 on February 7, 2005, the naming convention was amended, rendering the product name "Macromedia ColdFusion MX 7" (the codename for CFMX7 was "Blackstone"). CFMX 7 added Flash-based and XForms-based web forms, and a report builder that could output Adobe PDF as well as FlashPaper, RTF and Excel. The Adobe PDF output is also available as a wrapper to any HTML page, converting that page to a quality printable document. The enterprise edition also added Gateways. These provide interaction with non-HTTP request services such as IM services, SMS, directory watchers, and asynchronous execution. XML support was boosted in this version to include native schema checking.
ColdFusion MX 7.0.1 (codename "Merrimack") added support for Mac OS X, improvements to Flash forms, RTF support for CFReport, the new CFCProxy feature for Java/CFC integration, and more. ColdFusion MX 7.0.2 (codenamed "Mystic") included advanced features for working with Adobe Flex 2 as well as more improvements for the CF Report Builder.
Adobe ColdFusion 8
On July 30, 2007, Adobe Systems released ColdFusion 8, dropping "MX" from its name. During beta testing the codename used was "Scorpio" (the eighth sign of the zodiac and the eighth iteration of ColdFusion as a commercial product). More than 14,000 developers worldwide were active in the beta process - many more testers than the 5,000 Adobe Systems originally expected. The ColdFusion development team consisted of developers based in Newton/Boston, Massachusetts and offshore in Bangalore, India. Some of the new features are the CFPDFFORM tag, which enables integration with Adobe Acrobat forms, some image manipulation functions, Microsoft .NET integration, and the CFPRESENTATION tag, which allows the creation of dynamic presentations using Adobe Acrobat Connect, the web-based collaboration solution formerly known as Macromedia Breeze. In addition, the ColdFusion Administrator for the Enterprise version ships with built-in server monitoring. ColdFusion 8 is available on several operating systems including Linux, Mac OS X and Windows Server 2003. Other additions to ColdFusion 8 are built-in Ajax widgets, file archive manipulation (CFZIP), Microsoft Exchange server integration (CFEXCHANGE), image manipulation including automatic CAPTCHA generation (CFIMAGE), multi-threading, per-application settings, Atom and RSS feeds, reporting enhancements, stronger encryption libraries, array and structure improvements, improved database interaction, extensive performance improvements, PDF manipulation and merging capabilities (CFPDF), interactive debugging, embedded database support with Apache Derby, and a more ECMAScript-compliant CFSCRIPT. For development of ColdFusion applications, several tools are available: primarily Adobe Dreamweaver CS4, Macromedia HomeSite 5.x, CFEclipse, Eclipse and others. "Tag updaters" are available for these applications to update their support for the new ColdFusion 8 features.
Adobe ColdFusion 9
ColdFusion 9 (codenamed "Centaur") was released on October 5, 2009. New features for CF9 include:
Ability to code ColdFusion Components (CFCs) entirely in CFScript.
An explicit "local" scope that does not require local variables to be declared at the top of the function.
Implicit getters/setters for CFCs.
Implicit constructors via a method called "init" or a method with the same name as the CFC.
New CFFinally tag for exception handling and CFContinue tag for control flow.
Object-relational mapping (ORM) database integration through Hibernate (Java).
Server.cfc file with onServerStart and onServerEnd methods.
Tighter integration with Adobe Flex and Adobe AIR.
Integration with key Microsoft products including Word, Excel, SharePoint, Exchange, and PowerPoint.
In-memory management, or virtual file system: an ability to treat content in memory as opposed to using the HDD.
Exposed as Services: an ability to access, securely, functions of the server externally.
Adobe ColdFusion 10
ColdFusion 10 (codenamed "Zeus") was released on May 15, 2012. New or improved features available in all editions (Standard, Enterprise, and Developer) include (but are not limited to):
Security enhancements
Hotfix installer and notification
Improved scheduler (based on a version of Quartz)
Improved web services support (WSDL 2.0, SOAP 1.2)
Support for HTML5 web sockets
Tomcat integration
Support for RESTful web services
Language enhancements (closures, and more)
Search integration with Apache Solr
HTML5 video player and Adobe Flash Player
Flex and Adobe AIR lazy loading
XPath integration
HTML5 enhancements
Additional new or improved features in ColdFusion Enterprise or Developer editions include (but are not limited to):
Dynamic and interactive HTML5 charting
Improved and revamped scheduler (additional features over what is added in CF10 Standard)
Object-relational mapping enhancements
The lists above were obtained from the Adobe web site pages describing "new features", as listed first in the links in the following list. CF10 was originally referred to by the codename Zeus, after first being confirmed as forthcoming by Adobe at Adobe MAX 2010, and during much of its prerelease period. It was also commonly referred to as "ColdFusion next" and "ColdFusion X" in blogs, on Twitter, etc., before Adobe finally confirmed it would be "ColdFusion 10". For much of 2010, ColdFusion Product Manager Adam Lehman toured the US setting up countless meetings with customers, developers, and user groups to formulate a master blueprint for the next feature set. In September 2010, he presented the plans to Adobe, where they were given full support and approval by upper management. The first public beta of ColdFusion 10 was released via Adobe Labs on 17 February 2012.
Adobe ColdFusion 11
ColdFusion 11 (codenamed "Splendor") was released on April 29, 2014. New or improved features available in all editions (Standard, Enterprise, and Developer) include:
End-to-end mobile development
A new lightweight edition (ColdFusion Express)
Language enhancements
WebSocket enhancements
PDF generation enhancements
Security enhancements
Social enhancements
REST enhancements
Charting enhancements
Compression enhancements
ColdFusion 11 also removed many features previously identified simply as "deprecated" or no longer supported in earlier releases. For example, the CFLOG tag long offered date and time attributes which were deprecated (and redundant, as the date and time is always logged). As of CF11, using them causes the CFLOG tag to fail.
Adobe ColdFusion (2016 release)
Adobe ColdFusion (2016 release), codenamed "Raijin" (and also known generically as ColdFusion 2016), was released on February 16, 2016.
New or improved features available in all editions (Standard, Enterprise, and Developer) include:
Language enhancements
Command Line Interface (CLI)
PDF generation enhancements
Security enhancements
External session storage (Redis)
Swagger document generation
NTLM support
API Manager
Adobe ColdFusion (2018 Release)
Adobe ColdFusion (2018 release), known generically as ColdFusion 2018, was released on July 12, 2018. ColdFusion 2018 was codenamed Aether during prerelease. As of July 2020, Adobe had released 10 updates for ColdFusion 2018. New or improved features available in all editions (Standard, Enterprise, and Developer) include:
Language enhancements (including NULL, abstract classes and methods, covariants and finals, closures in tags, and more)
Asynchronous programming, using Futures
Command line REPL
Auto lockdown capability
Distributed cache support (Redis, memcached, JCS)
REST playground capability
Modernized Admin UI
Performance Monitoring Toolset
Adobe ColdFusion (2021 Release)
Adobe ColdFusion (2021 release) was released on November 11, 2020. ColdFusion 2021 was codenamed Project Stratus during prerelease. New or improved features available in all editions (Standard, Enterprise, and Developer) include:
Lightweight installer
ColdFusion Package Manager
Cloud storage services
Messaging services
No-SQL database
Single sign-on
Core language changes
Performance Monitoring Toolset
Development roadmap
In September 2017, Adobe announced a roadmap anticipating releases in 2018 and 2020. Among the key features anticipated for the 2018 release were a new performance monitor, enhancements to asynchronous programming, revamped REST support, and enhancements to the API Manager, as well as support for CF2016 projected into 2024. As for the 2020 release, the features anticipated at that time (in 2017) were configurability (modularity) of CF application services, revamped scripting and object-oriented support, and further enhancements to the API Manager.
Features
PDF generation
ColdFusion can generate PDF documents using standard HTML (i.e. no additional coding is needed to generate documents for print). CFML authors place HTML and CSS within a pair of cfdocument tags (or, new in ColdFusion 11, cfhtmltopdf tags). The generated document can then either be saved to disk or sent to the client's browser. ColdFusion 8 also introduced the cfpdf tag to allow for control over PDF documents, including PDF forms and merging of PDFs. These tags do not use Adobe's PDF engine, however; cfdocument uses a combination of the commercial JPedal Java PDF library and the free and open-source Java library iText, while cfhtmltopdf uses an embedded WebKit implementation.
ColdFusion Components (Objects)
Like PHP versions 3 and below, ColdFusion was not originally an object-oriented programming language. ColdFusion falls into the category of OO languages that do not support multiple inheritance (along with Java, Smalltalk, etc.). With the MX release (6+), ColdFusion introduced basic OO functionality with the component language construct, which resembles classes in OO languages. Each component may contain any number of properties and methods. One component may also extend another (inheritance). Components support only single inheritance. The object-handling feature set and its performance have been enhanced with subsequent releases. With the release of ColdFusion 8, Java-style interfaces became supported. ColdFusion components use the file extension .cfc to differentiate them from ColdFusion templates (.cfm).
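To illustrate the component construct just described, the following is a minimal sketch of a ColdFusion component; the file name Greeter.cfc and its method are hypothetical examples rather than part of any shipping API.

<!--- Greeter.cfc: a component with a single method --->
<cfcomponent displayname="Greeter">
    <cffunction name="sayHello" access="remote" returntype="string">
        <cfargument name="name" type="string" required="true">
        <cfreturn "Hello, " & arguments.name & "!">
    </cffunction>
</cfcomponent>

Another template could instantiate this component with createObject("component", "Greeter") and call sayHello("world"); declaring the method's access as "remote" also exposes it in the ways described in the Remoting section below.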
Remoting
Component methods may be made available as web services with no additional coding or configuration. All that is required is for a method's access to be declared 'remote'. ColdFusion automatically generates a WSDL at the URL for the component in this manner: http://path/to/components/Component.cfc?wsdl. Aside from SOAP, the services are offered in Flash Remoting binary format.
Methods which are declared remote may also be invoked via an HTTP GET or POST request. Consider the GET request as shown.
http://path/to/components/Component.cfc?method=search&query=your+query&mode=strict
This will invoke the component's search function, passing "your query" and "strict" as arguments. This type of invocation is well suited for Ajax-enabled applications. ColdFusion 8 introduced the ability to serialize ColdFusion data structures to JSON for consumption on the client.
The ColdFusion server will automatically generate documentation for a component if you navigate to its URL and insert the appropriate code within the component's declarations. This is an application of component introspection, available to developers of ColdFusion components. Access to a component's documentation requires a password. A developer can view the documentation for all components known to the ColdFusion server by navigating to the ColdFusion URL. This interface resembles the Javadoc HTML documentation for Java classes.
Custom Tags
ColdFusion provides several ways to implement custom markup language tags, i.e. those not included in the core ColdFusion language. These are especially useful for providing a familiar interface for web designers and content authors familiar with HTML but not imperative programming.
The traditional and most common way is using CFML. A standard CFML page can be interpreted as a tag, with the tag name corresponding to the file name prefixed with "cf_". For example, the file IMAP.cfm can be used as the tag "cf_imap". Attributes used within the tag are available in the ATTRIBUTES scope of the tag implementation page. CFML pages are accessible in the same directory as the calling page, via a special directory in the ColdFusion web application, or via a CFIMPORT tag in the calling page. The latter method does not necessarily require the "cf_" prefix for the tag name.
A second way is the development of CFX tags using Java or C++. CFX tags are prefixed with "cfx_", for example "cfx_imap". Tags are added to the ColdFusion runtime environment using the ColdFusion administrator, where JAR or DLL files are registered as custom tags.
Finally, ColdFusion supports JSP tag libraries from the JSP 2.0 language specification. JSP tags are included in CFML pages using the CFIMPORT tag.
Interactions with other programming languages
ColdFusion and Java
The standard ColdFusion installation allows the deployment of ColdFusion as a WAR file or EAR file to standalone application servers, such as Macromedia JRun and IBM WebSphere. ColdFusion can also be deployed to servlet containers such as Apache Tomcat and Mortbay Jetty, but because these platforms do not officially support ColdFusion, they leave many of its features inaccessible. As of ColdFusion 10, Macromedia JRun was replaced by Apache Tomcat. Because ColdFusion is a Java EE application, ColdFusion code can be mixed with Java classes to create a variety of applications and use existing Java libraries.
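As a minimal sketch of this mixing of CFML and Java, the following CFScript fragment instantiates a standard class from the JDK; the variable names are illustrative only.

<cfscript>
    // Create a java.util.ArrayList instance and call its methods from CFML
    items = createObject("java", "java.util.ArrayList").init();
    items.add("ColdFusion");
    items.add("Java");
    writeOutput("Number of items: " & items.size());
</cfscript>

The same createObject("java", ...) mechanism can be used to load classes from third-party JARs placed on the ColdFusion class path.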
ColdFusion has access to all underlying Java classes, supports JSP custom tag libraries, and can access JSP functions after retrieving the JSP page context (GetPageContext()). Prior to ColdFusion 7.0.1, ColdFusion components could only be used by Java or .NET by declaring them as web services. However, beginning in ColdFusion MX 7.0.1, ColdFusion components can be used directly within Java classes using the CFCProxy class. Recently, there has been much interest in Java development using alternate languages such as Jython, Groovy and JRuby. ColdFusion was one of the first scripting platforms to allow this style of Java development.
ColdFusion and .NET
ColdFusion 8 natively supports .NET within the CFML syntax. ColdFusion developers can simply call any .NET assembly without needing to recompile or alter the assemblies in any way. Data types are automatically translated between ColdFusion and .NET (example: .NET DataTable → ColdFusion Query). A unique feature for a Java EE vendor, ColdFusion 8 offers the ability to access .NET assemblies remotely through a proxy (without the use of .NET Remoting). This allows ColdFusion users to leverage .NET without ColdFusion itself having to be installed on a Windows operating system.
Acronyms
The acronym for the ColdFusion Markup Language is CFML. When ColdFusion templates are saved to disk, they are traditionally given the extension .cfm or .cfml. The .cfc extension is used for ColdFusion Components. The original extension was DBM or DBML, which stood for Database Markup Language. When talking about ColdFusion, most users use the acronym CF, and this is used for numerous ColdFusion resources such as user groups (CFUGs) and sites. CFMX is the common abbreviation for ColdFusion versions 6 and 7 (a.k.a. ColdFusion MX).
Alternative server environments
ColdFusion originated as proprietary technology based on Web technology industry standards. However, it is becoming a less closed technology through the availability of competing products. Such alternative products include (in alphabetical order):
BlueDragon - Proprietary .NET-based CFML engine and free open source Java-based CFML engine (Open BlueDragon).
Coral Web Builder
IgniteFusion
OpenBD - The open source version of BlueDragon was released as Open BlueDragon (OpenBD) in December 2008.
Lucee - Free, open source CFML engine forked from Railo. Lucee's aim is to provide the functionality of CFML using fewer resources and giving better performance, and to move CFML past its roots and into a modern and dynamic web programming platform. Lucee is backed by community supporters and members of the Lucee Association.
Railo - Free, open source CFML engine. It comes in three main product editions, and other versions.
SmithProject
The argument can be made that ColdFusion is even less platform-bound than raw Java EE or .NET, simply because ColdFusion will run on top of a .NET app server (New Atlanta), or on top of any servlet container or Java EE application server (JRun, WebSphere, JBoss, Geronimo, Tomcat, Resin Server, Jetty (web server), etc.). In theory, a ColdFusion application could be moved unchanged from a Java EE application server to a .NET application server.
Vulnerabilities
In March 2013, a known issue affecting ColdFusion 8, 9 and 10 left the National Vulnerability Database open to attack. The vulnerability had been identified and a patch released by Adobe for CF9 and CF10 in January. In April 2013, a ColdFusion vulnerability was blamed by Linode for an intrusion into the Linode Manager control panel website.
A security bulletin and hotfix for this had been issued by Adobe a week earlier. In May 2013, Adobe identified another critical vulnerability, reportedly already being exploited in the wild, which targets all recent versions of ColdFusion on any servers where the web-based administrator and API have not been locked down. The vulnerability allows unauthorized users to upload malicious scripts and potentially gain full control over the server. A security bulletin and hotfix for this was issued by Adobe 6 days later. In April 2015, Adobe fixed a cross-site scripting (XSS) vulnerability in Adobe ColdFusion 10 before Update 16, and in ColdFusion 11 before Update 5, that allowed remote attackers to inject arbitrary web script or HTML; however, it is exploitable only by users who have authenticated through the administration panel. In September 2019, Adobe fixed two vulnerabilities: a command injection (CVE-2019-8073) that enabled arbitrary code execution, and a path traversal (CVE-2019-8074).
See also
Adobe ColdFusion Builder - Builder Software
Comparison of programming languages
4GL
References
External links
Adobe software Scripting languages Macromedia software Web development software CFML compilers CFML programming language JVM programming languages
26187906
https://en.wikipedia.org/wiki/Peirous
Peirous
In Greek mythology, Peirous or Peiroos (Ancient Greek: Πείροος) was a Thracian war leader from the city of Aenus and an ally of King Priam during the Trojan War. Peirous was the son of Imbrasus and father of Rhygmus (who fought at Troy alongside his father). Peirous was killed by Thoas, leader of the Aetolians. Namesake 2893 Peiroos, Jovian asteroid named after Peirous See also List of Trojan War characters Notes References Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. People of the Trojan War Characters in Greek mythology
11014498
https://en.wikipedia.org/wiki/DVD
DVD
The DVD (common abbreviation for Digital Video Disc or Digital Versatile Disc) is a digital optical disc data storage format invented and developed in 1995 and released in late 1996. Currently allowing up to 17.08 GB of storage, the medium can store any kind of digital data and was widely used for software and other computer files as well as video programs watched using DVD players. DVDs offer higher storage capacity than compact discs while having the same dimensions. Prerecorded DVDs are mass-produced using molding machines that physically stamp data onto the DVD. Such discs are a form of DVD-ROM because data can only be read and not written or erased. Blank recordable DVD discs (DVD-R and DVD+R) can be recorded once using a DVD recorder and then function as a DVD-ROM. Rewritable DVDs (DVD-RW, DVD+RW, and DVD-RAM) can be recorded and erased many times. DVDs are used in DVD-Video consumer digital video format and in DVD-Audio consumer digital audio format as well as for authoring DVD discs written in a special AVCHD format to hold high definition material (often in conjunction with AVCHD format camcorders). DVDs containing other types of information may be referred to as DVD data discs. Etymology The Oxford English Dictionary comments that, "In 1995, rival manufacturers of the product initially named digital video disc agreed that, in order to emphasize the flexibility of the format for multimedia applications, the preferred abbreviation DVD would be understood to denote digital versatile disc." The OED also states that in 1995, "The companies said the official name of the format will simply be DVD. Toshiba had been using the name 'digital video disc', but that was switched to 'digital versatile disc' after computer companies complained that it left out their applications." "Digital versatile disc" is the explanation provided in a DVD Forum Primer from 2000 and in the DVD Forum's mission statement. History Development There were several formats developed for recording video on optical discs before the DVD. Optical recording technology was invented by David Paul Gregg and James Russell in 1963 and first patented in 1968. A consumer optical disc data format known as the LaserDisc was developed in the United States, and first came to market in Atlanta, Georgia in December 1978. It used much larger discs than the later formats. Due to the high cost of players and discs, consumer adoption of the LaserDisc was very low in both North America and Europe, and was not widely used anywhere outside Japan and the more affluent areas of Southeast Asia, such as Hong Kong, Singapore, Malaysia and Taiwan. Released in 1987, CD Video used analog video encoding on optical discs matching the established standard size of audio CDs. Video CD (VCD) became one of the first formats for distributing digitally encoded films in this format, in 1993. In the same year, two new optical disc storage formats were being developed. One was the Multimedia Compact Disc (MMCD), backed by Philips and Sony (developers of the CD and CD-i), and the other was the Super Density (SD) disc, supported by Toshiba, Time Warner, Matsushita Electric, Hitachi, Mitsubishi Electric, Pioneer, Thomson, and JVC. By the time of the press launches for both formats in January 1995, the MMCD nomenclature had been dropped, and Philips and Sony were referring to their format as Digital Video Disc (DVD). The Super Density logo would later be reused in Secure Digital. 
Representatives from the SD camp asked IBM for advice on the file system to use for their disc, and sought support for their format for storing computer data. Alan E. Bell, a researcher from IBM's Almaden Research Center, got that request, and also learned of the MMCD development project. Wary of being caught in a repeat of the costly videotape format war between VHS and Betamax in the 1980s, he convened a group of computer industry experts, including representatives from Apple, Microsoft, Sun Microsystems, Dell, and many others. This group was referred to as the Technical Working Group, or TWG. On May 3, 1995, an ad hoc group formed from five computer companies (IBM, Apple, Compaq, Hewlett-Packard, and Microsoft) issued a press release stating that they would only accept a single format. The TWG voted to boycott both formats unless the two camps agreed on a single, converged standard. They recruited Lou Gerstner, president of IBM, to pressure the executives of the warring factions. In one significant compromise, the MMCD and SD groups agreed to adopt proposal SD 9, which specified that both layers of the dual-layered disc be read from the same side—instead of proposal SD 10, which would have created a two-sided disc that users would have to turn over. As a result, the DVD specification provided a storage capacity of 4.7 GB (4.38 GiB) for a single-layered, single-sided disc and 8.5 GB (7.92 GiB) for a dual-layered, single-sided disc. The DVD specification ended up similar to Toshiba and Matsushita's Super Density Disc, except for the dual-layer option (MMCD was single-sided and optionally dual-layer, whereas SD was two half-thickness, single-layer discs which were pressed separately and then glued together to form a double-sided disc) and EFMPlus modulation designed by Kees Schouhamer Immink. Philips and Sony decided that it was in their best interests to end the format war, and on September 15, 1995 agreed to unify with companies backing the Super Density Disc to release a single format, with technologies from both. After other compromises between MMCD and SD, the computer companies through TWG won the day, and a single format was agreed upon. The TWG also collaborated with the Optical Storage Technology Association (OSTA) on the use of their implementation of the ISO-13346 file system (known as Universal Disk Format) for use on the new DVDs. The format's details were finalized on December 8, 1995. Shortly after the format's finalization, talks began in mid-December 1995 on how to distribute the format at retail. In November 1995, Samsung announced it would start mass-producing DVDs by September 1996. The format launched on November 1, 1996 in Japan, mostly only with music video releases. The first major releases from Warner Home Video arrived on December 20, 1996, with four titles being available. The format's release in the U.S. was delayed multiple times, from August 1996, to October 1996, November 1996, before finally settling on early 1997. Players began to be produced domestically that winter, with March 24, 1997 as the U.S. launch date of the format proper in seven test markets. However, the launch was planned for the following day (March 25), leading to a distribution change with retailers and studios to prevent similar violations of breaking the street date. The nationwide rollout for the format happened on August 22, 1997. In 2001, blank DVD recordable discs cost the equivalent of US$32.55 in 2020. 
Adoption Movie and home entertainment distributors adopted the DVD format to replace the ubiquitous VHS tape as the primary consumer video distribution format. They embraced DVD as it produced higher quality video and sound, provided superior data lifespan, and could be interactive. Interactivity on LaserDiscs had proven desirable to consumers, especially collectors. When LaserDisc prices dropped from approximately $100 per disc to $20 per disc at retail, this luxury feature became available for mass consumption. Simultaneously, the movie studios decided to change their home entertainment release model from a rental model to a for purchase model, and large numbers of DVDs were sold. At the same time, a demand for interactive design talent and services was created. Movies in the past had uniquely designed title sequences. Suddenly every movie being released required information architecture and interactive design components that matched the film's tone and were at the quality level that Hollywood demanded for its product. DVD as a format had two qualities at the time that were not available in any other interactive medium: enough capacity and speed to provide high quality, full motion video and sound, and low cost delivery mechanism provided by consumer products retailers. Retailers would quickly move to sell their players for under $200, and eventually for under $50 at retail. In addition, the medium itself was small enough and light enough to mail using general first class postage. Almost overnight, this created a new business opportunity and model for business innovators to re-invent the home entertainment distribution model. It also gave companies an inexpensive way to provide business and product information on full motion video through direct mail. Immediately following the formal adoption of a unified standard for DVD, two of the four leading video game console companies (Sega and The 3DO Company) said they already had plans to design a gaming console with DVDs as the source medium. Sony stated at the time that they had no plans to use DVD in their gaming systems, despite being one of the developers of the DVD format and eventually the first company to actually release a DVD-based console. Game consoles such as the PlayStation 2, Xbox, and Xbox 360 use DVDs as their source medium for games and other software. Contemporary games for Windows were also distributed on DVD. Early DVDs were mastered using DLT tape, but using DVD-R DL or +R DL eventually became common. TV DVD combos, combining a standard definition CRT TV or an HD flat panel TV with a DVD mechanism under the CRT or on the back of the flat panel, and VCR/DVD combos were also once available for purchase. Specifications The DVD specifications created and updated by the DVD Forum are published as so-called DVD Books (e.g. DVD-ROM Book, DVD-Audio Book, DVD-Video Book, DVD-R Book, DVD-RW Book, DVD-RAM Book, DVD-AR (Audio Recording) Book, DVD-VR (Video Recording) Book, etc.). DVD discs are made up of two discs; normally one is blank, and the other contains data. Each disc is 0.6 mm thick, and are glued together to form a DVD disc. The gluing process must be done carefully to make the disc as flat as possible to avoid both birefringence and "disc tilt", which is when the disc is not perfectly flat, preventing it from being read. Some specifications for mechanical, physical and optical characteristics of DVD optical discs can be downloaded as freely available standards from the ISO website. 
There are also equivalent European Computer Manufacturers Association (Ecma) standards for some of these specifications, such as Ecma-267 for DVD-ROMs. Also, the DVD+RW Alliance publishes competing recordable DVD specifications such as DVD+R, DVD+R DL, DVD+RW or DVD+RW DL. These DVD formats are also ISO standards. Some DVD specifications (e.g. for DVD-Video) are not publicly available and can be obtained only from the DVD Format/Logo Licensing Corporation (DVD FLLC) for a fee of US$5000. Every subscriber must sign a non-disclosure agreement as certain information on the DVD Books is proprietary and confidential. Additionally the DVD6C patent pool holds patents used by DVD drives and discs. The capacity of DVDs is conventionally stated in gigabytes (GB), with the decimal definition of this term such that 1 GB = 109 bytes. Discs with multiple layers Like other optical disc formats before it, a basic DVD disc—known as DVD-5 in the DVD Books, while called Type A in the ISO standard—contains a single data layer readable from only one side. However, the DVD format also includes specifications for three types of discs with additional recorded layers, expanding disc data capacity beyond the 4.7 GB of DVD-5 while maintaining the same physical disc size. Double-sided discs Borrowing from the LaserDisc format, the DVD standard includes DVD-10 discs (Type B in ISO) with two recorded data layers such that only one layer is accessible from either side of the disc. This doubles the total nominal capacity of a DVD-10 disc to 9.4 GB (8.75 GiB), but each side is locked to 4.7 GB. Like DVD-5 discs, DVD-10 discs are defined as single-layer (SL) discs. Double-sided discs identify the sides as A and B. The disc structure lacks the dummy layer where identifying labels are printed on single-sided discs, so information such as title and side are printed on one or both sides of the non-data clamping zone at the center of the disc. DVD-10 discs fell out of favor because, unlike dual-layer discs, they require users to manually flip them to access the complete content (a relatively egregious scenario for DVD movies) while offering only a negligible benefit in capacity. Additionally, without a non-data side, they proved harder to handle and store. Dual-layer discs Dual-layer discs also employ a second recorded layer, however both are readable from the same side (and unreadable from the other). These DVD-9 discs (Type C in ISO) nearly double the capacity of DVD-5 discs to a nominal 8.5 GB, but fall below the overall capacity of DVD-10 discs due to differences in the physical data structure of the additional recorded layer. However, the advantage of not needing to flip the disc to access the complete recorded data – permitting a nearly contiguous experience for A/V content whose size exceeds the capacity of a single layer – proved a more favorable option for mass-produced DVD movies. DVD hardware accesses the additional layer (layer 1) by refocusing the laser through an otherwise normally-placed, semitransparent first layer (layer 0). This laser refocus—and the subsequent time needed to reacquire laser tracking—can cause a noticeable pause in A/V playback on earlier DVD players, the length of which varies between hardware. A printed message explaining that the layer-transition pause was not a malfunction became standard on DVD keep cases. 
During mastering, a studio could make the transition less obvious by timing it to occur just before a camera angle change or other abrupt shift, an early example being the DVD release of Toy Story. Later in the format's life, larger data buffers and faster optical pickups in DVD players made layer transitions effectively invisible regardless of mastering. Dual-layer DVDs are recorded using Opposite Track Path (OTP). Most dual-layer discs are mastered with layer 0 starting at the inside diameter and proceeding outward—as is the case for most optical media, regardless of layer count—while Layer 1 starts at the absolute outside diameter and proceeds inward. Additionally, data tracks are spiraled such that the disc rotates the same direction to read both layers. DVD video DL discs can be mastered slightly differently: a single media stream can be divided between the layers such that layer 1 starts at the same diameter that layer 0 finishes. This modification reduces the visible layer transition pause because after refocusing, the laser remains in place rather than losing additional time traversing the remaining disc diameter. DVD-9 was the first commercially successful implementation of such technology. Combinations of the above DVD-18 discs (Type D in ISO) effectively combines the DVD-9 and DVD-10 disc types by containing four recorded data layers (allocated as two sets of layers 0 and 1) such that only one layer set is accessible from either side of the disc. These discs provide a total nominal capacity of 17.0 GB, with 8.5 GB per side. This format was initially used for television series released on DVD (like the first releases of Miami Vice and Quantum Leap), but was eventually abandoned in favor of single sided discs for reissues. The DVD Book also permits an additional disc type called DVD-14: a hybrid double-sided disc with one dual-layer side, one single-layer side, and a total nominal capacity of 12.3 GB. DVD-14 has no counterpart in ISO. Both of these additional disc types are extremely rare due to their complicated and expensive manufacturing. The above sections regarding disc types pertain to 12 cm discs. The same disc types exist for 8 cm discs: ISO standards still regard these discs as Types A–D, while the DVD Book assigns them distinct disc types. DVD-14 has no analogous 8 cm type. The comparative data for 8 cm discs is provided further down. DVD recordable and rewritable HP initially developed recordable DVD media from the need to store data for backup and transport. DVD recordables are now also used for consumer audio and video recording. Three formats were developed: DVD-R/RW, DVD+R/RW (plus), and DVD-RAM. DVD-R is available in two formats, General (650 nm) and Authoring (635 nm), where Authoring discs may be recorded with CSS encrypted video content but General discs may not. Although most current DVD writers can write in both the DVD+R/RW and DVD-R/RW formats (usually denoted by "DVD±RW" or the existence of both the DVD Forum logo and the DVD+RW Alliance logo), the "plus" and the "dash" formats use different writing specifications. Most DVD hardware plays both kinds of discs, though older models can have trouble with the "plus" variants. Some early DVD players would cause damage to DVD±R/RW/DL when attempting to read them. The form of the spiral groove that makes up the structure of a recordable DVD encodes unalterable identification data known as Media Identification Code (MID). 
The MID contains data such as the manufacturer and model, byte capacity, allowed data rates (also known as speed), etc. Dual-layer recording Dual-layer recording (occasionally called double-layer recording) allows DVD-R and DVD+R discs to store nearly double the data of a single-layer disc—8.5 and 4.7 gigabyte capacities, respectively. The additional capacity comes at a cost: DVD±DLs have slower write speeds as compared to DVD±R. DVD-R DL was developed for the DVD Forum by Pioneer Corporation; DVD+R DL was developed for the DVD+RW Alliance by Mitsubishi Kagaku Media (MKM) and Philips. Recordable DVD discs supporting dual-layer technology are backward-compatible with some hardware developed before the recordable medium. Many current DVD recorders support dual-layer technology, and while the costs became comparable to single-layer burners over time, blank dual-layer media has remained more expensive than single-layer media. Capacity The basic types of DVD (12 cm diameter, single-sided or homogeneous double-sided) are referred to by a rough approximation of their capacity in gigabytes. In draft versions of the specification, DVD-5 indeed held five gigabytes, but some parameters were changed later on as explained above, so the capacity decreased. Other formats, those with 8 cm diameter and hybrid variants, acquired similar numeric names with even larger deviation. The 12 cm type is a standard DVD, and the 8 cm variety is known as a MiniDVD. These are the same sizes as a standard CD and a mini-CD, respectively. The capacity by surface area (MiB/cm2) varies from 6.92 MiB/cm2 in the DVD-1 to 18.0 MB/cm2 in the DVD-18. Each DVD sector contains 2,418 bytes of data, 2,048 bytes of which are user data. There is a small difference in storage space between + and - (hyphen) formats. All sizes are expressed in their decimal sense (i.e. 1 gigabyte = 1,000,000,000 bytes). DVD drives and players DVD drives are devices that can read DVD discs on a computer. DVD players are a particular type of device that does not require a computer to work, and can read DVD-Video and DVD-Audio discs. Laser and optics All three common optical disc media (compact disc, DVD, and Blu-ray) use light from laser diodes, chosen for its spectral purity and ability to be focused precisely. DVD uses light of 650 nm wavelength (red), as opposed to 780 nm (far-red, commonly called infrared) for CD. This shorter wavelength allows a smaller pit on the media surface compared to CDs (0.74 µm for DVD versus 1.6 µm for CD), accounting in part for DVD's increased storage capacity. In comparison, Blu-ray Disc, the successor to the DVD format, uses a wavelength of 405 nm (violet), and one dual-layer disc has a 50 GB storage capacity. Transfer rates Read and write speeds for the first DVD drives and players were 1,385 kB/s (1,353 KiB/s); this speed is usually called "1×". More recent models, at 18× or 20×, have 18 or 20 times that speed. Note that for CD drives, 1× means 153.6 kB/s (150 KiB/s), about one-ninth as fast. DVDs can spin at much higher speeds than CDs – DVDs can spin at up to 32000 RPM vs 23000 for CDs. However, in practice, discs should never be spun at their highest possible speed, to allow for a safety margin and for slight differences between discs, and to prevent material fatigue from the physical stress.
DVD recordable and rewritable discs can be read and written using either constant angular velocity (CAV), constant linear velocity (CLV), partial constant angular velocity (P-CAV) or zoned constant linear velocity (Z-CLV or ZCLV). Due to the slightly lower data density of dual-layer DVDs (4.25 GB instead of 4.7 GB per layer), the required rotation speed is around 10% faster for the same data rate, which means that the same angular speed rating equals a 10% higher physical angular rotation speed. For that reason, the increase of reading speeds of dual-layer media has stagnated at 12× (constant angular velocity) for half-height optical drives released since around 2005, and slim-type optical drives are only able to record dual-layer media at 6× (constant angular velocity), while such drives still support reading speeds of 8×. Disc quality measurements The quality and data integrity of optical media are measurable, which means that future data losses caused by deteriorating media can be predicted well in advance by measuring the rate of correctable data errors. Errors on DVDs are measured as:
PIE — Parity Inner Error
PIF — Parity Inner Failure
POE — Parity Outer Error
POF — Parity Outer Failure
A higher rate of errors may indicate lower media quality, deteriorating media, scratches and dirt on the surface, and/or a malfunctioning DVD writer. PI errors, PI failures and PO errors are correctable, while a PO failure indicates a CRC error, i.e. the loss of one 2048-byte block (or sector) of data, the result of too many consecutive smaller errors. Additional parameters that can be measured are laser beam focus errors, tracking errors, jitter and beta errors (inconsistencies in the lengths of lands and pits). Support for measuring disc quality varies among optical drive vendors and models. DVD-Video DVD-Video is a standard for distributing video/audio content on DVD media. The format went on sale in Japan on November 1, 1996, in the United States on March 24, 1997 to line up with the 69th Academy Awards that day; in Canada, Central America, and Indonesia later in 1997, and in Europe, Asia, Australia, and Africa in 1998. DVD-Video became the dominant form of home video distribution in Japan when it first went on sale on November 1, 1996, but it shared the market for home video distribution in the United States for several years; it was not until June 15, 2003 that weekly DVD-Video rentals in the United States began outnumbering weekly VHS cassette rentals. DVD-Video is still the dominant form of home video distribution worldwide except in Japan, where it was surpassed by Blu-ray Disc when Blu-ray first went on sale in Japan on March 31, 2006. Security The Content Scramble System (CSS) is a digital rights management (DRM) and encryption system employed on almost all commercially produced DVD-Video discs. CSS utilizes a proprietary 40-bit stream cipher algorithm. The system was introduced around 1996 and was first compromised in 1999. The purpose of CSS is twofold: CSS prevents byte-for-byte copies of an MPEG (digital video) stream from being playable since such copies do not include the keys that are hidden on the lead-in area of the restricted DVD.
CSS provides a reason for manufacturers to make their devices compliant with an industry-controlled standard, since CSS scrambled discs cannot in principle be played on noncompliant devices; anyone wishing to build compliant devices must obtain a license, which contains the requirement that the rest of the DRM system (region codes, Macrovision, and user operation prohibition) be implemented. While most CSS-decrypting software is used to play DVD videos, other pieces of software (such as DVD Decrypter, AnyDVD, DVD43, Smartripper, and DVD Shrink) can copy a DVD to a hard drive and remove Macrovision, CSS encryption, region codes and user operation prohibition. Consumer restrictions The rise of filesharing has prompted many copyright holders to display notices on DVD packaging or displayed on screen when the content is played that warn consumers of the illegality of certain uses of the DVD. It is commonplace to include a 90-second advertisement warning that most forms of copying the contents are illegal. Many DVDs prevent skipping past or fast-forwarding through this warning. Arrangements for renting and lending differ by geography. In the U.S., the right to re-sell, rent, or lend out bought DVDs is protected by the first-sale doctrine under the Copyright Act of 1976. In Europe, rental and lending rights are more limited, under a 1992 European Directive that gives copyright holders broader powers to restrict the commercial renting and public lending of DVD copies of their work. DVD-Audio DVD-Audio is a format for delivering high fidelity audio content on a DVD. It offers many channel configuration options (from mono to 5.1 surround sound) at various sampling frequencies (up to 24-bits/192 kHz versus CDDA's 16-bits/44.1 kHz). Compared with the CD format, the much higher-capacity DVD format enables the inclusion of considerably more music (with respect to total running time and quantity of songs) or far higher audio quality (reflected by higher sampling rates, greater sample resolution and additional channels for spatial sound reproduction). DVD-Audio briefly formed a niche market, probably due to the very sort of format war with rival standard SACD that DVD-Video avoided. Security DVD-Audio discs employ a DRM mechanism, called Content Protection for Prerecorded Media (CPPM), developed by the 4C group (IBM, Intel, Matsushita, and Toshiba). Although CPPM was supposed to be much harder to crack than a DVD-Video CSS, it too was eventually cracked, in 2007, with the release of the dvdcpxm tool. The subsequent release of the libdvdcpxm library (based on dvdcpxm) allowed for the development of open source DVD-Audio players and ripping software. As a result, making 1:1 copies of DVD-Audio discs is now possible with relative ease, much like DVD-Video discs. Successors and decline In 2006, two new formats called HD DVD and Blu-ray Disc were released as the successor to DVD. HD DVD competed unsuccessfully with Blu-ray Disc in the format war of 2006–2008. A dual layer HD DVD can store up to 30 GB and a dual layer Blu-ray disc can hold up to 50 GB. However, unlike previous format changes, e.g., vinyl to Compact Disc or VHS videotape to DVD, there is no immediate indication that production of the standard DVD will gradually wind down, as they still dominate, with around 75% of video sales and approximately one billion DVD player sales worldwide as of April 2011. 
In fact, experts claim that the DVD will remain the dominant medium for at least another five years as Blu-ray technology is still in its introductory phase, write and read speeds being poor and necessary hardware being expensive and not readily available. Consumers initially were also slow to adopt Blu-ray due to the cost. By 2009, 85% of stores were selling Blu-ray Discs. A high-definition television and appropriate connection cables are also required to take advantage of Blu-ray disc. Some analysts suggest that the biggest obstacle to replacing DVD is its installed base; a large majority of consumers are satisfied with DVDs. The DVD succeeded because it offered a compelling alternative to VHS. In addition, the uniform media size lets manufacturers make Blu-ray players (and HD DVD players) backward-compatible, so they can play older DVDs. This stands in contrast to the change from vinyl to CD, and from tape to DVD, which involved a complete change in physical medium. It is still commonplace for studios to issue major releases in "combo pack" format, including both a DVD and a Blu-ray disc (as well as a digital copy). Also, some multi-disc sets use Blu-ray for the main feature, but DVDs for supplementary features (examples of this include the Harry Potter "Ultimate Edition" collections, the 2009 re-release of the 1967 The Prisoner TV series, and a 2007 collection related to Blade Runner). Another reason cited (July 2011) for the slower transition to Blu-ray from DVD is the necessity of, and confusion over, "firmware updates" and needing an internet connection to perform updates. This situation is similar to the changeover from 78 rpm shellac recordings to 45 rpm and 33⅓ rpm vinyl recordings. Because the new and old media were virtually the same (a disc on a turntable, played by a needle), phonograph player manufacturers continued to include the ability to play 78s for decades after the format was discontinued. Manufacturers continue to release standard DVD titles, and the format remains the preferred one for the release of older television programs and films. Shows that were shot and edited entirely on film, such as Star Trek: The Original Series, cannot be released in high definition without being re-scanned from the original film recordings. Certain special effects were also updated to appear better in high-definition. Shows that were made between the early 1980s and the early 2000s were generally shot on film, then transferred to videotape, and then edited natively in either NTSC or PAL, making high-definition transfers impossible as these SD standards were baked into the final cuts of the episodes. Star Trek: The Next Generation is the only such show that has received a Blu-ray release. The process of making high-definition versions of TNG episodes required finding the original film clips, re-scanning them into a computer at high definition, digitally re-editing the episodes from the ground up, and re-rendering new visual effects shots, an extraordinarily labor-intensive ordeal that cost Paramount over $12 million. The project was a financial failure and resulted in Paramount deciding very firmly against giving Deep Space Nine and Voyager the same treatment. However, What We Left Behind included small amounts of remastered Deep Space Nine footage.
With increasing numbers of homes having high speed Internet connections, many people now have the option to either rent or buy video from an online service, and view it by streaming it directly from that service's servers, meaning they no longer need any form of permanent storage media for video at all. By 2017, digital streaming services had overtaken the sales of DVDs and Blu-rays for the first time. Longevity Longevity of a storage medium is measured by how long the data remains readable, assuming compatible devices exist that can read it: that is, how long the disc can be stored until data is lost. Numerous factors affect longevity: composition and quality of the media (recording and substrate layers), humidity and light storage conditions, the quality of the initial recording (which is sometimes a matter of mutual compatibility of media and recorder), etc. According to NIST, "[a] temperature of 64.4 °F (18 °C) and 40% RH [Relative Humidity] would be considered suitable for long-term storage. A lower temperature and RH is recommended for extended-term storage." According to the Optical Storage Technology Association (OSTA), "Manufacturers claim lifespans ranging from 30 to 100 years for DVD, DVD-R and DVD+R discs and up to 30 years for DVD-RW, DVD+RW and DVD-RAM." According to a NIST/LoC research project conducted in 2005–2007 using accelerated life testing, "There were fifteen DVD products tested, including five DVD-R, five DVD+R, two DVD-RW and three DVD+RW types. There were ninety samples tested for each product. [...] Overall, seven of the products tested had estimated life expectancies in ambient conditions of more than 45 years. Four products had estimated life expectancies of 30–45 years in ambient storage conditions. Two products had an estimated life expectancy of 15–30 years and two products had estimated life expectancies of less than 15 years when stored in ambient conditions." The life expectancies for 95% survival estimated in this project by type of product are tabulated below: See also List of computer hardware Book type Comparison of popular optical data-storage systems Digital video recorder Disk-drive performance characteristics DVD authoring DVD ripper DVD region code DVD TV game – Interactive movie Professional disc DVD single M-DISC Notes References Further reading External links Dvddemystified.com: DVD Frequently Asked Questions and Answers Dual Layer Explained – Informational Guide to the Dual Layer Recording Process YouTube "DVD Gallery": 1997 Toshiba DVD demo disc (segment) — an in-store Toshiba demonstration disc with technical information on the "then-new" DVD format. 120 mm discs Audiovisual introductions in 1996 Products introduced in 1996 Audio storage Consumer electronics Digital audio storage Home video . Dutch inventions Information technology in Japan Information technology in the Netherlands Japanese inventions Joint ventures Rotating disc computer storage media Science and technology in Japan Science and technology in the Netherlands . Video storage Digital media Objects with holes
59843257
https://en.wikipedia.org/wiki/Joanap
Joanap
Joanap is a remote access tool that is a type of malware used by the government of North Korea. It is two-stage malware, meaning it is "dropped" by other software (in this case the Brambul worm, which was part of the charges against Park Jin Hyok in 2018). Joanap establishes peer-to-peer communications and is used to manage botnets that can enable other operations. On Windows devices that have been compromised, it allows data exfiltration, dropping and running of secondary payloads, initialization of proxy communications, file management, process management, creation and deletion of directories, and node management. The US government believes HIDDEN COBRA (a US government term for malicious cyber activity conducted by North Korea) has most likely used Joanap, along with other malware like Brambul, since at least 2009. According to the US government, compromised IP addresses have been found in Argentina, Belgium, Brazil, Cambodia, China, Colombia, Egypt, India, Iran, Jordan, Pakistan, Saudi Arabia, Spain, Sri Lanka, Sweden, Taiwan, and Tunisia. References Crime in North Korea Cyberattacks Types of cyberattacks
46378908
https://en.wikipedia.org/wiki/McAfee%20Institute
McAfee Institute
McAfee Institute LLC is a professional certification body which administers several board certifications for the intelligence and investigative sectors. Topics available for training include cryptocurrency investigations, cyber intelligence and investigations, counterintelligence, human trafficking, workplace violence, active shooter, organized retail crime, leadership, incident response, digital forensics, fraud, deception detection and more. Background Founded in 2010, the McAfee Institute is a professional certification and training organization focused on the intelligence and investigative sectors. By 2018, the McAfee Institute had grown to offer several industry-recognized board certifications. The Governing Board of McAfee Institute oversees the certification affairs of the organization, contributes to the global standards of its certifications and helps to institute policy and actions by the membership during meetings or by ballot. The Governing Board consists of experts in federal, local and state law enforcement, intelligence, loss prevention, cybersecurity and private investigation from around the globe. McAfee Institute is headquartered in Chesterfield, Missouri. Accreditation, Government Agencies and Approved CPE Bodies The McAfee Institute has partnered with the Department of Homeland Security's National Initiative for Cybersecurity Careers and Studies and is listed on their site as a provider of professional certifications in this space. The McAfee Institute is also a partner of and recognized by ILEETA, the International Law Enforcement Educators and Trainers Association. McAfee Institute is certified to operate with the Missouri Department of Education as a proprietary school. The central focus of the proprietary school certification program is consumer protection. This is accomplished through the establishment of standards for school operation and monitoring of those operations to ensure students are treated in a fair and equitable manner and receive education and training consistent with the published objectives of the instructional programs and the school. McAfee Institute is a CPE sponsor with the National Registry of CPE Sponsors, a program offered by the National Association of State Boards of Accountancy (NASBA) to recognize CPE program sponsors who provide continuing professional education (CPE) programs in accordance with nationally recognized standards. McAfee Institute programs are approved for POST continuing education credits with the following POST body: Missouri Peace Officer Standards and Training (CLEE). McAfee Institute is recognized by SHRM to offer Professional Development Credits (PDCs) for SHRM-CP® or SHRM-SCP®. Military Programs and Partnerships The McAfee Institute's resident-based (in-person) certification training programs have been approved through the Veterans Administration for G.I. Bill Benefits, effective November 12, 2018. McAfee Institute exams are approved for reimbursement through the VA for eligible individuals. There is no limit to the number of exams that may be taken, or to the number of times the same exam may be taken, and the VA will pay for exams even if they are failed. McAfee Institute certifications are approved for Credentialing Assistance (ARMY COOL, NAVY COOL, DOD COOL, MARINES COOL, COAST GUARD COOL). Faculty The teaching faculty is led by Chris Edin, Director of Training, and organized into three different levels: Independent Certified Instructors, Adjunct Instructors, and Senior Instructors. 
Its Adjunct Instructors include Todd Hixson, Michelle Stewart, Amanda Markham, Todd Soong, Robert McKone II, Michael Orme, James "JJ" Davis, Connie Johnson, Karla Mastracchio, Derek Kingsbury, Raven Hicks, Charles Kwezera, Dr. Michele Giampolo, Jody O'Guinn, Tim Matthews and Daryl May. Certifications The institute provides the following certifications: Certified Expert in Cyber Investigations (CECI) Certified Criminal Profiler (CCP) Certified in Open-source intelligence (C|OSINT) Certified All-Source Intelligence Professional (CASIP) Certified Cryptocurrency Forensic Investigator (CCFI) Certified Digital Currency Investigator (CDCI) Certified Cyber Intelligence Professional (CCIP) Certified Cyber Intelligence Investigator (CCII) Certified Counterintelligence Threat Analyst (CCTA) Certified Organized Retail Crime Investigator (CORCI) Certified Human Trafficking Investigator (CHTI) Certified Social Media Intelligence Expert (CSMIE) Social Media Intelligence Analyst (SMIA) Certified Workplace Violence & Threat Specialist (WVTS) Certified Professional Criminal Investigator (CPCI) McAfee Institute Certified Instructor (MICT) Certified Executive Leader (CEL) Retired Credentials for 2018 Certified Forensic Hi-Tech Investigator (CFHI) | Retired Certified eCommerce Fraud Investigator (CEFI) | Retired Certified Cyber Threat Forensic Investigator (CTFI) | Retired Governing Board The Governing Board of McAfee Institute oversees the certification affairs of the organization, contributes to the global standards of its certifications and helps to institute policy and actions by the membership during meetings or by ballot. The Governing Board consists of some of the world's best in federal, local and state law enforcement, intelligence, loss prevention, cyber security and private investigation from around the globe. Sean Marschke, WVTS Chief of Police Sturtevant Police Dept Sean Marschke has over 25 years of law enforcement experience. He has served in positions at the federal, state and local levels. He is the Chief of Police for the Sturtevant Police Department in Wisconsin. During his law enforcement career, Chief Marschke has been the lead investigator in major case investigations involving homicide, kidnapping, robbery, fraud and computer crimes. He enjoys coaching and helping law enforcement professionals to grow and succeed. Robert Wiley, CECI Detective/Fraud Investigator Texas City Police Department Wiley is a fraud investigator for the Texas City Police Department with 23 years of law enforcement experience. He is a certified law enforcement instructor in Texas, received advanced computer and advanced mobile phone forensics training from the United States Secret Service, is a certified Missing and Exploited Children instructor, and attended Alvin Community College and Kaplan University. John Bryk, CCTA Threat Intelligence Analyst DNG-ISAC John Bryk retired from the U.S. Air Force as a colonel after a 30-year career, last serving as a military diplomat in central and western Europe and later as a civilian with the Defense Intelligence Agency. Bryk holds, among other degrees, an MBA, an M.S. in Cybersecurity, and an M.A. in Business and Organizational Security Management, a combination that gives him a unique outlook on the physical and cyberthreat landscapes. As an intelligence analyst for the private sector, he focuses on the protection of the nation's natural gas critical cyber and physical infrastructure. 
Art Keller, CCIP Central Intelligence Agency (CIA) Clandestine Operative Art Keller served in the US Army during Operation Desert Storm and later spent seven years serving in the Central Intelligence Agency's Directorate of Operations, where he worked on cases to block the proliferation of weapons of mass destruction and on terrorism issues. While at the CIA, he served as a weapons inspector in the Iraq Survey Group, and concluded his time at the CIA as an Acting Chief of Base in the North West Frontier Province of Pakistan. He has appeared on CNN, CBS, PBS's News Hour, The National Geographic Channel, and the BBC. Alan Greggo CPP, CFE Director of Loss Prevention Microsoft Alan F. Greggo has 30 years of retail loss prevention leadership experience and is currently in the Asset Protection Department of Microsoft. He is Chairman of the ASIS Intl. Retail Loss Prevention Council. He has articles published in Security Management, LP Magazine, Optometric Management, and Seguridad En America Magazine. He has served on the editorial review committee of Fraud Magazine. Dr. Fred Newell, CECI Supervisor / Commander Charlotte-Mecklenburg Police Department. Dr. Newell currently serves as a Subject Matter Expert (SME) in the Graduate Criminal Justice/Homeland Security Curriculum within the Helms School of Government. He has also served in law enforcement for almost 20 years and has a passion for developing leaders in the industry. David Lorillo D.A. Investigator San Diego County District Attorney's Office He has spent the majority of the past seven years developing a comprehensive set of skills to deal with various types of fraud and economic crimes investigations. He has extensive experience leading multi-agency, high-value fraud investigations that have led to the recovery of over $500K worth of stolen merchandise and to dozens of convictions. Alfred Ducharme, CECI, CCTA, CSMIE Supervisory Intelligence Research Specialist Drug Enforcement Agency Mr. Ducharme has a passion for law enforcement and security. He has served as a Supervisory Agent with the IRS and the Department of Justice, as well as a Supervisory Intelligence Research Specialist with the DEA. He holds a BS in Physical Sciences (Chemistry/Physics), is a graduate of the U.S. Coast Guard Academy, and holds a micro-degree as a Cyber Investigator/Intelligence Analyst from the McAfee Institute. Colin MacGregor, CTFI Officer Standards Department of National Defence Colin started as a network technician in the Canadian military until he finished his degree at RMCC and accepted his commission as an officer. He spent the last six years working in a unit whose responsibility was international deployments. While doing this he completed his Masters of Engineering in Information Assurance. Currently he works at the School of Communications and Electronics in the role of Officer Standards for the Department of National Defence. Eric Hurlburt, CECI Owner South Dakota Professional Services He has 13 years of law enforcement experience in Southern California and South Dakota; DRE (Drug Recognition Expert) training in Sacramento, California; training in the backgrounds of Hispanic and Aryan street/prison gangs and 1%er motorcycle gangs; formal training in interview, interrogation, and surveillance from Wicklander-Zulawski; certification as a K-9 handler; and six years as a lead investigator and manager in the private sector. Jeremy Tippett, MICT Fraud Investigator/ Insider Threat Analyst Northrop Grumman A professional with over 18 years of experience in the criminal justice field. 
He is an expert in team leadership, management, criminal investigations, interview and interrogation, report writing, and criminal law; a team player skilled at mastering new concepts and producing results under pressure; and a strong communicator who has trained and supervised others. Roderick Bailey, CCIP CEO / Retired L.E. Offender Aid and Rehabilitation (OAR) Mr. Bailey currently serves as its President and CEO. Offender Aid and Rehabilitation (OAR) of North Carolina is a non-profit organization founded to help ex-offenders find the resources they need to re-enter society. He also has over 20 years of leadership experience in law enforcement and retired from the Charlotte-Mecklenburg P.D. Sherri Foster, CCII Litigative Consultant U.S. Attorneys Office A litigative consultant with a history of violent crime investigations and poly-drug investigations, both domestic and foreign. She has experience in financial investigations with expertise in AML and compliance, and is skilled in cyber intelligence analysis, counterintelligence, cyber security, virtual currencies, and law enforcement operations. Joshua Eudy CCII, MICT CTO & I.T Instructor Golden Gate Trailers He helps organizations with their technology investments to accelerate their business, with 10 years of experience spanning IT leadership and cybersecurity roles. George Torres CCII ORC & Corporate Investigations | CVS George Torres is a loss prevention professional with 20 years of L.P. experience. He holds a Bachelor of Science in Business Management as well as certifications from the McAfee Institute, the LP Foundation and the Association of Certified Fraud Examiners. Jamie B Bailey, CCII, CFI Director of Loss Prevention Scrubs & Beyond Jamie Bailey began his journey into the field of retail loss prevention twenty-one years ago. His hands-on experience has given him in-depth knowledge in areas such as internal/external theft, organized retail crime, ecommerce, cyber investigations, inventory management, CCTV applications and design, as well as face-to-face and phone interrogations. Dennis R. Thomas, MBA Corp. Loss Prevention Manager Delaware North Companies He has over 23 years of corporate loss prevention experience as a loss prevention executive, author, trainer and public speaker in the investigation and prevention of internal and external criminal activity against corporations. His areas include executive protection, workplace violence investigations, corporate crisis management and active shooter response. Roberto Alfaro, CECI Loss Prevention Manager Amazon Inc. Experienced in asset protection/loss prevention, supporting profit through shrink analysis. He is a strategic manager skilled in developing and testing new processes to ensure their effectiveness, and holds an MBA and a BBA focused on strategic management and global operations from Davenport University. Jason Foust, CPCI Cyber Security Manager SAIC A cyber security technical expert and problem solver with over 15 years of diverse experience, from Windows driver writing to enterprise network security countermeasures. He has worked as an instructor, a network administrator, a consultant, a developer, an engineer, a mentor, and in several other positions. 
Jerry Biggs President of Asset Protection & Investigation Services He has dedicated over two decades to heading or taking part in countless organized retail crime ("ORC") investigations across the country while working for some of the industry's largest retailers such as Walmart, Lowe's Home Improvement and Walgreens. Biggs has contributed to countless articles written, books published, TV documentaries created, and annual surveys conducted on the topic of ORC. Terrell Elliott Detective Sergeant City of Portland, Texas He has been in law enforcement for more than 30 years, 18 of those years as a criminal investigator. He specializes in forensics and high-tech crime investigation. He also served as the information technology manager for the City of Portland for the last 13 years. He is a master peace officer, fingerprint examiner, crisis intervention negotiator, and firearms instructor. Aaron Rowley Fraud Investigator eBay, Inc. Mr. Rowley has been a fraud investigator with eBay for over 5 years. His experience in solving complex fraud investigations earned him a spot on the McAfee Institute's board of advisors. His insights, partnerships and willingness to help combat fraud with other industry peers are second to none. Sean Leader, CPCI, CCII, CTFI Cyber Security Analyst / Principal SAIC Sean is a technology professional with several years of significant experience in a variety of capacities including incident response, intelligence, network and system security. He has been able to make positive contributions in many industries - financial, health, telecommunications, networking vendors and DoD contractors. Joy Stanton Program Support Specialist Merrimack County Department of Corrections She has over seventeen years in the field of law enforcement, initially working as a dispatcher and auxiliary officer before becoming a reserve police officer and D.A.R.E. officer and working on undercover internet solicitation cases. After relocating to New Hampshire for her husband's work, she worked in the circuit courts before finding a position within the Department of Corrections. Dr. Nancy Grady Chief Data Scientist SAIC Dr. Nancy Grady is an SAIC Fellow and Chief Data Scientist with 35 years of experience specializing in the application of machine learning techniques for data and text automated analysis. She has led a number of SAIC's Internal Research and Development (IR&D) projects to develop big data analytics applications, data science process models, and cyber security data analysis. Mario Worsley Law Office Manager United States Air Force He has executive experience in leadership, regulatory compliance, risk management, and personnel management; practical experience in cyber security and process improvement; and skills in strategic planning, developing and managing staff budgets, and administering to the health and well-being of personnel. Karima Elsayed Investigative Open Source Intelligence Analyst SMIAWARE Karima is a graduate of Mercyhurst University with experience in investigative analysis sought by professionals in law enforcement, business, and national security. She specializes in OSINT, applying analytical methodologies to provide actionable intelligence for decision-makers. Christopher Sullivan Chief of Police Aledo Police Dept Chris Sullivan has a Master of Science degree from Lindenwood University and a Bachelor's degree from Western Illinois University. 
Sullivan is currently the Chief of Police for the City of Aledo, Illinois. He has served as a sworn law enforcement officer for 32 years, has been a police supervisor since 1993, and Chief of Police since 2002. See also Vocational education in the United States References Vocational education in the United States Educational institutions established in 2010 2010 establishments in Missouri
33675095
https://en.wikipedia.org/wiki/Qsort
Qsort
qsort is a C standard library function that implements a polymorphic sorting algorithm for arrays of arbitrary objects according to a user-provided comparison function. It is named after the "quicker sort" algorithm (a quicksort variant due to R. S. Scowen), which was originally used to implement it in the Unix C library, although the C standard does not require it to implement quicksort. Implementations of the qsort function achieve polymorphism, the ability to sort different kinds of data, by taking a function pointer to a three-way comparison function, as well as a parameter that specifies the size of its individual input objects. The C standard requires the comparison function to implement a total order on the items in the input array. History A qsort function was implemented by Lee McMahon in 1972. It was present in Version 3 Unix as a library function, though at that time it was implemented as an assembler subroutine. A C version, with roughly the interface of the standard C version, was present in Version 6 Unix. It was rewritten in 1983 for BSD. The function was standardized in ANSI C (1989). In 1991, Bell Labs employees observed that McMahon's and BSD versions of qsort would consume quadratic time for some simple inputs. Thus Jon Bentley and Douglas McIlroy engineered a new, faster and more robust implementation. Example The following piece of C code shows how to sort a list of integers using qsort.

#include <stdlib.h>

/* Comparison function. Receives two generic (void) pointers to the items under comparison. */
int compare_ints(const void *p, const void *q)
{
    int x = *(const int *)p;
    int y = *(const int *)q;

    /* Avoid return x - y, which can cause undefined behaviour because of signed integer overflow. */
    if (x < y)
        return -1;  /* Return -1 if you want ascending, 1 if you want descending order. */
    else if (x > y)
        return 1;   /* Return 1 if you want ascending, -1 if you want descending order. */

    return 0;
    /* All the logic is often alternatively written: return (x > y) - (x < y); */
}

/* Sort an array of n integers, pointed to by a. */
void sort_ints(int *a, size_t n)
{
    qsort(a, n, sizeof(*a), compare_ints);
}

Extensions Since the comparison function of the original qsort only accepts two pointers, passing in additional parameters (e.g. producing a comparison function that compares by the two values' difference with another value) must be done using global variables. The issue was solved by the BSD and GNU Unix-like systems by introducing a qsort_r function, which allows an additional parameter to be passed to the comparison function. The two versions of qsort_r have different argument orders. C11 Annex K defines a qsort_s essentially identical to GNU's qsort_r. The macOS and FreeBSD libcs also contain qsort_b, a variant that uses blocks, an analogue to closures, as an alternate solution to the same problem. References C standard library Sorting algorithms
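As a supplement to the Extensions section above, the following is a minimal sketch of how the GNU-style qsort_r interface can replace a global variable with an explicit context argument. It assumes a glibc system, where _GNU_SOURCE must be defined before including <stdlib.h> to expose qsort_r; the pivot variable and the compare_by_distance helper are illustrative names introduced here, not part of any standard.

#define _GNU_SOURCE   /* expose the glibc prototype of qsort_r */
#include <stdio.h>
#include <stdlib.h>

/* GNU argument order: the two elements first, the context pointer last.
   Sorts integers by their distance from a pivot supplied through the context. */
static int compare_by_distance(const void *p, const void *q, void *context)
{
    int pivot = *(const int *)context;
    int dp = abs(*(const int *)p - pivot);
    int dq = abs(*(const int *)q - pivot);
    return (dp > dq) - (dp < dq);  /* avoids the overflow that dp - dq could cause */
}

int main(void)
{
    int values[] = {1, 9, 4, 7, 3};
    int pivot = 5;

    /* No global variable is needed: the pivot travels through the last argument. */
    qsort_r(values, sizeof values / sizeof values[0], sizeof values[0],
            compare_by_distance, &pivot);

    for (size_t i = 0; i < sizeof values / sizeof values[0]; i++)
        printf("%d ", values[i]);   /* e.g. "4 7 3 9 1"; ties may appear in either order */
    putchar('\n');
    return 0;
}

On BSD-derived systems the argument order differs (the context pointer precedes the comparison function in the call, and the comparator receives the context as its first argument), so the same idea requires a reordered call there.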
3881570
https://en.wikipedia.org/wiki/MILO%20%28boot%20loader%29
MILO (boot loader)
MILO, or the Alpha Linux Mini Loader, is a firmware replacement for early Alpha AXP hardware that allows the system to boot the Alpha version of the Linux operating system. It is capable of running Linux device drivers and of reading available filesystems, rather than looking for boot blocks. Newer Alpha hardware uses aboot. References External links Official website through the Internet Archive MILO HOWTO at the Linux Documentation Project the AlphaLinux.org homepage Free boot loaders
3406477
https://en.wikipedia.org/wiki/Descent%203
Descent 3
Descent 3 (stylized as Descent³) is a first-person shooter video game developed by Outrage Entertainment and published by Interplay Entertainment. It was originally released for Microsoft Windows in North America on June 17, 1999. Descent 3 is the third game in the Descent video game series and a sequel to Descent II. The game takes place in a science fiction setting of the Solar System where the player is cast as Material Defender, a mercenary who must help an organization known as the Red Acropolis Research Team to stop robots infected by an alien virus. Unlike in standard first-person shooters, the player must control a flying ship that has a six degrees of freedom movement scheme, allowing the player to move and rotate in any 3D direction. In addition to a single-player campaign mode, Descent 3 features an online multiplayer mode where numerous players can compete against each other in eight different game types. The game features both indoor and outdoor environments, made possible with the use of a hybrid engine that combines the capabilities of a portal rendering engine with those of a flight simulator-like terrain engine. Descent 3 received positive reviews from critics, holding a score of 89 out of 100 at review aggregate website Metacritic. The most praised aspects of the game were its graphics, artificial intelligence of enemies, and outdoor environments. An official expansion pack, Descent 3: Mercenary, was released on December 3, 1999. The expansion pack includes a new series of missions, multiplayer maps, and a level editor. After its release on Microsoft Windows, the game was subsequently ported to Mac OS and Linux platforms. Gameplay Like its predecessors Descent and Descent II, Descent 3 is a six degrees of freedom shooter where the player controls a flying ship from a first-person perspective in zero-gravity. It is differentiated from standard first-person shooters in that it allows the player to move and rotate in any 3D direction. Specifically, the player is free to move forward/backward, up/down, left/right, and rotate in three perpendicular axes, often termed pitch, yaw, and roll. Aboard the ship, the player can shoot enemies, turn on the ship's afterburners to temporarily increase its acceleration and speed, and fire flares or turn on the ship's headlight to explore darkened areas. In the game's single-player mode, the player must complete a series of levels where multiple enemies controlled by the game's artificial intelligence will try to hinder the player's progress. The game primarily takes place inside labyrinthine underground facilities, but the player can occasionally travel over the surface of the planets where the facilities are buried to reach other nearby areas. The underground facilities are composed of a set of tunnels and rooms separated by doors. Most of them can be opened by either firing weapons at them or bumping into them, but others require special actions to be performed first before entry is allowed. For instance, some doors require special keys that must be collected. To finish a level and proceed to the next one, the player must complete a set of objectives that range from collecting items to activating switches, defeating enemies, and destroying objects. As the player progresses throughout the game, two additional ships become available for use. Each of the game's three ships offers a different balance of speed, weapons, and maneuverability. Within the levels, the player may collect power-ups that enhance the ship's weaponry. 
Weapons are categorized into three different types: primary weapons, secondary weapons, and countermeasures. Primary weapons range from a variety of lasers and plasma cannons to the napalm cannon, which projects a stream of burning fuel. Secondary weapons include different types of missiles, while countermeasures range from proximity mines to portable turrets. Most primary weapons consume energy at different rates, but some, such as the Napalm Cannon, use their own type of ammunition. In contrast, all secondary weapons and countermeasures require their own ammunition supplies. The player's ship is protected by a shield which decreases when attacked by enemies. If the shield is fully depleted, the player dies and must restart from an earlier point in the level without any collected power-ups. Nevertheless, the player can reclaim the missing power-ups from the ruins of the destroyed ship. Shield, energy, and ammunition supplies are dispersed among the levels to help players increase their resources. The player can also collect equipment items which grant special powers. For example, the Quad Laser modifies the laser weapons to fire four shots at once instead of the standard two, while the Cloaking Device renders the player invisible to enemies for 30 seconds. During the game, the player may also deploy the Guide-Bot, an assistant that keeps track of the next objective and shows the player the way to a specific target. Multiplayer In addition to the single-player campaign mode, Descent 3 features an online multiplayer mode where numerous players can compete against each other in eight different game types. Notable game types include Anarchy, where the objective is to kill as many opponents as possible, Capture the Flag, where two to four teams compete against each other to capture opposing flags, and Monsterball, in which players must shoot and guide a ball into their opponents' goal. Aspects such as time limit, number of players, map to play on, and selection of what weapons are allowed, among others, can be customized to match player preference. The game also features an observer mode which allows players to watch a multiplayer game as a spectator and a co-operative mode that allows players to work together to complete campaign missions. Multiplayer games support the DirectPlay, IPX, and TCP/IP protocols. Online gameplay was also possible over Parallax Online, an online gaming service which kept track of players' statistics and rankings. Plot Descent 3 takes place in a science fiction setting of the Solar System where the player is cast as Material Defender MD1032, a mercenary working for a corporation called the Post Terran Mining Corporation (PTMC). The game begins moments after the events of Descent II, with the Material Defender escaping the destruction of a planetoid where he was clearing PTMC's robots infected by an alien virus. He was about to return to Earth to collect his reward, but a malfunction occurred with the prototype warp drive in the ship he was piloting, making it drift towards the Sun's atmosphere. At the very last moment, the Material Defender is rescued via a tractor beam by an organization known as the Red Acropolis Research Team. 
While the Material Defender recovers in the Red Acropolis station on Mars, the director of the team informs him that they have been investigating PTMC, and have uncovered a conspiracy: one of her acquaintances in the PTMC was killed by a robot, and when she contacted PTMC about it, they denied having ever employed such an acquaintance, even though he had worked with them for years. The Red Acropolis had tried to notify the Collective Earth Defense (CED), a large police group, of the PTMC's actions, but they took no action, not daring to interfere with such a powerful corporation. The director also tells the Material Defender that, while he was clearing the mines during the events of Descent II, PTMC executive Samuel Dravis was actually testing and modifying the virus and deliberately tried to kill him by overloading the warp drive on his ship. After some persuasion and offers from the director, including a new ship and an AI assistant known as the Guide-Bot, the Material Defender agrees to help the Red Acropolis stop the virus. The Material Defender is first sent to Deimos to obtain information about the location of a scientist named Dr. Sweitzer, who has evidence of the PTMC's actions. The scientist is then rescued from the Novak Corporate Prison on Phobos. After recovering the evidence, the Material Defender delivers it to PTMC President Suzuki in Seoul before leaving with his reward. When the Material Defender arrives at the Red Acropolis Research Station, the director tells him that the PTMC president has been killed and that the Red Acropolis Research Team are now accused of being terrorists, resulting in the destruction of the then-abandoned station. After a series of missions, the Material Defender and the Red Acropolis Research Team manage to develop an antivirus and convince the CED that they are not terrorists. The CED suggests broadcasting the antivirus through its strategic platform orbiting Earth, but the attempt is unsuccessful. The Material Defender is then sent to Venus, where Dravis has been tracked by the Red Acropolis. In the ensuing confrontation in his stronghold, Dravis is mortally wounded by the Guide-Bot's flares and the Material Defender deactivates the virus, which disables all of the PTMC's robots. The game ends with the CED destroying the PTMC's orbital headquarters while the Material Defender returns to Earth. Development Descent 3 is the first project developed by Outrage Entertainment. The company was founded when Parallax Software, creators of previous Descent games, decided to split in two: Outrage Entertainment and Volition. Volition would focus on creating the combat space simulator FreeSpace games, while Outrage would continue with the Descent series. Development on Descent 3 began in November 1996 with a team of eight people. According to programmer Jason Leighton, one of the major problems during the game's development cycle was a lack of direction and control. He explained that the team had "no code reviews, no art reviews, [and] no way of saying, 'This is bad and we should be going in a different direction'". This "anarchistic" development environment worked for Descent and Descent II because they were developed by small groups that worked closely together and often in the same room. However, as Outrage started to grow from eight people to almost 20 by the end of the project, the developers did not introduce enough management to control the process. 
As Leighton recalls: "We literally had to build the team and company at the same time we started production on the game". Originally, Descent 3 was intended to support both a software and a hardware renderer, implying that the rendering process of the game could take place either in the CPU or dedicated hardware like a video card, but about six months after starting development, the team decided to go with a hardware-only renderer because it allowed them to create "visually stunning" graphics and maintain a solid frame rate without worrying about the limitations imposed by the software renderer. This was a difficult decision since the team had to scrap many tools and software rendering technology that were already developed. In addition, computers with hardware acceleration were not common at the time the decision was made. As the developers noted: "We knew just by looking at our progress on the game under acceleration that we had a beautiful looking game with all the latest technologies — but would anyone actually be able to play it?" Fortunately, as development progressed, hardware acceleration became more popular with each passing year. The game natively supports the Direct3D, Glide and OpenGL rendering APIs for graphics, and the A3D and DirectSound3D technologies for sound. The new technology also allowed the developers to create both indoor and outdoor environments; one of the biggest complaints of Descent II was the fact that it was considered too "tunnely". To this end, the developers created a new technology which featured an indoor portal rendering engine "hooked to a flight-sim-like terrain" engine, collectively called the Fusion Engine. The portal engine permitted designers to create small rooms with complex geometry. These rooms would later be linked together via shared dividing polygons called portals to create a portalized world for the player to fly through. In contrast, the terrain engine, which was initially planned for another game and whose function is to create more polygonal detail as players get closer to the ground and decrease polygons when they are farther away, gave designers the ability to create expansive outdoor terrains. Transitions between both engines were achieved using an external room (with its normal vectors inverted) that could be placed anywhere on the terrain map. With this technique, developers could create hybrid levels where the player could transit from indoor to outdoor areas in real-time and without loading screens. Leighton commented that whenever one of these transitions occurs, "the game code [switches] collision detection, rendering, and so on, to use the terrain engine". The company had no standardization of level design tools. Leighton said: "Some people used 3D Studio Max, some used Lightwave, and one designer even wrote his own custom modeler from scratch". This practice led to an inconsistent quality across the game's levels. For example, one designer would create structures with great geometry but bad texturing, while another would create the opposite. Once the structures were modeled individually, they were all imported into a custom editor, called D3Edit, so that the designers could "glue everything together". The D3Edit editor received constant updates because it initially did not feature an intuitive interface for designers. It was not until the last third of the development period that the editor improved significantly. 
As Leighton notes: "Even in the shipped game you can tell which levels were made early on and which were made near the end of the production cycle. The later levels are much better looking, have better frame rates, and generally have better scripts". Developers also considered the idea of shipping the game with a level editor based on the one they used to create the game's levels. Due to the constant changes the developers made to their own editor, it was hard for them to design a more user-friendly one. In addition to the changes in the game's engine, the developers decided to improve the artificial intelligence to give each enemy a distinct behavior. According to Matt Toschlog, president of Outrage Entertainment and lead programmer of Descent 3: "It's very rewarding for the player to meet a new enemy, get to know him, learn his quirks, and figure out the best way to kill him. It's great when a game requires both thinking and quick reactions". Originally, the developers planned to add weather effects that would disorient the player's ship during gameplay, but this feature was ultimately not implemented due to time and technology constraints. Multiplayer games were heavily tested to ensure network stability; they support the IPX, TCP, and DirectPlay protocols. The actual development of the game took 31 months to complete, with the developers describing it as both a joyful and painful process due in part to the almost nonexistent management and the rapidly evolving technology at the time. Marketing and release Descent 3 was presented at the Electronic Entertainment Expo in 1998, where developers showed off a demonstration of the game. In the months leading up to the game's release, the game's publisher, Interplay Productions, ran a program that allowed Descent fans to submit a digital photo of themselves along with a pilot name to the company. These photos would later be included in the game so that players could use them as their multiplayer profiles. Outrage also released two game demos that allowed customers to try the game before purchasing it. The second demo included a single-player level and several multiplayer matches which could be played through a matchmaking service provided by Outrage. From March to August 1999, Interplay held a Descent 3 tournament in the United States consisting of three phases where numerous players could compete against each other in multiplayer matches. The winner was awarded a prize of US$50,000. Descent 3 was initially released for Microsoft Windows on June 17, 1999. A level editor was released shortly afterwards, allowing users to create both single and multiplayer maps for the game. A Mac OS version of the game was released in November 1999. The Mac OS version was ported by programmer Duane Johnson, who previously worked on the 3dfx versions of the original Descent and Descent II. Descent 3 was ported to Linux platforms by Loki Entertainment Software after an agreement with the game's publisher. The port, which features a multiplayer mode optimized for 16 players, was released in July 2000. An expansion pack, titled Descent 3: Mercenary, was released for Microsoft Windows on December 3, 1999. The expansion introduces new features, a seven-level campaign, a fourth ship, and several multiplayer maps. It also includes the game's level editor. Although the expansion was praised for adding more replay value to the game, the level design of the new campaign was considered inferior to that of the base game. 
A compilation that includes both Descent 3 and its expansion pack was released on June 14, 2001. In 2014, the game was released on the Steam digital distribution service. Reception Descent 3 received positive reviews from video game critics. The most praised aspects were its graphics, artificial intelligence of enemies, and outdoor environments. Erik Wolpaw of GameSpot felt that the game "improves in almost every conceivable way on its predecessors and reestablishes the series as the premier example of the play style it single-handedly pioneered", while Next Generation praised both its originality and faithfulness to its predecessors. IGN lauded the game's new engine, noting that the transition between indoor and outdoor environments is seamless. Game Revolution remarked that the addition of outdoor environments allows "greater use of the maneuvering capabilities, adds variety to the levels, and ensures that the game never gets dull or boring". The reviewer also acknowledged that the game's six degrees of freedom movement scheme may be difficult to master for some players, stating that the game "can be confusing, dizzying, and even nauseating. This is a game for the pro's". The music and sound effects received similar praise. GameSpot pointed out that "explosions erupt with lots of satisfying, floor-rattling bass, lasers ping nicely, flamethrowers emit appropriate rumbling whooshes, and there's plenty of ambient beeping, hissing, and mechanical humming". Game Revolution praised the graphics for their "modeling, colored lighting, incredible special effects, wonderful animation, [and] sheer overall feel". Victor Lucas of Electric Playground stated similar pros, but also admitted that the game's hardware requirements were relatively high. Criticism was leveled at the game's story. GameSpot considered it not compelling, while Jason Cross, writing for Computer Games Magazine, felt that it "really doesn't have much to do with actual gameplay". PC Gamer reviewer Stephen Poole also criticized the Guide-Bot's efficiency, remarking that sometimes it can get lost or trapped while leading the player to a destination. The gameplay was praised for its variety of weapons and enemies. Game Revolution said that each enemy is "unique both in ability, structure, and behavior so that each requires a specific combat approach". Maximum PC reviewer Josh Norem praised the levels for their interesting objectives, stating that the missions "vary widely, ranging from finding lost colleagues to defending strategic structures against enemy assaults". Computer Games Magazine praised the fact that the developers replaced the wire-frame automaps of previous Descent games with flat-shaded polygons because they "provide more detail and make it easier to recognize where you are and how to get where you want to go". The multiplayer was highlighted positively due to its replay value and variety of game types. Computer Games Magazine also credited its "rock-solid performance on standard dialup modems and easy connectivity", while GameSpot praised it for being "fun and stable". The game was a runner-up for GameSpy's Action Game of the Year and a nominee for GameSpot's PC Action Game of the Year. Sales Despite positive reviews and the commercial success of its predecessors, Descent 3 was a commercial disappointment. According to PC Data, its sales in the United States were under 40,000 units by the end of September 1999, which drew revenues of roughly $1.7 million. 
A writer for PC Accelerator remarked that this figure was "not enough to keep publishers plugging at long, expensive development cycles in the hope of scoring a Half-Life". By the end of 1999, Descent 3's sales had risen to 52,294 units in the United States. Daily Radar's Andrew S. Bub presented Descent 3 with his "System Shock Award" (named after the 1994 game of the same name by Looking Glass Studios), arguing that it was difficult "to find a better game that under-performed sales-wise, this year". Descent 3's sales were similarly low in the German market. It debuted in 27th place on Media Control's computer game sales rankings, and fell to 33rd, 56th and 78th over the following three months, respectively. Interplay blamed its underperformance in the region on stiff competition in the genre. Conversely, PC Player's Udo Hoffman reported a German retailer's view that "the genre is no longer popular", and that demand for a mission pack was at "0.0 percent". Other uses A study published in 2002 used Descent 3 to study hawkmoth flight activities. Using the game's editing module, the researchers created a virtual environment consisting of a flat plane with rectangular pillars, across which the animal successfully navigated. This was one of the first successful attempts at studying insect flight using virtual reality. References External links 1999 video games Cooperative video games Descent (series) First-person shooters Interplay Entertainment games Linux games Video games about robots Classic Mac OS games Fiction set on Mars' moons Video game sequels Video games developed in the United States Video games scored by Jerry Berlongieri Video games set in Seoul Video games with 6 degrees of freedom Video games with expansion packs Windows games Zero-G shooters Multiplayer and single-player video games Loki Entertainment games
9769596
https://en.wikipedia.org/wiki/John%20Riccitiello
John Riccitiello
John Riccitiello is an American business executive who is chief executive officer (CEO) of Unity Technologies. Previously, he served as CEO, chief operating officer and president of Electronic Arts, and co-founded private equity firm Elevation Partners in 2004. Riccitiello has served on several company boards, including those of the Entertainment Software Association, the Entertainment Software Rating Board, the Haas School of Business and the USC School of Cinematic Arts. Early life and education John Riccitiello was born in Erie, Pennsylvania. He earned his Bachelor of Science degree from the University of California, Berkeley's Haas School of Business in 1981. Career Early in his career, Riccitiello worked at Clorox and PepsiCo, and served as managing director of the Häagen-Dazs division of Grand Metropolitan. He was named president and chief executive officer (CEO) of Wilson Sporting Goods, as well as chairman of MacGregor Golf, in late 1993. He then served as president and CEO of Sara Lee Corporation's Sara Lee Bakery Worldwide unit, from March 1996 to September 1997. Riccitiello joined video game company Electronic Arts (EA) in October 1997, initially serving as president and chief operating officer until 2004. He left the company to co-found and serve as partner of Elevation Partners, a private equity firm specializing in entertainment and media businesses, along with Roger McNamee and Bono. Riccitiello returned to EA to serve as CEO from February 2007 to March 2013, when the board of directors accepted his resignation because of the company's financial performance. Following EA, he worked as an advisor to startup companies and became an early investor in Oculus VR. Riccitiello became CEO of Unity Technologies in late 2014, having previously consulted for and joined the technology company's board in November 2013. During his tenure, he has overseen two fundraising rounds, raising $181 million in 2016 and $400 million in 2017. He has also worked to get Unity's game engine into Oculus' software development kit. Riccitiello has led efforts to develop the use of Unity's software tools beyond gaming, in industries such as automotive design, construction, and filmmaking. Board service Riccitiello chaired the Entertainment Software Association and Entertainment Software Rating Board during the early 2010s. He has served on the Haas School of Business' board, as well as the Board of Councilors for the University of Southern California's USC School of Cinematic Arts. Recognition Riccitiello was inducted into the Haas School of Business' Hall of Fame, and was ranked number 39 on Sports Illustrated's 2013 list of the "50 Most Powerful People in Sports". Litigation On 5 June 2019, Anne Evans, formerly vice-president in human resources for Unity Technologies, filed a sexual harassment and wrongful termination lawsuit against the company, alleging that she had been harassed by Riccitiello and another co-worker, and was then terminated over the dispute with the latter. Unity Technologies responded that Evans' allegations were false and that she had been terminated due to misconduct and a lapse in judgment. Personal life Riccitiello has two daughters, and has lived in various cities for work, including the U.S. cities of Birmingham, Alabama, Chicago, New York City, and San Francisco, as well as Düsseldorf, London, Nicosia, and Paris. He has been described as "politically active", and donated to Barack Obama's 2008 presidential campaign. 
Riccitiello delivered a commencement speech at his alma mater in 2011. He enjoys skiing, tennis, and video games. References Further reading 1950s births American chief executives of food industry companies American chief executives of manufacturing companies American chief operating officers American company founders American technology chief executives Electronic Arts employees Haas School of Business alumni Living people People from Erie, Pennsylvania PepsiCo people Video game businesspeople
66205739
https://en.wikipedia.org/wiki/Patrick%20Henry%20%28packet%29
Patrick Henry (packet)
The Patrick Henry (packet) was a three-masted, square-rigged, merchant-class, sailing packet ship that transported mail, newspapers, merchandise and thousands of people from 1839 to 1864, during the Golden Age of Sail, primarily between Liverpool and New York City, as well as produce, grains and clothing to aid in humanitarian efforts during the Great Famine (an Gorta Mór). The ship was named for American Founding Father Patrick Henry. History The Patrick Henry was designed and constructed by the shipbuilding firm Brown & Bell, at the foot of Stanton Street and Houston Street on the East River in New York City. She was registered on November 6, 1839, at 880/905 tons (old/new measurement) and was 159 feet in length, 34 feet 10 inches in beam, and 21 feet 10 inches in depth of hold, with two decks and a draft of eighteen feet. Named for the American attorney, planter, politician, orator and Founding Father best known for his declaration to the Second Virginia Convention, "Give me liberty, or give me death!", the vessel was built to handle forty first-class passengers, "one thousand tons of merchandise" and featured "the full-length figure of the Virginian for her figurehead." She was said to cost $90,000, more than $2.5 million today, though no contract could be located. She sailed in the Blue Swallowtail Line (Fourth Line) of Packets, of Grinnell, Minturn & Co., between New York and Liverpool from 1839 until 1852 when she transferred to the Red Swallowtail Line of Packets between New York and London. Packet ships were named for the "packets" of mail they originally were designed to transport, after which they began carrying freight and passengers and engaged in the packet trade. They are the predecessors of the 20th-century ocean liner and were the first to sail between American and European ports on regular schedules. Used extensively in European coastal mail services since the 17th century, they gradually added cramped passenger accommodation. The first scheduled transatlantic packet company, the Black Ball Line (the "Old Line"), began operating January 1, 1818, offering a monthly service between New York and Liverpool with four ships. In 1821, Byrnes, Grimble & Co. inaugurated the Red Star Line of Liverpool Packets, with the four ships Panther, Hercules, Manhattan and Meteor. In 1822, Messrs Fish, Grinnell & Co. began the Swallowtail Line, known as the "Fourth Line of Packets for New York," their first ships being the Silas Richards, Napoleon, George and York, which soon moved to bi-weekly service. By 1825, vessels were advertised as leaving New York on the 8th and leaving Liverpool on the 24th of every month. Their actual schedules eventually varied, sometimes wildly, due to weather and other conditions. A Yankee captain of an American packet never takes "off his clothes to go to bed, during the whole voyage," according to an early Emigrants' Guide. "The consequence of this great watchfulness is, that, advantage is taken of every puff of wind, while the risk from the squalls and sudden gusts is, in a great measure, obviated." For a "quick and safe passage," American packets were the best. "The Americans sail faster than others, owing to the greater skill and greater vigilance of the captains, and to their great sobriety..." The Patrick Henry was developed as a "make-weight," or competitive response, to the Roscius, a 1,030-ton packet ship built the previous year (1838) by the same shipyard that built the Patrick Henry, Brown & Bell, for the new competitor, the Dramatic Line of Atlantic packets (Collins Line). 
Twelve years later (1851), Moses H. Grinnell, investor in the Patrick Henry and partner in Grinnell, Minturn & Co. purchased, for $90,000, the Flying Cloud, arguably the greatest clipper ship ever built. It sailed from New York to San Francisco (rounding Cape Horn) in 89 days and eight hours (1854), a record that held for 130 years. Performance The Patrick Henry's westward passages averaged 34 days, her shortest passage being 22 days, her longest 46 days. Beginning in 1852 her westbound passages averaged 32 days, her shortest passage being 26 days, her longest 41 days. The vessel's best homeward crossing of 22 days was better than the crossings of either of the grander packets: the Swallowtail's Cornelius Grinnell (built in 1850, 1,100 tons) and the Black Ball Line's Great Western (built in 1851, 1,443 tons). Her longest run in the London-Portsmouth run at 41 days was even better than the Grinnell (48 days) or the New World (built in 1846, 1,400 tons) (42 days), one of the largest Swallowtails. The Patrick Henry was among only four packets of the day—Montezuma, Southampton, St. Andrew, and the prestigious clipper Dreadnought—to make the eastbound passage from New York to Liverpool in fourteen days or less. Only two transatlantic sailing packets showed a better average speed record on the westbound crossing for a period of twenty-five years or more (thirty-three days) and only one equaled her average performance. By 1843 four lines of packets were advertising sailings on eighteen different ships between Liverpool and New York, every month, on the 1st, 7th, 13th, 19th and 25th. The Patrick Henry, Capt. Delano, was the first advertised. Construction The Patrick Henry mirrored her namesake in her radical nature, according to the Mayor of New York Philip Hone, who kept a diary said to be the most extensive and detailed on the first half of the US in the 19th century. "She is the ne plus ultra, or will be, until another ship of her class shall be built," he wrote in October 1839 after touring the "splendid new ship" with Henry Grinnell, one of her owners. For five years, the Patrick Henry was the largest packet ship among New York's eight packet lines. "This fine new ship, which arrived on Monday last, after a smart passage, is now lying at the east side of the Prince's Dock," according to a lengthy review of her construction published in the Liverpool Standard and General Commercial Advertiser after her maiden voyage. "She is of about a thousand tons burden, new measurement; was built for Messrs. Grinnell and Minturn's line of packets, (consigned to Messrs. Wildes, Pickersgill and Co., of this town), by Messrs. Brown and Bell, of New York, and is in every point a first-rate ship. A New York contemporary says, in noticing this vessel—"To speak of new packets is so common an occurrence, that it attracts but little attention. Nautical men, however, are never tired of seeing a new ship. A large number of gentlemen, familiar with the science of ship-building, have visited this extraordinary vessel, and have pronounced her to be, in every respect, one of the finest ships now belonging to this or any other port." Our contemporary will be gratified to learn that many gentlemen, equally versed in naval architecture, on this side the Atlantic, have visited the "Patrick Henry," and express a high opinion, corroborative of that of their Transatlantic brethren. 
As we feel assured—such is the interest taken in whatever belongs to navigation in this maritime country—that not only nautical, but mercantile men, and Englishmen generally, are never weary of hearing of improvement in naval architecture, we shall notice this new packet-ship more particularly. "The "Patrick Henry" is built of the very best materials, including live oak, African oak, elm, &c. She is of a fine model for a merchant vessel, has a handsome figure-head and decorated stern, and looks fine and warlike in the water. "She bears a strong resemblance to the "Roscius," one of the very finest packet-ships on the station; and, like her, has a poop deck (under which is the range of cabins), and a topgallant forecastle, but without a spar deck, the main deck being open, and running (including the cabin sole) fore-and-aft. She is fastened in a superior manner, and is neatly rigged in the usual square style of the large fast vessels, amongst which she takes a conspicuous place. The entrance or smoking cabin, forming a vestibule to the main saloon, is at the extreme stern, lighted by the stern windows; and the floor is a few steps below the quarter-deck, the upper part or roof rising three or four feet above it. In the fore part of this is the wheel, with a window in front; so that the helmsman is completely sheltered from the weather, and has, at the same time, a sufficient view of the sails and the effect of the rudder on the movements of the ship. Two neat staircases, one on each side, lead to the grand cabin or saloon. Saloon and staterooms "The saloon is a splendid apartment, of about fifteen yards in length. ["Saloon" refers to shared common space on sailing ships onto which staterooms or cabins opened, often used for dining and drinking.] The sides are beautifully empanelled in the finest choice wood of "every clime," in nearly the same style as the cabin of the "Roscius," forming an exceedingly rich and effective specimen of cabinet work. "The styles of the doors and intermediate paneling are of satin wood; the centres of the panels of the same, and also (the middle small panels) of the root of the American ash tree—a beautiful feathered white wood. The sunk parts or mouldings of those are of dark rosewood and zebra wood—the latter finely streaked and clouded. In the top panel of each of the state-room doors, which is of satin wood, is an oval Venetian blind, also of satin wood. Between each door and compartment, are convex pilasters—fourteen on each side—of rosewood thrown out on a broad convex ground or back-work of zebra-wood. "These pilasters are of satin-wood, inlaid with ebony, and with a central ornament. The bases of these are in imitation of dark veined marble: and the capitals are also inlaid, and richly gilded. The cornice is of dark wood and mahogany, with gilded mouldings. The contrast formed by these various coloured woods is striking; yet the whole harmonizes, and has a splendid and gorgeous effect. The ladies' cabin, farther forward, adjoins the saloon, the two forming, by the lowering of a large panel between, hung in the manner of a window, one continuous apartment of great length and elegance. It is finished in the same style as the other, and may be entered from the main deck, with which both cabins are flush. "Fronting the main deck is a fine stainedglass window. The state, or sleeping rooms, each containing two berths, are finished in front with polished rose, satin, and zebra wood to correspond. 
They are at once light and airy, each having a large side or port window, and a patent deck light. They are twenty-four in number, in all; and will consequently accommodate forty-eight passengers. The furniture in both cabins is appropriate and neat. The tables are of mahogany. At the upper end of the saloon there is a sofa across the full breadth of the apartment, double the length of ordinary sofas. Over this is fixed a handsome mirror. The subsidiary accommodations are of the first order. The steward's pantry, leading out of the ladies' cabin, and with an entrance from the main deck, is unusually spacious, and is replete with glass, china ware, &c., for the dispensing of "the good things of this life." In fine, the Patrick Henry is a well-built, well-found, and beautiful vessel. She will form a valuable addition to the line ships running between this port and New York, and we wish her every success." Launch Five days before her launch, on November 2, 1839, the New York Morning Herald published an article titled, "Fête On Board The Patrick Henry," that documented a "neat little ... pic-nic, or collation" given by its widely respected captain, "aboard the splendid new ship," and, "attended by all the elite." The reviewer called the vessel, "a perfect bijou," and wrote that "[h]er cabins are superb, her quarter-deck very convenient, and the new arrangement in the poop is admirable. Captain Delano is filling rapidly up with passengers..." She sailed on November 7 and arrived in England seventeen days later, on the 25th. Announced The Morning Post (London): Liverpool, Monday—By the arrival of the splendid new packet-ship Patrick Henry, Captain Delano, which entered the Mersey about one o'clock this day, after an extremely rapid passage of little more than seventeen days, New York papers to the 7th inst., inclusive, have been received. The Liverpool Standard and General Commercial Advertiser reported Captain Delano as "a gentleman whose skill as a seaman, and urbanity as a man, are well known and highly appreciated" and listed its first passengers:"...namely:—Mr. Baker, Mrs. Baker and sister, Boston; Mr. Peers, Mrs. Peers and daughter, Liverpool; Mr. Alfred, London; Mr. Davoren, Jamaica; Mr. Short, (King's Own) and Mrs. Short; Mrs. Wilson, New York; Mr. Carlsheim, Frankfort; Mr. Adie, Edinburgh; Mr. Thompson, Glasgow." Owners The Patrick Henry was purchased and owned by the once preeminent New York Shipping House Grinnell, Minturn & Co., a conglomerate of merchant and sailing magnates with New England Quaker roots. In 1851, she was owned by: Henry Grinnell (3/16), Moses H. Grinnell and Robert B. Minturn (8/16), Capt. Sheldon G. Hubbard (1/16), Capt. Joseph Rogers (2/16), and Capt. Joseph C. Delano (2/16). "During the days of sailing vessels, the house of Grinnell, Minturn & Co. was one of the wealthiest and most extensively engaged in business in the United States, and it is still among the most prominent in its line of business in the city," according to an 1881 issue of the New York Times. Partners Robert B. Minturn, Franklin H. Delano, Moses H. Grinnell and brother Henry Grinnell were among the wealthiest of the merchant-kings of New York in their day, who built one of 19th century America's largest transportation empires with a one-time fleet of more than fifty ships that sailed to every continent. 
In 1845, before the height of the emigrant trade from which the firm was profiting, Minturn was reported to be worth $200,000; today's equivalent (2020) of more than $2.31 billion in relative output. The Grinnells were worth $250,000 each, or near $5.8 billion combined; and partner Delano, who had married into the Astor family, was valued at what today would be roughly $5.76 billion when measuring what is called "relative output." Fellow shipping magnate A.A. Low wrote sarcastically about Moses and Robert in a letter to a sibling: "Our friends, Grinnell, Minturn are heartbroken about the famine. They have a house dinner to celebrate the fortune it is bringing them, and dine on terrapin, salmon, peas, asparagus, strawberries—all out of season, of course—then Mr. Grinnell gives the famine fund $360, which he had lost on a bet with Mr. Wetmore [William S. Wetmore, founder of rival China trade firm Wetmore & Company]." Minturn served as a vice president on the relief committee that eventually sent the Macedonian, June 19, 1847, with supplies to Ireland, and was a Commissioner of Emigration and a founder of the Association for Improving the Condition of the Poor. Minturn reportedly once noted that the $5 million spent on ship fares in 1847 "substantially reduced the cost of carrying freight," and helped the economy by lowering the price of American cotton and grain for English buyers. According to the website An Irish Passenger, An American Family, And Their Time, profit, "rather than humanitarian impulses" drove immigration, "and because government regulatory agencies and private philanthropies were unwilling or unable to exert much control over that business, 19th century emigrants were often literally treated as human freight." In 1844, Minturn offered Irish Catholic priest and temperance reformer Father Mathew free passage in any of their ships to come visit America, which he accepted in 1849, aboard the Ashburton (1842, 1,015 tons), beginning a two-year visit during which he acquired 600,000 followers who took his temperance pledge to treat alcohol abuse, alcohol dependence and alcoholism. Father Mathew befriended Frederick Douglass when Douglass visited Ireland in 1845. The priest wanted to remain singularly focused on helping people stop drinking alcohol and was criticized for not speaking out against slavery and forgoing the abolitionist cause, a complicated issue for the immigrant Irish Catholics who, some historians suggest, were competing with Blacks for jobs in the U.S. at the time. Minturn, with Quaker forebears, was an abolitionist reported to have purchased a number of slaves for the purpose of setting them free. He was a benefactor of the Freedmen's Association and one of the co-founders of Children's Village. He provided evidence before Parliament in 1848 that teetotalism was encouraged by American shipowners as underwriters offered "a return of 10% off the premium on voyages performed without the consumption of spirits." Robert Minturn died suddenly at age 60 in 1866. Fifteen years later, his second-born son, John Wendell Minturn, aged 42 and a principal owner of Grinnell, Minturn & Co. with his older brother, died by suicide at 78 South Street, the company's headquarters. John was born the year the Patrick Henry was first launched. Donated lands of the family estate in Hastings, New York, were instrumental in the development of a 184-acre retreat, children's home and school in the 1890s that no longer exists. 
Robert Bowne Minturn's granddaughters were immortalized in an 1899 miniature oil painting held by the New-York Historical Society. Captain Her primary commander was Captain Joseph Clement Delano (1796–1886), New Bedford, Mass., of the famed Delano family with its many prominent mariners, whalers and shipbuilders whose commercial success advanced the family into the Massachusetts aristocracy, sometimes referred to as one of the Boston Brahmins (the "First Families of Boston"). Joseph was first cousin to Franklin Delano Roosevelt's maternal grandfather Warren Delano Jr., the American merchant who made a large fortune smuggling illegal opium into China. Joseph's first voyage may have been at age fifteen, in January 1812, when he sailed aboard the Arab, a merchantman launched in 1810 from Fairhaven, Mass., and owned by his paternal grandfather Captain Warren Delano, Sr., the great-grandfather of Franklin Delano Roosevelt. The following year, in May, he was aboard the Hampton Roads, and on March 26, 1815, he wrote a letter aboard the ship Virginia. On Christmas Day, 1818, he was aboard the Lagoda, a merchantman built at New Bedford that plied the Baltic and carried Russian iron. In 1826, Delano began as commander of the Columbia (built 1821, 492 tons), in what was then Fish, Grinnell & Co.'s London Red Swallowtail Line. In April 1830, he arrived at New York during the night, with a record westbound passage of only 15 days and 18 hours during which her average speed was 8 1/2 knots, a record that stood for 16 years. Packet ship commanders came to be "regarded as the aristocracy of the seas." In 1831, he hosted the American ornithologist, naturalist, and painter John James Audubon on several voyages during which Audubon and others shot two dozen Petrels. On another venture, Audubon recorded sketches of hundreds of Phalaropes along a bank of sea-weeds and froth, sixty miles off the coast of Nantucket. In 1833, Captain Delano transferred to the Liverpool Blue Swallowtail Line, as master of the Roscoe (1832, 622 tons) and then the Patrick Henry. After three years of command, the captain was reviewed by Theodore Ledyard Cuyler, a leading Presbyterian minister and religious writer, who sailed with him in 1842. "As the stormy Atlantic had not yet been carpeted by six-day steamers, I crossed in a fine new packet-ship, the Patrick Henry... Captain Joseph C. Delano was a gentleman of high intelligence and culture who, after he had abandoned salt water, became an active member of the American Association of Science." In 1847, Captain Delano was consulted by Captain Robert Bennett Forbes regarding the historic and unprecedented U.S. government-sponsored humanitarian voyage of the U.S. Navy's sloop-of-war USS Jamestown, carrying relief supplies from Boston, Mass. to Cork, Ireland, that year. "While preparing for sea, I consulted Captain J. C. Delano, of New Bedford," Forbes wrote. "[Delano] said that on the last days of March we would sail on the very worst day of the year for England, and that if we got to Cork in thirty days we ought to be well satisfied." The ship, leaving on March 28 and arriving on April 12, took just sixteen days. Captain Delano commanded the Patrick Henry between 1839 and 1845 and again between 1847 and 1849. His younger brother, John Allerton Delano, served as his first mate and later commanded the vessel as well. 
Delano the elder retired from sea in 1848 but returned to helm an 1851 voyage of his brother's charge, the American Packet Ship Albert Gallatin (built 1849, 1,435 tons), before becoming partner in a cotton mill and a business importing boghead coal. Though the Patrick Henry served more than ten captains through her quarter century of voyages, Capt. Joseph Delano was "said to have made more money for her tonnage than any other ship in their service." "Between 1842 and 1847, inclusive, twenty-nine new Western Ocean lines were formed," according to Queens of the Western Ocean: The Story of America's Mail and Passenger Sailing Lines. The busiest years for the Patrick Henry, and its firm, were between 1847 and 1851 when 2,769 passenger ships, mostly packets, sailed from Liverpool, carrying 765,159 passengers. Between 1845 and 1855 more than two million people left Ireland, on primarily packet ships but also steamboats and barks—one of the greatest mass exoduses from a single island in history. According to a review of passenger manifests across two decades, the Patrick Henry transported more than 12,000 passengers. Crew An early notable crew member was Peter Ogden, who served as steward on the Patrick Henry. The steward on packet ships was considered a high-ranking service position. He was generally responsible for feeding and dressing the captain, mates and passengers. An early description of the steward refers to him wearing "a brilliant-coloured morning-gown and red slippers" though this is considered to romanticize the role. Oftentimes, the steward was free, Black and multilingual. Ogden was a British Black man and member of the original Grand United Order of Odd Fellows, Victoria Lodge, No. 448, at Liverpool. He travelled frequently between Liverpool and America with Captain Delano and heard of Blacks attempting to join the all-white Odd Fellows at Philadelphia. During a visit to England, Ogden appealed to the Grand United Order, which did not discriminate against skin color and granted the charter March 1, 1843. Ogden returned to New York and established the first African American lodge of Odd Fellows later that same year, the Philomathean Lodge, No. 646, the beginning what would become the largest national Black fraternal organization in America. Ogden had dissuaded the New York group from applying to the U.S. Order thus: "[Ogden] thought it folly, a waste of time, if not self-respect, to stand, hat in hand, at the foot-stool of a class of men who, professing benevolence and fraternity, were most narrow and contracted, a class of men who judged another, not by principle and character, but by the shape of the nose, the curl of the hair, and the hue of the skin. He averred that the dispensation could be secured through his Lodge in Liverpool, and that to be connected with England and the Grand United Order was to obtain Odd Fellowship in its pristine purity." Notable voyages Between 1839 and 1864, the packet ship Patrick Henry made at least 60 (documented) roundtrip crossings, carrying passengers, specie, a wide range of newspapers, magazines and periodicals, personal letters and packages, business mail, transactions and documents, merchandise, freight and cargo, including large shipments of wheat and flour on commission for the Baring Brothers. Early career The Patrick Henry hosted six of the Twelve Apostles of the Latter Day Saint movement including George A. 
Smith and Brigham Young on a voyage from New York to Liverpool, where they began their mission in England as prophesied by Joseph Smith. At sea for 28 days in March 1840, Captain Joseph Delano and crew teased the group, calling them "landlubbers." "During our passage over we had two very heavy gales;" wrote Apostle Heber C. Kimball. "The ship's mate said he had not seen such for fifteen years back: the ship's crew was kind to us." Brigham Young was "so sick that he was confined to his berth nearly the entire voyage". The ailing Young was so thankful to set foot on land that he "gave a loud shout of hosanna." In June 1845, the Patrick Henry, commanded by Captain Joseph Delano, hosted Horatio Potter, the educator and the sixth bishop of the Episcopal Diocese of New York, who was offered free passage on the Patrick Henry by one of its owners, Robert Bowne Minturn, after he spoke to his friend about a "slight irritation about the throat" and a need for rest. Potter wrote in a letter dated May 26, 1845, to the vestry of St. Peter's Church, in Albany, where he was rector: "I feel that a sea voyage, removing me from all labor and care, giving me the benefit of sea air and travel in a foreign land, with time and opportunity to refresh my mind as well as my body, would be a great relief and a great benefit, not only to myself personally, but to my spiritual charge." Vs. the William J. Romer On February 9, 1846, hundreds of people gathered at the Battery overlooking the East River to watch the beginning of a notable, singular regatta from New York to Liverpool between what was considered one of the fastest pilot boats, the fifty-ton schooner William J. Romer, and the Patrick Henry, captained by Joseph's younger brother and former first mate, John Allerton Delano (1809–1893). Rumors circulated about the "mysterious" mission of the Romer. Was she to bring Queen Victoria? Did she carry news of the Oregon negotiation? Another was that she took six "heavy brass cannon" and was going "pirating or privateering." "As 12 o'clock began to draw nigh, the hands on board began to clear up the deck," one newspaper reporter wrote. "[A]nd a few minutes after 12, she rounded the pier and shot out into the river as neat as a courser. As she left the wharf, the assembled crowd sent out nine hearty cheers, which were returned from the boat in the same manner, and by the firing of a gun...About 4 o'clock, the packet ship Patrick Henry came down the river, and as she was passing out by Governor's Island, the pilot boat took a sudden start and shot across the bows of the packet, and soon left her far behind. The Romer carried a member of the Associated Press and a British attache and indeed brought communications from then Secretary of State James Buchanan, attracting much press, but it hit heavy gales, with thunder and lightning, and went to port at Cork. The Patrick Henry won the race. Fare and conditions The price in today's currency for what would have been a single, first-class (cabin or stateroom) passage on the Patrick Henry in January 1846 is $2,760; the price for one adult steerage ticket is $600. Steerage refers to the cargo hold. The gross receipts from passengers (at full capacity) on a single crossing of the PH across the Atlantic, from Liverpool to New York, would amount to as much as $225,000 today. In April 1846, with 383 primarily Irish passengers aboard the Patrick Henry, one steerage passenger wrote, "Of all the ships I have ever seen, this beat them all for disorder. 
There was neither rule, order, nor any kind of cleanliness observed." The writer suggested government inspectors shirked their duty because of "patronage" in the "chain of the aristocracy." "The way in which passengers are stowed away in these ships is shameful. Sometimes the Irish steamers don't bring their quota of passengers till the ship is on the point of sailing; then they are all huddled together, old and young, male and female, in the same berth; often four in one berth. For instance, in this ship, there were in one berth, three young girls going to their friends, about 16, 17, and 18 years of age, and an old man, 64 years, and the whole four entire strangers to each other till they met in the ship. There were a great many more cases equally bad. Several of the passengers expostulated with the captain on this strange usage, but the only answer he would give was, that he had nothing to do with it, as they had paid their money to agents." The writer suggested that two deaths occurred (which the manifest bears out), including five-year-old Thomas Healy, due to neglect from the crew and the captain. The writer accuses the crew of robbing passengers "of money, spirits, tea, coffee, and sugar," breaking into lockers and stealing, and going unpunished, as well as "prowling about the ship to find some simple females who will hearken to them." Joseph's brother John Delano was captain on the voyage. "The captain of this ship was a most inhuman man," the passenger wrote. "He did not seem to think the life of a passenger worthy of notice, particularly an Irish one ; although, as his name would indicate, of Irish extraction himself. I am told that his real name was Delaney." Black '47 In arguably the worst year of the Great Famine, on May 6 and again on September 7, 1847, the Patrick Henry, under Captain Joseph Delano, left the Burling Slip on South Street in New York and transported relief from societies in Brooklyn (primarily May 6 run), Albany, Rochester, New York, the State of Ohio, and Burlington, New Jersey, that included clothing, Indian corn, cornmeal, rye, wheat, peas, beans, flour, meal, barley, buckwheat, bread and pork, to Liverpool, valued at the time at $4,636.22 (May: $1,166.07; Sept: $3,470.15), today worth about $150,000,to be distributed by the Committee of Society of Friends in Dublin to the people of Ireland. The Patrick Henry under Captain Delano may have made her first humanitarian voyage in February 1846, when she was reported to carry "a very large cargo, principally of breadstuffs" to Cork, Ireland. The New York Herald reported that the PH arrived at Liverpool in May with the "largest cargo of biscuits yet imported." The HM Treasury, at the time, allowed "biscuits" to be imported duty-free, excepting "fancy biscuit" or "confectionery," but only until September 1." The September shipment is listed as containing the following: 2,143 bushels of corn; 25 bushels of rye; 290 barrels of meal; 96 barrels of flour; 34 barrels of meal; 5 boxes barley; 7 barrels of wheat; 51 barrels of rye flour; 3 barrels of beans; 1 barrel of peas; 14 packages of clothing; 192 barrels of corn; 2 barrels of pork; 8 barrels of sundries; and four packages of clothing. 
On July 27, 1847, the Patrick Henry commanded by Captain Joseph Delano arrived in New York with eighteen cabin passengers and 300 steerage passengers (no documented deaths) that included Michael Carolan (1844–1906), his father Thomas (1806–1870) and mother Elizabeth (1822–1875), and sisters Elizabeth, Catherine and Annie (infant; died shortly after arrival). A 2021 essay on New England Public Media commemorated the crossing. The Carolans left their ancestral home in Light Town, in the Drumbaragh Townland, on the border with the Balrath Demesne Townland, near the Springville/Dandlestown Townland, Civil Parish of Burry, three miles southwest of Kells. The population in Drumbaragh during an Gorta Mór plummeted 67 percent; in Springville, 54 percent, where there were fifty houses in 1841 and only eleven left in 1871. The family settled in Willow Grove, outside of Philadelphia, where seven more children were born including Thomas Spencer Carolan (1852–1915). According to IrishCentral, a namesake and descendant from the U.S. returned to the ancestral home in 2020. Owner passage The following May (1848), Captain Joseph Delano hosted shipowner Robert Minturn, his wife, sister-in-law, six children and servants on a voyage to England. "[The Patrick Henry] was one of the vessels which had so often before carried invalids, or tired clergymen, or young men broken down by study, sent by Mr. Minturn to recruit their strength by a voyage," a family member wrote in an hagiography published privately after his death. "He had so frequently done these kindnesses, that the application for them at last became incessant. Sometimes it was for an individual, sometimes for a family of foreigners, who had come to America in search of what they did not find — a living — and were most thankful to be sent back to their homes across the Atlantic." The Minturns took an eighteen-month tour of England, France, Italy, Switzerland, Germany, Jerusalem and Egypt that was said to have inspired plans that led to the creation of New York's Central Park. In England, Minturn met the poet William Wordsworth and Lord Palmerston, who is remembered, among other endeavors, for evicting 2,000 tenants on his County Sligo estate and financing the cheapest passages possible on coffin ships to Canada on which many died or became sick and died later. Minturn went also to Scotland, from where he took the shortest route to Portrush in Northern Ireland, the country whose refugees of an Gorta Mór had made him, in today's currency, a billionaire. He visited the Giant's Causeway, which "excited his imagination." He promptly went on to France. That fall, Captain John Delano (brother of Joseph) sailed the Patrick Henry from Liverpool to New York with twenty expelled Catholic priests from Rome, including Pietro Angelo Secchi (1818–1878), the Italian Jesuit priest and astrophysicist who made the first survey of the spectra of stars and suggested that stars be classified according to their spectral type. At quarantine On August 8, 1849, the Patrick Henry landed at New York and reported seven cabin passengers and 278 steerage passengers. The New York papers did not mention that seven people died on the 46-day passage nor that fever had broken out nor that the ship may have quarantined at the New York Marine Hospital on Staten Island, site of the later Staten Island Quarantine War. 
Minister and publisher Joseph Barker and a reporter, aboard the ship Hartford, which landed in New York after a 53-day passage, reported that 17 passengers aboard the Patrick Henry had died on the journey and the ship "had, in consequence, to remain some time in quarantine." He wrote that many of the passengers aboard the Hartford, including himself, had "regretted" not sailing on the Patrick Henry. "It is not unlikely that several of those who regretted so much that they had not gone by the Patrick Henry, would have fallen victims to the disease that prevailed on board, if they had gone by her. How true it is, as I have said, that men often covet things which, if they obtained them, would prove their death, and repine at things which, if they understood them properly, they would see to be their life and salvation." The dead reported on the ship manifest include Mr. & Mrs. Hackett, age 30, whose children Mary, 6, Glen, 4, Patrick, 3, and Mick, an infant, survived; Ann Corcoran, wife of Peter, mother of Francis, both drapers; Elizabeth Peet, age 50; Corus McGillicuddy, age 9, son of Corus and Elizabeth; Francis Orme, 23, carpenter; and James Kelly, 35, no occupation. It was Capt. Joseph Delano's final voyage on the Patrick Henry. Rough crossings By 1851, J.E. Mulland was listed as captain and on November 3, he arrived in New York (from London) with 374 passengers, 14 first-class and 360 steerage, with seven dead. It was his last voyage commanding the PH. In early 1854, the Patrick Henry's Seaman Matthew Barnabb and Seaman Louis Barroch were drowned on January 18, during an unseasonably harsh Atlantic winter that had begun the previous fall. The ship, on its London to New York run, was hove to and "struck by a sea which carried away the bowsprit and the knight heads and all the rigging attached." Barnabb was swept off the ship, and a few hours later, Barroch was clearing away the wrecked bowsprit when he fell overboard and drowned. A third crewmember, William Wallace, fell from the fore yard and was injured severely. "It was blowing a gale at the time," reported Captain John Hurlbut, who brought her to port February 4, after a 40-day passage. "And impossible to save them." That same month, January, the famous British steamer SS City of Glasgow, headed from Liverpool to Philadelphia, disappeared at sea with 480 passengers and crew. The packet-ship Roscius was 51 days making the crossing; the Mary Annah 88 days, and the Celestial Empire took 60 days, with the loss of a seaman and ten passengers. A few months later (April), the American packet Powhattan ran aground off Harvey Cedars on Long Beach Island, New Jersey—today remembered as one of the State's shipwrecks with the greatest loss of life—between 200 and 365. The losses of the season were kicked off the previous September (1853), when the notable American steamer SS Arctic went down after a wreck in the fog, taking 315 lives. Then in December, the American clipper Staffordshire was on the return leg of a stormy transatlantic crossing and ran aground and sank off Nova Scotia, taking 170 of her 214 passengers and crew with her. On another run of the Patrick Henry later that year (October), Capt. Hurlbut disembarked at New York after possibly transporting the most passengers the packet-ship ever saw on a single voyage: 403. He was in violation of the immigration law of one passenger per three tons of weight. Eleven passengers died at sea. 
Commerce and journalism Receiving information as quickly as possible—whether regarding particulars about trade, foreign markets, decision-making, professional partnerships, business documents, legal contracts, personal letters and political, government and military news—was of urgent importance to 19th-century commerce. Industry and business made special arrangements to beat their competitors, so that sailing ships, especially packet ships involved in the packet trade, emerged as the central information superhighway of the era and a driving force in the development of journalism as well. For instance, in late January 1840, the Patrick Henry arrived ahead of schedule and beat the competition to deliver the news from the continent for eager American readers. The Morning Herald (New York), February 1, on the front page, reported: "The foreign news given today is highly important. Yesterday afternoon, about half past three, we received it at this office being a full hour before any of the Wall street papers had their's—and by five oclock we issued an Extra, to gratify the immense crowd that surrounded our office. One of our clippers left town at 10 o'clock, and boarded the PH outside the bar at about one o'clock." The news was advertised as "Ten Days Later From England—Highly Important" and included articles about war preparations by Russia, Queen Victoria's marriage that month, meeting of Parliament and the French Chamber, and the French King's speech. "By the arrival of the Patrick Henry, Captain Delano, we have received immense files of English papers and periodicals, due to the 25th London, 26th from Liverpool and 23rd from Paris...Neither the Cambridge nor the Independence had arrived out on the 26th of Dec. The Patrick Henry had a fine run of nine days to the long(itude) of 38, where she took, on the 4th inst, strong westerly gales, which prevailed since that time without change." Improvements in the speed of that communication were crucial for many commercial, financial and shipping business activities—speedier information made capital move faster, directly affecting world trade. In 1840, the Patrick Henry was among twenty sailing packet ships on the New York-Liverpool run, and notably among the speediest. The short round trips, however, did not depend on speed, but rather on changes in the schedule. Efficiency may have been improved by tightening schedules, but this may have exacerbated delays and errors of judgment. For westbound sailings, there was a high risk of disaster. Nearly one packet in six was totally lost in service; out of some 6,000 crossings, about 22 ended in such wrecks. More than 600 British ships, of all types, were lost each year in the periods 1833–1835 and 1841–1842. The loss of lives varied between 1,450 and 1,560. By the time of the maiden voyage of the Patrick Henry, in 1839, packet captains had begun taking more risks against their competitors as steamships were coming into service. Indeed, most shipwrecks took place during the period when the competition between sail and steam was hardest. From a mail, business and journalism transmission point of view, the trend was most alarming. "Between 1838 and 1847 no less than 21 mail-carrying ships were lost on the North Atlantic route – two each year on average. Two of the ships were Falmouth packets and two were steamers, while 17 were American sailing packets. Eight were on the New York–Liverpool route, two on the Boston–Liverpool route, two on the New York–London route, and five on the New York–Havre route. 
Six of the ships just disappeared, and were lost with all hands. It is notable that two out of every three wrecks took place in November–February, indicating that the packet captains took too heavy risks, especially during the rough winter sailings. The only precautionary measure to ensure solid business information transmission across the Atlantic was to send duplicates. This was very typical during the shift period. The duplicates also ensured the fastest possible dispatch of information." Most mail – especially eastwards – was still carried by sailing ships during the first decade after the advent of the transatlantic steamship service. Even if the size of the sailing packets grew markedly, their service speed did not follow the trend after the introduction of steamships on the route in the late 1830s. After 1835, there seems to be no signs of speed improvements. "Another phenomenon which indicates that the sailing packets were losing their hold on the first class business – mail, fine freight and cabin passengers – was that they no longer cared about the punctuality of the sailing dates as much as they did in the 1830s. If the reliability of a mail ship service is measured by the regularity of sailings and the safety records, the performance of the American sailing packets in the mid-1840s was noticeably below such expectations." Final days In 1860, Captain William B. Moore is listed master, and in 1864, after a quarter century of service, the Patrick Henry was "sold British" at Londonderry, due to the Civil War, to J.P. Allen & Co., Naval and Military Tailors, Londonderry. Her final voyage, with passengers, may have begun on June 26, 1871, when she left port at Pensacola, Florida, and sailed to Liverpool, arriving August 19. Lloyd's Register of Shipping for 1868/69 lists her as registered at Cork with measurements: 837/854/773 tons (net/gross/under deck); 159.3 x 35.8 x 21.6 ft; forecastle 21 ft, poop 41 ft. Originally rigged as a ship, in 1869–70 she was re-rigged as a bark and likely put into service as a timber transport between England and Canada—similar to the Flying Cloud, which became "just one of many prosaic vessels that tramped around the world looking for freight." The Patrick Henry was twelve years older than the Flying Cloud and lasted nearly a decade beyond her. In June 1875, townspeople bearing torches burned the broken-backed Cloud in the harbor of Saint John, New Brunswick, Canada. The Patrick Henry herself was surveyed the following year, at Quebec, in June 1876. Between 1876 and 1882, T.E. Sargent was her captain. Her final survey was in March 1877. The 1881/82 volume of Lloyd's Register lists her but her rating had expired. On September 12, 1882, the Patrick Henry was moving into the Fleetwood harbor, off the Lancashire coast of England. There were two other vessels that had run aground and the channel was very narrow, kept open by dredging the natural course of the River Wyre. By the time she dropped anchor, she had run ashore on the Cansh bank. The Liverpool tug, Fury from Holyhead, was unable to pull her off. She "broke her back and sustained such injuries that the cost of repairing her would be greater than she was worth." A lawsuit ensued against the harbormaster for the loss of the Patrick Henry after he gave the order to drop anchor, but a jury found in the harbormaster's favor in February 1883. A few months later, she was lying in the port where, 44 years earlier, at Waterloo Dock, she began transporting thousands of people to New York. 
She had been "thoroughly overhauled" for "1,300 loads of timber," and was for sale, with dimensions 169.3 feet length x 35.8 feet breadth x 21.6 feet depth. According to The American Neptune: A Quarterly Journal of Maritime History and Arts, she was broken up the following year (1884). Her final owner, James Edwin Pim, was a timber merchant & shipowner and purchased the Patrick Henry in 1868. J.E. was son of one of the original Pim Brothers, a large Dublin Quaker family of business entrepreneurs, merchants, Irish poplin manufacturers and drapers. His cousin Jonathan Pim was a Member of Parliament (MP) for Dublin and served as secretary for the Quaker Relief fund during the an Gorta Mór and bought an estate in the west of Ireland for the purpose of benefiting the tenants. The Quakers (Society of Friends) are recognized as saving thousands of lives in Ireland by establishing the first soup kitchens and tirelessly working to distribute and donate food during the Great Hunger. Fittingly, Jonathan Pim corresponded directly with the New York relief committee concerning the 1847 shipments of food aboard the very Patrick Henry that his cousin, James Edwin, later purchased. In art In 1858, Philip John Ouless (British, 1817–1885), a successful workmanlike painter of marine subjects, made preliminary sketches of the Patrick Henry and in 1859 completed, "The American Packet Ship "Patrick Henry" Off the Cliffs of Dover." Oil on canvas, 26 3/4 x 37 1/2 inches. Signed with the artist's monogram "PJO" and dated 1859, l.r. In a gilt period frame. In a private collection. In literature The American packet ships of the early to mid-nineteenth century played a large role in the making of a nation, according to The Western Ocean Packets; by their "sheer virility and heroic energy," "superb strength of brain and muscle," the "gallant, hard-sailed packets with their 'tween decks crowded with emigrants," became "one of the most, if not the most, important factor in this world's development along the lines of steady progress, whether moral or physical." "The greatest days of the New York ships followed quickly upon the closing of the second war with England. Its shrewd, farsighted Quaker element saw the possibilities of packet service to Europe. Sailing on advertised dates, the ships grew in tonnage from year to year and made their owners rich. The usual method of division in New York was by partial ownership. An agent owned an eighth, a builder, to ensure his getting the repair work, which amounted to about five hundred dollars a round trip, owned another eighth. The captain might own an equal share and perhaps a sixteenth each was held by the block maker and the sailmaker. The rest of course was vested in the owner." "The times required brave sailing. Sails were set at the piers. Crowds stood by and cheered the departures. The whole city became interested. Ships were even sailed right up to their berths and the seaman had every opportunity to satisfy his pride and exhibit his skill. The local delight in packet performance was well founded, for the whole country displayed interest. Competition was keen and yet the Dreadnought of New York was able to make a record and leave it standing untouched. Sailors called her "the wild boat of the Atlantic "and she had a song written about her and used it as a shanty. She once overhauled the Cunard steamer Canada which had left a day before her and was only able to dock in Boston at the same time that the Dreadnought reached New York. 
The Patrick Henry of one thousand tons was also a remarkably fine sailor, a favorite packet, and one that made more money for Grinnell, Minturn and Company than any other ship they owned... Shipping was coming into its own in the new days of peace and New York was booming." Additional mentions in print Albion, Robert Greenhalgh. Square-riggers on Schedule: The New York Sailing Packets to England, France, and the Cotton Ports. Princeton: Princeton University Press, 1938. Bradlee, Francis Boardman Crowninshield. The Dreadnought of Newburyport, Massachusetts: And Some Account of the Old Transatlantic Packet-ships. United States, Essex Institute, 1920, p. 16. OCLC 14251210 Clark, Arthur H. The Clipper Ship Era: An Epitome of Famous American and British Clipper Ships, Their Owners, Builders, Commanders, and Crews, 1843–1869. New York: G.P. Putnam's Sons, 1910. Cutler, Carl C. Queens of the Western Ocean: The Story of America's Mail and Passenger Sailing Lines. U.S. Naval Institute, 1961. ISBN 0-87021-531-0 Fairburn, William Armstrong. Merchant Sail. Volume 2. United States, Fairburn Marine Educational Foundation, 1945, p. 1164. Ships and Shipping of Old New York: A Brief Account of the Interesting Phases of the Commerce of New York from the Foundation of the City to the Beginning of the Civil War. United States, Bank of Manhattan Company, 1915, p. 42. Internet Archive. Schroeder, Gustavus W. Articles about Vessels of All Descriptions, Ancient and Modern. United States, n.p., 1850, pp. 218, 221. Staff, Frank. The Transatlantic Mail. United Kingdom, J. DeGraff, 1956, pp. 123, 125. Stonehouse, James. Pictorial Liverpool: Its Annals; Commerce; Shipping; Institutions; Public Buildings; Sights; Excursions; &c., &c: A New and Complete Hand-book for Resident, Visitor, and Tourist. United Kingdom, H. Lacey, 1844, p. 27. References Three-masted ships Individual sailing vessels Merchant ships of the United States 1839 ships Ships named for Founding Fathers of the United States Patrick Henry
10812215
https://en.wikipedia.org/wiki/Windows%20File%20Protection
Windows File Protection
Windows File Protection (WFP), a sub-system included in Microsoft Windows operating systems of the Windows 2000 and Windows XP era, aims to prevent programs from replacing critical Windows system files. Protecting core system files mitigates problems such as DLL hell with programs and the operating system. Windows 2000, Windows XP and Windows Server 2003 include WFP under the name of Windows File Protection; Windows Me includes it as System File Protection (SFP). Operation With Windows File Protection active, replacing or deleting a system file that has no file lock to prevent it getting overwritten causes Windows to immediately and silently restore the original copy of the file. The original version of the file is restored from a cached folder which contains backup copies of these files. The Windows NT family uses the cached folder %WinDir%\System32\Dllcache. Windows Me caches its entire set of compressed cabinet setup files and stores them in a dedicated setup-files folder. WFP covers all files which the operating system installs (such as .dll, .exe, .ocx and .sys files, among others), protecting them from deletion or from replacement by older versions. The digital signatures of these files are checked using code signing and the signature catalog files stored in the %WinDir%\System32\CatRoot folder. Only certain operating system components such as the Package Installer (Update.exe) or Windows Installer (Msiexec.exe) can replace these files. Changes made using any other methods in order to replace these files are reverted and the files are silently restored from the cache. If Windows File Protection cannot automatically find the file in the cached folder, it searches the network path or prompts the user for the Windows installation disc to restore the appropriate version of the file. WFP integrates with the System File Checker (sfc.exe) utility. Windows Vista and later Windows systems do not include Windows File Protection, but they include Windows Resource Protection, which protects files using ACLs. Windows Resource Protection aims to protect core registry keys and values and prevent potentially damaging system configuration changes, besides operating system files. The non-use of ACLs in Windows File Protection was a design choice: Not only did it allow operation on non-NTFS systems, but it prevented those same "bad" installers from failing completely from a file access error. External links Overview of Windows File Protection Registry settings for Windows File Protection Whitepaper on Windows File Protection Overview of System File Protection (Windows Me) Hacking Windows File Protection Effective Files Protection Tool Discontinued Windows components
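The protected-file check described above under Operation is exposed to applications through the SfcIsFileProtected() function exported by sfc.dll, which installers of that era could call before attempting to replace a system file. The following is a minimal illustrative sketch, not part of the article: it assumes a Windows 2000/XP-era system, and the file paths used are examples only.

```python
# Minimal sketch: asking Windows File Protection whether a given file is protected.
# Assumes a Windows system where sfc.dll exports SfcIsFileProtected; paths are illustrative.
import ctypes
from ctypes import wintypes

def is_wfp_protected(path: str) -> bool:
    sfc = ctypes.WinDLL("sfc.dll")
    sfc.SfcIsFileProtected.argtypes = [wintypes.HANDLE, wintypes.LPCWSTR]
    sfc.SfcIsFileProtected.restype = wintypes.BOOL
    # The first parameter is an RPC handle and must be NULL for a local query.
    return bool(sfc.SfcIsFileProtected(None, path))

if __name__ == "__main__":
    print(is_wfp_protected(r"C:\WINDOWS\system32\kernel32.dll"))  # typically True under WFP
    print(is_wfp_protected(r"C:\temp\readme.txt"))                # typically False
```

Note that such a query only reports whether the file would be restored if replaced; actually servicing protected files remains reserved for the operating system components named above, such as Update.exe and Msiexec.exe.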
28944000
https://en.wikipedia.org/wiki/Max%20Butler
Max Butler
Max Ray Vision (formerly Max Ray Butler, alias Iceman) is a former computer security consultant and hacker who served a 13-year prison sentence, the longest sentence ever given at the time for hacking charges in the United States. He was convicted of two counts of wire fraud, including stealing nearly 2 million credit card numbers and running up about $86 million in fraudulent charges. Early life Butler was born on July 10, 1972, and grew up in Meridian, Idaho with a younger sibling; his parents divorced when he was 14. His father was a Vietnam War veteran and computer store owner who married a daughter of Ukrainian immigrants. As a teenager, Max Butler became interested in bulletin board systems and hacking. After a parent reported a theft of chemicals from a lab room at Meridian High School, Butler pleaded guilty to malicious injury to property, first-degree burglary, and grand theft. Butler ultimately received probation for his crimes, was sent to live with his father, and transferred to Bishop Kelly High School. First offense Butler attended Boise State University for a year. In 1991, Butler was convicted of assault during his freshman year of college. His appeal was unsuccessful on procedural grounds, as a judge ruled that Butler's defense attorney did not raise the issue in an earlier appeal. The Idaho State Penitentiary paroled Butler on 26 April 1995. Professional and personal life Butler moved with his father near Seattle and worked in part-time technical support positions in various companies. He discovered Internet Relay Chat and frequently downloaded warez, or illegally copied software and media. After an Internet service provider in Littleton, Colorado, traced Butler's uploads of warez to an unprotected file transfer protocol server – uploads that were consuming excessive bandwidth – back to the CompuServe corporate offices in Bellevue, Washington, CompuServe fired Butler. After moving to Half Moon Bay, California, he changed his last name to Vision and lived in a rented mansion known as "Hungry Manor" with a group of other computer enthusiasts. Butler became a system administrator at computer gaming start-up MPath Interactive. The Software Publishers Association filed a $300,000 lawsuit against Butler for engaging in unauthorized distribution of software from CompuServe's office and later settled the case for $3,500 and free computer consulting. After marrying Kimi Winters, he moved to Berkeley, California, and worked as a freelance pentester and security consultant. During this time, he developed 'an online community resource called the "advanced reference archive of current heuristics for network intrusion detection systems," or arachNIDS.' FBI investigation, guilty plea, and sentencing In the spring of 1998, Butler installed a backdoor onto American federal government websites while trying to fix a security hole in the BIND server daemon. However, an investigator with the United States Air Force found Butler via pop-up notifications. He hired attorney Jennifer Granick for legal representation after hearing Granick speak at DEF CON. On 25 September 2000, Butler pleaded guilty to gaining unauthorized access to Defense Department computers. Starting in May 2001, Butler served an 18-month federal prison sentence handed down by US District Judge James Ware. After his release from prison in 2003 on supervised release, Butler exploited Wi-Fi technology to commit cyberattacks anonymously from San Francisco, working with Chris Aragon. 
He advanced to programming malware, such as modifying the Bifrost trojan horse so that it could evade virus scanner programs, and exploited the HTML Application feature of Internet Explorer to steal American Express credit card information. Butler also targeted Citibank accounts by turning a Trojan horse on another credit card identity thief, and began distributing the stolen PINs to Aragon, who would have others withdraw the maximum daily amount of cash from ATMs until the compromised account was empty. Arrested in 2007, Butler was accused of operating CardersMarket, a forum where cyber criminals bought and sold sensitive data such as credit card numbers. After pleading guilty to two counts of wire fraud, stealing nearly 2 million credit card numbers, which were used for $86 million in fraudulent purchases, Butler was sentenced to 13 years in prison, which was the longest sentence ever given for hacking charges in the United States of America at the time. He was also ordered to serve 5 years of supervised release after prison and to pay $27.5 million in restitution to his victims. Butler was released from FCI Victorville Medium 2 on April 14, 2021. Butler's story was featured in an episode of the CNBC television program American Greed in 2010. References Further reading Kevin Poulsen, Kingpin: How One Hacker Took Over the Billion-Dollar Cybercrime Underground, 2011, publisher: Crown. Misha Glenny, DarkMarket: How Hackers Became the New Mafia, 2012, publisher: Vintage. 1972 births Living people American computer criminals American people convicted of assault American people of Ukrainian descent Place of birth missing (living people) Boise State University alumni People convicted of cybercrime People from Berkeley, California People from San Mateo County, California People from Meridian, Idaho People with bipolar disorder Prisoners and detainees of the United States federal government Carding (fraud)
579730
https://en.wikipedia.org/wiki/Data%20center
Data center
A data center (American English) or data centre (British English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town. History Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design-guidelines for controlling access to the computer room were therefore devised. During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for the network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition about this time. The boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of outage." The term cloud data centers (CDCs) has been used. Data centers typically cost a lot to build and to maintain. Increasingly, the division of these terms has almost disappeared and they are being integrated into the term "data center". Requirements for modern data centers Modernization and data center transformation enhances performance and energy efficiency. Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must, therefore, keep high standards for assuring the integrity and functionality of its hosted computer environment. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. 
The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize. Focus on modernization is not new: concern about obsolete equipment was voiced in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: "data center staff are aging faster than the equipment." Meeting standards for data centers The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center. Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to: operate and manage a carrier's telecommunication network; provide data center based applications directly to the carrier's customers; provide hosted applications for a third party to provide services to their customers; or provide a combination of these and similar data center applications. Data center transformation Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security. Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl (both physical and virtual) often includes replacing aging data center equipment, and is aided by standardization. Virtualization: Lowers capital and operational expenses, reduces energy consumption. Virtualized desktops can be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization. Automating: Automating tasks such as provisioning, configuration, patching, release management, and compliance is needed, not least because fewer skilled IT workers are available. Securing: Protection of virtual systems is integrated with the existing security of physical infrastructures. Machine room The term "Machine Room" is at times used to refer to the large room within a data center where the actual Central Processing Unit is located; this may be separate from where high-speed printers are located. Air conditioning is most important in the machine room. Aside from air-conditioning, there must be monitoring equipment, one type of which is used to detect water before flood-level situations develop. One company, for several decades, has had share-of-mind: Water Alert. The company, as of 2018, has two competing manufacturers (Invetex, Hydro-Temp) and three competing distributors (Longden, Northeast Flooring, Slayton). 
A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson. Although the first raised-floor computer room was made by IBM in 1956, and raised floors have "been around since the 1960s", it was in the 1970s that they became common in computer centers, allowing cool air to circulate more efficiently. The first purpose of the raised floor was to allow access for wiring. Lights out The "lights-out" data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure. Data center levels and tiers The two organizations in the United States that publish data center standards are the Telecommunications Industry Association (TIA) and the Uptime Institute. International standards EN50600 and ISO22237 Information technology — Data centre facilities and infrastructures: Class 1 single path solution; Class 2 single path with redundancy solution; Class 3 multiple paths providing a concurrent repair/operate solution; Class 4 multiple paths providing a fault tolerant solution (except during maintenance). Telecommunications Industry Association The Telecommunications Industry Association's TIA-942 standard for data centers, published in 2005 and updated four times since, defined four infrastructure levels. Level 1 - basically a server room, following basic guidelines. Level 4 - designed to host the most mission critical computer systems, with fully redundant subsystems and the ability to continuously operate for an indefinite period of time during primary power outages. Uptime Institute – Data center Tier Classification Standard Four Tiers are defined by the Uptime Institute standard: Tier I: is described as BASIC CAPACITY and must include a UPS; Tier II: is described as REDUNDANT CAPACITY and adds redundant power and cooling; Tier III: is described as CONCURRENTLY MAINTAINABLE and ensures that ANY component can be taken out of service without affecting production; Tier IV: is described as FAULT TOLERANT, allowing any production capacity to be insulated from ANY type of failure. Data center design The field of data center design has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, like abandoned retail space, old salt mines and war-era bunkers. A 65-story data center has already been proposed, and the number of data centers as of 2016 had grown beyond 3 million USA-wide, and more than triple that number worldwide. Local building codes may govern the minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are: size - one room of a building, one or more floors, or an entire building, which can hold 1,000 or more servers; space, power, cooling, and costs in the data center. Mechanical engineering infrastructure - heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization. 
Electrical engineering infrastructure design - utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more. Design criteria and trade-offs Availability expectations: Cost of avoiding downtime should not exceed the cost of downtime itself Site selection: Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines and emergency services. Others are flight paths, neighboring uses, geological risks and climate (associated with cooling costs). Often available power is hardest to change. High availability Various metrics exist for measuring the data-availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many "nines" can be placed after "99%". Modularity and flexibility Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed. A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Components of the data center can be prefabricated and standardized which facilitates moving if needed. Environmental control Temperature and humidity are controlled via: Air conditioning indirect cooling, such as using outside air, Indirect Evaporative Cooling (IDEC) units, and also using sea water. Electrical power Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel / gas turbine generators. To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure. Low-voltage cable routing Options include: Data cabling can be routed through overhead cable trays Raised floor cabling, for security reasons and to avoid the addition of cooling systems above the racks. Smaller/less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Air flow Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. There are several methods of separating hot and cold airstreams, such as hot/cold aisle containment and in-row cooling units. Aisle containment Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. Computer cabinets are often organized for containment of hot/cold aisles. Ducting prevents cool and exhaust air from mixing. Rows of cabinets are paired to face each other so that cool air can reach equipment air intakes and warm air can be returned to the chillers without mixing. Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised floor vented tiles. Either the cold aisle or the hot aisle can be contained. 
Another alternative is fitting cabinets with vertical exhaust ducts (chimneys); the hot exhaust can be directed into a plenum above a drop ceiling and back to the cooling units or to outside vents. With this configuration, a traditional hot/cold aisle layout is not a requirement. Fire protection Data centers feature fire protection systems, including passive and active design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage. Two water-based options are sprinklers and mist; a waterless alternative offers some of the benefits of chemical suppression (a clean agent gaseous fire suppression system). Security Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint-recognition mantraps are starting to become commonplace. Access logging is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance. Energy use Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center. Power costs for 2012 often exceeded the cost of the original capital investment. Greenpeace estimated worldwide data center power consumption for 2012 as about 382 billion kWh. Global data centers used roughly 416 TWh in 2016, nearly 40% more than the entire United Kingdom; US data center consumption was 90 billion kWh. Greenhouse gas emissions In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint. The US EPA estimated that servers and data centers were responsible for up to 1.5% of total US electricity consumption, or roughly 0.5% of US GHG emissions, for 2007. Under a business-as-usual scenario, greenhouse gas emissions from data centers were projected to more than double from 2007 levels by 2020. An 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore concluded that data center-related emissions would more than triple by 2020. Energy efficiency and overhead The most commonly used metric of data center energy efficiency is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center to the power used by IT equipment. It indicates how much power is consumed by overhead (cooling, lighting, etc.) rather than by computing. The average USA data center has a PUE of 2.0, meaning two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art is estimated to be roughly 1.2. Google publishes quarterly efficiency figures from its data centers in operation. The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities. The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities — including data centers — to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency. The European Union has a similar initiative: the EU Code of Conduct for Data Centres. Energy use analysis and projects The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also uses energy. In 2011, server racks in data centers were designed for more than 25 kW, and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems was also rising. A high availability data center was estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations showed that in two years the cost of powering and cooling a server could be equal to the cost of purchasing the server hardware. Research in 2018 showed that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization. In 2011 Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, Facebook also avoided the energy consumed by unnecessary motherboard expansion slots and unneeded components such as graphics cards. In 2016 Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency. In 2017 sales for data center hardware built to OCP designs topped $1.2 billion and were expected to reach $6 billion by 2021. Power and cooling analysis Power is the largest recurring cost to the user of a data center. Cooling the facility more than the equipment actually requires wastes money and energy. Furthermore, overcooling equipment in environments with a high relative humidity can expose equipment to a high amount of moisture that facilitates the growth of salt deposits on conductive filaments in the circuitry. A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. 
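The PUE figures quoted in the energy-efficiency discussion above reduce to simple arithmetic. The following minimal Python sketch shows that calculation; the facility and IT power figures are made up for illustration and do not describe any particular data center.

```python
def power_usage_effectiveness(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total power entering the data center / power used by IT equipment."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

def overhead_share(pue: float) -> float:
    """Fraction of total power going to overhead (cooling, lighting, etc.)."""
    return 1.0 - 1.0 / pue

# Illustrative figures only: a facility drawing 1,000 kW while delivering 600 kW to IT.
pue = power_usage_effectiveness(total_facility_kw=1000.0, it_equipment_kw=600.0)
print(f"PUE = {pue:.2f}")                             # 1.67
print(f"Overhead share = {overhead_share(pue):.0%}")  # 40%
```

On this arithmetic, the average PUE of 2.0 quoted above corresponds to an overhead share of 50%, while the state-of-the-art figure of roughly 1.2 corresponds to about 17%.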
Power cooling density is a measure of how much square footage the center can cool at maximum capacity. The cooling of data centers is the second largest power consumer after servers. Cooling energy ranges from about 10% of total energy consumption in the most efficient data centers up to 45% in standard air-cooled data centers. Energy efficiency analysis An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power use effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise. Computational fluid dynamics (CFD) analysis This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center—predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling. By predicting the effects of these environmental conditions, CFD analysis in the data center can be used to predict the impact of high-density racks mixed with low-density racks and the onward impact on cooling resources, poor infrastructure management practices and AC failure or AC shutdown for scheduled maintenance. Thermal zone mapping Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center. This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units. Green data centers Data centers use a lot of power, consumed for two main purposes: running the actual equipment and cooling that equipment. Power efficiency reduces the first category. Reducing cooling costs by natural means starts with location decisions: when the priority is not proximity to good fiber connectivity, power grid connections, and concentrations of people to manage the equipment, a data center can be miles away from its users. 'Mass' data centers like Google or Facebook don't need to be near population centers. Arctic locations, where outside air provides free cooling, are becoming more popular. Renewable electricity sources are another plus. Thus countries with favorable conditions, such as Canada, Finland, Sweden, Norway, and Switzerland, are trying to attract cloud computing data centers. Bitcoin mining is increasingly being seen as a potential way to build data centers at the site of renewable energy production. Curtailed and clipped energy can be used to secure transactions on the Bitcoin blockchain, providing another revenue stream for renewable energy producers. Energy reuse It is very difficult to reuse the heat which comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid cooled infrastructure which captures all heat in water. 
Different liquid technologies are categorized in 3 main groups, Indirect liquid cooling (water cooled racks), Direct liquid cooling (direct-to-chip cooling) and Total liquid cooling (complete immersion in liquid, see Server immersion cooling). This combination of technologies allows the creation of a thermal cascade as part of temperature chaining scenarios to create high temperature water outputs from the data center. Dynamic infrastructure Dynamic infrastructure provides the ability to intelligently, automatically and securely move workloads within a data center anytime, anywhere, for migrations, provisioning, to enhance performance, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems all while minimizing interruption. A related concept is Composable infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed. Side benefits include reducing cost facilitating business continuity and high availability enabling cloud and grid computing. Network infrastructure Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world which are connected according to the data center network architecture. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see Multihoming). Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers. Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off site monitoring systems are also typical, in case of a failure of communications inside the data center. Software/data backup Non-mutually exclusive options for data backup are: Onsite Offsite Onsite is traditional, and one major advantage is immediate availability. Offsite backup storage Data backup techniques include having an encrypted copy of the data offsite. Methods used for transporting data are: having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere. directly transferring the data to another site during the backup, using appropriate links uploading the data "into the cloud" Modular data center For quick deployment or disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in very short time. See also Notes References External links Lawrence Berkeley Lab - Research, development, demonstration, and deployment of energy-efficient technologies and practices for data centers DC Power For Data Centers Of The Future - FAQ: 380VDC testing and demonstration at a Sun data center. White Paper - Property Taxes: The New Challenge for Data Centers The European Commission H2020 EURECA Data Centre Project - Data centre energy efficiency guidelines, extensive online training material, case studies/lectures (under events page), and tools. Applications of distributed computing Cloud storage Computer networking Data management Distributed data storage systems Distributed data storage Heating, ventilation, and air conditioning Servers (computing)
1791112
https://en.wikipedia.org/wiki/Line%206%20%28company%29
Line 6 (company)
Line 6 is a musical instrument and audio equipment manufacturer. Their product lines include electric and acoustic guitars, basses, guitar and bass amplifiers, effects units, USB audio interfaces and guitar/bass wireless systems. The company was founded in 1996 and is headquartered in Calabasas, California. Since December 2013, it has been a wholly owned subsidiary of the Yamaha Corporation. Origin of the company Marcus Ryle and Michel Doidic (two former Oberheim designers) co-founded Fast-Forward Designs, where they helped develop several notable pro audio products such as the Alesis ADAT, Quadraverbs and QuadraSynth, and Digidesign SampleCell. As digital signal processing became more and more powerful and affordable during the 1980s, they began developing DSP-based products for guitarists. As Ryle tells the story, the name "Line 6" came about because the phone system at Fast-Forward Designs only had 5 lines. Because the new guitar-related products were developed in secrecy, the receptionist used "Line 6" as a code word of sorts, and paging them for a call on Line 6 meant to stop any guitar or amp-related sounds so that they wouldn't be overheard by other Fast-Forward clients or callers. History Line 6 launched in 1996, with their first digital-modeling guitar amplifier, the AxSys 212, a combo amp using two 12" speakers. This was followed in 1997 by the Flextone. The company underwent a rapid expansion in the early 2000s (decade) due to the success of their Pod product line, which isolated modeling circuitry from the AxSys amplifier. Digital modeling attempts to recreate the unique characteristics of musical instruments and pro audio gear. Early Line 6 products used digital modeling to emulate the signature tone of a guitar amp/speaker combination. Further development of Line 6's modeling technology has extended the emulation to include numerous guitar amplifier / guitar cabinet combinations, guitar effects, microphones, and even different guitars and other fretted instruments themselves. Digital modeling offers countless virtual combinations of a variety of music gear, but only as emulations, however convincing as they may be. Though Line 6 began with a modeling guitar amp, their breakthrough product line was arguably the POD guitar processor line and its later variants, but this modeling technology has been the foundation for most of Line 6's products, from guitar amps to software and computer audio interfaces. Line 6 has an active user community, and provides software that allows users to easily download and share patches or device settings for many of Line 6's products. In early 2008, Line 6 acquired X2 Digital Wireless, who had introduced digital wireless systems for guitar. Further developing this technology, Line 6 developed and introduced a family of digital wireless microphone systems in 2010. In December 2013, it was confirmed that Line 6 was to be bought by Yamaha Corporation, to operate as a wholly owned subsidiary with the internal management remaining the same. Line 6 products Guitar amplifiers Line 6 produces a number of guitar amplifiers (combos and heads), all featuring amplifier modeling software. In 2004, the Spider II 112 combo amp was released. The Spider III sold over 12,700 units in 2011, the 15-watt Spider IV amp was the best-selling guitar amplifier in America. According to their website, the Spider IV has also been the best-selling 15, 30, 75, and 120 watt amps. The latest model is the Spider V. 
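The digital modeling described above is, at its core, digital signal processing. As a toy illustration only (Line 6's actual algorithms are proprietary and not documented here), the following Python sketch applies a tanh soft-clipping waveshaper, a generic building block often used to approximate tube-style amplifier distortion.

```python
import math

def soft_clip(sample: float, drive: float = 5.0) -> float:
    # Toy nonlinear waveshaper: tanh soft clipping. Not representative of
    # any Line 6 product; purely an illustration of amp-style distortion.
    return math.tanh(drive * sample)

def process(samples, drive: float = 5.0):
    # Apply the waveshaper to a block of audio samples in the range [-1, 1].
    return [soft_clip(s, drive) for s in samples]

# Illustrative input: a short burst of a 440 Hz sine at a 44.1 kHz sample rate.
sample_rate = 44100
tone = [math.sin(2 * math.pi * 440 * n / sample_rate) for n in range(256)]
distorted = process(tone, drive=8.0)
```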
Pedalboards/stomp boxes Pedal boards range from Pod amp modelers to modeling pedals for delay and other effects, as well as a jam looper. In 2015 Line 6 announced their 'next generation' multi-effects processor/ pedalboard named Helix, similar in function to the Floor Pod and Pod Live series, but with a redesigned interface and body. Portable recording devices The BackTrack and BackTrack+Mic are portable recording devices for electric and acoustic guitars, respectively. Audio interfaces/effects processors The earlier audio interface produced by Line 6, GuitarPort, is replaced by the TonePort line. POD amp modelers are available in a number of versions. Musical instruments Variax is a line of acoustic, bass and electric guitars. Some of the original development team members for the first Variax model introduced in 2002 moved on to other companies, including Damage Control USA and Vox Amplification Ltd. Software Line 6 software includes the Variax Workbench, a software application that can interface a home computer with a Variax electric guitar; the Gearbox, tone editing software; and Pod Farm, which contains all of the modeling features of the POD X3 emulated on the computer. Wireless systems In May 2008, Line 6 announced it had acquired X2 Digital Wireless; it now sells wireless systems for guitar, bass, vocals and wind instruments. POD product history Guitar Hero World Tour Line 6 products were placed in Guitar Hero World Tour. Line 6 guitar and bass amplifiers can be found on stage. In the Guitar Hero World Tour "Music Studio", gamers can use a stylized Line 6 POD to create their own songs. References External links Official website Audio interview with Line 6 founder Marcus Ryle Marcus Ryle Interview NAMM Oral History Program (2007) Michel Doidic Interview NAMM Oral History Program (2011) Audio equipment manufacturers of the United States Guitar manufacturing companies of the United States Guitar amplifier manufacturers Manufacturing companies established in 1996 Guitar effects manufacturing companies Companies based in Calabasas, California 1996 establishments in California 2014 mergers and acquisitions Yamaha Corporation
47162990
https://en.wikipedia.org/wiki/Technology%20Education%20and%20Literacy%20in%20Schools
Technology Education and Literacy in Schools
Technology Education and Literacy in Schools (TEALS) is a program that pairs high schools with software engineers who serve as part-time computer science teachers. The program was started in 2009 by Microsoft software engineer Kevin Wang; after Wang's divisional president learned about it, Microsoft incubated the program. TEALS' goal is to create self-perpetuating computer science programs within two or three years by having the software engineers teach the teachers. Volunteers undergo a three-month summer class that teaches them about making lesson plans and leading classes. Afterwards, software engineers visit classrooms four or five mornings a week for the entire school year to teach computer science concepts to both students and teachers. TEALS volunteers are not required to be Microsoft employees and can have formal degrees or be self-taught in computer science. TEALS offers support for three classes: Introduction to Computer Science, Web Design, and AP Computer Science A. History Kevin Wang graduated from the University of California, Berkeley in 2002 with a degree in electrical engineering and computer science. To pursue his teaching passion, he declined several industry job offers. Wang taught in the Bay Area for several years, and attended the Harvard Graduate School of Education, where he received a Master of Education. He became a computer science teacher at Woodside Priory School in Portola Valley, California, teaching grades seven–twelve for three years. He convinced fellow Microsoft employees and other acquaintances to teach computer science at other schools. After joining Microsoft, Wang started volunteering to teach the morning computer science class at Issaquah High School, a nearby high school, in 2009. In 2009, Wang founded Technology Education and Literacy in Schools (TEALS), a program that aims to bring software engineers to high school classrooms to teach computer science part-time. He thought that he would have to resign from Microsoft to oversee the program's significant expansion. Wang sold his Porsche 911 to bankroll the program. After the vice president of Wang's Microsoft division discovered TEALS, the vice president took him to the divisional president, who recommended he work full-time at Microsoft on managing TEALS. According to CNN, Microsoft chose to "incubate" TEALS for three primary reasons. First, the program fit with Microsoft's philanthropic goals. Second, Microsoft founder Bill Gates had an enduring desire to advocate for learning. Third, the software industry had a shortage of engineers. In a 2012 interview with GeekWire, Wang said TEALS has two long-term goals. The first is to give every American high school student the opportunity to take an introductory computer science course and an AP Computer Science course. The second is to have the same proportion of students taking AP Computer Science as those taking AP Biology, AP Chemistry, and AP Physics. TEALS is part of YouthSpark, a Microsoft initiative that plans to give more educational and employment opportunities to 300 million young people between 2012 and 2015. A 2015 article in the Altavista Journal quoted the TEALS website, noting that the United States has 80,000 unfilled jobs that require a computer science degree. The Altavista Journal further reported that this would cause the United States to lose $500 billion over the following 10 years and that only 10% of American high schools have computer science courses. 
TEALS is managed by Microsoft's Akhtar Badshah, the senior director of citizenship and public affairs. Program format Wang designed a three-month summer class for Microsoft employees who wanted to volunteer with TEALS. The class taught the employees about devising lesson plans and leading classes. TEALS aims to create self-perpetuating computer science programs within two or three years. The software engineers commit to being physically present at the school for around four or five days weekly. The classes are scheduled for first period since many volunteers do not start work until later in the morning. For rural schools that lack the capital to run a computer science class, TEALS enables software engineers to instruct students remotely through videoconferencing. During the first two semesters, the software engineers educate the teachers side by side with the students. In the third semester, the software engineers and teachers co-teach the students. By the fourth semester, the teachers lead the class, and the software engineers become "teaching assistants". The aim is to enable teachers with math and science backgrounds to eventually lead the classes by themselves. TEALS provides support for three classes. Two of the classes are one semester long: Introduction to Computer Science and Web Design. The third class, Advanced Placement Computer Science A, is two semesters long. In a 2015 interview with the Altavista Journal, Microsoft spokesperson Kate Frischmann said, "TEALS is open to everyone, inside and outside of Microsoft, who have a background or formal degree in the field of computer science." School participation In the 2010–2011 school year, the program's trial year, ten TEALS volunteers instructed 250 Puget Sound region high school students from four schools. In the 2011–2012 school year, TEALS expanded to 30 volunteers and six assistants educating 800 high school students in 13 schools. In the 2012–2013 school year, 22 schools around Seattle participated in TEALS. Microsoft invited the students in Seattle to visit the company's campus, hoping to spark excitement in technology. That school year, TEALS expanded to 120 volunteers in seven states teaching 2,000 students at 37 high schools. The schools were in Washington, Kentucky, California, Virginia, Utah, Washington, D.C., Minnesota, and North Dakota. In the 2013–2014 school year, TEALS grew to 280 volunteers in 12 states educating 3,000 students at 70 schools. In the 2014–2015 school year, 490 TEALS volunteers worked in 131 schools educating 6,600 students. References External links Official website TEALS at Microsoft Computer science education Microsoft divisions Microsoft initiatives Educational charities based in the United States Organizations established in 2009
841665
https://en.wikipedia.org/wiki/Exec%20Shield
Exec Shield
Exec Shield is a project started at Red Hat, Inc. in late 2002 with the aim of reducing the risk of worm or other automated remote attacks on Linux systems. The first result of the project was a security patch for the Linux kernel that emulates an NX bit on x86 CPUs that lack a native NX implementation in hardware. While the Exec Shield project has had many other components, some people refer to this first patch as Exec Shield. The first Exec Shield patch attempts to flag data memory as non-executable and program memory as non-writable. This suppresses many security exploits, such as those stemming from buffer overflows and other techniques relying on overwriting data and inserting code into those structures. Exec Shield also supplies some address space layout randomization for the mmap() and heap base. The patch additionally increases the difficulty of inserting and executing shellcode, rendering most exploits ineffective. No application recompilation is necessary to fully utilize exec-shield, although some applications (Mono, Wine, XEmacs, MPlayer) are not fully compatible. Other features that came out of the Exec Shield project were the Position Independent Executables (PIE), the address space randomization patch for Linux kernels, a wide set of glibc internal security checks that make heap and format string exploits near impossible, the GCC Fortify Source feature, and the port and merge of the GCC stack-protector feature. Implementation Exec Shield works on all x86 CPUs by utilizing the Code Segment (CS) limit. Because of the way Exec Shield works, it is very lightweight; however, it does not fully protect arbitrary virtual memory layouts. If the CS limit is raised, for example by calling mprotect() to make higher memory executable, then the protections are lost below that limit. Ingo Molnar points this out in an e-mail conversation. Most applications are laid out sanely in this respect; the stack (the important part) at least winds up above any mapped libraries, so it does not become executable except through explicit calls by the application. As of August 2004, nothing in the Exec Shield project attempts to enforce memory protections by restricting mprotect() on any architecture; although memory may not initially be executable, it may become executable later, so the kernel will allow an application to mark memory pages as both writable and executable at the same time. However, in cooperation with the Security-Enhanced Linux project (SELinux), the standard policy for the Fedora Core distribution does prohibit this behavior for most executables, with only a few exceptions for compatibility reasons. History Exec Shield was developed by various people at Red Hat; the first patch came from Ingo Molnar of Red Hat and was first released in May 2003. It has been part of Fedora Core 1 through 6 and of Red Hat Enterprise Linux since version 3. Other people involved include Jakub Jelínek, Ulrich Drepper, Richard Henderson and Arjan van de Ven. See also NX bit Openwall StackGuard W^X References External links Ingo Molnar's Exec Shield patch web page, includes documentation in the file ANNOUNCE-exec-shield Newsforge Feature Article Red Hat Magazine Feature/Project Article Negative security issues with ExecShield Linux Linux security software Operating system security
157959
https://en.wikipedia.org/wiki/Data%20General
Data General
Data General was one of the first minicomputer firms of the late 1960s. Three of the four founders were former employees of Digital Equipment Corporation (DEC). Their first product, 1969's Data General Nova, was a 16-bit minicomputer intended to both outperform and cost less than the equivalent from DEC, the 12-bit PDP-8. A basic Nova system cost or less than a similar PDP-8 while running faster, offering easy expandability, being significantly smaller, and proving more reliable in the field. Combined with Data General RDOS (DG/RDOS) and programming languages like Data General Business Basic, Novas provided a multi-user platform far ahead of many contemporary systems. A series of updated Nova machines were released through the early 1970s that kept the Nova line at the front of the 16-bit mini world. The Nova was followed by the Eclipse series which offered much larger memory capacity while still being able to run Nova code without modification. The Eclipse launch was marred by production problems and it was some time before it was a reliable replacement for the tens of thousands of Nova's in the market. As the mini world moved from 16-bit to 32, DG introduced the Data General Eclipse MV/8000, whose development was extensively documented in the popular book, The Soul of a New Machine. Although successful, the introduction of the IBM PC in 1981 marked the beginning of the end for minicomputers, and by the end of the decade, the entire market had largely disappeared. The introduction of the Data General/One in 1984 did nothing to stop the erosion. In a major business pivot, in 1989 DG released the AViiON series of scalable Unix systems which spanned from desktop workstations to departmental servers. This scalability was managed through the use of NUMA, allowing a number of commodity processors to work together in a single system. Following AViiON was the CLARiiON series of network attached storage systems which became a major product line in the later 1990s. This led to a purchase by EMC, the major vendor in the storage space at that time. EMC shut down all of DG's lines except for CLARiiON, which continued sales until 2012. History Origin, founding and early years: Nova and SuperNova Data General (DG) was founded by several engineers from Digital Equipment Corporation who were frustrated with DEC's management and left to form their own company. The chief founders were Edson de Castro, Henry Burkhardt III, and Richard Sogge of Digital Equipment (DEC), and Herbert Richman of Fairchild Semiconductor. The company was founded in Hudson, Massachusetts, in 1968. Edson de Castro was the chief engineer in charge of the PDP-8, DEC's line of inexpensive computers that created the minicomputer market. It was designed specifically to be used in laboratory equipment settings; as the technology improved, it was reduced in size to fit into a 19-inch rack. Many PDP-8s still operated decades later in these roles. De Castro was watching developments in manufacturing, especially more complex printed circuit boards (PCBs) and wave soldering that suggested that the PDP-8 could be greatly cost-reduced. DEC was not interested, having turned their attention increasingly to the high-end. Convinced he could do one better, De Castro began work on his own low-cost 16-bit design. The result was released in 1969 as the Nova. The Nova, like the PDP-8, used a simple accumulator-based architecture. 
It lacked general registers and the stack-pointer functionality of the more advanced PDP-11, as did competing products, such as the HP 1000; compilers used hardware-based memory locations in lieu of a stack pointer. Designed to be rack-mounted similarly to the later PDP-8 machines, it was packaged on four PCB cards and was thus smaller in height, while also including a number of features that made it run considerably faster. Announced as "the best small computer in the world", the Nova quickly gained a following, especially in scientific and educational markets, and made the company flush with cash. DEC sued for misappropriation of its trade secrets, but this ultimately went nowhere. With the initial success of the Nova, Data General went public in the fall of 1969. The original Nova was soon followed by the faster SuperNova, which replaced the Nova's 4-bit arithmetic logic unit (ALU) with a 16-bit version that made the machine roughly four times as fast. Several variations and upgrades to the SuperNova core followed. The last major version, the Nova 4, was released in 1978. During this period the Nova generated 20% annual growth rates for the company, becoming a star in the business community and generating US$ 100 million in sales in 1975. In 1977, DG launched a 16-bit microcomputer called the microNOVA to poor commercial success. The Nova series played a very important role as instruction-set inspiration to Charles P. Thacker and others at Xerox PARC during their construction of the Xerox Alto. Late 1970s to late 1980s: crisis and a short term solution In 1974, the Nova was supplanted by their upscale 16-bit machine, the Eclipse. Based on many of the same concepts as the Nova, it included support for virtual memory and multitasking more suitable to the small office environment. For this reason, the Eclipse was packaged differently, in a floor-standing case resembling a small refrigerator. Production problems with the Eclipse led to a rash of lawsuits in the late 1970s. Newer versions of the machine were pre-ordered by many of DG's customers, which were never delivered. Many customers sued Data General after more than a year of waiting, charging the company with breach of contract, while others simply canceled their orders and went elsewhere. The Eclipse was originally intended to replace the Nova outright, evidenced by the fact that the Nova 3 series, released at the same time and utilizing virtually the same internal architecture as the Eclipse, was phased out the next year. Strong demand continued for the Nova series, resulting in the Nova 4, perhaps as a result of the continuing problems with the Eclipse. Fountainhead While DG was still struggling with Eclipse, in 1977, Digital announced the VAX series, their first 32-bit minicomputer line, described as "super-minis". This coincided with the ageing 16-bit products, notably the PDP-11, which were coming due for replacement. It appeared there was an enormous potential market for 32-bit machines, one that DG might be able to "scoop". Data General immediately launched their own 32-bit effort in 1976 to build what they called the "world's best 32-bit machine", known internally as the "Fountainhead Project", or FHP for short (Fountain Head Project). Development took place off-site so that even DG workers would not know of it. The developers were given free rein over the design and selected a system that used a writable instruction set. 
The idea was that the instruction set architecture (ISA) was not fixed, programs could write their own ISA and upload it as microcode to the processor's writable control store. This would allow the ISA to be tailored to the programs being run, for instance, one might upload an ISA tuned for COBOL if the company's workload included significant numbers of COBOL programs. When Digital's VAX-11/780 was shipped in February 1978, however, Fountainhead was not yet ready to deliver a machine, due mainly to problems in project management. DG's customers left quickly for the VAX world. Eagle In the spring of 1978, with Fountainhead apparently in development hell, a secret skunkworks project was started to develop an alternative 32-bit system known as "Eagle" by a team led by Tom West. References to "the Eagle project" and "Project Eagle" co-exist. Eagle was a straightforward, 32-bit extension of the Nova-based Eclipse. It was backwards-compatible with 16-bit Eclipse applications, used the same command-line interpreter, but offered improved 32-bit performance over the VAX 11/780 while using fewer components. By late 1979, it became clear that Eagle would deliver before Fountainhead, igniting an intense turf war within the company for constantly shrinking project funds. In the meantime, customers were abandoning Data General in droves, driven not only by the delivery problems with the original Eclipse, including very serious quality control and customer service problems, but also the power and versatility of Digital's new VAX line. Ultimately, Fountainhead was cancelled and Eagle became the new MV series, with the first model, the Data General Eclipse MV/8000, announced in April 1980. The Eagle Project was the subject of Tracy Kidder's Pulitzer prize-winning book, The Soul of a New Machine, making the MV line the best-documented computer project in recent history. MV series The MV systems generated an almost miraculous turnaround for Data General. Through the early 1980s sales picked up, and by 1984 the company had over a billion dollars in annual sales. One of Data General's significant customers at this time was the United States Forest Service, which starting in the mid-1980s used DG systems installed at all levels from headquarters in Washington, D.C. down to individual ranger stations and fire command posts. This required equipment of high reliability and generally rugged construction that could be deployed in a wide range of places, often to be maintained and used by people with no computer background at all. The intent was to create new kinds of functional integration in an agency that had long prized its decentralized structure. Despite some tensions, the implementation was effective and the overall effects on the agency notably positive. The introduction, implementation, and effects of the DG systems in USFS were documented in a series of evaluative reports prepared in the late 1980s by the RAND Corporation. The MV series came in various iterations, from the MV/2000 (later MV/2500), MV/4000, MV/10000, MV/15000, MV/20000, MV/30000, MV/40000 and ultimately concluded with the MV/60000HA minicomputer. The MV/60000HA was intended to be a High Availability system, with many components duplicated to eliminate the single point of failure. Yet, there were failures among the system's many daughter boards, back-plane, and mid-plane. DG technicians were kept quite busy replacing boards and many blamed poor quality control at the DG factory in Mexico where they were made and refurbished. 
In retrospect, the nicely performing MV series was too little, too late. At a time when DG invested its last dollar into the dying minicomputer segment, the microcomputer was rapidly making inroads to the lower-end market segment, and the introduction of the first workstations wiped out all 16-bit machines, once DG's best customer segment. While the MV series did stop the erosion of DG's customer base, this now smaller base was no longer large enough to allow DG to develop their next generation. DG had also changed their marketing to focus on direct sales to Fortune 100 companies and thus alienated many resellers. Software Data General developed operating systems for its hardware: DOS and RDOS for the Nova, RDOS and AOS for the 16-bit Eclipse C, M, and S lines, AOS/VS and AOS/VS II for the Eclipse MV line, and a modified version of UNIX System V called DG/UX for the Eclipse MV and AViiON machines. The AOS/VS software was the most commonly used DG software product and included CLI (Command Line Interpreter) allowing for complex scripting, DUMP/LOAD, and other custom components. Related system software also in common use at the time included such packages as X.25, Xodiac, and TCP/IP for networking, Fortran, COBOL, RPG, PL/I, C and Data General Business Basic for programming, INFOS II and DG/DBMS for databases, and the nascent relational database software DG/SQL. Data General also offered an office automation suite named Comprehensive Electronic Office (CEO), which included a mail system, a calendar, a folder-based document store, a word processor (CEOWrite), a spreadsheet processor, and other assorted tools. All were crude by today's standards, but were revolutionary for their time. CEOWrite was also offered on the DG One Portable. Some software development from the early 1970s is notable. PLN (created by Robert Nichols) was the host language for a number of DG products, making them easier to develop, enhance, and maintain than macro assembler equivalents. PLN smacked of a micro-subset of PL/I, in sharp contrast to other languages of the time, such as BLISS. The RPG product (shipped in 1976) incorporated a language runtime system implemented as a virtual machine which executed pre-compiled code as sequences of PLN statements and Eclipse commercial instruction routines. The latter provided microcode acceleration of arithmetic and conversion operations for a wide range of now-arcane data types such as overpunch characters. The DG Easy product, a portable application platform developed by Nichols and others from 1975 to 1979 but never marketed, had roots easily traceable back to the RPG VM created by Stephen Schleimer. Also notable were several commercial software products developed in the mid to late 1970s in conjunction with the commercial computers. These products were popular with business customers because of their screen design feature and other ease-of-use features. The first product was IDEA (Interactive Data Entry/Access), which consisted of a screen design tool (IFMT), TP Controller (IMON), and a program development language (IFPL). The second was the CS40 line of products, which used COBOL and their own ISAM data manager. The COBOL variant used included an added screen section. Both of these products were a major departure from the transaction monitors of the day which did not have a screen design tool and used subroutine calls from COBOL to handle the screen. IDEA was identified by some market watchers as a precursor to fourth-generation programming languages. 
The original IDEA ran on RDOS and would support up to 24 users in an RDOS Partition. Each user could use the same or a different program. Eventually, IDEA ran on every commercial hardware product from the MicroNova (4 users) to the MV series under AOS/VS, the same IDEA program running all those systems. The CS40 (the first of this line) was a package system which supported four terminal users, each running a different COBOL program. These products also led to the development of a third product, TPMS (Transaction Processing Monitoring System (announced in 1980)) which could capably run a large number of COBOL or PL/I users with a smaller number of processors, a major resource and performance advantage on AOS and AOS/VS systems. TPMS had the same screen design tool as the earlier products. TPMS used defined subroutine calls for screen functions from COBOL or PL/I, which in some users' eyes made it more difficult to use. However, this product was aimed at the professional IS Programmers as were its competitors—IBM's CICS and DEC's TRAX. As with IDEA, TPMS used INFOS for information management and DG/DBMS for database management. Xodiac In 1979, DG introduced their Xodiac networking system. This was based on the X.25 standard at the lower levels, and their own application layer protocols on top. Because it was based on X.25, remote sites could be linked together over commercial X.25 services like Telenet in the US or Datapac in Canada. Data General software packages supporting Xodiac included Comprehensive Electronic Office (CEO). In June 1987, Data General announced its intention to replace Xodiac with the Open Systems Interconnection (OSI) protocol suite. Dasher terminals Data General produced a full range of peripherals, sometimes by rebadging printers for example, but Data General's own series of CRT-based and hard-copy terminals were high quality and featured a generous number of function keys, each with the ability to send different codes, with any combination of control and shift keys, which influenced WordPerfect design. The model 6053 Dasher 2 featured an easily tilted screen, but used many integrated circuits; the smaller, lighter D100, D200 and eventually the D210 replaced it as the basic user terminal, while graphics models such as the D460 (with ANSI X3.64 compatibility) occupied the very high end of the range. Terminal emulators for the D2/D3/D100/D200/D210 (and some features of the D450/460) do exist, including the Freeware 1993 DOS program in D460.zip. Most Data General software was written specifically for their own terminals (or the terminal emulation built into the Desktop Generation DG10, but the Data General One built-in terminal emulator is not often suitable), although software using Data General Business BASIC could be more flexible in terminal handling, because logging into a Business BASIC system would initiate a process whereby the terminal type would (usually) be auto-detected. Data General-One Data General's introduction of the Data General-One (DG-1) in 1984 is one of the few cases of a minicomputer company introducing a truly breakthrough PC product. Considered genuinely "portable", rather than "luggable", as alternatives often were called, it was a nine-pound battery-powered MS-DOS machine equipped with dual 3-inch diskettes, a 79-key full-stroke keyboard, 128K to 512K of RAM, and a monochrome liquid-crystal display (LCD) screen capable of either the full-sized standard 80×25 characters or full CGA graphics (640×200). 
The DG-1 was considered a modest advance over similar Osborne-Kaypro systems overall. Desktop Generation Data General also brought out a small-footprint "Desktop Generation" range, starting with the DG10 that included both Data General and Intel CPUs in a patented closely coupled arrangement, able to run MS-DOS or CP/M-86 concurrently with DG/RDOS, with each benefiting from the hardware acceleration given by other CPU as a co-processor that would handle (for instance) screen graphics or disk operations concurrently. Other members of the Desktop Generation range, the DG20 and DG30, were aimed more at traditional commercial environments, such as multi-user COBOL systems, replacing refrigerator-sized minicomputers with toaster-sized modular microcomputers based around the microECLIPSE CPUs and some of the technology developed for the microNOVA-based "Micro Products" range such as the MP/100 and MP/200 that had struggled to find a market niche. The Single-processor version of the DG10, the DG10SP, was the entry-level machine with, like the DG20 and 30, no ability to run Intel software. Despite having some good features and having less direct competition from the flood of cheap PC compatibles, the Desktop Generation range also struggled, partly because they offered an economical way of running what was essentially "legacy software" while the future was clearly either slightly cheaper Personal Computers or slightly more expensive "super minicomputers" such as the MV and VAX computers. Lock-in or no lock-in? Throughout the 1980s, the computer market had evolved dramatically. Large installations in the past typically ran custom-developed software for a small range of tasks. For instance, IBM often delivered machines whose only purpose was to generate accounting data for a single company, running software tailored for that company alone. By the mid-1980s, the introduction of new software development methods and the rapid acceptance of the SQL database was changing the way such software was developed. Now developers typically linked together several pieces of existing software, as opposed to developing everything from scratch. In this market, the question of which machine was the "best" changed; it was no longer the machine with the best price–performance ratio or service contracts, but the one that ran all of the third-party software the customer intended to use. This change forced changes on the hardware vendors as well. Formerly, almost all computer companies attempted to make their machines different enough that when their customers sought a more powerful machine, it was often cheaper to buy another from the same company. This was known as "vendor lock-in", which helped guarantee future sales, even though the customers detested it. With the change in software development, combined with new generations of commodity processors that could match the performance of low-end minicomputers, lock-in was no longer working. When forced to make a decision, it was often cheaper for the users to simply throw out all of their existing machinery and buy a microcomputer product instead. If this was not the case at present, it certainly appeared it would be within a generation or two of Moore's law. In 1988, two company directors put together a report showing that if the company were to continue existing in the future, DG would have to either invest heavily in software to compete with new applications being delivered by IBM and DEC on their machines, or alternately exit the proprietary hardware business entirely. 
Thomas West's report outlined these changes in the marketplace, and suggested that the customer was going to win the fight over lock-in. They also outlined a different solution: Instead of trying to compete against the much larger IBM and DEC, they suggested that since the user no longer cared about the hardware as much as software, DG could deliver the best "commodity" machines instead. "Specifically", the report stated, "DG should examine the Unix market, where all of the needed software already exists, and see if DG can provide compelling Unix solutions." Now the customer could run any software they wished as long as it ran on Unix, and by the early 1990s, everything did. As long as DG's machines outperformed the competition, their customers would return, because they liked the machines, not because they were forced; lock-in was over. AViiON De Castro agreed with the report, and future generations of the MV series were terminated. Instead, DG released a technically interesting series of Unix servers known as the AViiON. The name "AViiON" was a reversed play on the name of DG's first product, Nova, implying "Nova II". In an effort to keep costs down, the AViiON was originally designed and shipped with the Motorola 88000 RISC processor. The AViiON machines supported multi-processing, later evolving into NUMA-based systems, allowing the machines to scale upwards in performance by adding additional processors. CLARiiON An important element in all enterprise computer systems is high speed storage. At the time AViiON came to market, commodity hard disk drives could not offer the sort of performance needed for data center use. DG attacked this problem in the same fashion as the processor issue, by running a large number of drives in parallel. The overall performance was greatly improved and the resulting innovation was marketed originally as the HADA (High Availability Disk Array) and then later as the CLARiiON line. The CLARiiON arrays, which offered SCSI RAID in various capacities, offered a great price/performance and platform flexibility over competing solutions. The CLARiiON line was marketed not only to AViiON and Data General MV series customers, but also to customers running servers from other vendors such as Sun Microsystems, Hewlett Packard and Silicon Graphics. Data General also embarked on a plan to hire storage sales specialists and to challenge the EMC Symmetrix in the wider market. Joint venture with Soviet company On December 12, 1989, DG and Soviet Union software developer NPO Parma announced Perekat (Перекат, “Rolling Thunder,”) the first joint venture between an American computer company and a Soviet company. DG would provide hardware and NPO Parma the software, and Austrian companies Voest Alpine Industrieanlagenbau and their marketing group Voest Alpine Vertriebe would build the plant. Final downturn Despite Data General's betting the AViiON farm on the Motorola 88000, Motorola decided to end production of that CPU. The 88000 had never been very successful, and DG was the only major customer. When Apple Computer and IBM proposed their joint solution based on POWER architecture, the PowerPC, Motorola picked up the manufacturing contract and killed the 88000. DG quickly responded by introduced new models of the AViiON series based on a true commodity processor, the Intel x86 series. By this time a number of other vendors, notably Sequent Computer Systems, were also introducing similar machines. 
The lack of lock-in now came back to haunt DG, and the rapid commoditization of the Unix market led to shrinking sales. DG did begin a minor shift toward the service industry, training their technicians for the role of implementing a spate of new x86-based servers and the new Microsoft Windows NT domain-driven, small server world. This never developed enough to offset the loss of high margin server business however. Data General also targeted the explosion of the internet in the latter 1990s with the formation of the THiiN Line business unit, led by Tom West, which had a focus on creation and sale of so-called "internet appliances". The product developed was called the SiteStak web server appliance and was designed as an inexpensive website hosting product. EMC takeover CLARiiON was the only product line that saw continued success through the later 1990s after finding a large niche for Unix storage systems, and its sales were still strong enough to make DG a takeover target. EMC, the 800-pound gorilla in the storage market, announced in August 1999 that they would buy Data General and its assets for $1.1 billion or $19.58 a share. The acquisition was completed on October 12, 1999. Although details of the acquisition specified that EMC had to take the entire company, and not just the storage line, EMC quickly ended all development and production of DG computer hardware and parts, effectively ending Data General's presence in the segment. The maintenance business was sold to a third party, who also acquired all of DG's remaining hardware components for spare parts sales to old DG customers. The CLARiiON line continued to be a major player in the market and was marketed under that name until January 2012. CLARiiON was also widely sold by Dell through a worldwide OEM deal with EMC. The Clariion and Celerra storage products evolved into EMC's unified storage platform, the VNX platform. Data General would be only one of many New England based computer companies, including the original Digital Equipment Corporation, that collapsed or were sold to larger companies after the 1980s. On the Internet, even the old Data General domain (dg.com), which contained a few EMC webpages that only mentioned the latter company in passing, was sold to the Dollar General discount department store chain in October 2009. Marketing Data General exhibited a brash style of marketing and advertising, which acted to set the company in the spotlight. A memorable advertising campaign during the early 1980s Desktop Generation era, was issuance of T-shirts with the logo "We did it on a desktop". The early AViiON servers were portrayed as powerful computing in the size of a pizza box. Alumni DJ Delorie designed PC motherboards and BIOS code for Data General for four years. He authored DJGPP, and currently works for Red Hat on GCC. Peter Darnell was a developer of DG/L and went on to develop C compilers for Unix and Windows. He wrote a book on C and is the developer of the visual programming language VisSim by Visual Solutions. Jean-Louis Gassée was with Data General in France before moving to Apple Computer and Be Inc. Ronald H. Gruner was head of Data General's Fountainhead project which competed with the MV/8000. After leaving DG he co-founded Alliant Computer Systems along with former DG colleague Craig Mundie. David C. Mahoney founded Banyan Systems and pioneered Local Area network technologies in late 1980s along with Novell. 
Craig Mundie was a software developer at Data General and later became Chief Technologist at Microsoft. Mike Nash worked on AOS/VS kernel virtual terminal services for PCI and was a Corporate Vice President at Microsoft and is currently Vice President, Consumer PC & Solutions, Printing and Personal Systems Group, Hewlett-Packard Company. Ray Ozzie was a software developer at Data General. He subsequently worked for Software Arts, Lotus Development, Iris Associates, and Groove Networks. Groove Networks was acquired by Microsoft in 2005, and Ozzie replaced Bill Gates as chief software architect at Microsoft from 2006 until 2010. Jonathan Sachs co-founded Lotus Development where he authored 1-2-3. Jit Saxena founded Netezza, search technology company Christopher Stone founded Object Management Group (created CORBA) and became vice chairman/CEO of Novell. Asher Waldfogel was a software engineer in Special Systems (software) who later went on to found Redback Networks, Tollbridge Technologies and PeakStream. Steve Wallach cofounded Convex Computer. Joshua Weiss was a manager in the Xodiac Networking group who went on to co-found Prominet (bought by Lucent Technologies) and later was founder and CEO of Nauticus (bought by Sun Microsystems). Tom West was the manager for the MV/8000 and later projects. He was the main protagonist of the Pulitzer Prize winning non-fiction book The Soul of a New Machine. Edward Zander was product marketing manager at Data General before his positions at Apollo Computer, Sun Microsystems and Motorola as CEO. Wayne Rosing was hardware manager of Special Systems (hardware) who left to design the Lisa workstation for Apple. Though not a commercial success, stripped down it became the Macintosh. Rosing later went to Sun Microsystems where he was Vice President of Advanced Development, appearing on the cover of Fortune magazine. He retired as VP of Hardware at Google. George Woltman went on to found the Great Internet Mersenne Prime Search (GIMPS) and is the author of Prime95 (which is used to search for Mersenne Prime numbers and for hardware stress testing.) Notes References Kidder, Tracy (1981). The Soul of a New Machine. Little, Brown and Company. Reprint edition July 1997 by Modern Library. . External links Official Website, circa 1996 SimuLogics ("dedicated to preserving the history and legacy of the Data General Nova, Eclipse, MV and compatible computers") Carl Friend's Computer Museum (has pages for over a dozen DG systems) data general facebook alumni group Computer companies established in 1968 Dell EMC Defunct computer hardware companies Defunct software companies Defunct computer companies based in Massachusetts Computer companies disestablished in 1999 1968 establishments in Massachusetts 1999 disestablishments in Massachusetts
723844
https://en.wikipedia.org/wiki/Pax%20%28command%29
Pax (command)
pax is an archiving utility available for various operating systems; it has been defined by POSIX since 1995. Rather than sorting out the incompatible options that had crept up between tar and cpio, along with their implementations across various versions of Unix, the IEEE designed a new archive utility that could support various archive formats with useful options from both archivers. The pax command is available on Unix and Unix-like operating systems and on IBM i, Microsoft Windows NT, and Windows 2000. In 2001, the IEEE defined a new pax file format, which is essentially tar with additional extended attributes. The name "pax" is an acronym for portable archive exchange. The command invocation and structure are somewhat a unification of both tar and cpio.

History
The first public implementation of pax was written by Mark H. Colburn in 1989. Colburn posted it as a proposed Usenix/IEEE POSIX replacement for tar and cpio. Manual pages for pax on HP-UX, IRIX, and SCO UNIX attribute pax to Colburn. Another version of the pax program was created by Keith Muller in 1992–1993; this version first appeared in 4.4BSD (1995). Pax was accepted into X/Open issue 4 (Single Unix Specification version 1) in 1995. These versions of pax only defined the command-line interface as a tar/cpio hybrid; the pax file format was not yet defined. (The work on defining pax likely predates Muller's work; it appears in early POSIX.2 and IEEE 1003.1b drafts circa 1991.) In 1997, Sun Microsystems proposed a method for adding extensions to the ustar format. This method was later accepted for the POSIX.1-2001 standard as the new pax file format, and the POSIX specification for the utility was updated to include this format.

Features

Modes
pax has four general modes that are invoked by a combination of the -r ("read") and -w ("write") options:
list (neither -r nor -w) – read an archive from standard input and write its table of contents to standard output
read (-r) – extract the members of an archive read from standard input
write (-w) – create an archive from the named files
copy (-r -w) – copy files directly to a destination directory, without creating an intermediate archive
This model is similar to cpio, which has a similar set of basic operations.

Examples
List the contents of an archive:
$ pax -f archive.tar
Extract the contents of an archive into the current directory:
$ pax -rf archive.tar
Create an archive of the current directory:
$ pax -wf archive.tar .
Copy the current directory tree to another location (the target directory must exist beforehand):
$ pax -rw . target_dir

Command invocation
By default, pax uses the standard input/output for archive and listing operations. This can be changed with the "tar-style" -f option, which specifies the archive file. pax differs from cpio by recursively considering the contents of a directory; to disable this behavior, POSIX pax provides the -d option. The command is a mish-mash of tar and cpio features. Like tar, pax processes directory entries recursively, a feature that can be disabled with -d for cpio-style behavior. The handling of file input/output is also a mix: when a list of file names is specified on the command line, they are taken as shell globs for file input or listing (tar-like); otherwise pax takes the cpio-style behavior of using the standard input for a file list. Finally, pax supports reading and writing a named archive file using tar's -f option. For example, a cpio-style archiving of the current directory can be performed by feeding pax a file list from find, just as one does with cpio:
$ find . -depth -print | pax -wd > archive.tar
(This construct is pointless without any filters for find, as it becomes identical to the earlier example.) The command for extracting the contents of an archive is the same as with cpio:
$ pax -r < archive.tar
It is possible to invoke these commands in a tar-like syntax as well:
$ pax -wf archive.tar .
$ pax -rf archive.tar

Compression
Most implementations of pax use the -z (gzip) and -j (bzip2) switches for compression; this feature, however, is not specified by POSIX.
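For example, on an implementation that supports the -z switch (the archive and directory names below are illustrative), a gzip-compressed archive can be created and listed directly:
$ pax -wzf backup.tar.gz src/      # write (-w) the src directory to a gzip-compressed (-z) archive file (-f)
$ pax -zf backup.tar.gz            # list the contents of the compressed archive
Because these switches are extensions rather than part of POSIX, portable scripts often pipe pax through an external compressor instead, as shown below.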
Note that pax cannot append to compressed archives. Example for extracting a gzipped archive:
$ pax -rzf archive.tar.gz
As in tar and cpio, pax output can be piped to another compressor or decompressor program; here xz is used as an example:
$ pax -w . | xz > archive.tar.xz
and an xz-compressed archive is listed by feeding it to pax as standard input:
$ xz -d < archive.tar.xz | pax

Format support
Almost all extant versions of pax stem from the original 4.4BSD implementation. Most of them inherit the formats supported by that version, selectable via the -x option:
cpio – the extended cpio interchange format specified in the IEEE Std 1003.2 ("POSIX.2") standard
bcpio – the old binary cpio format
sv4cpio – the System V release 4 cpio format
sv4crc – the System V release 4 cpio format with file CRC checksums
tar – the old BSD tar format as found in 4.3BSD
ustar (default) – the tar interchange format specified in the IEEE Std 1003.2 ("POSIX.2") standard
Notably, the 2001 pax format is not supported by this legacy pax. This is the case on most Linux distributions (which use the MirBSD branch, MirCPIO-paxmirabilis) and on FreeBSD. The Heirloom Project pax command, developed by Gunnar Ritter in 2003, supports the pax format as well as many extra formats.

Multiple volumes
pax supports archiving across multiple volumes. When the end of a volume is reached, the following message appears:
$ pax -wf /dev/fd0 .
ATTENTION! pax archive volume change required.
/dev/fd0 ready for archive volume: 2
Load the NEXT STORAGE MEDIA (if required) and make sure it is WRITE ENABLED.
Type "y" to continue, "." to quit pax, or "s" to switch to new device.
If you cannot change storage media, type "s"
Is the device ready and online? >
When restoring an archive that spans multiple media, pax asks for the next volume in the same fashion whenever the end of the media is reached before the end of the archive.

Standardization, reception and popularity
Despite being standardized by the IEEE in 2001, as of 2010 pax enjoyed relatively little popularity or adoption. This is partly because Unix users felt little need for it; the push for a more consistent interface came mainly from the POSIX committee. pax is also fairly chatty and expects user interaction when things go wrong. pax has been required to be present in all conformant systems by the Linux Standard Base since version 3.0 (released on July 6, 2005), but so far few Linux distributions ship and install it by default. However, most distributions include pax as a separately installable package. pax has also been present in Windows NT, where it is limited to file archives (tapes are not supported); it was later moved to the Interix subsystem. It does not support archiving or restoring Win32 ACLs. Packages handled by the Installer (macOS) often carry the bulk of their contents in an Archive.pax.gz file that may be read using the system's pax (Heirloom) utility.

See also
List of Unix commands
List of archive formats
Comparison of file archivers

References

Further reading

External links
Archiving with Pax – article in FreeBSD Basics on ONLamp.com, by Dru Lavigne (2002-08-22)

File archivers
Unix archivers and compression-related utilities
Unix SUS2008 utilities
1995 software
1468958
https://en.wikipedia.org/wiki/Docking%20station
Docking station
In computing, a docking station or port replicator (hub) or dock provides a simplified way to plug-in a mobile device, such as a laptop, to common peripherals. Because a wide range of dockable devices—from mobile phones to wireless mice—have different connectors, power signaling, and uses, docks are unstandardized and are therefore often designed for a specific type of device. A dock can allow some laptop computers to become a substitute for a desktop computer, without sacrificing the mobile computing functionality of the machine. Portable computers can dock and undock hot, cold or standby, depending on the abilities of the system. In a cold dock or undock, one completely shuts the computer down before docking/undocking. In a hot dock or undock, the computer remains running when docked/undocked. Standby docking or undocking, an intermediate style used in some designs, allows the computer to be docked/undocked while powered on, but requires that it be placed into a sleep mode prior to docking/undocking. Types Docking stations can be broadly split up into five basic varieties. Expansion dock Provides some sort of hardware expansion for the device docked to it. Such as an external storage drive, gpu or liquid cooling radiators. Port replicator/hub Port replicators (sometimes referred to as passthroughs) are functionally and logically identical to a bundle of extension cables, except that they are plugged in and unplugged together through the device. Some also include electrical adaptors to change from one pinout to another (e.g., Micro-DVI to normal DVI connector.) Often Bus-Powered and small form factor, they functionally duplicate ports from the device to easily access more of the existing ports like: SD cards, peripherals, audio jacks, etc. as well as external display(s). Breakout dock/multi-port adapter A breakout dock is conceptually a breakout box in the form of a dock. It is an extension to a typical port replicator in that it not only replicates existing ports already on the computer, but also offers additional ports. Modern computers most often accomplish this by using a special, often proprietary, connector that consolidates the signals from many concealed traces from onboard external buses into one connector. As such, the dock can offer a greater number of ports than is physically present on the computer. This allows the basic unit to have fewer physical ports while still allowing users a way to access to the full range of features of its motherboard. Most companies that produce laptops with such breakout ports also offer simpler adapters that grant access to one or two of the buses consolidated in them at a time. OEM/proprietary dock Similar to a breakout device, some docking stations produce multiple connections from one port, only instead of extracting them from internal chipsets, they create them inside the dock using converters. They are functionally identical to a hub with various converters plugged in. Often using proprietary connections and usually meant to be permanently installed, these self-powered docks are intended for specific models of notebook and would require upgrading at the same time as the notebook. Often, OEM docks contain a wired internet connection, dual displays, and a range of USB and Audio connections. 
Universal/third party dock Functionally similar to OEM docks, but available in seemingly endless combinations, self-powered non-OEM docks still produce multiple connections from one port, only instead of extracting them from internal chipsets, they create them inside the dock using converters. Typically USB-C or Thunderbolt-3 based, they incorporate a range of converters such as USB display adapters or a full external GPU, audio chipsets, NICs, storage enclosures, modems and memory card readers, or even PCI Express card slots connected through an internal USB hub or PCI Express bridge to give the host computer access to extra connections it did not previously possess. Because they are generally vendor neutral, they often support external device charging, more external monitors, and are more flexible to install. They are often the go-to choice for mixed device offices/workplaces to ease complications in deployments. Notebook/laptop stands and risers Notebook/laptop stands are sometimes incorrectly referred to as docking stations, but seems to be carried through from older technology as they do not connect to the computer in any way. They are an inert accessory designed merely to physically support a computer that is placed on it, typically to raise its screen up to a more ergonomic height, provide cooling, or just to conserve desk space. In 2019 some vendors have launched USB and QI (wireless) charging stands, but they are still independent and do not connect with the notebook. Vehicle docks Mobile docking stations, also sometimes referred to as vehicle mounts, provide stable platforms for notebook, laptop and tablet operation in vehicles. Many industries have adopted mobile computing and hence operators wish to have their vehicles fully equipped as mobile field-offices, commonly referred to as mobile offices. organizations use mobile docking stations in sectors such as (for example): law-enforcement, electricity, telecommunications, the military, emergency medical services, fire, construction, insurance, real-estate, agriculture, oil, gas, transportation, warehousing, food-distribution, surveying, and landscaping. Some manufacturers design mobile docking stations specially to withstand the rigors of travel; such products may have MIL-STD 810E construction-specifications for vibration and impact. Mobile docking stations usually come mated to an armature, laptop desk or standard rack in order to provide the ability to position the computer in a vehicle in a safe and ergonomic position. As with all docking stations, a mobile docking station will provide the user with a means to quickly and easily dock and un dock the computer. In addition, some mobile docking-stations provide a security-lock to discourage theft. Types of vehicle based docking stations Car Mounts are designed for use with most popular automobiles. Truck Mounts are designed for use with most popular pickup trucks, vans, SUVs and heavy-duty trucks. Cart Mounts and Forklift Mounts are designed to mount notebook, laptop & tablet computers on forklifts and warehouse carts. Vehicle mount components Basic components in the design of computer vehicle mounts include: Docking station: The actual station or cradle in which the mobile PC rests. These stations may come equipped with additional functionality, such as charging and USB ports. Motion attachments: Movable components of a mounting system that attach between the pole or tube and the mount base. 
These come in a number of varieties, allowing the user to configure their mount's range of motion to desired specifications. Mounting base: Attaches the mounting system to the vehicle. This element is commonly composed of rugged materials such as steel. Tunnel mounts attach to the raised floor area (drive shaft tunnel) between the driver and passenger. Passenger side mounts attach to floor on the passenger side of the vehicle. Console mounting brackets attach to an existing vehicle center console. Trunk mounts provide mounting options for use inside car trunks. Poles and tubes: Attachments that allow the user to adjust the length and particular setup of the mount for the particular vehicle style. Docking systems are shipped in their individual components, requiring the user to install and configure the device. See also Carputer Dock connector USB hub USB-C Thunderbolt 3 References Docking station Computer peripherals
36395668
https://en.wikipedia.org/wiki/Dhigawa%20Mandi
Dhigawa Mandi
Dhigawa Mandi, also known as Dhigawa Jattan, is a town in the Bhiwani district of Haryana, India. It lies on National Highway 14 near Loharu, on the Delhi–Pilani route. There are five schools in the town: the Arya senior secondary school, the Shiv senior secondary school, the Genius senior secondary school, the Government senior secondary school, and the BDM Institute. All of them are well-known schools in the town and the surrounding villages. Dhigawa is known for its grain trade in the nearby area, and the town also has a BEd college. There are many temples, including Dada Bhomiya (Bhayia) Mandir, Shri Krishna Parnami Mandir, Ram Mandir, Shiv Mandir, Shyam Mandir and Shri Balaji Mandir (Bal Club). People are said to come from across India to the Dada Bhomiya Temple with their wishes. Dhigawa Mandi has a prominent grain market that serves villages within a 30 km radius, particularly villages in Rajasthan. Because the surrounding area is cultivated using sprinkler irrigation, the town is also a base for farmers' needs such as seeds, pesticides, and cultivation machinery and its repair. As a hub of commercial activity, the transportation of goods is a major occupation, and the village has a number of trucks, TATA 407s and tractor trolleys. Transport to and from Delhi and Pilani is also good, with both bus and taxi services, and almost 1,000 people travel to or from the town daily. The town is known for its international athletes (Shakti Singh, discus thrower and shot putter; Anil Sangwan, discus and javelin thrower), chartered accountants, doctors (Dr. Vinod Jangra, MS; Dr. Seema Sharma, MBBS, MD, now working at Rohtak PGI) and engineers (Hanuman Prasad Jangra, a civil engineer working in the Railways as a Class I officer). Major Ram Niwas Sheoran (Retd.) is also from the nearby village of Allaudinpur (Bhungla). Facilities available in the town include an Okinawa electric vehicle showroom operated by OM E-Vehicle Services, well-known educational institutes, a sports stadium, a grain market, a 22- to 24-hour electricity supply, 24-hour transport availability, health care facilities (notably Sunil Hospital and Parsuti Graha), and the Azad market.

IT Center
IT and e-governance facilities in the town include the Rajeev Gandhi Computer Saksharta Mission, Haryana Knowledge Corporation Ltd (HKCL), NIELIT (National Institute of Electronics and Information Technology) and Common Services Centres (CSC).

Common Services Centres
Implemented under the National e-Governance Plan (NeGP) formulated by the Department of Electronics and Information Technology (DeitY), Government of India, the Common Services Centres (CSCs) are ICT-enabled front-end service delivery points at the village level for the delivery of government, financial, social and private-sector services in the areas of agriculture, health, education, entertainment, FMCG products, banking, insurance, pensions, utility payments, etc.

HS-CIT
HS-CIT is an information technology (IT) literacy course started by HKCL in 2014. The course consists of reading and understanding a highly illustrated book; eLearning-based self-learning sessions delivered through HKCL's eLearning Revolution for All (ERA); hands-on practice sessions; and
learning facilitation by certified professionals, with academic interactions, assessments, and collaboration. Upon successfully completing the course, learners are awarded the Haryana State Certificate for Information Technology, jointly certified by the Haryana Board of School Education, Bhiwani, and Haryana Knowledge Corporation Limited, Panchkula.

Rajeev Gandhi Computer Saksharta Mission (RGCSM)
The Rajeev Gandhi Computer Saksharta Mission is registered under the Societies Registration Act (Act 21 of 1860), Reg. No. S/47172, and under section 17 of the Public Charitable Trust Act, 1882, vide Reg. No. 4286/IV, Government of NCT of Delhi. It works in various fields of programme and commercial training conducted by the state and central governments, with the aim of reaching every section of society, and is ISO 9001:2008 certified. RGCSM is associated with UP Electronics Corporation Ltd. (UPLC), the Planning Commission of the Government of India, the National Skill Development Corporation (NSDC), New Delhi, the National Skill Development Authority (NSDA), New Delhi, the Retailers Association's Skill Council of India (RASCI), Mumbai, the Telecom Sector Skill Council (TSSC), New Delhi, the Banking, Financial Services and Insurance Sector Skill Council of India (BFSISSC), New Delhi, the National Digital Literacy Mission (NDLM), Tata Consultancy Services (TCS-iON), and the West Bengal State Rural Development Agency (WBSRDA). RGCSM provides training in computer software, hardware and networking, accounts, IT and ITES, skill development and various other sectors, and has conducted commercial training and skill development programmes for 20 years. It operates nationwide with almost 2,500 Authorized Study Centres (ASCs) and a network spanning 22 states. The society works towards the Indian government's slogan of "Information technology for all"; to help meet the government's projected requirement of about 22 lakh IT technologists and a further 10 lakh ancillary computer operators and specialists in industrial development and small-scale industries, the society contributes through the Rajeev Gandhi Computer Saksharta Mission. The mission is named after former Prime Minister Rajiv Gandhi, who is credited with introducing the computer revolution to India. The main function of the society is to provide higher technical education at nominal charges to every group of society in urban and rural areas across India, at a time when many large institutions charge fees for their one-year or longer programmes that middle-class families cannot afford.

Cities and towns in Bhiwani district
371301
https://en.wikipedia.org/wiki/Electronic%20voting
Electronic voting
Electronic voting (also known as e-voting) is voting that uses electronic means to either aid or take care of casting and counting votes. Depending on the particular implementation, e-voting may use standalone electronic voting machines (also called EVM) or computers connected to the Internet. It may encompass a range of Internet services, from basic transmission of tabulated results to full-function online voting through common connectable household devices. The degree of automation may be limited to marking a paper ballot, or may be a comprehensive system of vote input, vote recording, data encryption and transmission to servers, and consolidation and tabulation of election results. A worthy e-voting system must perform most of these tasks while complying with a set of standards established by regulatory bodies, and must also be capable to deal successfully with strong requirements associated with security, accuracy, integrity, swiftness, privacy, auditability, accessibility, cost-effectiveness, scalability and ecological sustainability. Electronic voting technology can include punched cards, optical scan voting systems and specialized voting kiosks (including self-contained direct-recording electronic voting systems, or DRE). It can also involve transmission of ballots and votes via telephones, private computer networks, or the Internet. In general, two main types of e-voting can be identified: e-voting which is physically supervised by representatives of governmental or independent electoral authorities (e.g. electronic voting machines located at polling stations); remote e-voting via the Internet (also called i-voting) where the voter submits his or her vote electronically to the election authorities, from any location. Benefits Electronic voting technology intends to speed the counting of ballots, reduce the cost of paying staff to count votes manually and can provide improved accessibility for disabled voters. Also in the long term, expenses are expected to decrease. Results can be reported and published faster. Voters save time and cost by being able to vote independently from their location. This may increase overall voter turnout. The citizen groups benefiting most from electronic elections are the ones living abroad, citizens living in rural areas far away from polling stations and the disabled with mobility impairments. Concerns It has been demonstrated that as voting systems become more complex and include software, different methods of election fraud become possible. Others also challenge the use of electronic voting from a theoretical point of view, arguing that humans are not equipped for verifying operations occurring within an electronic machine and that because people cannot verify these operations, the operations cannot be trusted. Furthermore, some computing experts have argued for the broader notion that people cannot trust any programming they did not author. The use of electronic voting in elections remains a contentious issue. Some countries such as Netherlands and Germany have stopped using it after it was shown to be unreliable, while the Indian Election commission recommends it. The involvement of numerous stakeholders including companies that manufacture these machines as well as political parties that stand to gain from rigging complicates this further. 
Critics of electronic voting, including security analyst Bruce Schneier, note that "computer security experts are unanimous on what to do (some voting experts disagree, but it is the computer security experts who need to be listened to; the problems here are with the computer, not with the fact that the computer is being used in a voting application)... DRE machines must have a voter-verifiable paper audit trails... Software used on DRE machines must be open to public scrutiny" to ensure the accuracy of the voting system. Verifiable ballots are necessary because computers can and do malfunction, and because voting machines can be compromised. Many insecurities have been found in commercial voting machines, such as using a default administration password. Cases have also been reported of machines making unpredictable, inconsistent errors. Key issues with electronic voting are therefore the openness of a system to public examination from outside experts, the creation of an authenticatable paper record of votes cast and a chain of custody for records. And, there is a risk that commercial voting machines results are changed by the company providing the machine. There is no guarantee that results are collected and reported accurately. There has been contention, especially in the United States, that electronic voting, especially DRE voting, could facilitate electoral fraud and may not be fully auditable. In addition, electronic voting has been criticised as unnecessary and expensive to introduce. While countries like India continue to use electronic voting, several countries have cancelled e-voting systems or decided against a large-scale rollout, notably the Netherlands, Ireland, Germany and the United Kingdom due to issues in reliability of EVMs. Moreover, people without internet access and/or the skills to use it are excluded from the service. The so-called digital divide describes the gap between those who have access to the internet and those who do not. Depending on the country or even regions in a country the gap differs. This concern is expected to become less important in future since the number of internet users tends to increase. The main psychological issue is trust. Voters fear that their vote could be changed by a virus on their PC or during transmission to governmental servers. Expenses for the installation of an electronic voting system are high. For some governments they may be too high so that they do not invest. This aspect is even more important if it is not sure whether electronic voting is a long-term solution. Types of system Electronic voting systems for electorates have been in use since the 1960s when punched card systems debuted. Their first widespread use was in the USA where 7 counties switched to this method for the 1964 presidential election. The newer optical scan voting systems allow a computer to count a voter's mark on a ballot. DRE voting machines which collect and tabulate votes in a single machine, are used by all voters in all elections in Brazil and India, and also on a large scale in Venezuela and the United States. They have been used on a large scale in the Netherlands but have been decommissioned after public concerns. In Brazil, the use of DRE voting machines has been associated with a decrease error-ridden and uncounted votes, promoting a larger enfranchisement of mainly less educated people in the electoral process, shifting government spending toward public healthcare, particularly beneficial to the poor. 
Internet voting systems have gained popularity and have been used for government elections and referendums in Estonia, and Switzerland as well as municipal elections in Canada and party primary elections in the United States and France. Internet voting has also been widely used in sub-national participatory budgeting processes, including in Brazil, France, United States, Portugal and Spain. There are also hybrid systems that include an electronic ballot marking device (usually a touch screen system similar to a DRE) or other assistive technology to print a voter verified paper audit trail, then use a separate machine for electronic tabulation. Paper-based electronic voting system Paper-based voting systems originated as a system where votes are cast and counted by hand, using paper ballots. With the advent of electronic tabulation came systems where paper cards or sheets could be marked by hand, but counted electronically. These systems included punched card voting, marksense and later digital pen voting systems. These systems can include a ballot marking device or electronic ballot marker that allows voters to make their selections using an electronic input device, usually a touch screen system similar to a DRE. Systems including a ballot marking device can incorporate different forms of assistive technology. In 2004, Open Voting Consortium demonstrated the 'Dechert Design', a General Public License open source paper ballot printing system with open source bar codes on each ballot. Direct-recording electronic (DRE) voting system A direct-recording electronic (DRE) voting machine records votes by means of a ballot display provided with mechanical or electro-optical components that can be activated by the voter (typically buttons or a touchscreen); that processes data with computer software; and that records voting data and ballot images in memory components. After the election it produces a tabulation of the voting data stored in a removable memory component and as a printed copy. The system may also provide a means for transmitting individual ballots or vote totals to a central location for consolidating and reporting results from precincts at the central location. These systems use a precinct count method that tabulates ballots at the polling place. They typically tabulate ballots as they are cast and print the results after the close of polling. In 2002, in the United States, the Help America Vote Act mandated that one handicapped accessible voting system be provided per polling place, which most jurisdictions have chosen to satisfy with the use of DRE voting machines, some switching entirely over to DRE. In 2004, 28.9% of the registered voters in the United States used some type of direct recording electronic voting system, up from 7.7% in 1996. In 2004, India adopted Electronic Voting Machines (EVM) for its elections to its parliament with 380 million voters casting their ballots using more than one million voting machines. The Indian EVMs are designed and developed by two government-owned defence equipment manufacturing units, Bharat Electronics Limited (BEL) and Electronics Corporation of India Limited (ECIL). Both systems are identical, and are developed to the specifications of Election Commission of India. The system is a set of two devices running on 7.5 volt batteries. One device, the voting Unit is used by the voter, and another device called the control unit is operated by the electoral officer. Both units are connected by a five-metre cable. 
The voting unit has a blue button for each candidate. The unit can hold 16 candidates, but up to four units can be chained, to accommodate 64 candidates. The control unit has three buttons on the surface – one button to release a single vote, one button to see the total number of votes cast till now, and one button to close the election process. The result button is hidden and sealed. It cannot be pressed unless the close button has already been pressed. A controversy was raised when the voting machine malfunctioned which was shown in Delhi assembly. On 9 April 2019, the Supreme Court ordered the ECI to increase voter-verified paper audit trail (VVPAT) slips vote count to five randomly selected EVMs per assembly constituency, which means ECI has to count VVPAT slips of 20,625 EVMs before it certifies the final election results. Public network DRE voting system A public network DRE voting system is an election system that uses electronic ballots and transmits vote data from the polling place to another location over a public network. Vote data may be transmitted as individual ballots as they are cast, periodically as batches of ballots throughout the election day, or as one batch at the close of voting. This includes Internet voting as well as telephone voting. Public network DRE voting system can utilize either precinct count or central count method. The central count method tabulates ballots from multiple precincts at a central location. Internet voting can use remote locations (voting from any Internet capable computer) or can use traditional polling locations with voting booths consisting of Internet connected voting systems. Corporations and organizations routinely use Internet voting to elect officers and board members and for other proxy elections. Internet voting systems have been used privately in many modern nations and publicly in the United States, the UK, Switzerland and Estonia. In Switzerland, where it is already an established part of local referendums, voters get their passwords to access the ballot through the postal service. Most voters in Estonia can cast their vote in local and parliamentary elections, if they want to, via the Internet, as most of those on the electoral roll have access to an e-voting system, the largest run by any European Union country. It has been made possible because most Estonians carry a national identity card equipped with a computer-readable microchip and it is these cards which they use to get access to the online ballot. All a voter needs is a computer, an electronic card reader, their ID card and its PIN, and they can vote from anywhere in the world. Estonian e-votes can only be cast during the days of advance voting. On election day itself people have to go to polling stations and fill in a paper ballot. Online voting Security experts have found security problems in every attempt at online voting, including systems in Australia, Estonia Switzerland, Russia, and the United States. It has been argued political parties that have more support from the less fortunate—who are unfamiliar with the Internet—may suffer in the elections due to e-voting, which tends to increase voting in the upper/middle class. It is unsure as to whether narrowing the digital divide would promote equal voting opportunities for people across various social, economic, and ethnic backgrounds. In the long run, this is contingent not only on internet accessibility but also depends on people's level of familiarity with the Internet. 
The effects of internet voting on overall voter turnout are unclear. A 2017 study of online voting in two Swiss cantons found that it had no effect on turnout, and a 2009 study of Estonia's national election found similar results. To the contrary, however, the introduction of online voting in municipal elections in the Canadian province of Ontario resulted in an average increase in turnout of around 3.5 percentage points. Similarly, a further study of the Swiss case found that while online voting did not increase overall turnout, it did induce some occasional voters to participate who would have abstained were online voting not an option. A paper on “remote electronic voting and turnout in the Estonian 2007 parliamentary elections” showed that rather than eliminating inequalities, e-voting might have enhanced the digital divide between higher and lower socioeconomic classes. People who lived greater distances from polling areas voted at higher levels with this service now available. The 2007 Estonian elections yielded a higher voter turnout from those who lived in higher income regions and who received formal education. Still regarding the Estonian Internet voting system, it was proved to be more cost-efficient than the rest of the voting systems offered in 2017 local elections. Electronic voting is perceived to be favored moreover by a certain demographic, namely the younger generation such as Generation X and Y voters. However, in recent elections about a quarter of e-votes were cast by the older demographic, such as individuals over the age of 55. Including this, about 20% of e-votes came from voters between the ages of 45 and 54. This goes to show that e-voting is not supported exclusively by the younger generations, but finding some popularity amongst Gen X and Baby Boomers as well. Online voting is widely used privately for shareholder votes. The election management companies do not promise accuracy or privacy. In fact one company uses an individual's past votes for research, and to target ads. Analysis Electronic voting systems may offer advantages compared to other voting techniques. An electronic voting system can be involved in any one of a number of steps in the setup, distributing, voting, collecting, and counting of ballots, and thus may or may not introduce advantages into any of these steps. Potential disadvantages exist as well including the potential for flaws or weakness in any electronic component. Charles Stewart of the Massachusetts Institute of Technology estimates that 1 million more ballots were counted in the 2004 USA presidential election than in 2000 because electronic voting machines detected votes that paper-based machines would have missed. In May 2004 the U.S. Government Accountability Office released a report titled "Electronic Voting Offers Opportunities and Presents Challenges", analyzing both the benefits and concerns created by electronic voting. A second report was released in September 2005 detailing some of the concerns with electronic voting, and ongoing improvements, titled "Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Under Way, but Key Activities Need to Be Completed". Electronic ballots Electronic voting systems may use electronic ballot to store votes in computer memory. Systems which use them exclusively are called DRE voting systems. When electronic ballots are used there is no risk of exhausting the supply of ballots. 
Additionally, these electronic ballots remove the need for printing of paper ballots, a significant cost. When administering elections in which ballots are offered in multiple languages (in some areas of the United States, public elections are required by the National Voting Rights Act of 1965), electronic ballots can be programmed to provide ballots in multiple languages for a single machine. The advantage with respect to ballots in different languages appears to be unique to electronic voting. For example, King County, Washington's demographics require them under U.S. federal election law to provide ballot access in Chinese. With any type of paper ballot, the county has to decide how many Chinese-language ballots to print, how many to make available at each polling place, etc. Any strategy that can assure that Chinese-language ballots will be available at all polling places is certain, at the very least, to result in a significant number of wasted ballots. (The situation with lever machines would be even worse than with paper: the only apparent way to reliably meet the need would be to set up a Chinese-language lever machine at each polling place, few of which would be used at all.) Critics argue the need for extra ballots in any language can be mitigated by providing a process to print ballots at voting locations. They argue further, the cost of software validation, compiler trust validation, installation validation, delivery validation and validation of other steps related to electronic voting is complex and expensive, thus electronic ballots are not guaranteed to be less costly than printed ballots. Accessibility Electronic voting machines can be made fully accessible for persons with disabilities. Punched card and optical scan machines are not fully accessible for the blind or visually impaired, and lever machines can be difficult for voters with limited mobility and strength. Electronic machines can use headphones, sip and puff, foot pedals, joy sticks and other adaptive technology to provide the necessary accessibility. Organizations such as the Verified Voting Foundation have criticized the accessibility of electronic voting machines and advocate alternatives. Some disabled voters (including the visually impaired) could use a tactile ballot, a ballot system using physical markers to indicate where a mark should be made, to vote a secret paper ballot. These ballots can be designed identically to those used by other voters. However, other disabled voters (including voters with dexterity disabilities) could be unable to use these ballots. Cryptographic verification The concept of election verifiability through cryptographic solutions has emerged in the academic literature to introduce transparency and trust in electronic voting systems. It allows voters and election observers to verify that votes have been recorded, tallied and declared correctly, in a manner independent from the hardware and software running the election. Three aspects of verifiability are considered: individual, universal, and eligibility. Individual verifiability allows a voter to check that her own vote is included in the election outcome, universal verifiability allows voters or election observers to check that the election outcome corresponds to the votes cast, and eligibility verifiability allows voters and observers to check that each vote in the election outcome was cast by a uniquely registered voter. 
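As a deliberately simplified illustration of individual verifiability (a sketch only, not any specific deployed protocol, with purely hypothetical file names), imagine a system that gives each voter an opaque tracking code and later publishes the full list of recorded codes; the voter can then check that their ballot was included without the code itself revealing how they voted:
$ cat receipt.txt
4f8a1c9e...                                  # opaque tracking code handed to the voter
$ grep -c "$(cat receipt.txt)" bulletin_board.txt
1                                            # the code appears exactly once in the published record
Real end-to-end verifiable schemes achieve this effect with cryptographic commitments and verifiable tallying rather than plain lookups, so that a receipt proves inclusion in the count without enabling vote selling or coercion.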
Voter intent Electronic voting machines are able to provide immediate feedback to the voter detecting such possible problems as undervoting and overvoting which may result in a spoiled ballot. This immediate feedback can be helpful in successfully determining voter intent. Transparency It has been alleged by groups such as the UK-based Open Rights Group that a lack of testing, inadequate audit procedures, and insufficient attention given to system or process design with electronic voting leaves "elections open to error and fraud". In 2009, the Federal Constitutional Court of Germany found that when using voting machines the "verification of the result must be possible by the citizen reliably and without any specialist knowledge of the subject." The DRE Nedap-computers used till then did not fulfill that requirement. The decision did not ban electronic voting as such, but requires all essential steps in elections to be subject to public examinability. In 2013, The California Association of Voting Officials was formed to maintain efforts toward publicly owned General Public License open source voting systems Coercion evidence In 2013, researchers from Europe proposed that the electronic voting systems should be coercion evident. There should be a public evidence of the amount of coercion that took place in a particular elections. An internet voting system called "Caveat Coercitor" shows how coercion evidence in voting systems can be achieved. Audit trails A fundamental challenge with any voting machine is to produce evidence that the votes were recorded as cast and tabulated as recorded. Election results produced by voting systems that rely on voter-marked paper ballots can be verified with manual hand counts (either valid sampling or full recounts). Paperless ballot voting systems must support auditability in different ways. An independently auditable system, sometimes called an Independent Verification, can be used in recounts or audits. These systems can include the ability for voters to verify how their votes were cast or enable officials to verify that votes were tabulated correctly. A discussion draft argued by researchers at the National Institute of Standards and Technology (NIST) states, "Simply put, the DRE architecture’s inability to provide for independent audits of its electronic records makes it a poor choice for an environment in which detecting errors and fraud is important." The report does not represent the official position of NIST, and misinterpretations of the report has led NIST to explain that "Some statements in the report have been misinterpreted. The draft report includes statements from election officials, voting system vendors, computer scientists and other experts in the field about what is potentially possible in terms of attacks on DREs. However, these statements are not report conclusions." Various technologies can be used to assure DRE voters that their votes were cast correctly, and allow officials to detect possible fraud or malfunction, and to provide a means to audit the tabulated results. Some systems include technologies such as cryptography (visual or mathematical), paper (kept by the voter or verified and left with election officials), audio verification, and dual recording or witness systems (other than with paper). Dr. Rebecca Mercuri, the creator of the Voter Verified Paper Audit Trail (VVPAT) concept (as described in her Ph.D. 
dissertation in October 2000 on the basic voter verifiable ballot system), proposes to answer the auditability question by having the voting machine print a paper ballot or other paper facsimile that can be visually verified by the voter before being entered into a secure location. Subsequently, this is sometimes referred to as the "Mercuri method." To be truly voter-verified, the record itself must be verified by the voter and able to be done without assistance, such as visually or audibly. If the voter must use a bar-code scanner or other electronic device to verify, then the record is not truly voter-verifiable, since it is actually the electronic device that is verifying the record for the voter. VVPAT is the form of Independent Verification most commonly found in elections in the United States and other countries such as Venezuela. End-to-end auditable voting systems can provide the voter with a receipt that can be taken home. This receipt does not allow voters to prove to others how they voted, but it does allow them to verify that the system detected their vote correctly. End-to-end (E2E) systems include Punchscan, ThreeBallot and Prêt à Voter. Scantegrity is an add-on that extends current optical scan voting systems with an E2E layer. The city of Takoma Park, Maryland used Scantegrity II for its November, 2009 election. Systems that allow the voter to prove how they voted are never used in U.S. public elections, and are outlawed by most state constitutions. The primary concerns with this solution are voter intimidation and vote selling. An audit system can be used in measured random recounts to detect possible malfunction or fraud. With the VVPAT method, the paper ballot is often treated as the official ballot of record. In this scenario, the ballot is primary and the electronic records are used only for an initial count. In any subsequent recounts or challenges, the paper, not the electronic ballot, would be used for tabulation. Whenever a paper record serves as the legal ballot, that system will be subject to the same benefits and concerns as any paper ballot system. To successfully audit any voting machine, a strict chain of custody is required. The solution was first demonstrated (New York City, March 2001) and used (Sacramento, California 2002) by AVANTE International Technology, Inc.. In 2004 Nevada was the first state to successfully implement a DRE voting system that printed an electronic record. The $9.3 million voting system provided by Sequoia Voting Systems included more than 2,600 AVC EDGE touchscreen DREs equipped with the VeriVote VVPAT component. The new systems, implemented under the direction of then Secretary of State Dean Heller replaced largely punched card voting systems and were chosen after feedback was solicited from the community through town hall meetings and input solicited from the Nevada Gaming Control Board. Hardware Inadequately secured hardware can be subject to physical tampering. Some critics, such as the group "Wij vertrouwen stemcomputers niet" ("We do not trust voting machines"), charge that, for instance, foreign hardware could be inserted into the machine, or between the user and the central mechanism of the machine itself, using a man in the middle attack technique, and thus even sealing DRE machines may not be sufficient protection. 
This claim is countered by the position that review and testing procedures can detect fraudulent code or hardware, if such things are present, and that a thorough, verifiable chain of custody would prevent the insertion of such hardware or software. Security seals are commonly employed in an attempt to detect tampering, but testing by Argonne National Laboratory and others demonstrates that existing seals can usually be quickly defeated by a trained person using low-tech methods. Software Security experts, such as Bruce Schneier, have demanded that voting machine source code should be publicly available for inspection. Others have also suggested publishing voting machine software under a free software license as is done in Australia. Testing and certification One method to detect errors with voting machines is parallel testing, which are conducted on the Election Day with randomly picked machines. The ACM published a study showing that, to change the outcome of the 2000 U.S. Presidential election, only 2 votes in each precinct would have needed to be changed. Cost Cost of having electronic machines receive the voter's choices, print a ballot and scan the ballots to tally results is higher than the cost of printing blank ballots, having voters mark them directly (with machine-marking only when voters want it) and scanning ballots to tally results, according to studies in Georgia, New York and Pennsylvania. Popular culture In the 2006 film Man of the Year starring Robin Williams, the character played by Williams—a comedic host of political talk show—wins the election for President of the United States when a software error in the electronic voting machines produced by the fictional manufacturer Delacroy causes votes to be tallied inaccurately. In Runoff, a 2007 novel by Mark Coggins, a surprising showing by the Green Party candidate in a San Francisco Mayoral election forces a runoff between him and the highly favored establishment candidate—a plot line that closely parallels the actual results of the 2003 election. When the private-eye protagonist of the book investigates at the behest of a powerful Chinatown businesswoman, he determines that the outcome was rigged by someone who defeated the security on the city's newly installed e-voting system. "Hacking Democracy" is a 2006 documentary film shown on HBO. Filmed over three years, it documents American citizens investigating anomalies and irregularities with electronic voting systems that occurred during America's 2000 and 2004 elections, especially in Volusia County, Florida. The film investigates the flawed integrity of electronic voting machines, particularly those made by Diebold Election Systems and culminates in the hacking of a Diebold election system in Leon County, Florida. The central conflict in the MMO video game Infantry resulted from the global institution of direct democracy through the use of personal voting devices sometime in the 22nd century AD. The practice gave rise to a 'voting class' of citizens composed mostly of homemakers and retirees who tended to be at home all day. Because they had the most free time to participate in voting, their opinions ultimately came to dominate politics. 
Electronic voting manufacturers AccuPoll Bharat Electronics Limited (India) Dominion Voting Systems (Canada) Electronics Corporation of India Ltd ES&S (United States) Hart InterCivic (United States) Nedap (Netherlands) Premier Election Solutions (formerly Diebold Election Systems) (United States) Safevote Sequoia Voting Systems (United States) Scytl (Spain) Smartmatic Academic efforts Bingo Voting DRE-i and DRE-ip Prêt à Voter Punchscan See also Certification of voting machines E-democracy Electoral fraud Soft error Vote counting system Voting machine References External links Electronic Vote around the World – Smartmatic Election Assistance Commission Vote.NIST.gov the National Institute of Standards and Technology Help America Vote Act page An Electronic Voting Case Study in KCA University, Kenya The Election Technology Library research list a comprehensive list of research relating to technology use in elections E-Voting information from ACE Project How do we vote in India with Electronic Voting machine NPR summary of current technology status in the states of the U.S., as of May 2008 Internet Voting in Estonia Progetto Salento eVoting a project for an e-voting test in Melpignano e Martignano (Lecce – Italy) designed by Prof. Marco Mancarella University of Salento a review of existing electronic voting systems and its verification systems in supervised environments Open Counting Systems Behind E-Voting VoteBox(tm) UK Online Voting Elections Electronic voting Voting
24092756
https://en.wikipedia.org/wiki/HiSoft
HiSoft
HiSoft Technology International Limited was a multinational information technology and business process outsourcing company headquartered in Dalian, China. Founded in 1996, HiSoft was listed on the NASDAQ public exchange in 2010. In November 2012, the company merged with China-based IT outsourcing industry peer VanceInfo to form Pactera. History HiSoft was founded in 1996 as Dalian Haihui Sci-Tech Co., Ltd. In 2002, the company established a Japan-based subsidiary, Haihui Sci-Tech Japan Co., Ltd., later renamed HiSoft Japan Co., Ltd. In 2004, HiSoft Technology International Limited, a Cayman Islands holding company had been formed with other units as wholly owned subsidiaries. They received venture capital funding from investors including Draper Fisher Jurvetson ePlanet Ventures, GGV Capital, GE Commercial Finance, International Finance Corporation, Intel Capital JAFCO Asia, Mitsubishi UFJ Securities, and Sumitomo Corporation Equity Asia Limited. With their initial public offering of $64–$74 million in American depositary shares on June 30, 2010, HiSoft became the 144th Chinese company to be listed on the NASDAQ public exchange. In July 2011, hiSoft acquired NouvEON Technology Partners, based in Charlotte, North Carolina. HiSoft acquired BearingPoint Australia for an undisclosed sum in July 2012. Operations HiSoft delivered software development, globalization, testing, and maintenance services to customers in the Americas, Europe and Asia, with focus on Fortune 500 firms in the telecommunication, software, financial services, pharmaceutical and manufacturing sectors. For 2009, the company reported approximately 35% of HiSoft's clients were Fortune 500, representing over 55% of revenue. They also reported 60% of their clients from US and Europe, 30% from Japan, and 10% from China. HiSoft's workforce grew from about 3000 at the end of 2007 to over 5000 in September 2010. HiSoft was the only Chinese company included on the 2010 Global Outsourcing 100 list of the International Association of Outsourcing Professionals, (in the list since its inception in 2006) and was in the top ten leaders in China, Japan. In addition to acquiring Envisage, Ensemble International and Teksen Systems, hiSoft established operations in the US, Singapore and Japan through acquisitions, partnerships and joint ventures. See also Software industry in China China Software Industry Association Dalian Software Park References External links hiSoft Official Website (English) Online companies of China Companies based in Dalian Software companies established in 1996 Software companies of China Companies formerly listed on the Nasdaq 1996 establishments in China Chinese brands
26267769
https://en.wikipedia.org/wiki/1995%20Cotton%20Bowl%20Classic
1995 Cotton Bowl Classic
The 1995 Mobil Cotton Bowl was the 59th Cotton Bowl Classic. The USC Trojans defeated the Texas Tech Red Raiders, 55–14. The Trojans took a 21–0 lead less than ten minutes into the game and led 34–0 at halftime. USC wide receiver Keyshawn Johnson, who finished with eight catches for a Cotton Bowl-record 222 yards and three touchdowns, was named offensive MVP. Trojan cornerback John Herpin had two interceptions, one for a touchdown, and was named defensive MVP. The game was televised nationally by NBC for the third consecutive year. The Cotton Bowl Classic would return to its longtime television home, CBS, the next year. It was also the last year that Mobil served as the game's title sponsor; the following year, the Cotton Bowl organizers began a seventeen-year relationship with what is now AT&T. Match-up USC's appearance was only the third in Cotton Bowl history by a team from the Pacific-10 Conference, following that of Oregon in 1949 and UCLA in 1989. Texas Tech's appearance was the last by a team from the Southwest Conference, which disbanded a year later. The Red Raiders finished 1-3 against ranked opponents, beating #19 Texas, but losing to #1 Nebraska, #21 Oklahoma, and #10 Texas A&M. They earned a share of the Southwest Conference championship, splitting it with Texas, Baylor, TCU, and Rice. Undefeated Texas A&M had the best record in the conference, but was ineligible for the conference title and could not play in a bowl game due to NCAA sanctions. The Longhorns were slated to play in the Sun Bowl, the Bears were slated to play in the Alamo Bowl, and the Horned Frogs were slated to play in the Independence Bowl, which left the Red Raiders to play in the Cotton Bowl Classic. Scoring summary USC - Shawn Walters 11 yard touchdown run (Ford kick), 6:51 remaining USC - Terry Barnum 19 yard touchdown pass from Rob Johnson (Ford kick), 6:39 remaining USC - John Herpin 26 yard interception return (Ford kick), 5:35 remaining USC - Keyshawn Johnson 12 yard touchdown pass from R. Johnson (Ford kick), 2:22 remaining USC - Ford 39 yard field goal, 6:50 remaining USC - Ford 42 yard field goal, 0:17 remaining USC - K. Johnson 22 yard touchdown pass from R. Johnson (Ford kick), 10:29 remaining USC - K. Johnson 86 yard touchdown pass from R. Johnson (Ford kick), 7:51 remaining Texas Tech - Zebbie Lethridge 5 yard touchdown run (Davis kick), 2:15 remaining USC - Jeff Diltz 2 yard touchdown pass from Brad Otton (Ford kick), 2:40 remaining Texas Tech - Stacy Mitchell 45 yard touchdown pass from Sone Cavazos (Davis kick), 0:00 remaining Wide receiver Keyshawn Johnson caught 8 passes for 222 yards and 3 touchdowns as USC trounced Texas Tech, who did not score until it was 48-0. Statistics References Cotton Bowl Cotton Bowl Classic USC Trojans football bowl games Texas Tech Red Raiders football bowl games Bowl Coalition January 1995 sports events in the United States 1995 in sports in Texas 1990s in Dallas 1995 in Texas
8313563
https://en.wikipedia.org/wiki/James%20A.%20D.%20W.%20Anderson
James A. D. W. Anderson
James Arthur Dean Wallace Anderson, known as James Anderson, is a retired member of academic staff in the School of Systems Engineering at the University of Reading, England, where he used to teach compilers, algorithms, fundamentals of computer science and computer algebra, and in the past he has taught programming and computer graphics. Anderson quickly gained publicity in December 2006 in the United Kingdom when the regional BBC South Today reported his claim of "having solved a 1200 year old problem", namely that of division by zero. However, commentators quickly responded that his ideas are just a variation of the standard IEEE 754 concept of NaN (Not a Number), which has been commonly employed on computers in floating point arithmetic for many years. Dr Anderson defended against the criticism of his claims on BBC Berkshire on 12 December 2006, saying, "If anyone doubts me I can hit them over the head with a computer that does it." Anderson was banned from teaching transreal arithmetic at the University of Reading in 2019 when he was reported to have been teaching it during a class "Fundamentals of Computer Science". Anderson's nullity and transreal arithmetic are not accepted by mathematicians and computer scientists, and are not a fundamental part of computer science. He quit shortly afterwards, around the end of 2019. Research and background Anderson is a member of the British Computer Society, the British Machine Vision Association, Eurographics, and the British Society for the Philosophy of Science. He is also a teacher in the Computer Science department (School of Systems Engineering) at the University of Reading. He was a psychology graduate who worked in the Electrical and Electronic Engineering departments at the University of Sussex and Plymouth Polytechnic (now the University of Plymouth). His doctorate is from the University of Reading for (in Anderson's words) "developing a canonical description of the perspective transformations in whole numbered dimensions". He has written two papers on division by zero and has invented what he calls the "Perspex machine". Anderson claims that "mathematical arithmetic is sociologically invalid" and that IEEE floating-point arithmetic, with NaN, is also faulty. Transreal arithmetic Anderson's transreal numbers were first mentioned in a 1997 publication, and made well known on the Internet in 2006, but not accepted as useful by the mathematics community. These numbers are used in his concept of transreal arithmetic and the Perspex machine. According to Anderson, transreal numbers include all of the real numbers, plus three others: infinity (∞), negative infinity (−∞) and "nullity" (Φ), a numerical representation of a non-number that lies outside of the affinely extended real number line. (Nullity, confusingly, has an existing mathematical meaning.) Anderson intends the axioms of transreal arithmetic to complement the axioms of standard arithmetic; they are supposed to produce the same result as standard arithmetic for all calculations where standard arithmetic defines a result. In addition, they are intended to define a consistent numeric result for the calculations which are undefined in standard arithmetic, such as division by zero. Transreal arithmetic and other arithmetics "Transreal arithmetic" closely resembles IEEE floating point arithmetic, a floating point arithmetic commonly used on computers. 
IEEE floating point arithmetic, like transreal arithmetic, uses affine infinity (two separate infinities, one positive and one negative) rather than projective infinity (a single unsigned infinity, turning the number line into a loop). Here are some identities in transreal arithmetic with the IEEE equivalents: 1/0 = ∞ (IEEE: +∞), −1/0 = −∞ (IEEE: −∞), and 0/0 = Φ (IEEE: NaN). The main difference is that IEEE arithmetic replaces the real (and transreal) number zero with positive and negative zero. (This is so that it can preserve the sign of a nonzero real number whose absolute value has been rounded down to zero. See also infinitesimal.) Division of any non-zero finite number by zero results in either positive or negative infinity. Another difference between transreal and IEEE floating-point operations is that nullity compares equal to nullity, whereas NaN does not compare equal to NaN. This is due to conflicting semantics of the equality operator rather than a computational difference. In both cases, NaN and nullity are special error values assigned to an indeterminate. In IEEE, the inequality holds because two expressions which both fail to have a numerical value cannot be numerically equivalent. In transreals, the equality allows certain identities to hold in which, although the result is not numerical per se, both sides are either equal or both nullity (e.g. (−x)/0 = −(x/0)). Anderson's analysis of the properties of transreal algebra is given in his paper on "perspex machines". Due to the more expansive definition of numbers in transreal arithmetic, several identities and theorems which apply to all numbers in standard arithmetic are not universal in transreal arithmetic. For instance, in transreal arithmetic, x − x = 0 is not true for all x, since Φ − Φ = Φ. That problem is addressed in ref. pg. 7. Similarly, it is not always the case in transreal arithmetic that a number can be cancelled with its reciprocal to yield 1. Cancelling zero with its reciprocal in fact yields nullity. Examining the axioms provided by Anderson, it is easy to see that any term which contains an occurrence of the constant Φ is provably equivalent to Φ. Formally, let t be any term with a sub-term Φ; then t = Φ is a theorem of the theory proposed by Anderson. Media coverage Anderson's transreal arithmetic, and concept of "nullity" in particular, were introduced to the public by the BBC with its report in December 2006 where Anderson was featured on a BBC television segment teaching schoolchildren about his concept of "nullity". The report implied that Anderson had discovered the solution to division by zero, rather than simply attempting to formalize it. The report also suggested that Anderson was the first to solve this problem, when in fact the result of zero divided by zero has been expressed formally in a number of different ways (for example, NaN). The BBC was criticized for irresponsible journalism, but the producers of the segment defended the BBC, stating that the report was a light-hearted look at a mathematical problem aimed at a mainstream, regional audience for BBC South Today rather than at a global audience of mathematicians. The BBC later posted a follow-up giving Anderson's response to many claims that the theory is flawed. Applications Anderson has been trying to market his ideas for transreal arithmetic and "Perspex machines" to investors. He claims that his work can produce computers which run "orders of magnitude faster than today's computers". He has also claimed that it can help solve such problems as quantum gravity, the mind-body connection, consciousness and free will. 
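The IEEE 754 behaviour contrasted with transreal arithmetic above — signed infinities, signed zero, and a NaN that does not compare equal to itself — can be observed directly in most programming languages. The following is a minimal illustrative C++ sketch, assuming IEEE 754 (IEC 559) doubles; it shows only the IEEE side, since nullity has no counterpart in standard C++:

```cpp
// Demonstrates the IEEE 754 behaviour discussed above; "nullity" is Anderson's
// transreal error value and does not exist in standard C++.
#include <cmath>
#include <iostream>
#include <limits>

int main() {
    static_assert(std::numeric_limits<double>::is_iec559, "IEEE 754 doubles assumed");

    volatile double zero = 0.0, one = 1.0;   // volatile prevents constant folding
    double pos_inf   = one / zero;           // IEEE:  1/0  -> +infinity
    double neg_inf   = -one / zero;          // IEEE: -1/0  -> -infinity
    double nan_value = zero / zero;          // IEEE:  0/0  -> NaN (cf. transreal nullity)

    std::cout << pos_inf << ' ' << neg_inf << ' ' << nan_value << '\n';

    // Unlike nullity, which compares equal to nullity, NaN is unordered:
    std::cout << std::boolalpha << (nan_value == nan_value) << '\n';  // false
    std::cout << std::isnan(nan_value) << '\n';                       // true

    // Signed zero preserves the sign of values rounded down to zero:
    std::cout << (one / (-zero)) << '\n';                             // -inf
}
```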
See also Wheel theory Division by zero Extended real number line References Further reading External links Reading University Profile page Book of Paragon — personal homepage Living people Alumni of the University of Reading Academics of the University of Reading English computer programmers English computer scientists Members of the British Computer Society Computer arithmetic Place of birth missing (living people) 1958 births
4946451
https://en.wikipedia.org/wiki/Earl%20McCullouch
Earl McCullouch
Earl R. McCullouch (born January 10, 1946) is a retired American football wide receiver. McCullouch was the world record holder for the 110 meter men's high hurdle sprint from July 1967 to July 1969. When attending the University of Southern California, McCullouch was a member of the USC Trojan Football teams (wide receiver) and the USC Track & Field teams (120 yard high hurdles and 4×110 sprint relay) in 1967 and 1968. The USC Track 4×110 yard relay team, for which McCullouch ran the start leg, set the world record in 1967 that remains today, as the metric 4 × 100 m relay is now the commonly contested event. High school career McCullouch attended Long Beach Polytechnic High School. He tied the national high school record (also held by Don Castronovo from Oceanside High School in Oceanside, New York, and Steve Caminiti from Crespi Carmelite High School in Encino, California) in the 180 yard low hurdles at 18.1. The record was never broken and the event was discontinued in regular high school competition in 1974. He swept both the 120 yard high hurdles and the 180 low hurdles at the CIF California State Meet in 1964 (defeating Caminiti). In 1964 McCullouch was named Co-Athlete of the Year in the California Interscholastic Federation (CIF) Southern Section by the Helms Athletic Foundation. He earned the award in conjunction with pole vaulter Paul Wilson. College career Next he attended community college and played football at Long Beach City College, before transferring to the University of Southern California. McCullouch played college football at the University of Southern California, where he was part of the 1967 National Championship team. He was one of five USC Trojans players taken in the first round of the 1968 NFL Draft after his senior year. McCullouch was known for having elite sprinter speed and used it on both the track and the football field. Wearing No. 22 during the 1967 and 1968 seasons, McCulloch played wide receiver on an offensive USC Trojan Football squad that featured tailback O. J. Simpson. Defensive coverages had difficulty covering McCullouch in pass routes and chasing him after pass completions due to his sprinter's speed. McCullouch also provided down-field blocking on break-away plays, often for 1968 Heisman Trophy winner Simpson. As a member of the USC Track & Field team, McCulloch was the NCAA 110 Yard High Hurdle champion in 1967 and 1968, the NCAA 60 yard indoor high hurdle champion in 1968, and was the lead leg sprinter of the USC NCAA 4 X 110 yard sprint relay team in 1967 and 1968 (the team also featured Simpson and future Olympian sprinter Lennox Miller). The USC Trojan sprint relay team (McCulloch, Fred Kuller, Simpson, and Lennox Miller – in order) set a 4 X 110 yard sprint relay world record (38.6 sec.) in the 1967 NCAA Track & Field Championships in Provo, Utah on June 17, 1967. In the era of metric-distance sprint world records, this world record still stands today and is likely not to be broken. McCullough was on the cover of the April 1968 issue of Track and Field News. Professional career As the world record holder and National Champion in the hurdles, McCullouch was a favorite for the Olympic Gold Medal. In 1968, the Olympic Trials held a Semi-Final event a week after the National Championships. There, Campbell hit several hurdles and finished poorly in 7th place. The final Olympic Trials and Olympics were scheduled for late in the year, September and October respectively, well into the football season. 
And while the Olympics meant glory, there was no money to be made in the amateur days of the Olympics. McCullouch had a tough choice between his two sports. He chose to enter the NFL draft. Willie Davenport went on to win both the trials and the Olympics. A year later, Davenport finally beat McCullouch's world record. McCullouch was drafted by the Detroit Lions as their second pick of the first round (24th overall). By the time the Olympic races rolled around, Detroit had already played 5 official games of the regular season and was about to take the lead in the Central Division. By that time, McCullouch had already amassed 419 yards receiving and scored three touchdowns, including an 80-yard reception, from the Lions' other first round pick Greg Landry, in his first NFL game. He finished the season with 680 yards receiving, plus another 13 in 3 rushing attempts, 5 touchdowns and a 43-yard per touch average and was named the NFL Rookie of the Year in 1968. He played 7 seasons for the Lions between 1968 and 1973, then finished off his career with a non-productive season with the New Orleans Saints in 1974. References External links California State Records before 2000 Database Football NFL.com Pro Football Reference LBCC Hall of Champions The Races of Earl McCullouch (Internet Archive) USC Track 1946 births Living people People from Clarksville, Texas American male hurdlers American football wide receivers Athletes (track and field) at the 1967 Pan American Games USC Trojans men's track and field athletes USC Trojans football players Detroit Lions players New Orleans Saints players National Football League Offensive Rookie of the Year Award winners Pan American Games gold medalists for the United States Pan American Games medalists in athletics (track and field) World record setters in athletics (track and field) Track and field athletes from California Track and field athletes in the National Football League Long Beach City Vikings football players Medalists at the 1967 Pan American Games
53108275
https://en.wikipedia.org/wiki/Blue%20Prism
Blue Prism
Blue Prism is the trading name of the Blue Prism Group plc, a British multinational software corporation that pioneered and makes enterprise robotic process automation (RPA) software that provides a digital workforce designed to automate complex, end-to-end operational activities. Blue Prism's headquarters are at 2 Cinnamon Park Crab Lane Warrington WA2 0XP, UK with regional offices in the U.S. and Australia. The company is listed on the London Stock Exchange AIM market. History Formation Blue Prism was founded in 2001 by a group of process automation experts to develop technology that could be used to improve the efficiency and effectiveness of organisations. Initially their focus was on the back office where they recognised an enormous unfulfilled need for automation. The company was co-founded by Alastair Bathgate and David Moss to provide a new approach that today is known as robotic process automation, or RPA. In 2003, Blue Prism's first commercial product, Automate, was launched. In 2005, the second version of Automate was released with features for large scale processing. Co-operative Financial Services began using Blue Prism software in 2005 to automate manual processes in customer services. Robotic Process Automation Robotic process automation (RPA) is the application of technology that provides organizations with a digital workforce that follows rule-based business processes and interacts with the organizations' systems in the same way that existing users currently do. Blue Prism has been credited for coining the term "Robotic Process Automation." RPA is a growing industry and is expected to reach $3.11 billion by 2025. It has been used to handle the requests generated by the General Data Protection Regulation; by Tokio Marine Kiln for back-office transaction framework; by the nutrition company, Fonterra, to fix quantity mismatches in planning software SAP; and by Milaha for the entry, processing and transfer of data. The independent market research company Forrester Research identified Blue Prism as one of three companies that is considered a leader in the robotic process automation field both in terms of their market presence as well as the quality of their offering in a 2017 study. An October 2018 study by Grand View Research, Inc. stated that the key companies in the RPA market included: Automation Anywhere, Inc.; Blue Prism Group PLC; UIPath; Be Informed B.V.; OpenSpan; and Jacada, Inc. In 2019, Gartner released its 'Magic Quadrant' for RPA, and Blue Prism was one of the leaders in the market. Initial public offering On 18 March 2016, Blue Prism undertook an IPO when the company floated on the London Stock Exchange AIM market with a market capitalisation of £48.5 million. The company's shares rose 44 percent on the first day of trading on AIM, under CEO Alastair Bathgate. Customers include O2, Co-operative Bank and Fidelity Investment Management. By November 2016, it had offices in Chicago and Miami, as well as the United Kingdom. On 6 January 2017 Blue Prism announced it would open new offices in Austin, Texas, while remaining based in London. At the time, it employed 86 people worldwide. In March 2017, a group of shareholders sold stakes in Blue Prism. At the time, Blue Prism remained based in Merseyside. In June 2017, Blue Prism announced that a new version of its software would run on public clouds such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Previously, the software had run on customers' own servers. 
In 2019, Blue Prism announced changes to its platform and the issuing of new stock. It included a new AI engine, an updated marketplace for extensions, and a new lab for in-house AI innovation. As of April 2020, the leadership team is made up of chairman and CEO Jason Kingdon and co-founder and CTO David Moss. Kingdon, an early investor in the company, became chairman in 2008 and led the company until its successful IPO in 2016. Kingdon returned as chairman in 2019 and became CEO in April 2020 when former CEO Alastair Bathgate stepped down. In September 2021, it was announced that Vista Equity Partners plans to acquire Blue Prism for £1.095 billion ($1.5 billion USD). Vista intends to merge Blue Prism into TIBCO Software. Vista is said to be considering “potentially material headcount reductions” of 8 to 10 per cent of the approximately 4,750 employees of the merged companies. Blue Prism technology Blue Prism is built on the Microsoft .NET Framework. It automates any application and supports any platform (mainframe, Windows, WPF, Java, web, etc.) presented in a variety of ways (terminal emulator, thick client, thin client, web browser, Citrix and web services). It has been designed for a multi-environment deployment model (development, test, staging, and production) with both physical and logical access controls. Blue Prism RPA software includes a centralised release management interface and process change distribution model providing high levels of visibility and control. Additional control is provided to the business via a centralised model for process development and re-use. Blue prism records every system login, change in management action, and decisions and actions taken by the robots to identify statistics and real-time operational analytics. The software supports regulatory contexts such as PCI-DSS, HIPAA and SOX, with a large number of controls in place to provide the necessary security and governance. All of the process coding is automated on the back end, allowing even non-technical users to automate a process by dragging components into an interface. In 2016, Blue Prism received one of the top honors at the AIconics Awards, named as The Best Enterprise Application of AI. In 2017, the company was named one of MIT Tech Review's 50 Smartest Companies and was the winner of the UK Tech Awards. In 2019, Blue Prism announced its idea for connected-RPA. Connected-RPA is the offering of an automation platform with AI and cognitive features built in. It includes features like a Digital Exchange, with online access to drag and drop AI, machine learning, and cognitive and disruptive technologies; a web-based tool that reduces the time to prepare for a RPA deployment; and an online community for sharing knowledge and best practices. The Digital Exchange gives customers and partners the ability to create and share tools that can be used with Blue Prism's software.  To encourage innovation, Blue Prism has an AI engine for building connectors to advanced AI tools from Amazon, Google, IBM, and other AI platforms. In the summer of 2019, Blue Prism acquired Thoughtonomy for $100m. Thoughtonomy was later renamed Blue Prism Cloud. The acquisition provided Blue Prism with Cloud capabilities, allowing them to offer Automation as a Service over the cloud. The digital workforce Blue Prism's digital workforce is built, managed and owned by the user or customer, spanning operations and technology, adhering to an enterprise-wide robotic operating model. 
It is code-free and can automate any software in a non-invasive way. The digital workforce can be applied to automate processes in any department where clerical or administrative work is performed across an organisation. Business and markets Blue Prism has been deployed in a number of industries including banking, finance and insurance, consumer package goods, legal services, public sector, professional services, healthcare and utilities. Its RPA software has been purchased by companies such as Coca-Cola, Pfizer, Prudential, Sony, and Walgreens. In 2018, the UK water company United Utilities purchased robots from Blue Prism in order to use RPA to streamline its processes and increase efficiency. The robots monitor signals and alerts on the water network and automatically inform engineers as to any issues. The company IEG4 worked with Blue Prism to improve the handling of benefits claims with more efficient data processing. The finance company Fannie Mae used an RPA platform from Blue Prism to automate a review and notification process within its mortgage operations area. Another finance institution, Mashreq, worked with Blue Prism to automate multiple functions, including banking operations, compliance, customer care and help desk operations.  In the healthcare industry, Ascension Health has used Blue Prism's RPA technology for back-office functions, and to manage the licensure and certification of clinicians. In 2021, Blue Prism expanded its partnership with ABBYY, integrating their process and task mining capabilities. Blue Prism Ventures In 2021, Blue Prism launched Blue Prism Ventures which helps venture partners find collaboration opportunities within the field of software robotic process automation. The company's first venture is in South Korea, Blue Prism Korea, with GTPlus Ltd. References External links Blueprism.com 2001 establishments in England Business software companies Automation software Artificial intelligence British companies established in 2001 Software companies established in 2001 Companies based in Warrington Software companies of England Companies listed on the Alternative Investment Market 2016 initial public offerings Announced mergers and acquisitions
58970426
https://en.wikipedia.org/wiki/Untitled%20Goose%20Game
Untitled Goose Game
Untitled Goose Game is a 2019 puzzle stealth game developed by House House and published by Panic. Players control a goose who bothers the inhabitants of an English village. The player must use the goose's abilities to manipulate objects and non-player characters to complete objectives. It was released for Microsoft Windows, macOS, Nintendo Switch, PlayStation 4, and Xbox One. Untitled Goose Game originated from a stock photograph of a goose that a House House employee posted in the company's internal communications. This sparked a conversation about geese; the team put the idea aside for a few months until they realized that it had the potential to be a fun game. Inspired by Super Mario 64, and the Hitman series, House House worked on combining stealth mechanics with a lack of violence to create humorous in-game scenarios. The game's unusual name came from a last-minute decision in preparing the title as an entry for a games festival, and it stuck. Untitled Goose Game received positive reviews, with critics praising its gameplay and humor, and was named the D.I.C.E. Game of the Year, among other accolades. By the end of 2019, the game had sold more than a million copies. Gameplay Set in an idyllic English village, players control a domestic goose that can honk, duck down, run, flap its wings, and grab objects with its beak to bother various human villagers. The village is split up into multiple areas, each of which has a "to do" list of objectives, such as stealing certain objects or tricking humans into doing certain things. When enough of these objectives are cleared (one fewer than the total), an additional objective is added which, once cleared, allows the goose to move on to the next area. After completing four areas, the goose enters a miniature model of the village. There, the goose steals a golden bell before going back through the previous areas while the villagers try to stop it. In the game's ending, the goose deposits the bell into a ditch full of several other bells it had stolen in the past. There are several hidden optional objectives, many of which require traversing multiple areas or completing an area within a time limit. Completing all the optional objectives rewards the player with a crown for the goose to wear. A co-operative local multiplayer option, added in a later update, allows a second player to control a Chinese goose, with both geese trying to accomplish the goals together. Development and release Untitled Goose Game was developed by four-person indie studio House House, based in Melbourne, Australia. The game originated from a stock photograph of a goose that an employee posted in the company's internal communications, which sparked a conversation about geese. The team put the idea aside for a few months until they realized that it had the potential to be a fun game. Untitled Goose Game was published by Panic for Windows, macOS, and Nintendo Switch on 20 September 2019. The game is House House's second project, and like their debut, was supported by the government organization Film Victoria, who assisted the studio in getting set up properly. House House cited Super Mario 64 as the initial inspiration for the type of game that they hoped to build. They wanted the player to control a character who could run around in a 3D environment. Their previous game, Push Me Pull You, had 2D art with flat colors. They used a similar aesthetic in Untitled Goose Game by choosing to use low poly meshes, flat colors and untextured 3D models. 
The game's playable character, the goose, was originally just a stock image and the idea was non-player characters (NPCs) would react to it. They implemented a system where the NPCs would tidy up after an item was moved. After restricting the field of view of the NPCs, the gameplay evolved into a unique stealth-like experience. Instead of remaining hidden like in most stealth games, the goal was to have the goose attract the attention of NPCs and not get caught. House House created a structure to the game using missions with specific targets similar to the assassinations in the Hitman series mostly as a joke. House House member Jake Strasser stated "It has a set-up and a punchline. By removing the violence from it, we just let the situations exist as a joke." The team opted for the English village as the game's setting, as its "properness" was seen as "the antithesis of what the goose was all about", according to developer Nico Disseldorp. The name of Untitled Goose Game was a result of having to come up with a title quickly on learning that the game got accepted to be shown at the Fantastic Arcade part of the Fantastic Fest in Texas, and without any other ideas, used the title of the gameplay video they had applied with for the submission, which stuck since then. The Untitled Goose Game title stuck with fans when they started to promote the game on social media. The only other title they had come up with at one point was Some Like it Honk as an alternative, but the team never gave it serious consideration. The game was revealed in October 2017 with a trailer. The trailer gained viral popularity on social media sites, leading the team to recognize they had a popular concept on which to build. Following the initial trailer's release, Untitled Goose Game was present at the Game Developers Conference, PAX Australia, and PAX West (Prime) events in 2018. At E3 2019, the game was announced to be released for PC on the Epic Games Store. House House elaborated on their decision, stating that Epic's offer of an exclusivity deal allowed the developers the stability to go from part-time to full-time developers. The development team has stated that they are investigating porting the game to additional platforms, including mobile devices. PlayStation 4 and Xbox One versions were released on 17 December 2019. The Xbox One version was included as a part of the Xbox Game Pass service. Physical releases for the PlayStation 4 and Nintendo Switch versions of the game were produced by iam8bit and released on 29 September 2020. iam8bit is also publishing the game's soundtrack on vinyl record for release the same day. A free update on 23 September 2020 added a co-operative multiplayer mode for two players. This update was launched alongside the game's release on the Steam and itch.io storefronts. Music The trailer, scored by composer Dan Golding, features musical passages from the fifth prelude in Claude Debussy's Préludes, Les collines d'Anacapri. Comments on the trailer praised the apparent "reactive music" system where the music stopped and started depending on the action in the game, though House House said later this was a misconception, as the stops and starts were actually part of Debussy's composition. The positive reception to the trailer's music led House House sought to include Debussy's work as part of the game's soundtrack, and to actually develop a "reactive music" system for the game. 
To accomplish this, Golding sliced two versions of the piano tracks — one performed normally, and the other performed quietly — up into roughly 400 sections. These sections were categorized based on intensity, and played based on what was happening in the game. For example, the game would play a section of the quieter version of Minstrels when the goose was stalking its prey, but switch to the regular version once the goose was being chased. An official soundtrack was digitally released by Decca Records on 27 March 2020. It features both the regular and quiet versions of each track, as well as two original ones, called "Waltz For House House", which plays on the menu in the PlayStation 4 version of Untitled Goose Game, and "Untitled Goose Radio", which includes every track played from the in-game radio. A portion of the score was also released on vinyl by Iam8bit, which was pressed with double grooves to emulate the randomness of the in-game music. In 2020, Dan Golding earned an ARIA Music Awards nomination for Best Original Soundtrack or Musical Theatre Cast Album. Reception Untitled Goose Game received "generally favorable" reviews according to review aggregator Metacritic. IGN gave the game an 8/10 rating and praised its silliness stating, "Untitled Goose Game is a brief but endlessly charming adventure that had me laughing, smiling, and eagerly honking the whole way through." Game Informer praised the game for its silliness and creativity, but felt that the game was shallow and repetitive, stating, "Untitled Goose Game is a great concept, and ends in the same charming way it started. Pranking people is fun, and doing it as a goose just adds to the thrill. Most people will play it for the silly premise, complete it in a few hours, and go on their merry way without touching it again. If you just want to mess with people as a goose, here’s your chance – but the shallowness and repetition hold it back from being a truly engaging game." Destructoid positively compared the game to Shaun the Sheep, stating, "Untitled Goose Game reminds me greatly of the animated series Shaun the Sheep. There's little dialog, plenty of antics, and humans who keep getting outsmarted by birds. Unlike the titular Shaun from the show though, the goose in Untitled Goose Game is not a loveable little scamp who always comes to the aid of his friends. No, this goose is a dick." Kotaku gave the game a positive review, praising the gameplay and the understated humor of its "brief, endlessly funny interactions", and finding "an insidious joy in drawing out increasingly infuriated reactions from the small town's people". Untitled Goose Game drew similar attention as Goat Simulator, both sharing the nature of being animal-based sandbox-style games to create chaos in. After release, clips and stills from the game were shared on social media, with it becoming an Internet meme. Sales More than 100,000 copies were sold within its first two weeks of release on the Nintendo Switch. It was noted for having topped the Nintendo Switch sales charts in Australia, the United Kingdom, and the United States, even above the mainline Nintendo game released on the same day, The Legend of Zelda: Link's Awakening. By the end of 2019, it had sold over a million copies across all platforms. 
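The reactive music system described above can be sketched in outline. The following is a hypothetical, illustrative selector — not House House's or Dan Golding's actual implementation — showing how pre-sliced sections tagged by intensity might be chosen as the game state changes; all names, fields and thresholds are invented for illustration:

```cpp
// Hypothetical sketch of an intensity-driven music selector, loosely modelled
// on the description above (two recordings of each piece sliced into short
// sections, tagged by intensity, and chosen from as the game state changes).
#include <cstdlib>
#include <random>
#include <string>
#include <utility>
#include <vector>

struct MusicSection {
    std::string clip;   // e.g. "minstrels_quiet_017.ogg" (invented name)
    int intensity;      // 0 = calm ... 10 = full chase
};

class MusicSelector {
public:
    explicit MusicSelector(std::vector<MusicSection> sections)
        : sections_(std::move(sections)), rng_(std::random_device{}()) {}

    // Pick a section whose intensity is close to the current game intensity,
    // e.g. low while the goose is sneaking, high while it is being chased.
    const MusicSection& next(int game_intensity) {
        std::vector<const MusicSection*> candidates;
        for (const auto& s : sections_)
            if (std::abs(s.intensity - game_intensity) <= 1)
                candidates.push_back(&s);
        if (candidates.empty())
            return sections_.front();  // fallback so something always plays
        std::uniform_int_distribution<std::size_t> pick(0, candidates.size() - 1);
        return *candidates[pick(rng_)];
    }

private:
    std::vector<MusicSection> sections_;
    std::mt19937 rng_;
};
```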
As part of a nationwide "Pay the Rent" campaign to provide reparations to the Indigenous Australians whose land had been taken from them by British colonisation, House House said it will donate 1% of revenues from Untitled Goose Game to the Wurundjeri, as their studio occupies land stolen from this nation. The end credits of the game contain the line "This game was made on the lands of the Wurundjeri people of the Kulin Nation. We pay our respects to their Elders, past and present. Sovereignty was never ceded." Awards References External links 2019 video games AIAS Game of the Year winners Cooperative video games Fictional geese Indie video games Internet memes introduced in 2019 MacOS games Nintendo Switch games PlayStation 4 games Puzzle video games Multiplayer and single-player video games Stealth video games Video game memes Video games about birds Video games developed in Australia Video games set in England Video games with cel-shaded animation Windows games Xbox One games
12395155
https://en.wikipedia.org/wiki/S.%20S.%20Ahluwalia
S. S. Ahluwalia
Surinderjeet Singh Ahluwalia (born 4 July 1951) is an Indian politician of the Bharatiya Janata Party (BJP). A former Union Minister of State in the Government of India, he is the member of Parliament of India representing Bardhaman-Durgapur Lok Sabha constituency in West Bengal in the 17th Lok Sabha (2019-2024). Previously he was a Member of Parliament representing Bihar and Jharkhand in the Rajya Sabha, the upper house of the Indian Parliament, over several terms, 1986–1992, 1992–1998 (as a member of Congress), and with BJP in 2000–2006, and 2006–2012. He subsequently entered the Lok Sabha, winning the 2014 Lok Sabha Elections from Darjeeling. He was earlier a member of the Indian National Congress. He was elected to the Rajya Sabha from Bihar in 1986 as a Congress member. Over the years of his career, he has held the posts of Minister for Urban Development, and Minister of Parliamentary Affairs in the Union Government headed by P. V. Narasimha Rao. He joined BJP in 1999. He was Deputy Leader of the Opposition in the Rajya Sabha till 2012 when he lost his Rajya Sabha seat in the Jharkhand Rajya Sabha elections. He is at present a Member of the National Executive Committee of the BJP. Career Ahluwalia is a lawyer by education. Political career Ahluwalia was a Member of Parliament from Rajya Sabha representing Bihar and Jharkhand in 1986-1992, 1992-1998, 2000–2006, and 2006-2012. He was elected to the Lok Sabha from the Darjeeling Parliamentary Constituency of West Bengal with active support of a local unrecognized party Gorkha Janmukti Morcha in May 2014. He served as Minister of State for Urban Affairs and Employment (Department of Urban Employment and Poverty Alleviation) and Parliamentary Affairs in P V Narasimha Rao Cabinet from 15 September 1995 to 16 May 1996. He was Deputy Leader of the Opposition in the Rajya Sabha from June 2010 to May 2012. He was inducted into the Union Council of Ministers as a Minister of State for Agriculture and Farmers Welfare and Parliamentary Affairs on 5 July 2016. Positions held 1984 - 86 Member, the G.S. Dhillon Committee constituted by Government of India for providing relief and rehabilitation to the victims of the November 1984 riots in the country 1986 - 92 Member, Consultative Committee for the Ministry of Science and Technology Member, General Council of the Indian School of Mines, Dhanbad 1987 - 88 Member, Select Committee on Medical Council Bill April 2000 - 2001 Member, Consultative Committee for the Ministry of Agriculture Sept 2000 - Aug 2004 Member, Committee on Finance 2001 Member, Consultative Committee for the Ministry of Information Technology Aug 2001 - April 2006 and June 2006 onwards Member, Business Advisory Committee Jan 2002 - Feb. 2004 Member, Consultative Committee for the Ministry of Communications and Information Technology Aug 2002 - Aug. 
2004 Member, Committee on Information Technology Jan 2003 - July 2004 Member, Committee of Privileges Aug 2004 - April 2006 and May 2006 onwards Member, Committee on Home Affairs Aug 2004 - April 2006 and June 2006 onwards Member, House Committee Sept 2004 - Oct 2007 Member, National Monitoring Committee for Minorities Education Oct 2004 - 2006 Member, Consultative Committee for the Ministry of Finance June 2006 - Sept 2006 Member, Committee on Rules Sept 2006 onwards Member, Committee of Privileges April 2007 onwards Convenor, Sub-Committee on Civil Defence and Rehabilitation of J and K Migrants of the Committee on Home Affairs Sept 2007 onwards Member, Committee on Finance & Member, Consultative Committee for the Ministry of Communications and Information Technology Speeches and other contributions in international forums 1989 - Attended the United Nations High Commission for Refugees (UNHCR) Human Rights Conference in Geneva, Switzerland as Alternate Leader of the Indian Delegation. Speech Transcripts from the Conference: Violation of Human Rights in the Occupied Arab Territories including Palestine Speech Transcript Right to Development Speech Transcript Human Rights and Fundamental Freedoms Speech Transcript Torture and Enforced Disappearances Speech Transcript Report of Sub-Commission Speech Transcript Freedom of Religion Speech Transcript 2002 - Attended the United Nations General Assembly (UNGA) in New York, USA as a Delegate. Speech Transcripts from the Conference: Promotion and Protection of the Rights of Children Speech Transcript Social Development including questions relating to the World Speech Transcript Gender Equality, development and peace for the 21st Century Speech Transcript 2002 - Attended the International Parliamentarians Association for Information Technology (IPAIT) I Conference in Seoul, Korea as Chair of the Steering Committee and moderated over Theme 2. Joint Communiqué 2002 - Attended the Commonwealth Parliamentary Association Conference in Windhoek, Namibia as Leader of the Indian Delegation. Speech Transcript 2008 - Attended the International Parliamentarians Association for Information Technology (IPAIT) VI Conference in Sofia, Bulgaria as Vice-President of IPAIT. Joint Communiqué 2010 - IPU: (i) Served as Reporteur in the First Standing Committee on Peace & International Security in the 109th Assembly of IPU in Geneva. (ii) Served as Vice-President of the Standing Committee on Democracy and Human Rights in the 122nd IPU Assembly in Bangkok, Thailand. 2012 April - Farewell speech in the Rajya Sabha upon his retirement. Joint Parliamentary Committees (JPC) Aug 1992 - Member of Joint Parliamentary Committee to inquire into Irregularities in Securities and Banking Transactions April 2001 - Member of Joint Parliamentary Committee on Stock Market Scam and matters relating thereto August 2003 - Member of Joint Parliamentary Committee on pesticide residues in food products and safety standards for soft drinks, fruit juices and other beverages JPC on Pesticide residue in soft drinks Report on Pesticide residue in soft drinks March 2011 - Member of Joint Parliamentary Committee to probe the irregular allocation of 2G Spectrum June 2015 - Chairman, Joint Committee of Parliament to look into provisions of Land Acquisition Amendment Bill 2015 References External links Shri S.S. 
Ahluwalia - Profile Living people 1951 births Narendra Modi ministry 16th Lok Sabha members Lok Sabha members from West Bengal Bharatiya Janata Party politicians from West Bengal Rajya Sabha members from Bihar Rajya Sabha members from Jharkhand Indian National Congress politicians People from Paschim Bardhaman district People from West Bengal University of Calcutta alumni Punjabi people Indian Sikhs People from Asansol 17th Lok Sabha members Rajya Sabha members from the Bharatiya Janata Party
28835030
https://en.wikipedia.org/wiki/Cppcheck
Cppcheck
Cppcheck is a static code analysis tool for the C and C++ programming languages. It is a versatile tool that can check non-standard code. The creator and lead developer is Daniel Marjamäki. Cppcheck is free software under the GNU General Public License. Features Cppcheck supports a wide variety of static checks that may not be covered by the compiler itself. These checks are static analysis checks that can be performed at a source code level. The program is directed towards static analysis checks that are rigorous, rather than heuristic in nature. Some of the checks that are supported include: Automatic variable checking Bounds checking for array overruns Classes checking (e.g. unused functions, variable initialization and memory duplication) Usage of deprecated or superseded functions according to Open Group Exception safety checking, for example usage of memory allocation and destructor checks Memory leaks, e.g. due to lost scope without deallocation Resource leaks, e.g. due to forgetting to close a file handle Invalid usage of Standard Template Library functions and idioms Dead code elimination using unusedFunction option Miscellaneous stylistic and performance errors As with many analysis programs, there are many unusual cases of programming idioms that may be acceptable in particular target cases or outside of the programmer's scope for source code correction. A study conducted in March 2009 identified several areas where false positives were found by Cppcheck, but did not specify the program version examined. Cppcheck has been identified for use in systems such as CERNs 4DSOFT meta analysis package, for code verification in high energy particle detector readout devices, system monitoring software for radio telescopes as well as in error analysis of large projects, such as OpenOffice.org and the Debian archive. Development The project is actively under development and is actively maintained in different distributions. It has found valid bugs in a number of popular projects such as the Linux kernel and MPlayer. Plugins Plugins for the following IDEs or text editors exist CLion Code::Blocks - integrated. CodeLite - integrated. Eclipse Emacs gedit Hudson Jenkins Kate KDevelop Qt Creator Sublime Text Visual Studio Yasca See also List of tools for static code analysis References External links Cross-platform free software Free software programmed in C++ Free software testing tools Software using the GPL license Static program analysis tools
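As a rough illustration of the defect categories listed above (bounds errors, memory and resource leaks, unused functions), the following small C++ file contains an example of each; assuming a standard Cppcheck installation, running it through the tool (for example with cppcheck --enable=all buggy.cpp) should flag them, though the exact message text varies between versions:

```cpp
// buggy.cpp -- deliberately defective code of the kinds Cppcheck checks for.
#include <fstream>

static void neverCalled() {}        // dead code: an "unused function" finding

int sumFirstTen() {
    int values[10];
    int total = 0;
    for (int i = 0; i <= 10; ++i) { // off-by-one: accesses values[10], out of bounds
        values[i] = i;
        total += values[i];
    }
    return total;
}

int leaky() {
    int* buffer = new int[100];     // memory leak: never delete[]'d
    buffer[0] = 42;
    return buffer[0];               // scope ends, allocation is lost
}

void forgottenFile() {
    std::ifstream* f = new std::ifstream("data.txt");  // resource/memory leak
    // ... f is never closed or deleted
}
```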
663489
https://en.wikipedia.org/wiki/Endre%20Szemer%C3%A9di
Endre Szemerédi
Endre Szemerédi (born August 21, 1940) is a Hungarian-American mathematician and computer scientist, working in the field of combinatorics and theoretical computer science. He has been the State of New Jersey Professor of computer science at Rutgers University since 1986. He also holds a professor emeritus status at the Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences. Szemerédi has won prizes in mathematics and science, including the Abel Prize in 2012. He has made a number of discoveries in combinatorics and computer science, including Szemerédi's theorem, the Szemerédi regularity lemma, the Erdős–Szemerédi theorem, the Hajnal–Szemerédi theorem and the Szemerédi–Trotter theorem. Early life Szemerédi was born in Budapest. Since his parents wished him to become a doctor, Szemerédi enrolled at a college of medicine, but he dropped out after six months (in an interview he explained it: "I was not sure I could do work bearing such responsibility."). He studied at Eötvös Loránd University in Budapest and received his PhD from Moscow State University. His adviser was Israel Gelfand. This stemmed from a misspelling, as Szemerédi originally wanted to study with Alexander Gelfond. Academic career Szemerédi has been the State of New Jersey Professor of computer science at Rutgers University since 1986. He has held visiting positions at Stanford University (1974), McGill University (1980), the University of South Carolina (1981–1983) and the University of Chicago (1985–1986). Work Endre Szemerédi has published over 200 scientific articles in the fields of discrete mathematics, theoretical computer science, arithmetic combinatorics and discrete geometry. He is best known for his proof from 1975 of an old conjecture of Paul Erdős and Pál Turán: if a sequence of natural numbers has positive upper density then it contains arbitrarily long arithmetic progressions. This is now known as Szemerédi's theorem. One of the lemmas introduced in his proof is now known as the Szemerédi regularity lemma, which has become an important lemma in combinatorics, being used for instance in property testing for graphs and in the theory of graph limits. He is also known for the Szemerédi–Trotter theorem in incidence geometry and the Hajnal–Szemerédi theorem and Ruzsa–Szemerédi problem in graph theory. Miklós Ajtai and Szemerédi proved the corners theorem, an important step toward higher-dimensional generalizations of the Szemerédi theorem. With Ajtai and János Komlós he proved the ct²/log t upper bound for the Ramsey number R(3,t), and constructed a sorting network of optimal depth. With Ajtai, Václav Chvátal, and Monroe M. Newborn, Szemerédi proved the famous Crossing Lemma, that a graph with n vertices and m edges, where m ≥ 4n, has at least m³/(64n²) crossings. With Paul Erdős, he proved the Erdős–Szemerédi theorem on the number of sums and products in a finite set. With Wolfgang Paul, Nick Pippenger, and William Trotter, he established a separation between nondeterministic linear time and deterministic linear time, in the spirit of the famous P versus NP problem. Awards and honors Szemerédi has won numerous awards and honors for his contribution to mathematics and computer science. A few of them are listed here: Grünwald Prize (1967) Grünwald Prize (1968) Rényi Prize (1973) George Pólya Prize for Achievement in Applied Mathematics (SIAM) (1975) Prize of the Hungarian Academy of Sciences (1979) State of New Jersey Professorship (1986) The Leroy P. 
Steele Prize for Seminal Contribution to Research (AMS), (2008) The Rolf Schock Prize in Mathematics for deep and pioneering work from 1975 on arithmetic progressions in subsets of the integers (2008) The Széchenyi Prize of the Hungarian Republic for his many fundamental contributions to mathematics and computer science (2012) The Abel Prize for his fundamental contributions to discrete mathematics and theoretical computer science (2012) Hungarian Order of Saint Stephen (2020) Szemerédi is a corresponding member (1982), and member (1987) of the Hungarian Academy of Sciences and a member (2010) of the National Academy of Sciences. He is also a member of the Institute for Advanced Study in Princeton, New Jersey and a permanent research fellow at the Alfréd Rényi Institute of Mathematics in Budapest. He was the Fairchild Distinguished Scholar at the California Institute of Technology in 1987–88. He is an honorary doctor of Charles University in Prague. He was the lecturer in the Forty-Seventh Annual DeLong Lecture Series at the University of Colorado. He is also a recipient of the Aisenstadt Chair at CRM, University of Montreal. In 2008 he was the Eisenbud Professor at the Mathematical Sciences Research Institute in Berkeley, California. In 2012, Szemerédi was awarded the Abel Prize “for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory” The Abel Prize citation also credited Szemerédi with bringing combinatorics to the centre-stage of mathematics and noted his place in the tradition of Hungarian mathematicians such as George Pólya who emphasized a problem-solving approach to mathematics. Szemerédi reacted to the announcement by saying that "It is not my own personal achievement, but recognition for this field of mathematics and Hungarian mathematicians," that gave him the most pleasure. Conferences On August 2–7, 2010, the Alfréd Rényi Institute of Mathematics and the János Bolyai Mathematical Society organized a conference in honor of the 70th birthday of Endre Szemerédi. Prior to the conference a volume of the Bolyai Society Mathematical Studies Series, An Irregular Mind, a collection of papers edited by Imre Bárány and József Solymosi, was published to celebrate Szemerédi's achievements on the occasion of his 70th birthday. Another conference devoted to celebrating Szemerédi's work is the Third Abel Conference: A Mathematical Celebration of Endre Szemerédi. Personal life Szemerédi is married and has five children. References External links Personal Homepage at the Alfréd Rényi Institute of Mathematics 6,000,000 and Abel Prize - Numberphile Interview by Gabor Stockert (translated from the Hungarian into English by Zsuzsanna Dancso) 1940 births Living people Institute for Advanced Study visiting scholars Rolf Schock Prize laureates Rutgers University faculty 20th-century Hungarian mathematicians 21st-century Hungarian mathematicians Combinatorialists Theoretical computer scientists American computer scientists American mathematicians Hungarian computer scientists Members of the Hungarian Academy of Sciences Hungarian emigrants to the United States Members of the United States National Academy of Sciences Abel Prize laureates
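For reference, the Erdős–Turán conjecture that Szemerédi proved in 1975, now known as Szemerédi's theorem, is usually stated in the following finitary form (a standard textbook formulation rather than a quotation from the original paper):

```latex
% Szemerédi's theorem, finitary form (standard statement).
% For every integer k >= 3 and every delta > 0 there is a threshold N_0(k, delta)
% beyond which every sufficiently dense subset of {1, ..., N} contains a
% k-term arithmetic progression.
\[
  \forall k \ge 3,\ \forall \delta > 0,\ \exists N_0(k,\delta):\quad
  N \ge N_0,\ A \subseteq \{1,\dots,N\},\ |A| \ge \delta N
  \;\Longrightarrow\;
  \exists\, a,\ d \ge 1:\ \{a,\ a+d,\ \dots,\ a+(k-1)d\} \subseteq A .
\]
```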
30313587
https://en.wikipedia.org/wiki/OpenNI
OpenNI
OpenNI or Open Natural Interaction is an industry-led non-profit organization and open source software project focused on certifying and improving interoperability of natural user interfaces and organic user interfaces for Natural Interaction (NI) devices, applications that use those devices, and middleware that facilitates access and use of such devices. PrimeSense, which was a founding member of OpenNI, shut down the original OpenNI project when it was acquired by Apple on November 24, 2013; since then, Occipital and other former partners of PrimeSense have kept a forked version of OpenNI 2 (OpenNI version 2) active as open source software, primarily for their own Structure SDK (Software Development Kit), which is used by their Structure product.

History
The organization was created in November 2010, with the website going public on December 8. One of the main members was PrimeSense, the company behind the technology used in the Kinect, a motion-sensing input device by Microsoft for the Xbox 360 video game console. In December 2010, PrimeSense, whose depth-sensing reference design the Kinect is based on, released their own open source drivers along with motion-tracking middleware called NITE. PrimeSense later announced that it had teamed up with Asus to develop a PC-compatible device similar to Kinect, which would be called Wavi Xtion and was scheduled for release in the second quarter of 2012. Their software has been used in a variety of open-source projects in academia and the hobbyist community. Software companies have since attempted to expand OpenNI's influence by making it dramatically simpler to work with and integrate the technology.

After the acquisition of PrimeSense by Apple, it was announced that the website OpenNI.org would be shut down on April 23, 2014. Immediately after the shutdown, organizations that used OpenNI preserved documentation and binaries for future use. Today, Occipital and other former partners of PrimeSense are still keeping a forked version of OpenNI 2 (OpenNI version 2) active as open source software for their Structure SDK and Structure product.

Natural Interaction Devices
Natural Interaction Devices or Natural Interfaces are devices that capture body movements and sounds to allow for a more natural interaction of users with computers in the context of a natural user interface. The Kinect and Wavi Xtion are examples of such devices.

OpenNI Framework
The OpenNI framework provides a set of open source APIs. These APIs are intended to become a standard for applications to access natural interaction devices. The API framework itself is also sometimes referred to by the name OpenNI SDK. The APIs provide support for:
Voice and voice command recognition
Hand gestures
Body motion tracking

Organization
Pioneering Members
PrimeSense - Natural Interaction & 3D Sensing
Willow Garage - personal robotics applications
ASUS - hardware manufacturer for full body motion apps and games

Middleware Partners
FORTH ICS – Institute of Computer Science
TipTep
University of Southern California – 3D Face Modeling & Recognition
Ayotle – Vision Software
SigmaRD

See also
Natural user interface
Organic user interface
Kinect (previously known as Project Natal) and PrimeSense

References

External links
Structure.io/OpenNI OpenNI2 source code repository, forked from PrimeSense OpenNI2 by Occipital
OpenNI Forums

Application programming interfaces Standards organizations
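To illustrate the style of device access the OpenNI 2 APIs provide, here is a minimal sketch that reads a single depth frame. It assumes the community-maintained "primesense" Python bindings for OpenNI 2 are installed and that the OpenNI 2 runtime and a compatible depth sensor are available; the package and function names below are those of that binding, not part of the OpenNI specification itself.

```python
# Minimal sketch: read one depth frame via the 'primesense' Python bindings
# for OpenNI 2 (assumes the OpenNI 2 runtime and a depth sensor are present).
from primesense import openni2

openni2.initialize()                   # load the OpenNI 2 runtime
dev = openni2.Device.open_any()        # open the first available NI device
depth = dev.create_depth_stream()      # create a depth VideoStream
depth.start()

frame = depth.read_frame()             # blocking read of one frame
pixels = frame.get_buffer_as_uint16()  # raw 16-bit depth values (typically millimeters)
print(len(pixels), "depth samples read")

depth.stop()
openni2.unload()
```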
51176872
https://en.wikipedia.org/wiki/Read%20%28Unix%29
Read (Unix)
read is a command found on Unix and Unix-like operating systems such as Linux. It reads a line of input from standard input, or from the file descriptor passed as an argument to its -u flag, and assigns it to a variable. In Unix shells such as Bash it is implemented as a shell builtin rather than as a separate executable file.

References

Standard Unix programs Unix process- and task-management-related software
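As a usage illustration (not drawn from the article's sources), the sketch below drives Bash's read builtin from Python, feeding it one line on standard input; the variable name and the echoed message are arbitrary examples.

```python
# Minimal sketch: invoke Bash's built-in `read` and feed it one line on stdin.
# The shell variable `name` and the greeting are arbitrary example values.
import subprocess

script = 'read name && echo "Hello, $name"'   # read one line into $name
result = subprocess.run(
    ["bash", "-c", script],
    input="Ada Lovelace\n",
    capture_output=True,
    text=True,
)
print(result.stdout.strip())   # -> Hello, Ada Lovelace
```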
33809690
https://en.wikipedia.org/wiki/Verve%20%28operating%20system%29
Verve (operating system)
Verve is a research operating system developed by Microsoft Research. Verve is verified end-to-end for type safety and memory safety.

Because of their complexity, operating systems have long been a holy grail of software verification. Operating systems are usually written in low-level languages, such as C, that provide very few guarantees. The Singularity project took the approach of writing an operating system in C#, a type-safe, memory-safe language. A weakness of this approach is that operating systems necessarily need to call lower-level code to, for instance, move the stack pointer. Verve addresses this problem by partitioning the operating system into verified assembly language, for the code that must be low-level, and a trusted interface to the rest of the operating system, which is written in C#. There is a trusted specification that guarantees the low-level assembly code does not modify the heap and that the high-level C# code does not modify the stacks.

Verve consists of a small Nucleus, which acts as a minimal hardware abstraction layer, and a Kernel, which uses primitives provided by the Nucleus to expose a more traditional interface to applications. All components of the system other than the Nucleus are written in managed C# code and compiled by Bartok (originally developed for the Singularity project) into typed assembly language (TAL), which is verified by a TAL checker.

The Nucleus implements a memory allocator and garbage collector, support for stack switching, and management of interrupt handlers. It is written in BoogiePL, which serves as input to Microsoft Research's Boogie verifier, which proves the Nucleus correct using Z3, a satisfiability modulo theories (SMT) solver. The Nucleus relies on the Kernel to implement threads, scheduling and synchronization, and to provide most interrupt handlers. The Kernel is not formally verified, so, for example, a bug in scheduling could cause the system to hang; even so, it cannot violate type or memory safety, and thus cannot directly cause undefined behavior. If it attempts to make invalid requests to the Nucleus, formal verification guarantees that the Nucleus handles the situation in a controlled manner.

Verve's trusted computing base (TCB) is limited to: Boogie/Z3 for verifying the Nucleus's correctness; BoogieASM for translating it into x86 assembly; the BoogiePL specification of how the Nucleus should behave; the TAL verifier; the assembler and linker; and the bootloader. Notably, neither the C# compiler/runtime nor the Bartok compiler is part of the TCB.

References
Safe to the Last Instruction: Automated Verification of a Type-Safe Operating System, Jean Yang and Chris Hawblitzel. Programming Language Design and Implementation, 2010.
Safe to the Last Instruction: Automated Verification of a Type-Safe Operating System, Jean Yang and Chris Hawblitzel. CACM Research Highlight. Communications of the ACM, September 2010.
Technical Perspective: Safety First!
Verve: A Type Safe Operating System. Interview with Chris Hawblitzel.
Verve: A Type Safe Operating System. OSnews.
Announcing Verve – A Type-Safe Operating System. InfoQ.

Microsoft operating systems Microsoft Research Microkernel-based operating systems Microkernels Nanokernels
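Verve's own proofs are written in BoogiePL and discharged by Boogie and Z3, not in Python. As a small illustration of the style of SMT-based checking involved, however, the following sketch uses Z3's Python bindings (the z3-solver package) to verify a trivial assertion about a one-line "program"; the property and variable names are invented for the example and are not taken from Verve.

```python
# Minimal sketch of SMT-based verification in the spirit of Boogie/Z3:
# prove that incrementing a non-negative integer keeps it non-negative
# (over mathematical integers, for simplicity). Requires the z3-solver package.
from z3 import Int, Solver, Implies, Not, unsat

x = Int("x")
x_next = x + 1                        # the "program": x := x + 1
prop = Implies(x >= 0, x_next >= 0)   # the property to verify

s = Solver()
s.add(Not(prop))                      # search for a counterexample
result = s.check()
if result == unsat:
    print("verified: no counterexample exists")
else:
    print("not verified:", result)
```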
2399787
https://en.wikipedia.org/wiki/Time-tracking%20software
Time-tracking software
Time-tracking software is a category of computer software that allows users to record time spent on tasks or projects. The software is used in many industries, including those that employ freelancers and hourly workers. It is also used by professionals who bill their customers by the hour, including lawyers, freelancers and accountants. Time-tracking software can be used stand-alone or integrated with other applications such as project management, customer support and accounting software. Time-tracking software is the electronic version of the traditional paper timesheet.

Aside from timesheet software, time-tracking software also includes time-recording software, which uses user activity monitoring (UAM) to record the activities performed on a computer and the time spent on each project and task.

Types of time-tracking software
Timesheet: Allows users to manually enter time spent on tasks.
Time-tracking/recording: Automatically records activities performed on a computer.

Time-tracking software can be:
Standalone: Used only to record timesheets and generate reports.
Integrated as part of:
Accounting systems, e.g. timesheet data fed directly to company accounts.
Billing systems, e.g. to generate invoices, especially for contractors, lawyers, etc.
Project management systems, e.g. timesheet data used by project management software to visualize the effort being spent on projects or tasks.
Payroll systems, e.g. to pay employees based on time worked.
Resource scheduling, e.g. bi-directional integration allows schedulers to schedule staff to tasks, which, once complete, can be confirmed and converted to timesheets.

Timesheet software
Timesheet software is software used to maintain timesheets. It was popularized when computers were first introduced to the office environment with the goal of automating heavy paperwork for big organizations. Timesheet software allows users to enter the time spent on different projects and tasks. When used within companies, employees enter the time they've spent on tasks into electronic timesheets. These timesheets can then be approved or rejected by supervisors or project managers. Since 2006, timesheet software has been moving to mobile platforms (smartphones, tablets, smartwatches, etc.), enabling better tracking of employees whose work involves multiple locations.

Time-tracking/recording software
Time-tracking/recording software automates the time-tracking process by recording the activities performed on a computer and the time spent on each of them. This software is intended to be an improvement over timesheet software. Its goal is to offer a general picture of computer usage. Automatic time-tracking/recording software records and shows the usage of applications, documents, games, websites, etc. When used within companies, this software allows monitoring the productivity of employees by recording the tasks they perform on their computers. It can be used to help fill out timesheets. When used by freelancers, this software helps to create reports for clients (e.g. timesheets and invoices) or to prove work that was done.

Time-tracking methods
There are several ways companies track employee time using time-tracking software.
Durational: Employees enter the duration of the task but not the times when it was performed.
Chronological: Employees enter start and end times for the task.
Automatic: The system automatically calculates time spent on tasks or whole projects, using a connected device or a personal computer, and user input via start and stop buttons. Users can retrieve logged tasks and view the duration, or the start and stop times.
Exception-based: The system automatically records standard working hours except for approved time off or leave of absence (LOA).
Clock-in clock-out: Employees manually record arrival and departure times.
Monitoring: The system records active and idle time of employees. It might also record screen captures.
Location-based: The system determines the working status of employees based on their location.
Resource-scheduling: When resources are scheduled in advance, employees' schedules can easily be converted to timesheets.

Benefits of time-tracking software
Tracking time can increase productivity, as businesses can track time spent on tasks and get a better understanding of which practices cause employees to waste time. Time-tracking software enhances accountability by documenting the time it takes to finish given tasks. The data is collected in a database and can be used for analysis by human resources departments.

Features offered by time-tracking software include:
Automatic generation of invoices to the professional's clients or customers based on the time spent.
Tracking of cost overruns for fixed cost projects.
Workforce management packages that track attendance, employee absences, human resources issues, payroll, talent management, and labor analytics.
Tracking of productive and non-productive hours.
Creation of accountability and transparency between employers and employees.
Evaluation of the team's workflow.

See also
Comparison of time-tracking software
Computer surveillance
Employee-scheduling software
Meeting scheduling tool
Project-management software
Schedule (workplace)
Time and attendance
Time management

References

Accounting software Business software Time management
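As an illustration of the chronological, start/stop style of tracking described above, here is a minimal sketch of a task timer. The class and method names are invented for the example and do not correspond to any particular product.

```python
# Minimal sketch of chronological (start/stop) time tracking: each task keeps
# the start and end timestamps of its work sessions, and the durations can be
# summed into a simple timesheet total. Names are invented for the example.
from datetime import datetime, timedelta

class TaskTimer:
    def __init__(self):
        self.sessions = {}   # task name -> list of (start, end) pairs
        self._open = {}      # task name -> start time of a running session

    def start(self, task):
        self._open[task] = datetime.now()

    def stop(self, task):
        started = self._open.pop(task)
        self.sessions.setdefault(task, []).append((started, datetime.now()))

    def total(self, task):
        return sum((end - start for start, end in self.sessions.get(task, [])),
                   timedelta())

timer = TaskTimer()
timer.start("write report")
# ... work happens here ...
timer.stop("write report")
print(timer.total("write report"))
```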
15139286
https://en.wikipedia.org/wiki/LINC-8
LINC-8
The LINC-8 was a minicomputer manufactured by Digital Equipment Corporation between 1966 and 1969. It combined a LINC computer with a PDP-8 in one cabinet, and could therefore run programs written for either of the two architectures.

Architecture
The LINC-8 contained one PDP-8 CPU and one LINC CPU, partially emulated by the PDP-8. At any one time, the computer was in either 'LINC mode' or 'PDP-8 mode'; the two processors could not run in parallel. Instructions were provided to switch between modes. In the LINC-8, all interrupts were handled by the PDP-8 CPU, and programs that relied on the interrupt architecture of the LINC could not be run.

The LINC was a 12-bit ones' complement accumulator machine, whereas the PDP-8, while also a 12-bit accumulator machine, operated in two's complement arithmetic. Memory addressing on the two architectures was also different. On the LINC, the full address space was divided into 1024-word segments, two of which were selected for use at any one time: the instruction field and the data field. Direct access to data in the instruction field was possible using 10-bit addresses. The data field could only be indirectly addressed. The instruction field and data field could, in theory, each be chosen from up to 32 areas of 1K 12-bit words, since the architecture allows a maximum of 32K words in total. In practice, few LINC-8 systems were ever expanded even to 8K in total. Memory expansion was accomplished by first adding PDP-8 memory-extension hardware, extended-memory instructions, and a few minor LINC processor modifications in order to address memory beyond the basic 4K. Once this was done, 4K memory "wings" could be added in a daisy-chained bus arrangement, which in theory could be repeated as many as seven times to implement the entire 32K. In practice this was always difficult even on a regular PDP-8, and in the case of the LINC-8 it became necessary to slow the CPU slightly just to add the first additional 4K. LINC-8 memory segments were therefore limited in practice to segments 0-3, or perhaps 0-7 on the few 8K installations. Basic 4K machines could not address beyond segments 0-3, while extended-memory models could attempt to address segments 0-37 octal even where no memory was installed.

By convention, the segment 0 area was not available for normal, fully emulated LINC operation, because the PDP-8 program usually known as PROGOFOP was loaded there to handle all interrupts, traps, and similar events. It was possible to write a program for the "partial" LINC CPU, using only the hardware that actually existed; whenever an operation was performed that the hardware could not handle, PDP-8 operation resumed, so the LINC program could be terminated for a variety of reasons. It was therefore always recommended that PROGOFOP be loaded when attempting to run "complete" LINC programs on this system.

Many operating systems were written for this machine; some were essentially slightly modified versions of systems designed for the original LINC CPU on which it is partially based. Bootup conventions allowed an image of a custom version of PROGOFOP to be loaded first, followed by tape instructions that loaded the LINC-based operating system. In some cases, the bootup procedure was carried out manually on the LINC console switches; later systems started themselves after loading PROGOFOP. Other operating systems were more generic and were designed to mostly ignore the LINC side of things.
These were PDP-8-only systems, although perhaps custom-configured for the specifics of a LINC-8. In some cases this meant that they could not be run on any other machine; in other cases, the LINC-8 merely represented a normal variation of drivers on an otherwise nondescript PDP-8 system. An advantage of a PDP-8-based system is that PROGOFOP is superfluous. If needed, the PDP-8 system could load PROGOFOP as well as a primarily LINC-oriented user program to access the laboratory peripherals. The LINC convention of reserving the entire first 1K for PROGOFOP is exchanged for the far smaller PDP-8 convention of reserving only locations 07600-07777, the last 128-word page of the first 4K of the machine. This corresponds to a small reserved area at the end of LINC segment 3, in exchange for much greater overall flexibility.

The PDP-8 divided its memory into 128-word pages. An instruction could reference the current page, that being the page where the instruction itself was located, or page 0, the 128 words of memory at addresses 0-127. Indirect addressing could be used to produce 12-bit addresses. If more than 4K of memory is implemented, indirect addressing is extended to include the Data Field, so any location within the 32K maximum can be accessed indirectly. Again, hardware limitations of the LINC-8 made it hard to achieve a total size of more than 8K. The Instruction Field is also implemented, making it possible to load larger programs into the same addressing space that the Data Field controls. Transfer of control can be either direct or indirect as required. The new address is determined by first setting the new Instruction Field value and then executing a JMP or JMS instruction to the new field's corresponding 12-bit address, thus effecting a 15-bit address overall.

The computer included a number of LINC peripherals, which were controlled by special LINC-mode instructions. These devices included analog inputs in the form of knobs and jacks, relays for control of external equipment, LINCtape drives (the predecessor of the DECtape), an oscilloscope-like cathode ray tube under program control, and a Teletype Model 33 ASR. The CRT was a specially adapted unit based on a standard Tektronix oscilloscope, modified to be driven only by D-A converters and an intensifier interface; there are no sweep circuits as found in conventional oscilloscopes. Most of the modifications involved custom, highly stripped-down plug-in modules, which also housed the actual knobs connected to the lowest A-D channels. Arguably, this was a precursor to the modern mouse interface: some software used knob twirling in a manner that would later suggest the two-dimensional motion of a mouse, although each knob controlled only one parameter at a time.

Some of these peripherals are simulated and are actually peripherals of the PDP-8. Any unimplemented operation stops the LINC CPU and interrupts the PDP-8 processor to handle the specifics. Most notably, the LINCtape is actually a PDP-8 peripheral; the tape class of LINC instructions is trapped and interrupts the PDP-8, which then emulates how a real LINC or PDP-12 would carry out the specifics of the tape instruction. Pressing various keys on the apparent LINC console causes PDP-8 interrupts; PROGOFOP is designed to emulate the functions as they would appear on the original LINC. An interesting feature is the FETCH/EXEC stop, which is implemented entirely in hardware on the LINC and PDP-12.
The hardware, when enabled, continuously monitors instruction execution until specific conditions are met, at which point a PDP-8 interrupt stalls the LINC program. Simulated console operations can then be used to examine memory or make other changes, such as pressing the simulated DO key. The DO key executes any single instruction set on the left switch register, while the right switch register may also have to be set for double-word instructions, such as most of the tape class. Booting certain operating systems consists of executing a tape read instruction directly from both sets of switches by pressing the simulated DO key, followed by pressing the simulated START 20 switch. In essence, the LINC-8 implements all of the functions of the console panel of the "real" LINC, then uses the PDP-8 to simulate most of them.

Purpose
The LINC-8 was built as a laboratory computer. It was small enough to fit in a laboratory environment, provided modest computing power at a low price, and included the hardware capabilities necessary to monitor and control experiments. The LINCtape magnetic tape drive, designed by Wesley A. Clark for the LINC, was suitable for handling in a laboratory environment, and the tapes could be carelessly pocketed, dropped, or even pierced and cut without losing the data stored on them.

Current status
In 1969, DEC improved upon the LINC-8 with the PDP-12, a similar combination computer for lab use, and the LINC-8 was cancelled. Few LINC-8 computers were ever built, numbering only in the low hundreds, and so the model is a rare sight today. As of 2008, a project to emulate the LINC-8 on modern hardware was underway within the Update computer society at Uppsala University.

References
The PDP-8 FAQ
PDP-8 Summary of Models and Options

External links
Project GreenPea, a PDP-12 emulator

DEC minicomputers Transistorized computers Computer-related introductions in 1966 12-bit computers
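As an illustration of the extended-memory addressing described in the Architecture section above, the sketch below shows how a 3-bit field number and a 12-bit in-field address combine into a 15-bit effective address on a fully expanded 32K machine. The function names are invented for the example, and as noted above, real LINC-8 installations rarely exceeded 8K.

```python
# Minimal sketch of PDP-8-style extended addressing as used on the LINC-8:
# a 3-bit field number selects one of up to eight 4K banks, and a 12-bit
# address selects a word within that bank, giving a 15-bit effective address
# (32K words maximum). Function name is invented for the example.

def effective_address(field: int, addr12: int) -> int:
    assert 0 <= field <= 0o7       # 3-bit field number (0-7)
    assert 0 <= addr12 <= 0o7777   # 12-bit in-field address
    return (field << 12) | addr12  # 15-bit effective address

# A JMP into field 1 at in-field address 0o0200 lands at word 0o10200:
print(oct(effective_address(1, 0o0200)))   # -> 0o10200

# LINC segments are 1K (0o2000 words); segment n of the LINC address space
# starts at word n * 0o2000, so segment 3 begins at 0o6000 and ends at 0o7777,
# whose last page (0o7600-0o7777) is the PDP-8 reserved area mentioned above.
print(oct(3 * 0o2000))                      # -> 0o6000
```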