3200740
https://en.wikipedia.org/wiki/HyperACCESS
HyperACCESS
HyperACCESS (sometimes known as HyperTerminal) is a family of terminal emulation software by Hilgraeve. A version of HyperACCESS, called HyperTerminal is included in some versions of Windows. History It was the first software product from Hilgraeve, and it was initially designed to let 8-bit Heath computers communicate over a modem. In 1985, this same product was ported to IBM PCs and compatible systems, as well as Heath/Zenith's Z-100 non-PC-compatible MS-DOS computer. Over the years, the same version of this technology would be ported to other operating systems, including OS/2, Windows 95 and Windows NT. It has earned a total of five Editor's Choice awards from PC Magazine. In 1995, Hilgraeve licensed a low-end version of HyperACCESS, known as HyperTerminal, to Microsoft for use in their set of communications utilities. It was bundled with Windows 95 through Windows XP, but is no longer bundled with Windows Vista, Windows 7, Windows 8, or later Windows. The commercial products, HyperTerminal Private Edition and HyperACCESS, both support all versions of Windows up to and including Windows 10. Protocols Display: Minitel, Viewdata, VT100, VT52 File transfer: ASCII, Kermit, XMODEM, YMODEM/YMODEM-G, and ZMODEM See also List of terminal emulators External links Hilgraeve company website Configuring Hyper Terminal for Serial Communication Communication software Terminal emulators DOS software OS/2 software Windows software Discontinued Windows components
31084767
https://en.wikipedia.org/wiki/Brekeke%20SIP%20Server
Brekeke SIP Server
Brekeke SIP Server is a SIP-based SIP Proxy Server and Registrar. This software was previously known as OnDO SIP Server from 2003 to 2006. Brekeke Software, Inc. released the second version of the software in 2007 and changed its name to Brekeke SIP Server. It authenticates and registers user agents, such as VoIP devices and softphones, and routes SIP sessions between user agents. Brekeke Software, Inc. has developed two versions of this software: Standard Edition and Advanced Edition. Brekeke SIP Server offers, among other things, SIP over TCP/UDP support, TLS support, original NAT traversal functionality, SIP Redundancy feature. SIP Compliant Brekeke SIP Server is SIP-compliant (RFC 3261 Standard), which ensures that it has the highest level of interoperability with other SIP devices and services. This also makes Brekeke SIP Server interoperable with other clients, such as Google Voice and Skype Connect. Reception In 2007, Brekeke SIP Server was one of 140 recipients of the Technology Marketing Corporation's 2006 Product of the Year Award for Communications Solutions. In 2011, Brekeke SIP Server was one of 120 recipients of the INTERNET TELEPHONY Magazine's 2010 Product Of The Year Award. References External links Telecommunications companies of the United States VoIP software
5210775
https://en.wikipedia.org/wiki/Sync%20%28Unix%29
Sync (Unix)
sync is a standard system call in the Unix operating system, which commits to non-volatile storage all data held in the kernel filesystem buffers, i.e., data which has been scheduled for writing via low-level I/O system calls. Higher-level I/O layers such as stdio may maintain separate buffers of their own. As a function in C, the sync() call is typically declared as void sync(void) in <unistd.h>. The system call is also available via a command-line utility also called sync, and via similarly named functions in other languages such as Perl and Node.js (in the fs module). The related system call fsync() commits just the buffered data relating to a specified file descriptor. fdatasync() is also available to write out just the changes made to the data in the file, and not necessarily the file's related metadata. Some Unix systems run a kind of flush or update daemon, which calls the sync function on a regular basis. On some systems, the cron daemon does this; on Linux it was handled by the pdflush daemon, which was replaced by a new implementation and finally removed from the Linux kernel in 2012. Buffers are also flushed when filesystems are unmounted or remounted read-only, for example prior to system shutdown. Database use In order to provide proper durability, databases need to use some form of sync to make sure the information written has made it to non-volatile storage rather than just being stored in a memory-based write cache that would be lost if power failed. PostgreSQL, for example, may use a variety of different sync calls, including fsync() and fdatasync(), in order for commits to be durable. Unfortunately, for any single client writing a series of records, a rotating hard drive can only commit once per rotation, which allows at best a few hundred such commits per second. Turning off the fsync requirement can therefore greatly improve commit performance, but at the expense of potentially introducing database corruption after a crash. Databases also employ transaction log files (typically much smaller than the main data files) that record recent changes, so that those changes can be reliably redone after a crash; the main data files can then be synced less often. Error reporting and checking To avoid any data loss, the return values of fsync() should be checked. When performing I/O that is buffered by the library or the kernel, errors may not be reported at the time of the write() system call or the fflush() call, since the data may have been written only to the in-memory page cache rather than to non-volatile storage. Errors from such writes are instead often reported during later calls to fsync(), msync() or close(). Prior to 2018, Linux's fsync() behavior under certain circumstances failed to report error status; a change in behavior was proposed on 23 April 2018. Performance controversies Hard disks may default to using their own volatile write cache to buffer writes, which greatly improves performance while introducing a potential for lost writes. Tools such as hdparm -F will instruct the HDD controller to flush the on-drive write cache buffer. The performance impact of turning this caching off is so large that even the normally conservative FreeBSD community rejected disabling write caching by default in FreeBSD 4.3.
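As a rough illustration of the write-then-fsync pattern and the error checking discussed above (a minimal sketch, not drawn from the article; the file name and record contents are invented for the example), a C program might append a record durably like this:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Append one record to a log file and force it to non-volatile storage.
       Returns 0 on success, -1 on failure. Path and record are example values;
       a short write is treated as a failure to keep the sketch simple. */
    static int append_durably(const char *path, const char *record)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return -1;
        size_t len = strlen(record);
        /* A successful write() only means the data reached the kernel page cache. */
        if (write(fd, record, len) != (ssize_t)len) {
            close(fd);
            return -1;
        }
        /* Errors from the earlier buffered write may only surface here. */
        if (fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        /* close() can also report deferred write-back errors, so check it too. */
        return close(fd);
    }

    int main(void)
    {
        if (append_durably("example.log", "committed transaction\n") != 0)
            perror("append_durably");
        return 0;
    }

Skipping the fsync() step would make the append faster, at the cost of the durability guarantees described above.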
In SCSI and in SATA with Native Command Queuing (but not in plain ATA, even with TCQ) the host can specify whether it wants to be notified of completion when the data hits the disk's platters or when it hits the disk's buffer (on-board cache). Assuming a correct hardware implementation, this feature allows the disk's on-board cache to be used while guaranteeing correct semantics for system calls like fsync. This hardware feature is called Force Unit Access (FUA) and it allows consistency with less overhead than flushing the entire cache as done for ATA (or SATA non-NCQ) disks. Although Linux enabled NCQ around 2007, it did not enable SATA/NCQ FUA until 2012, citing lack of support in the early drives. Firefox 3.0, released in 2008, introduced fsync system calls that were found to degrade its performance; the call was introduced in order to guarantee the integrity of the embedded SQLite database. Linux Foundation chief technical officer Theodore Ts'o claims there is no need to "fear fsync", and that the real cause of Firefox 3 slowdown is the excessive use of fsync. He also concedes however (quoting Mike Shaver) that "On some rather common Linux configurations, especially using the ext3 filesystem in the “data=ordered” mode, calling fsync doesn't just flush out the data for the file it's called on, but rather on all the buffered data for that filesystem." See also Buffer cache File descriptor File system inode Superblock References External links sync(8) - Linux man page. http://austingroupbugs.net/view.php?id=672 C POSIX library Data synchronization Standard Unix programs Unix file system-related software System calls
3385909
https://en.wikipedia.org/wiki/Jerusalem%20%28computer%20virus%29
Jerusalem (computer virus)
Jerusalem is a logic bomb DOS virus first detected at Hebrew University of Jerusalem, in October 1987. On infection, the Jerusalem virus becomes memory resident (using 2 KB of memory), and then infects every executable file run, except for COMMAND.COM. COM files grow by 1,813 bytes when infected by Jerusalem and are not re-infected. Executable files grow by 1,808 to 1,823 bytes each time they are infected, and are then re-infected each time the files are loaded until they are too large to load into memory. Some .EXE files are infected but do not grow because several overlays follow the genuine .EXE file in the same file. Sometimes .EXE files are incorrectly infected, causing the program to fail to run as soon as it is executed. The virus code itself hooks into interrupt processing and other low-level DOS services. For example, code in the virus suppresses the printing of console messages if, for example, the virus is not able to infect a file on a read-only device such as a floppy disk. One of the clues that a computer is infected is the mis-capitalization of the well-known message "Bad command or file name" as "Bad Command or file name". The Jerusalem virus is unusual among viruses of the time in that it is a logic bomb, set to go off on Friday the 13th in all years but 1987. Once triggered, the virus not only deletes any program run that day, but also infects .EXE files repeatedly until they grow too large for the computer. This particular feature, which was not included in all of Jerusalem's variants, is triggered 30 minutes after the system is infected and significantly slows down the infected computer, thus allowing for easier detection. Jerusalem is also known as "BlackBox" because of a black box it displays during the payload sequence. If the system is in text mode, Jerusalem creates a small black rectangle from row 5, column 5 to row 16, column 16. Thirty minutes after the virus is activated, this rectangle scrolls up two lines. As a result of the virus hooking into the low-level timer interrupt, PC-XT systems slow down to one fifth of their normal speeds 30 minutes after the virus has installed itself, though the slowdown is less noticeable on faster machines. The virus contains code that enters a processing loop each time the processor's timer tick is activated. Symptoms also include spontaneous disconnection of workstations from networks and creation of large printer spooling files. Disconnections occur since Jerusalem uses the 'interrupt 21h' low-level DOS functions that Novell NetWare and other networking implementations needed to hook into the file system. Jerusalem was initially very common (for a virus of the day) and spawned a large number of variants. However, since the advent of Windows, these DOS interrupts are no longer used, so Jerusalem and its variants have become obsolete. Aliases 1808(EXE), due to the virus's length of 1,808 bytes. 1813(COM), due to the virus's length of 1,813 bytes. Friday13th (Note: The name can also refer to two viruses that are unrelated to Jerusalem: Friday-13th-440/Omega and Virus-B), due to its trigger date of Friday the 13th. Hebrew University, as it was discovered by students who attended Hebrew University. Israeli PLO, due to a belief that it was created by the Palestine Liberation Organization to mark May 13, 1948, the day before Israel Independence Day, apparently the last day Palestine existed as a country. Russian Saturday 14 sUMsDos, referencing a piece of the virus's code.
Variants Get Password 1 (GP1): Discovered in 1991, this Novell NetWare-specific virus attempts to gather passwords from the NetWare DOS shell in memory upon user login, which it then broadcasts to a specific socket number on the network where a companion program can recover them. This virus does not work on Novell 2.x and newer versions. Suriv Viruses: Viruses that are earlier, more primitive versions of Jerusalem. The Jerusalem virus is considered to be based on Suriv-3, which is a logic bomb triggered when the date is Friday the 13th, switching off the computer on the 13th. In itself, Suriv-3 is based on its predecessors, Suriv-1 and Suriv-2, which are logic bombs triggered on April 1 (April Fools' Day), showing text reading "April 1, ha ha you have a virus!". Suriv-1 infects .COM files and Suriv-2 infects .EXE files, while Suriv-3 infects both types of files. The name of these viruses comes from spelling "virus" backwards. Sunday (Jeru-Sunday): (Main article: Sunday (computer virus))This virus grows files by 1,636 bytes. The variant is intended to delete every program as it is run every Sunday, but software bugs prevent this from happening. On each Sunday, the virus displays the following message:Today is SunDay! Why do you work so hard? All work and no play make you a dull boy! Come on! Let's go out and have some fun! Variants of Sunday Sunday.a: The original Sunday virus. Sunday.b: A version of Sunday which has a functional program-deleting function. Sunday.1.b: An improvement upon Sunday.b which fixes a bug regarding the Critical Error Handler, which causes problems on write-protected disks. Sunday.1.Tenseconds: A variant on Sunday.a which maintains a 10 second delay between messages and sets Sunday as day 0 instead of day 7. Sunday.2: A variant on Sunday.a which grows files by 1,733 bytes instead of the original 1,636 bytes. Anarkia: Anarkia has a trigger date of Tuesday the 13th and uses the self-recognition code "Anarkia". PSQR (1720): PQSR infects .COM and .EXE files, but does not infect overlay files or COMMAND.COM. It causes infected .COM files to grow by 1,720 bytes and .EXE files by 1,719-1,733 bytes. It activates on Friday the 13th, and will delete any file run that day. Garbage is written to the master boot record and the nine sectors after the MBR. The virus uses "PQSR" as its self-recognition code. Frère: Frère plays Frère Jacques on Fridays. It increases the size of infected .COM files by 1,813 bytes and .EXE files by 1,808-1,822 bytes, but does not infect COMMAND.COM. Westwood (Jerusalem-Westwood): Westwood causes files to grow by 1,829 bytes. If the virus is memory-resident, Westwood deletes any file run during Friday the 13th. Jerusalem 11-30: This virus infects .COM, .EXE, and overlay files, but not COMMAND.COM. The virus infects programs as they are used, and causes infected .COM files to grow by 2,000 bytes and .EXE files to grow by 2,000-2,014 bytes. However, unlike the original Jerusalem virus, it does not re-infect .EXE files. Jerusalem-Apocalypse: Developed in Italy, this virus infects programs as they are executed, and will insert the text "Apocalypse!!" in infected files. It causes infected .COM files to grow by 1,813 bytes and .EXE to grow by 1,808-1,822 bytes. It can re-infect .EXE files, and will increase the size of already infected .EXE files by 1,808 bytes. Jerusalem-VT1: If the virus is memory-resident, it will delete any file run on Tuesday the 1st. Jerusalem-T13: The virus causes .COM and .EXE files to grow by 1,812 bytes. 
If the virus is memory-resident, it will delete any program run on Tuesday the 13th. Jerusalem-Sat13: If the virus is memory-resident, it will delete any program run on Saturday the 13th. Jerusalem-Czech: The virus infects .COM and .EXE files, but not COMMAND.COM. It causes infected .COM files to grow by 1,735 bytes and .EXE files to grow by 1,735-1,749 bytes. It will not delete programs run on Friday the 13th. Jerusalem-Czech has a self-recognition code and a code placement that differ from the original Jerusalem, and is frequently detected as a Sunday variant. Jerusalem-Nemesis: This virus inserts the strings "NEMESIS.COM" and "NOKEY" in infected files. Jerusalem-Captain Trips: Jerusalem-Captain Trips contains the strings "Captain Trips" and "SPITFIRE". Captain Trips is the name of the apocalyptic plague described in Stephen King's novel The Stand. If the year is any year other than 1990 and the day is a Friday on or after the 15th, Jerusalem-Captain Trips creates an empty file with the same name as any program run that day. On the 16th Jerusalem-Captain Trip re-programs the video controller, and on several other dates it installs a routine in the timer tick that activates when 15 minutes pass. Jerusalem-Captain Trips has several errors. Jerusalem-J: The variant causes .COM files to grow by 1,237 bytes and .EXE files by about 1,232 bytes. The virus has no "Jerusalem effects", and originates from Hong Kong. Jerusalem-Yellow (Growing Block): Jerusalem-Yellow infects .EXE and .COM files. Infected .COM files grow by 1,363 bytes and .EXE files grow by 1,361-1,375 bytes. Jerusalem-Yellow creates a large yellow box with a shadow in the middle of the screen and the computer hangs. Jerusalem-Jan25: If the virus is memory-resident, it will activate on January 25 and will delete any program run that day. Additionally, it does not re-infect .EXE files. Skism: The virus will activate on any Friday after the 15th of the month, and causes infected .COM files to grow by 1,808 bytes and infected .EXE to grow by 1,808-1,822 bytes. Additionally, it can re-infect .EXE files. Carfield (Jeru-Carfield): The virus causes infected files to grow by 1,508 bytes. If the virus is memory-resident and the day is Monday, the computer will display the string "Carfield!" every 42 seconds. Mendoza (Jerusalem Mendoza): The virus does nothing if the year is 1980 or 1989, but for all other years a flag is set if the virus is memory resident and if the floppy disk motor count is 25. The flag will be set if a program is run from a floppy disk. If the flag is set, every program which runs is deleted. If the flag is not set and 30 minutes passes, the cursor is changed to a block. After one hour, Caps Lock, Nums Lock, and Scroll Lock are switched to "Off". Additionally, it does not re-infect .EXE files. Einstein: This is a small variant, only 878 bytes, and infects .EXE files. Moctezuma: This variant virus is 2,228 bytes and is encrypted. Century: This variant is a logic bomb with trigger date of January 1, 2000 that was supposed to display the message "Welcome to the 21st Century". However, no one is sure as to the legitimacy of the virus, as no one has seen it. Danube: The Danube virus is a unique variant of Jerusalem, as it has evolved beyond Jerusalem and only reflects very few parts of it. This virus is a multipartite virus, so it has several methods by which it can infect and spread: disk boot sectors as well as .COM and .EXE files. 
Because of this, how the virus works is dependent upon the origin of the virus (boot sector or program). When a contaminated program is executed, the virus resides in memory, taking 5 kB. Additionally, it will check if it also resides in the active boot sector and will place a copy of itself there if it was not present before. When a computer is booted from a contaminated boot sector/disk, the virus will place itself in memory before the operating system is even loaded. It reserves 5 kB of DOS base memory, and reserves 5 sectors on any disk it infects. HK: This variant of Jerusalem originates from Hong Kong, and references one of Hong Kong's technical schools in its code. Jerusalem-1767: This virus infects .EXE and .COM files, and will infect COMMAND.COM if it is executed. It causes .COM files to grow by 1,767 bytes and .EXE files to grow by 1,767-1,799 bytes. Infected files include the strings "**INFECTED BY FRIDAY 13th**" or "COMMAND.COM". Jerusalem-1663: This virus infects .EXE and .COM files, including COMMAND.COM. Once memory resident, it infects programs as they are run. It causes .COM and .EXE files to grow by 1,663 bytes, but it cannot recognize infected files, so it may re-infect both .COM and .EXE files. Jerusalem-Haifa: This virus infects .EXE and .COM files, but not COMMAND.COM. It causes .COM files to grow by 2,178 bytes and .EXE files to grow by 1,960-1,974 bytes. It is named for the Hebrew word for Haifa, an Israeli city, which appears in the virus code. Phenome: This virus is similar to the Apocalypse variant, but will infect COMMAND.COM. It only activates on Saturdays, and does not allow the user to execute programs. It features the strings "PHENOME.COM" and "MsDos". See also Timeline of computer viruses and worms Westwood (computer virus) References External links Jerusalem's rise and fall, chapter in an IBM virus research report Anti-Virus company Sophos description on the Jerusalem virus Anti-Virus company Network Associates description on the Jerusalem virus Jerusalem.1808 Jerusalem virus DOS file viruses Hacking in the 1980s
30954525
https://en.wikipedia.org/wiki/84th%20Radar%20Evaluation%20Squadron
84th Radar Evaluation Squadron
The 84th Radar Evaluation Squadron is a component of the 505th Test and Training Group, 505th Command and Control Wing, Air Combat Command, located at Hurlburt Field, Florida. The squadron provides the warfighter with responsive, worldwide, radar-centric planning, optimization, and constant evaluation to create the most sensitive integrated radar picture. Overview The 84 RADES provides expert assistance to multiple government entities: from radar coverage studies to crucial assistance for search and rescue coordination centers. It ensures that tactical and strategic air defense ground radars, C2 systems, and electronic resources are installed, maintained, and operated in a high state of readiness to provide a quick reaction to the threat of both limited and general wars. The squadron also regularly provides assistance to the USAF Safety Center, US Navy Safety Center, and National Transportation Safety Board. The Radar Data Interface System (RDIS), designed and built by 84 RADES engineers and present at all Air Defense Sectors and in Alaska, drives the squadron's Event Analysis (EA) process. EA is a post-event investigation of aerial mishaps that has provided a unique and sometimes crucial perspective on a large number of military and civilian aircraft incidents. History The squadron was activated by Air Defense Command (ADC) in 1952 as the provisional 4754th Radar Evaluation Electronics Counter-Countermeasure Flight, assigned to the Western Air Defense Force (WADF) at Hamilton AFB, California. The mission was to provide Electronic Counter-Measure (ECM) training and evaluation services to the various Aircraft Control and Warning Squadrons of WADF. In order to provide the necessary training for WADF, the 4754th was assigned six B-29 Superfortresses, three B-25 Mitchells, and one C-47. The B-29s and B-25s contained an assortment of jamming devices to provide the required ECM, and the C-47 was used as a support aircraft to ferry personnel and equipment. During the period that the 4754th operated these aircraft, they provided the operators of the WADF with thousands of hours of ECM training. By 1958, the size of the unit had increased to 59 officers and 206 airmen – a growth of 58% in four years. Effective 8 July 1958, ADC re-designated the unit as the 4754th Radar Evaluation Squadron (ECM). By 1959 the World War II-era aircraft were very expensive to operate, needing excessive amounts of maintenance to remain airworthy, and were no longer supportable due to a lack of spare parts. The squadron's aircraft were retired and the squadron was moved to Hill AFB, Utah. The unit was given sole responsibility for providing evaluation services to all AC&W type radars and radar systems throughout North America. The squadron was initially placed under Central Air Defense Force (CADF), but transferred shortly afterward to ADC’s 29th Air Division in 1960 and then the 28th Air Division in 1961 as a result of ADC reorganizations. For the next 14 years, the unit evaluated all long-range air defense radar facilities, standardized evaluation processes, and developed new evaluation technologies. During this time, command of the squadron was transferred to ADC's 4th Air Force in 1966 and to ADCOM Headquarters in 1969. The unit’s exemplary performance was noted in 1968 when it received the first of its 21 Outstanding Unit Awards. By 1975, the unit had been given responsibility for Pacific (PACAF) and European (USAFE) radars, making it the only ground-based radar evaluation unit in the Department of Defense.
In 1979 as part of the inactivation of Aerospace Defense Command, the unit was re-designated the 1954th Radar Evaluation Squadron, and was transferred to the Air Force Communications Command, then headquartered at Richards-Gebaur Air Force Base, Missouri. The squadron was designated 84th Radar Evaluation Squadron in 1987 and was transferred to Tactical Air Command, and in 1992 it became part of the newly formed Air Combat Command, following TAC’s inactivation. The RADES maintained its alignment under the USAF Warfare Center until 1993, when they became part of the 505th Command and Control Evaluation Group. In July 1998, the unit became a charter member of the newly formed Air Combat Command Communications Group, based at Langley AFB, VA. Finally, in October 2005, the 84th RADES returned to the USAF Warfare Center, under the 505th Command and Control Wing and 505th Operations Group based in Hurlburt Field, Florida. Lineage Activated as 4754th Radar Evaluation Electronics Counter-Countermeasure Flight, 25 January 1954 Re-designated: 4754th Radar Evaluation Flight, ECM, 1 October 1957 Re-designated: 4754th Radar Evaluation Squadron, 8 July 1958 Re-designated: 4754th Radar Evaluation Squadron (Technical), 1 July 1960 Re-designated: 1954th Radar Evaluation Squadron, 1 October 1979 Re-designated: 84th Radar Evaluation Squadron, 1 July 1987 Assignments Western Air Defense Force, 25 January 1954 Central Air Defense Force, 1 September 1959 29th Air Division, 1 January 1960 28th Air Division, 1 July 1961 Fourth Air Force, 1 April 1966 Aerospace Defense Command, 1 March 1969 Air Force Communications Command, 1 October 1979 USAF Air Warfare Center, 1 July 1987 505th Command and Control Evaluation Group, 15 April 1993 Air Combat Command Communications Group, 15 September 1998 505th Operations Group, 1 October 2005 – Present Stations Hamilton AFB, California, 25 January 1954 Hill AFB, Utah, 1 September 1959 – Present Aircraft B-29 Superfortress 1957-1959 B-25 Mitchell, 1957–1959 C-47 Skytrain, 1957–1959 See also List of United States Air Force defense systems evaluation squadrons References 84 Radar Evaluation Squadron Fact Sheet A Handbook of Aerospace Defense Organization 1946 - 1980, by Lloyd H. Cornett and Mildred W. Johnson, Office of History, Aerospace Defense Center, Peterson Air Force Base, Colorado External links 84 RADES Official Fact Sheet with Contact Information 0084 Radar squadrons of the United States Air Force
578460
https://en.wikipedia.org/wiki/Loop%20invariant
Loop invariant
In computer science, a loop invariant is a property of a program loop that is true before (and after) each iteration. It is a logical assertion, sometimes checked within the code by an assertion call. Knowing its invariant(s) is essential in understanding the effect of a loop. In formal program verification, particularly the Floyd–Hoare approach, loop invariants are expressed by formal predicate logic and used to prove properties of loops and, by extension, of algorithms that employ loops (usually correctness properties). The loop invariants will be true on entry into a loop and following each iteration, so that on exit from the loop both the loop invariants and the loop termination condition can be guaranteed. From a programming methodology viewpoint, the loop invariant can be viewed as a more abstract specification of the loop, which characterizes the deeper purpose of the loop beyond the details of its implementation. A survey article covers fundamental algorithms from many areas of computer science (searching, sorting, optimization, arithmetic etc.), characterizing each of them from the viewpoint of its invariant. Because of the similarity of loops and recursive programs, proving partial correctness of loops with invariants is very similar to proving correctness of recursive programs via induction. In fact, the loop invariant is often the same as the inductive hypothesis to be proved for a recursive program equivalent to a given loop. Informal example The following C subroutine max() returns the maximum value in its argument array a[], provided its length n is at least 1. Comments are provided at lines 3, 6, 9, 11, and 13. Each comment makes an assertion about the values of one or more variables at that stage of the function. The highlighted assertions within the loop body, at the beginning and end of the loop (lines 6 and 11), are exactly the same. They thus describe an invariant property of the loop. When line 13 is reached, this invariant still holds, and it is known that the loop condition i!=n from line 5 has become false. Both properties together imply that m equals the maximum value in a[0...n-1], that is, that the correct value is returned from line 14.

     1  int max(int n, const int a[]) {
     2      int m = a[0];
     3      // m equals the maximum value in a[0...0]
     4      int i = 1;
     5      while (i != n) {
     6          // m equals the maximum value in a[0...i-1]
     7          if (m < a[i])
     8              m = a[i];
     9          // m equals the maximum value in a[0...i]
    10          ++i;
    11          // m equals the maximum value in a[0...i-1]
    12      }
    13      // m equals the maximum value in a[0...i-1], and i==n
    14      return m;
    15  }

Following a defensive programming paradigm, the loop condition i!=n in line 5 is better changed to i<n, in order to avoid endless looping for illegitimate negative values of n. While this change in code intuitively shouldn't make a difference, the reasoning leading to its correctness becomes somewhat more complicated, since then only i>=n is known in line 13. In order to obtain that i<=n also holds, that condition has to be included in the loop invariant. It is easy to see that i<=n, too, is an invariant of the loop, since i<n in line 6 can be obtained from the (modified) loop condition in line 5, and hence i<=n holds in line 11 after i has been incremented in line 10. However, when loop invariants have to be manually provided for formal program verification, such intuitively obvious properties as i<=n are often overlooked.
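To make the run-time checking mentioned in the introduction concrete, the following is a minimal C sketch (not part of the original example) of the defensive variant with the loop condition i<n, in which the strengthened invariant i<=n, together with the "m equals the maximum of a[0...i-1]" property, is checked on each iteration with assert(); the helper function is_max_prefix() and the test values in main() are introduced purely for this illustration.

    #include <assert.h>

    /* Illustrative helper: returns 1 if m equals the maximum of a[0...k-1]. */
    static int is_max_prefix(int m, const int a[], int k)
    {
        int occurs = 0;
        for (int j = 0; j < k; ++j) {
            if (a[j] > m)
                return 0;          /* some element exceeds m */
            if (a[j] == m)
                occurs = 1;        /* m actually appears in the prefix */
        }
        return occurs;
    }

    int max_checked(int n, const int a[])
    {
        int m = a[0];
        int i = 1;
        assert(is_max_prefix(m, a, i));                  /* invariant holds on loop entry */
        while (i < n) {                                  /* defensive loop condition */
            if (m < a[i])
                m = a[i];
            ++i;
            assert(i <= n && is_max_prefix(m, a, i));    /* invariant holds after each iteration */
        }
        /* For any legitimate n >= 1, i == n here, so m is the maximum of a[0...n-1]. */
        return m;
    }

    int main(void)
    {
        int a[] = {3, 1, 4, 1, 5};
        return max_checked(5, a) == 5 ? 0 : 1;   /* exits with 0 when the result is correct */
    }

Compiling with NDEBUG defined removes the checks, so the function then behaves like the original max().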
Floyd–Hoare logic In Floyd–Hoare logic, the partial correctness of a while loop is governed by the following rule of inference:

    {C ∧ I} S {I}
    --------------------------------
    {I} while (C) S {¬C ∧ I}

This means: If some property I is preserved by the code S —more precisely, if I holds after the execution of S whenever both C and I held beforehand— (upper line) then C and I are guaranteed to be false and true, respectively, after the execution of the whole loop while (C) S, provided I was true before the loop (lower line). In other words: The rule above is a deductive step that has as its premise the Hoare triple {C ∧ I} S {I}. This triple is actually a relation on machine states. It holds whenever, starting from a state in which the boolean expression C ∧ I is true and successfully executing some code called S, the machine ends up in a state in which I is true. If this relation can be proven, the rule then allows us to conclude that successful execution of the program while (C) S will lead from a state in which I is true to a state in which ¬C ∧ I holds. The boolean formula I in this rule is called a loop invariant. Example The following example illustrates how this rule works. Consider the program

    while (x < 10)
        x := x + 1;

One can then prove the following Hoare triple: {x ≤ 10} while (x < 10) x := x + 1 {x = 10}. The condition C of the while loop is x < 10. A useful loop invariant has to be guessed; it will turn out that x ≤ 10 is appropriate. Under these assumptions it is possible to prove the following Hoare triple: {x < 10 ∧ x ≤ 10} x := x + 1 {x ≤ 10}. While this triple can be derived formally from the rules of Floyd–Hoare logic governing assignment, it is also intuitively justified: Computation starts in a state where x < 10 ∧ x ≤ 10 is true, which means simply that x < 10 is true. The computation adds 1 to x, which means that x ≤ 10 is still true (for integer x). Under this premise, the rule for while loops permits the following conclusion: {x ≤ 10} while (x < 10) x := x + 1 {¬(x < 10) ∧ x ≤ 10}. However, the post-condition ¬(x < 10) ∧ x ≤ 10 (x is less than or equal to 10, but it is not less than 10) is logically equivalent to x = 10, which is what we wanted to show. The property 0 ≤ x is another invariant of the example loop, and the trivial property true is another one. Applying the above inference rule to the former invariant yields {0 ≤ x} while (x < 10) x := x + 1 {¬(x < 10) ∧ 0 ≤ x}. Applying it to the invariant true yields {true} while (x < 10) x := x + 1 {¬(x < 10)}, which is slightly more expressive. Programming language support Eiffel The Eiffel programming language provides native support for loop invariants. A loop invariant is expressed with the same syntax used for a class invariant. In the sample below, the loop invariant expression x <= 10 must be true following the loop initialization, and after each execution of the loop body; this is checked at runtime.

    from
        x := 0
    invariant
        x <= 10
    until
        x > 10
    loop
        x := x + 1
    end

Whiley The Whiley programming language also provides first-class support for loop invariants. Loop invariants are expressed using one or more where clauses, as the following illustrates:

    function max(int[] items) -> (int r)
    // Requires at least one element to compute max
    requires |items| > 0
    // (1) Result is not smaller than any element
    ensures all { i in 0..|items| | items[i] <= r }
    // (2) Result matches at least one element
    ensures some { i in 0..|items| | items[i] == r }:
        //
        nat i = 1
        int m = items[0]
        //
        while i < |items|
        // (1) No item seen so far is larger than m
        where all { k in 0..i | items[k] <= m }
        // (2) One or more items seen so far matches m
        where some { k in 0..i | items[k] == m }:
            if items[i] > m:
                m = items[i]
            i = i + 1
        //
        return m

The max() function determines the largest element in an integer array. For this to be defined, the array must contain at least one element.
The postconditions of max() require that the returned value (1) is not smaller than any element and (2) matches at least one element. The loop invariant is defined inductively through two where clauses, each of which corresponds to a clause in the postcondition. The fundamental difference is that each clause of the loop invariant identifies the result as being correct up to the current element i, whilst the postconditions identify the result as being correct for all elements. Use of loop invariants A loop invariant can serve one of the following purposes: 1. purely documentary; 2. to be checked within the code by an assertion call; 3. to be verified based on the Floyd–Hoare approach. For 1., a natural language comment (like // m equals the maximum value in a[0...i-1] in the above example) is sufficient. For 2., programming language support is required, such as the C library assert.h, or the above-shown invariant clause in Eiffel. Often, run-time checking can be switched on (for debugging runs) and off (for production runs) by a compiler or a runtime option. For 3., some tools exist to support mathematical proofs, usually based on the above-shown Floyd–Hoare rule, that a given loop code in fact satisfies a given (set of) loop invariant(s). The technique of abstract interpretation can be used to detect loop invariants of given code automatically. However, this approach is limited to very simple invariants (such as 0<=i && i<=n && i%2==0). Distinction from loop-invariant code A loop invariant (loop-invariant property) is to be distinguished from loop-invariant code; note "loop invariant" (noun) versus "loop-invariant" (adjective). Loop-invariant code consists of statements or expressions that can be moved outside a loop body without affecting the program semantics. Such transformations, called loop-invariant code motion, are performed by some compilers to optimize programs. A loop-invariant code example (in the C programming language) is

    for (int i = 0; i < n; ++i) {
        x = y + z;
        a[i] = 6*i + x*x;
    }

where the calculations x = y+z and x*x can be moved before the loop, resulting in an equivalent, but faster, program:

    x = y + z;
    t1 = x*x;
    for (int i = 0; i < n; ++i) {
        a[i] = 6*i + t1;
    }

In contrast, e.g. the property 0<=i && i<=n is a loop invariant for both the original and the optimized program, but is not part of the code, hence it doesn't make sense to speak of "moving it out of the loop". Loop-invariant code may induce a corresponding loop-invariant property. For the above example, the easiest way to see it is to consider a program where the loop-invariant code is computed both before and within the loop:

    x1 = y + z;
    t1 = x1*x1;
    for (int i = 0; i < n; ++i) {
        x2 = y + z;
        a[i] = 6*i + t1;
    }

A loop-invariant property of this code is (x1==x2 && t1==x2*x2) || i==0, indicating that the values computed before the loop agree with those computed within (except before the first iteration). See also Invariant (computer science) Loop-invariant code motion Loop variant Weakest-preconditions of While loop References Further reading Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. Pages 17–19, section 2.1: Insertion sort. David Gries. "A note on a standard strategy for developing loop invariants and loops." Science of Computer Programming, vol 2, pp. 207–214. 1984. Michael D. Ernst, Jake Cockrell, William G. Griswold, David Notkin. "Dynamically Discovering Likely Program Invariants to Support Program Evolution."
International Conference on Software Engineering, pp. 213–224. 1999. Robert Paige. "Programming with Invariants." IEEE Software, 3(1):56–69. January 1986. Yanhong A. Liu, Scott D. Stoller, and Tim Teitelbaum. Strengthening Invariants for Efficient Computation. Science of Computer Programming, 41(2):139–172. October 2001. Michael Huth, Mark Ryan. "Logic in Computer Science.", Second Edition. Formal methods Control flow
39417287
https://en.wikipedia.org/wiki/Data%20Control%20%26%20Systems
Data Control & Systems
Data Control & Systems was a company formed by Rob Nursten in Zimbabwe in 1994; it became commercially operational in 1995. The company was originally a subsidiary of UUNet Internet Africa, which he had started in South Africa in response to the demand for internet services. History of Internet service in Zimbabwe Zimbabwe was a relatively late addition to the digital world, but the arrival of the internet was a technical advance that has since changed the country. Delving into the history of Data Control is to look back to the very founding of the first global ISPs, and for this reason alone it is an important chapter of global internet history that needs to be documented. Data Control won an award from the "Best of the Planet Awards 1996" in the category of WWW internet providers, run by 2ask.com. This was a major achievement for an African company, let alone a Zimbabwean one. Data Control & Systems was the first internet service provider in Zimbabwe. It provided a technological first in Southern Africa, being the sister company of UUNet Internet Africa, born in South Africa. The main directors of the company were Clint Nursten and his father, Rob Nursten. Network administration was handled by Clint Sim and Scott Nursten, who was also responsible for the security systems at both the host and network levels. Scott also provided web design and various programming applications for the commercial side of the company. He then went on to start s2s Ltd, a major provider of security and networking consultancy in the United Kingdom, with his brother Dale Nursten. The company set out to provide Internet access to the whole of southern Africa from 1994 onwards. Its domain name was harare.iafrica.com, which used leased-line access from South Africa through the Zimbabwean Posts & Telecommunications Company (PTC). Eventually Data Control & Systems changed its name to Internet Unlimited and was then bought out by Econet Wireless and named Ecoweb Internet. During the period when the company was named Internet Unlimited, the domain name was internet.co.zw, and it remained that domain up until 2002. As of 2005, the company provided internet access to over 50,000 Zimbabweans and many international tourists through lower-level internet cafes and client computers. Technical management and support Server hardware and software The ISP was based upon the Red Hat Linux 5 operating system, regarded as the fastest at that time, and Microsoft Windows NT 3.51. Later, it was upgraded to Windows NT 4 and Red Hat Linux 6, and a RADIUS server was implemented by Clint Sim, the administrator at that time. Clint and his wife were fatally injured in a car accident in 2004; they were survived by their two sons. The servers' hardware was based upon dual Intel Pentium processor boards with 64 MB of RAM linked to RAID arrays of SCSI disks. This type of hard drive was chosen because its independent processing offered performance advantages over IDE disks, which used the computer's CPU to function. Network structure The primary upstream service originated from UUNet in the USA, which was recognised as being the "first commercial Internet service provider and a leading supplier of a comprehensive range of Internet access options, applications, and consulting services to businesses, professionals and on-line service providers" in the United States. Founded in 1987, UUNet was the fourth largest provider in the world in 1997, prior to merging with WorldCom, Inc.
UUNet provided a 622 Mbit/s trunk line on the Atlantic sea bed via Europe before connecting to the African routes via VSAT. The satellite signal was connected to a South African station of UUNet Internet Africa and distributed to the region by land links. Around 1999 a project was started to implement an undersea fibre-optic route around the west coast of Africa, all the way from Europe to South Africa; the cable, SAT-3/WASC, was completed in 2012. The network was carried overseas via London links to South Africa prior to SAT-2 being fully implemented, but the cost was prohibitive to most companies. Since the arrival of SAT-3/WASC the cost of access is a lot lower, but the savings are not always passed on by governments to users, which is why African internet service is very expensive, especially South African internet and, subsequently, Zimbabwean services. Most of Data Control's dial-up lines came in via Livingston PortMasters full of US Robotics and Microcom external modems, and the commercial links were accessed through licence-free 2.4 GHz wireless WaveLAN or a leased-line alternative. The bulk of the modems used were the US Robotics type, which in 1996 operated at around 14.4 kbit/s, whilst the Microcom modems originally operated at 14.4 kbit/s and later 28.8 kbit/s. Within a couple of years, this technology would be outdated and replaced with 56 kbit/s modems for dial-up customers. With the lack of reliable POTS (plain old telephone service) networks in the most rural areas of Zimbabwe, some farmers used cellular networks almost exclusively, albeit at an incredibly high cost. Some of those areas were so remote that not even a telephone line was installed within 10 miles of their offices. This peculiarity of African circumstance led to the development of extended-range wifi networks not seen in use in Europe or the USA to any great extent, but widespread in Africa. Service and support personnel The back-office service and support was headed up by Clint Sim and supported by Scott Nursten. Scott's primary duties were web design and security, while Clint provided the technical expertise on Linux and Windows, the latter being an operating system he despised. The front-of-office technical support desk was manned by up to 4 technicians who answered and assisted the dial-in users with many of their problems and training issues. Since Zimbabwe had never before seen internet access locally, many people did not know what computers were useful for or how they could be implemented for their businesses. The technical support team (Sheldon, Heath, Adam and Zimbabwe's All-Africa Games gold medalist taekwondo fighter Fanuel Kwande) had to deal with this infusion of knowledge to the general populace. According to LinkedIn profiles, Fanuel may now be a director of a company, and he was certainly the lead support, with Sheldon, at the handover of Data Control to Econet. Heath went on to start up his own company, Total PC, after leaving Data Control, and emigrated to England after 2006. Adam left Data Control and started his own company, Visionary TechServ, which failed with the Zimbabwe economic collapse. Adam eventually emigrated to England in 2004 and started a specialist photography company in 2015. Scott emigrated to England and started his own company there. Ownership history The original name was UUNet Internet Africa before becoming Data Control & Systems in 1996.
Data Control & Systems changed its name to Internet Unlimited when Clint Nursten took over the position of managing director from Rob Nursten in 1998. After a few months, a deal with Econet Wireless (owned by Strive Masiyiwa) was struck and Internet Unlimited was sold to Econet Wireless. Econet decided to re-brand the company as Ecoweb. Competitors With the success of Data Control, many other people realised that internet access was a huge opportunity to make money and create a new type of business in the country before the impending dot-com boom and bust. As part of that boom, one company in particular made a meteoric rise to fame and then crashed into ignominy, all within the first five years of Zimbabwe's internet debut: Samara Services. All of Zimbabwe's major Internet Service Providers (ISPs) were based in a single block of buildings in central Harare, namely the Eastgate Centre, Harare and the Southampton Life Centre. The entire block of buildings was built by the same consulting engineering company, Ove Arup & Partners of Harare. By the close of 1999, Samara Services had all but evaporated and the directors had disappeared. Their customers were left without services and their entire network fell into disarray in a very short period of time. Their creditors quickly seized assets and equipment as part-settlement of outstanding bills, as was recorded in The Herald newspaper in Zimbabwe. The one really notable contribution to Zimbabwean internet history by Samara was that they managed to gain the most popularity and the largest user base in the little nation in a very short period of time. It may have been that massive growth that led to their downfall, with the company failing on the back of many hidden issues that the public were never made aware of. Another issue leading to their demise was that the entire network ran on a single operator's backbone: the PTC. Another competitor was Africa Online. It was a rather late arrival compared to the other ISPs and had technologically older equipment, but a very much better management structure, which allowed it to survive longer than was expected of other ISPs. Its advent into the world of internet service provision at the end of 1997 was a breath of fresh air after the collapse of Samara Services. Many of the technical and administrative staff from Samara Services were absorbed into Africa Online as a result, and it was believed that much of its equipment came from Samara Services too. Its parent service provision was from Prodigy Internet through the same leased-line structure of the PTC. Zimbabwe Online deserves a special mention, as it was run and managed from inside the offices of Data Control by their technical staff. It was founded by Peter Lobel and very successfully used a proprietary dial-up system similar in style to America Online. Mr Lobel then updated the method of dial-up from a managed proprietary script service to an automated user-managed service. One of the secrets of the company at that time was that they were one of the first to dabble in VSAT broadcast, with their own dish on the roof of the Eastgate Centre, secretly installed, with only a very select few staff and friends knowing about it. VSAT was considered an illegal installation by the government, which wanted to control each and every bit of international news to do with Zimbabwe. Zimsurf surfaced on 29 May 1998 and was eventually the government-run internet service provider, administered competently by Marco Kalis.
Zimsurf was ultimately to become the internet arm of Telecel Zimbabwe. Before Zimsurf dissolved, there was a huge VSAT dish installed in the garden of one of the company's directors in Avondale. The dish was linked by leased line to their offices in the Harare CBD. It was about three meters in diameter and sat in the middle of his front lawn. The property was surrounded by a four-foot chicken-wire fence and the dish was plainly visible to all passers-by. Eventually, the company was ordered to disable its VSAT broadcast in accordance with local law that prohibited competition with the PTC, which was the only licensed user of VSAT and the only issuer of licences for VSAT. Thus began a monopolistic war on communication in Zimbabwe. An in-depth study of internet availability in Africa in the 1990s is available in PDF format. References External links Telecommunications companies of Zimbabwe Internet service providers of Africa Harare
4205361
https://en.wikipedia.org/wiki/Billy%20Atkins%20%28American%20football%29
Billy Atkins (American football)
William Ellis Atkins (November 19, 1934 – November 5, 1991) was an American football defensive back and punter from Auburn University who played for the San Francisco 49ers in the National Football League, and in the American Football League for the Buffalo Bills, the New York Titans/Jets, and the Denver Broncos. He was an AFL All-Star in 1961. On January 8, 1966, Atkins was named the head coach of the Troy State Trojans football team. In 1968, he coached Troy State to an NAIA National Championship and was named the NAIA Coach of the Year. Atkins finished at Troy State with a 44–16–2 record before leaving in 1971. He is the second-winningest coach in Troy history, behind only Larry Blakeney. Atkins' son, author William Ellis "Ace" Atkins Jr., also played football at Auburn and was a member of its 1993 undefeated team. Head coaching record See also List of American Football League players References External links 1934 births 1991 deaths American football defensive backs American football punters American Football League players Auburn Tigers football players Buffalo Bills players Denver Broncos (AFL) players New York Titans (AFL) players New York Jets players San Francisco 49ers players Troy Trojans athletic directors Troy Trojans football coaches American Football League All-Star players People from Lamar County, Alabama Players of American football from Alabama
162549
https://en.wikipedia.org/wiki/MacOS%20version%20history
MacOS version history
The history of macOS, Apple's current Mac operating system formerly named Mac OS X until 2012 and then OS X until 2016, began with the company's project to replace its "classic" Mac OS. That system, up to and including its final release Mac OS 9, was a direct descendant of the operating system Apple had used in its Macintosh computers since their introduction in 1984. However, the current macOS is a Unix operating system built on technology that had been developed at NeXT from the 1980s until Apple purchased the company in early 1997. Although it was originally marketed as simply "version 10" of the Mac OS (indicated by the Roman numeral "X"), it has a completely different codebase from Mac OS 9, as well as substantial changes to its user interface. The transition was a technologically and strategically significant one. To ease the transition, versions through 10.4 were able to run Mac OS 9 and its applications in a compatibility layer. MacOS was first released in 1999 as Mac OS X Server 1.0, with a widely released desktop version—Mac OS X 10.0—following in March 2001. Since then, several more distinct desktop and server editions of macOS have been released. Starting with Mac OS X 10.7 Lion, macOS Server is no longer offered as a separate operating system; instead, server management tools are available for purchase as an add-on. Starting with the Intel build of Mac OS X 10.5 Leopard, most releases have been certified as Unix systems conforming to the Single Unix Specification. Lion was sometimes referred to by Apple as "Mac OS X Lion" and sometimes referred to as "OS X Lion", without the "Mac"; Mountain Lion was consistently referred to as just "OS X Mountain Lion", with the "Mac" being completely dropped. The operating system was further renamed to "macOS" starting with macOS Sierra. macOS retained the major version number 10 throughout its development history until the release of macOS 11 Big Sur in 2020; releases of macOS have also been named after big cats (versions 10.0–10.8) or locations in California (10.9–present). A new macOS, Monterey, was announced during WWDC on June 7, 2021. Development Development outside Apple After Apple removed Steve Jobs from management in 1985, he left the company and attempted to create the "next big thing", with funding from Ross Perot and himself. The result was the NeXT Computer. As the first workstation to include a digital signal processor (DSP) and a high-capacity optical disc drive, NeXT hardware was advanced for its time, but was expensive relative to the rapidly commoditizing workstation market and marred by design problems. The hardware was phased out in 1993; however, the company's object-oriented operating system NeXTSTEP had a more lasting legacy. NeXTSTEP was based on the Mach kernel developed at CMU (Carnegie Mellon University) and BSD, an implementation of Unix dating back to the 1970s. It featured an object-oriented programming framework based on the Objective-C language. This environment is known today in the Mac world as Cocoa. It also supported the innovative Enterprise Objects Framework database access layer and WebObjects application server development environment, among other notable features. All but abandoning the idea of an operating system, NeXT managed to maintain a business selling WebObjects and consulting services, only ever making modest profits in its last few quarters as an independent company. 
NeXTSTEP underwent an evolution into OPENSTEP which separated the object layers from the operating system below, allowing it to run with less modification on other platforms. OPENSTEP was, for a short time, adopted by Sun and HP. However, by this point, a number of other companies — notably Apple, IBM, Microsoft, and even Sun itself — were claiming they would soon be releasing similar object-oriented operating systems and development tools of their own. Some of these efforts, such as Taligent, did not fully come to fruition; others, like Java, gained widespread adoption. On February 4, 1997, Apple Computer acquired NeXT for $427 million, and used OPENSTEP as the basis for Mac OS X, as it was called at the time. Traces of the NeXT software heritage can still be seen in macOS. For example, in the Cocoa development environment, the Objective-C library classes have "NS" prefixes, and the HISTORY section of the manual page for the defaults command in macOS straightforwardly states that the command "First appeared in NeXTStep." Internal development Meanwhile, Apple was facing commercial difficulties of its own. The decade-old Macintosh System Software had reached the limits of its single-user, co-operative multitasking architecture, and its once-innovative user interface was looking increasingly outdated. A massive development effort to replace it, known as Copland, was started in 1994, but was generally perceived outside Apple to be a hopeless case due to political infighting and conflicting goals. By 1996, Copland was nowhere near ready for release, and the project was eventually cancelled. Some elements of Copland were incorporated into Mac OS 8, released on July 26, 1997. After considering the purchase of BeOS — a multimedia-enabled, multi-tasking OS designed for hardware similar to Apple's, the company decided instead to acquire NeXT and use OPENSTEP as the basis for their new OS. Avie Tevanian took over OS development, and Steve Jobs was brought on as a consultant. At first, the plan was to develop a new operating system based almost entirely on an updated version of OPENSTEP, with the addition of a virtual machine subsystem — known as the Blue Box — for running "classic" Macintosh applications. The result was known by the code name Rhapsody, slated for release in late 1998. Apple expected that developers would port their software to the considerably more powerful OPENSTEP libraries once they learned of its power and flexibility. Instead, several major developers such as Adobe told Apple that this would never occur, and that they would rather leave the platform entirely. This "rejection" of Apple's plan was largely the result of a string of previous broken promises from Apple; after watching one "next OS" after another disappear and Apple's market share dwindle, developers were not interested in doing much work on the platform at all, let alone a re-write. Changed direction under Jobs Apple's financial losses continued and the board of directors lost confidence in CEO Gil Amelio, asking him to resign. The board asked Steve Jobs to lead the company on an interim basis, essentially giving him carte blanche to make changes to return the company to profitability. When Jobs announced at the World Wide Developer's Conference that what developers really wanted was a modern version of the Mac OS, and Apple was going to deliver it, he was met with applause. Over the next two years, a major effort was applied to porting the original Macintosh APIs to Unix libraries known as Carbon. 
Mac OS applications could be ported to Carbon without the need for a complete rewrite, making them operate as native applications on the new operating system. Meanwhile, applications written using the older toolkits would be supported using the "Classic" Mac OS 9 environment. Support for C, C++, Objective-C, Java, and Python was added, furthering developer comfort with the new platform. During this time, the lower layers of the operating system (the Mach kernel and the BSD layers on top of it) were re-packaged and released under the Apple Public Source License. They became known as Darwin. The Darwin kernel provides a stable and flexible operating system, which takes advantage of the contributions of programmers and independent open-source projects outside Apple; however, it sees little use outside the Macintosh community. During this period, the Java programming language had increased in popularity, and an effort was started to improve Mac Java support. This consisted of porting a high-speed Java virtual machine to the platform, and exposing macOS-specific "Cocoa" APIs to the Java language. The first release of the new OS — Mac OS X Server 1.0 — used a modified version of the Mac OS GUI, but all client versions starting with Mac OS X Developer Preview 3 used a new theme known as Aqua. Aqua was a substantial departure from the Mac OS 9 interface, which had evolved with little change from that of the original Macintosh operating system: it incorporated full-color, scalable graphics, anti-aliasing of text and graphics, simulated shading and highlights, transparency and shadows, and animation. A new feature was the Dock, an application launcher which took advantage of these capabilities. Despite this, Mac OS X maintained a substantial degree of consistency with the traditional Mac OS interface and Apple's own Human Interface Guidelines, with its pull-down menu at the top of the screen, familiar keyboard shortcuts, and support for a single-button mouse. The development of Aqua was delayed somewhat by the switch from OpenStep's Display PostScript engine to one developed in-house that was free of any license restrictions, known as Quartz. Releases With the exception of Mac OS X Server 1.0 and the original public beta, the first several macOS versions were named after big cats. Prior to its release, version 10.0 was code named "Cheetah" internally at Apple, and version 10.1 was internally code named "Puma". After the code name "Jaguar" for version 10.2 received publicity in the media, Apple began openly using the names to promote the operating system: 10.3 was marketed as "Panther", 10.4 as "Tiger", 10.5 as "Leopard", 10.6 as "Snow Leopard", 10.7 as "Lion", and 10.8 as "Mountain Lion". "Panther", "Tiger", and "Leopard" were registered as trademarks. Apple registered "Lynx" and "Cougar", but these were allowed to lapse. Apple instead used the names of iconic locations in California for subsequent releases: 10.9 Mavericks is named after Mavericks, a popular surfing destination; 10.10 Yosemite is named after Yosemite National Park; 10.11 El Capitan is named for the El Capitan rock formation in Yosemite National Park; 10.12 Sierra is named for the Sierra Nevada mountain range; and 10.13 High Sierra is named for the area around the High Sierra Camps. Public Beta: "Kodiak" On September 13, 2000, Apple released a $29.95 "preview" version of Mac OS X (internally codenamed Kodiak) in order to gain feedback from users.
It marked the first public availability of the Aqua interface, and Apple made many changes to the UI based on customer feedback. Mac OS X Public Beta expired and ceased to function in spring 2001. Version 10.0: "Cheetah" On March 24, 2001, Apple released Mac OS X 10.0 (internally codenamed Cheetah). The initial version was slow, incomplete, and had very few applications available at the time of its launch, mostly from independent developers. While many critics suggested that the operating system was not ready for mainstream adoption, they recognized the importance of its initial launch as a base on which to improve. Simply releasing Mac OS X was received by the Macintosh community as a great accomplishment, for attempts to completely overhaul the Mac OS had been underway since 1996 and had been delayed by countless setbacks. Following some bug fixes, kernel panics became much less frequent. Version 10.1: "Puma" Mac OS X 10.1 (internally codenamed Puma) was released on September 25, 2001. It had better performance and provided missing features, such as DVD playback. Apple released 10.1 as a free upgrade CD for 10.0 users and also released an upgrade CD for Mac OS 9 users. On January 7, 2002, Apple announced that Mac OS X was to be the default operating system for all Macintosh products by the end of that month. Version 10.2: "Jaguar" On August 23, 2002, Apple followed up with Mac OS X 10.2 Jaguar, the first release to use its code name as part of the branding. It brought significant raw performance improvements, a sleeker look, and many powerful user-interface enhancements (over 150, according to Apple), including Quartz Extreme for compositing graphics directly on an ATI Radeon or Nvidia GeForce2 MX AGP-based video card with at least 16 MB of VRAM, a system-wide repository for contact information in the new Address Book, and an instant messaging client named iChat. The Happy Mac, which had appeared during the Mac OS startup sequence for almost 18 years, was replaced with a large grey Apple logo with the introduction of Mac OS X 10.2. Version 10.3: "Panther" Mac OS X Panther was released on October 24, 2003. In addition to providing much improved performance, it also incorporated the most extensive update yet to the user interface. Panther included as many or more new features as Jaguar had the year before, including an updated Finder incorporating a brushed-metal interface, fast user switching, Exposé (a window manager), FileVault, Safari, iChat AV (which added videoconferencing features to iChat), improved Portable Document Format (PDF) rendering, and much greater Microsoft Windows interoperability. Support for some early G3 computers such as the Power Macintosh and PowerBook was discontinued. Version 10.4: "Tiger" Mac OS X Tiger was released on April 29, 2005. Apple stated that Tiger contained more than 200 new features. As with Panther, certain older machines were no longer supported; Tiger requires a Mac with a built-in FireWire port. Among the new features, Tiger introduced Spotlight, Dashboard, Smart Folders, an updated Mail program with Smart Mailboxes, QuickTime 7, Safari 2, Automator, VoiceOver, Core Image, and Core Video. The initial release of the Apple TV used a modified version of Tiger with a different graphical interface and fewer applications and services. On January 10, 2006, Apple released the first Intel-based Macs along with the 10.4.4 update to Tiger.
This operating system functioned identically on the PowerPC-based Macs and the new Intel-based machines, with the exception of the Intel release dropping support for the Classic environment. Only PowerPC Macs can be booted from retail copies of the Tiger client DVD, but there is a Universal DVD of Tiger Server 10.4.7 (8K1079) that can boot both PowerPC and Intel Macs. Version 10.5: "Leopard" Mac OS X Leopard was released on October 26, 2007. Apple called it "the largest update of Mac OS X". Leopard supports both PowerPC- and Intel x86-based Macintosh computers; support for the G3 processor was dropped, a G4 processor with a minimum clock rate of 867 MHz was required, and at least 512 MB of RAM had to be installed. The single DVD works for all supported Macs (including 64-bit machines). New features include a new look, an updated Finder, Time Machine, Spaces, Boot Camp pre-installed, full support for 64-bit applications (including graphical applications), new features in Mail and iChat, and a number of new security features. Leopard is an Open Brand UNIX 03 registered product on the Intel platform. It was also the first BSD-based OS to receive UNIX 03 certification. Leopard dropped support for the Classic Environment and all Classic applications, and was the final version of Mac OS X to support the PowerPC architecture. Version 10.6: "Snow Leopard" Mac OS X Snow Leopard was released on August 28, 2009, the last version to be available on disc. Rather than delivering big changes to the appearance and end-user functionality like the previous releases of Mac OS X, the development of Snow Leopard was deliberately focused on "under the hood" changes, increasing the performance, efficiency, and stability of the operating system. For most users, the most noticeable changes were the disk space that the operating system frees up after a clean installation compared to Mac OS X 10.5 Leopard, a more responsive Finder rewritten in Cocoa, faster Time Machine backups, more reliable and user-friendly disk ejects, a more powerful version of the Preview application, and a faster Safari web browser. An update introduced support for the Mac App Store, Apple's digital distribution platform for macOS applications and subsequent macOS upgrades. Snow Leopard only supports machines with Intel CPUs, requires at least 1 GB of RAM, and drops default support for applications built for the PowerPC architecture (Rosetta can be installed as an additional component to retain support for PowerPC-only applications). Version 10.7: "Lion" Mac OS X Lion was released on July 20, 2011. It brought developments made in Apple's iOS, such as an easily navigable display of installed applications (Launchpad) and (a greater use of) multi-touch gestures, to the Mac. This release removed Rosetta, making it incapable of running PowerPC applications. It dropped support for 32-bit Intel processors and requires 2 GB of memory. Changes made to the GUI (graphical user interface) include the Launchpad (similar to the home screen of iOS devices), auto-hiding scrollbars that only appear when they are being used, and Mission Control, which unifies Exposé, Spaces, Dashboard, and full-screen applications within a single interface. Apple also made changes to applications: they resume in the same state as they were before they were closed (similar to iOS). Documents auto-save by default. Version 10.8: "Mountain Lion" OS X Mountain Lion was released on July 25, 2012.
It incorporates some features seen in iOS 5, which include Game Center, support for iMessage in the new Messages messaging application, and Reminders as a to-do list app separate from iCal (which is renamed as Calendar, like the iOS app). It also includes support for storing iWork documents in iCloud. 2 GB of memory is required. Notification Center makes its debut in Mountain Lion as a desktop version of the feature found in iOS 5.0 and higher, providing an overview of alerts from applications: application pop-ups are concentrated in the corner of the screen, and the Center itself is pulled in from the right side of the screen. Mountain Lion also includes more Chinese features, including support for Baidu as an option for Safari's search engine. Notes is added as an application separate from Mail, syncing with its iOS counterpart through the iCloud service. Messages, an instant messaging application, replaces iChat. Version 10.9: "Mavericks" OS X Mavericks was released on October 22, 2013, as a free update through the Mac App Store worldwide. It placed emphasis on battery life, Finder enhancements, other enhancements for power users, and continued iCloud integration, as well as bringing more of Apple's iOS apps to the OS X platform. iBooks and Apple Maps applications were added. Mavericks requires 2 GB of memory to operate. It is the first version named under Apple's then-new theme of places in California, dubbed Mavericks after the surfing location. Unlike previous versions of OS X, which had progressively decreasing prices since 10.6, 10.9 was available at no charge to all users of compatible systems running Snow Leopard (10.6) or later, beginning Apple's policy of free upgrades for life on its operating system and business software. Version 10.10: "Yosemite" OS X Yosemite was released to the general public on October 16, 2014, as a free update through the Mac App Store worldwide. It featured a major overhaul of the user interface, replacing skeuomorphism with flat graphic design and blurred translucency effects, following the aesthetic introduced with iOS 7. It introduced features called Continuity and Handoff, which allow for tighter integration between paired OS X and iOS devices: the user can handle phone calls or text messages on either their Mac or their iPhone, and edit the same Pages document on either their Mac or their iPad. A later update of the OS included Photos as a replacement for iPhoto and Aperture. Version 10.11: "El Capitan" OS X El Capitan was revealed on June 8, 2015, during the WWDC keynote speech. It was made available as a public beta in July and was made available publicly on September 30, 2015. Apple described this release as containing "Refinements to the Mac Experience" and "Improvements to System Performance" rather than new features. Refinements include public transport built into the Maps application, GUI improvements to the Notes application, as well as adopting San Francisco as the system font. The Metal graphics API made its debut on the Mac in this operating system, available to "all Macs since 2012". Version 10.12: "Sierra" macOS Sierra was announced on June 13, 2016, during the WWDC keynote speech. The update brought Siri to macOS, featuring several Mac-specific features, like searching for files. It also allowed websites to support Apple Pay as a method of transferring payment, using either a nearby iOS device or Touch ID to authenticate.
iCloud also received several improvements, such as the ability to store a user's Desktop and Documents folders on iCloud so they could be synced with other Macs on the same Apple ID. It was released publicly on September 20, 2016. Version 10.13: "High Sierra" macOS High Sierra was announced on June 5, 2017, during the WWDC keynote speech. It was released on September 25, 2017. The release includes many under-the-hood improvements, including a switch to Apple File System (APFS), the introduction of Metal 2, support for HEVC video, and improvements to VR support. In addition, numerous changes were made to standard applications including Photos, Safari, Notes, and Spotlight. Version 10.14: "Mojave" macOS Mojave was announced on June 4, 2018, during the WWDC keynote speech. It was released on September 24, 2018. Some of the key new features were the Dark mode, Desktop stacks, and Dynamic Desktop, which changes the desktop background image to correspond to the user's current time of day. Version 10.15: "Catalina" macOS Catalina was announced on June 3, 2019, during the WWDC keynote speech. It was released on October 7, 2019. It primarily focuses on updates to built-in apps, such as replacing iTunes with separate Music, Podcasts, and TV apps, redesigned Reminders and Books apps, and a new Find My app. It also features Sidecar, which allows the user to use an iPad as a second screen for their computer, or even simulate a graphics tablet with an Apple Pencil. It is the first version of macOS not to support 32-bit applications. The Dashboard application was also removed in the update. Since macOS Catalina, iOS apps can run on macOS through Project Catalyst, but each app must be made compatible by its developer; this is unlike Apple silicon (ARM-based) Macs, which can run iOS apps by default. Version 11: "Big Sur" macOS Big Sur was announced on June 22, 2020, during the WWDC keynote speech. It was released November 12, 2020. The major version number changed for the first time since "Mac OS X" was released, making it macOS 11. It brings support for ARM-based Macs, new icons, and GUI changes to the system, as well as bug fixes. Version 12: "Monterey" macOS Monterey was announced on June 7, 2021, during the WWDC keynote speech. It was released on October 25, 2021. macOS Monterey introduces new features such as Universal Control, AirPlay to Mac, the Shortcuts application, and more. Universal Control allows users to use a single keyboard and mouse to move between devices. AirPlay now allows users to present and share almost anything. The Shortcuts app, a service brought over from iOS, gives users access to galleries of pre-built shortcuts designed for Macs and lets them set up their own shortcuts. Timeline of Macintosh operating systems Timeline of macOS versions See also Macintosh operating systems Architecture of macOS List of macOS components iOS version history References External links MacOS Lists of operating systems History Software version histories
1474766
https://en.wikipedia.org/wiki/UID
UID
UID may refer to:

Identifying numbers
Unique identifier and instances or systems thereof

In computing
Unique identifier for a specific user of a computer system
Unique ID for the Mifare series of chips (integrated circuits) used in contactless smart cards and proximity cards
Unique ID of a message in a folder on an IMAP server
User identifier (Unix), a code identifying each user on Unix and Unix-like systems
Globally unique identifier (GUID)
Universally unique identifier (UUID)

In other areas
PubMed 'Unique Identifier' parameter (PMID) designating specific scientific publication abstracts (PubMed § PubMed identifier)
'Unique Item Identifier', a specific value in the IUID (Item Unique Identification) system used by the United States Department of Defense for the identification of accountable equipment according to DoD Instruction 5000.64
Aadhaar number, originally the Unique Identification Number, an initiative of the Unique Identification Authority of India (UIDAI) of the Indian government to create a unique ID for every Indian resident
uID Center, a nonprofit organization in Tokyo, Japan, responsible for the Ucode system for uniquely identifying real-world objects electronically
German for: "UID = Umsatzsteuer Identifikation" (English: VAT identification number, VAT = value-added tax)

Organizations
Ulster Institute for the Deaf
Umeå Institute of Design
Unitedworld Institute of Design, college in India

Other uses
Unidentified decedent, a deceased person whose body has not yet been identified
Unintelligent Design, a satirical reaction to the Intelligent Design movement
Unit Identification, an LED used as a means of locating a specific computer server in a server room
Universal Instructional Design, an educational method that tries to deliver teaching to meet the needs of a wide variety of learners
User interface design, device design with the focus on the user's experience and interaction

See also IUD
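As a brief, hedged illustration of the Unix sense of UID listed above, the following C sketch prints the numeric user identifier of the calling process and, where the system's password database lists it, the matching login name; the output format is arbitrary and chosen only for this example.

```c
/* Print the real UID of the calling process and, if available,
 * the login name associated with it in the password database. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <pwd.h>

int main(void)
{
    uid_t uid = getuid();              /* real user ID of this process */
    struct passwd *pw = getpwuid(uid); /* may be NULL if no entry exists */

    printf("uid=%u", (unsigned)uid);
    if (pw != NULL)
        printf(" (%s)", pw->pw_name);
    putchar('\n');
    return 0;
}
```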
40122140
https://en.wikipedia.org/wiki/Ramesh%20Prabhoo
Ramesh Prabhoo
Dr. Ramesh Y. Prabhoo (17 May 1939 – 11 December 2016) was an Indian physician and politician. He was born to the businessman Yeshwantrao S. Prabhoo and Sumitra Y. Prabhoo, was an alumnus of Wilson College, Mumbai, and earned his MBBS degree from Grant Medical College, Mumbai. In 1964 he married his classmate Dr. Pushpa R. Prabhoo (daughter of the then Member of Parliament Ganapatrao Tapase) and moved to Vile Parle in the Mumbai suburbs, where he started a private medical practice. They had three children: Commander Rajendra R. Prabhoo, Arvind R. Prabhoo, and Leena R. Prabhoo. He joined the Shiv Sena in 1969 and served the city of Mumbai for more than 40 years, being elected corporator three times and MLA twice, and serving as Mayor of Mumbai (1987–88). He won many awards and was a recipient of the Parle Bhushan. During his active career he was credited with initiating the building of the Dinanath Mangeshkar Natyagruha, Babasaheb Gawde Hospital, Shirodkar Hospital, Prabodhankar Thackeray Krida Sankul, Joggers Park (1996), and the Veer Savarkar Seva Kendra in Vile Parle (East). He founded the Chhatrapati Shivaji Maharaj Smarak Samittee, through which he installed a statue of Chhatrapati Shivaji Maharaj at Juhu. He was president of the Vijay Merchant Rehabilitation Center, which provided a source of livelihood to thousands of physically challenged people. He organized numerous blood donation, polio vaccination, and health check-up camps in which lakhs of citizens participated and received needed help, and he planted trees in Vile Parle through Lions Club and Rotary Club initiatives. Through his initiative, a portrait of Veer Savarkar was installed in the Indian Parliament on 26 February 2003 by President A. P. J. Abdul Kalam in the presence of Prime Minister Atal Bihari Vajpayee and other dignitaries. In 2008, a delegation of 250 people went to Marseilles to commemorate the 100th year of Veer Savarkar's historic jump there. Under the aegis of the Prabodhankar Thackeray Krida Sankul, thousands of children are trained in various sporting activities every year and regularly win medals for India; its Olympic-size swimming pool has nurtured swimmers who have won gold, silver, and bronze medals for India in international competition. The current management, headed by Arvind R. Prabhoo as president, identifies and nurtures sporting talent from rural India and prepares it for national and international competition, continuing what it describes as Prabhoo's vision. In 1987 Prabhoo was elected as a member of Maharashtra's legislative assembly as an independent candidate supported by the Shiv Sena. His election was annulled by a judgment of the Supreme Court of India, which upheld a lower court order, over election speeches made by the chief of the Shiv Sena, Bal Thackeray. The case subsequently became associated with establishing Hindutva as a philosophy nationwide: in its 1995 judgment the Supreme Court declared that Hindutva is not a religion but a philosophy of life. To obtain that ruling, Prabhoo sacrificed his electoral seat and was debarred from contesting elections, and he and Balasaheb Thackeray were denied the right to vote for six years. Following the judgment, the BJP and Shiv Sena formed an alliance that ran for more than 25 years. Prabhoo was the Mayor of Mumbai from 1987 until 1988 and practiced as a medical doctor, holding an MBBS degree. References 2016 deaths Maharashtra MLAs 1985–1990 Mayors of Mumbai Shiv Sena politicians Marathi politicians Date of birth unknown Year of birth missing
35511
https://en.wikipedia.org/wiki/5ESS%20Switching%20System
5ESS Switching System
The 5ESS Switching System is a Class 5 telephone electronic switching system developed by Western Electric for the American Telephone and Telegraph Company (AT&T) and the Bell System in the United States. It came into service in 1982 and the last unit was produced in 2003. History The 5ESS came to market as the Western Electric No. 5 ESS. It commenced service in Seneca, Illinois on 25 March 1982, and was destined to replace the Number One Electronic Switching System (1ESS and 1AESS) and other electromechanical systems in the 1980s and 1990s. The 5ESS was also used as a Class-4 telephone switch or as a hybrid Class 4/Class 5 switch in markets too small for the 4ESS. Approximately half of all US central offices are served by 5ESS switches. The 5ESS is also exported internationally, and manufactured outside the US under license. The 5ESS–2000 version, introduced in the 1990s, increased the capacity of the switching module (SM), with more peripheral modules and more optical links per SM to the communications module (CM). A follow-on version, the 5ESS–R/E, was in development during the late 1990s but did not reach market. Another version was the 5E–XC. The 5ESS technology was transferred to the AT&T Network Systems division upon the 1984 breakup of the Bell System. The division was divested by AT&T in 1996 as Lucent Technologies, and after becoming Alcatel-Lucent in 2006, it was acquired by Nokia in 2016. 5ESS switches in service in 2021 included several operated by the United States Navy. Architecture The 5ESS switch has three main types of modules: the Administrative Module (AM) contains the central computers; the Communications Module (CM) is the central time-divided switch of the system; and the Switching Module (SM) makes up the majority of the equipment in most exchanges. The SM performs multiplexing, analog and digital coding, and other work to interface with external equipment. Each has a controller, a small computer with duplicated CPUs and memories for redundancy, as is the case with most common equipment in the exchange. This distributed design lessens the load on the central Administrative Module (AM), or main computer. Power for all circuitry is distributed as –48 VDC (nominal), and converted locally to logic levels or telephone signals. Switching Module Each Switching Module (SM) handles several hundred to a few thousand telephone lines or several hundred trunks, or a combination thereof. Each has its own processors, also called Module Controllers, which perform most call handling processes using their own memory boards. Originally the peripheral processors were to be Intel 8086, but those proved inadequate and the system was introduced with Motorola 68000 series processors. The name of the cabinet that houses this equipment was changed at the same time from Interface Module to Switching Module. Peripheral units are on shelves in the SM. In most exchanges the majority are Line Units (LU) and Digital Line Trunk Units (DLTU). Each SM has Local Digital Service Units (LDSU) to provide various services to lines and trunks in the SM, including tone generation and detection. Global Digital Service Units (GDSU) provide less-frequently used services to the entire exchange. The Time Slot Interchanger (TSI) in the SM uses random-access memory to delay each speech sample to fit into a time slot which will carry its call through the exchange to another or, in some cases, the same SM.
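The write-sequential, read-by-map behaviour of a time-slot interchanger can be shown with a short sketch. This is a toy model only, not 5ESS code: the slot count, sample values, and connection map below are invented for illustration, and a real TSI handles far more slots, programmed per call by the module controller.

```c
/* Toy model of a time-slot interchanger (TSI): samples arriving in one
 * frame are written into RAM in time-slot order, then read back out in
 * the order given by a connection map, so the sample that arrived in
 * input slot map[o] leaves in output slot o of the next frame. */
#include <stdio.h>
#include <stdint.h>

#define SLOTS 8  /* illustrative; a real TSI carries hundreds of slots */

int main(void)
{
    uint8_t frame_in[SLOTS] = {10, 11, 12, 13, 14, 15, 16, 17};
    uint8_t ram[SLOTS];
    uint8_t frame_out[SLOTS];
    int map[SLOTS] = {3, 0, 6, 1, 7, 2, 5, 4}; /* output slot -> input slot */

    for (int i = 0; i < SLOTS; i++)     /* write phase: sequential */
        ram[i] = frame_in[i];
    for (int o = 0; o < SLOTS; o++)     /* read phase: per the map */
        frame_out[o] = ram[map[o]];

    for (int o = 0; o < SLOTS; o++)
        printf("output slot %d carries sample %u (from input slot %d)\n",
               o, frame_out[o], map[o]);
    return 0;
}
```

In a real exchange the write and read phases alternate every frame (125 µs for 8 kHz PCM), which is what introduces the per-sample delay mentioned above.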
T-carrier spans are terminated, originally one per card but in later models usually two, in Digital Line Trunk Units (DLTU), which concentrate their DS0 channels into the TSI. These may serve either interoffice trunks or, using Integrated Subscriber Loop Carrier, subscriber lines. Higher-capacity DS3 signals can also have their DS0 channels switched in Digital Network Unit SONET (DNUS) units, without demultiplexing them into DS1s. Newer SMs have DNUS (DS3) and optical OIU (OC-12) interfaces with a large amount of capacity. SMs have Dual Link Interface (DLI) cards to connect them by multi-mode optical fibers to the Communications Modules for time-divided switching to other SMs. These links may be short, for example within the same building, or may connect to SMs in remote locations. Calls among the lines and trunks of a particular SM need not go through the CM, and an SM located remotely can act as distributed switching, administered from the central AM. Each SM has two Module Controller/Time Slot Interchange (MCTSI) circuits for redundancy. In contrast to Nortel's DMS-100, which uses individual line cards each with a codec, most lines are on two-stage analog space-division concentrators, or Line Units, which connect as many as 512 lines, as needed, to the 8 channel cards that each contain 8 codecs, and to high-level service circuits for ringing and testing. Both stages of concentration are included on the same GDX (Gated Diode Access) board. Each GDX board serves 32 lines, 16 A links, and 32 B links. Limited availability saves money with incompletely filled matrices. The Line Unit can have up to 16 GDX boards connecting to the channel boards by shared B links, but in offices with heavier per-line traffic, fewer GDX boards are equipped. ISDN lines are served by individual line cards in an ISLU (Integrated Services Line Unit). Administrative Module The Administrative Module (AM) is a dual-processor minicomputer of the AT&T 3B series, running UNIX-RTR. The AM contains the hard drives and tape drives used to load and back up the central and peripheral processor software and translations. Disk drives were originally several 300-megabyte SMD multi-platter units in a separate frame; now they consist of several redundant multi-gigabyte SCSI drives that each reside on a card. Tape drives were originally half-inch open-reel units at 6250 bits per inch, which were replaced in the early 1990s with 4 mm Digital Audio Tape cassettes. The Administrative Module is built on the 3B21D platform and is used to load software to the many microprocessors throughout the switch and to provide high-speed control functions. It provides messaging and interface to control terminals. The AM of a 5ESS consists of the 3B20x or 3B21D processor unit, including I/O, disks, and tape drive units. Once the 3B21D has loaded the software into the 5ESS and the switch is activated, packet switching takes place without further action by the 3B21D, except for billing functions requiring records to be transferred to disk for storage. Because the processor has duplex hardware (one active side and one standby side), a failure of one side of the processor will not necessarily result in a loss of switching. Communications Module The Communications Module (CM) forms the central time switch of the exchange. 5ESS uses a time-space-time (TST) topology in which the Time-Slot Interchangers (TSI) in the Switching Modules assign each phone call to a time slot for routing through the CM.
CMs perform time-divided switching and are provided in pairs; each module (cabinet) belongs to Office Network and Timing Complex (ONTC) 0 or 1, roughly corresponding to the switch planes of other designs. Each SM has four optical fiber links, two connecting to a CM belonging to ONTC 0 and two to ONTC 1. Each optical link consists of two multimode optical fibers with ST connectors that plug into transceivers plugged into backplane wiring at each end. CMs receive time-multiplexed signals on the receive fiber and send them to the appropriate destination SM on the send fiber. Very Compact Digital Exchange The Very Compact Digital Exchange (VCDX) was developed with the 5ESS-2000 and marketed mostly to non-Bell telephone companies as an inexpensive, effective way to offer ISDN and other digital services in an analog switching center. This avoided the capital expense of retrofitting the entire analog switch into a digital one to serve all of the switch's lines when many would not require it and would remain POTS lines. An example would be the (former) GTE/Verizon Class-5 telephone switch, the GTD-5 EAX. Like the Western Electric 1ESS/1AESS, it served mostly medium to large wire centers. The standalone VCDX was also capable of serving as a switch for very small wire centers (a CDX, or community dial office) of fewer than about 400 lines. However, for small wire centers of 400–4,000 lines, that function was usually served by RSMs (5ESS "Remote SMs"), ORMs, or wired ORMs. The RSM is controlled by T1 lines connected to a DLTU unit; the first two T1s carry the control of the RSM and are necessary for any recent changes to take place. An RSM can have up to 10 T1s, and there can be multiple RSMs in an office. An ORM can be fed via direct fiber or via coax, in which case it is called a wired ORM. An RSM or ORM can have many of the same peripheral units that are part of a full 5ESS switch. An RSM has a limited distance from its host and can serve parts of a larger metro area or rural offices. An ORM or wired ORM can technically be located anywhere, and was preferred over the RSM once the ORM became available. Both the RSM and the ORM are often used as a Class-5 wire center for small to medium towns hosted from a 5ESS located in a larger city. The wired ORM is connected via coax from a MUX unit and fed to a TRCU, which converts the coax connection for the DLI. There was also a two-mile ORM, fed directly via fiber from a host office no more than two miles away, that was used when an office was broken out of, or took over an area from, another office. As with any SM, the size is dictated by the number of time slots needed for each peripheral unit. ORMs are linked with DS3s; RSMs are linked with T1 lines. The VCDX was also used as a large private branch exchange (PBX). Small communities of fewer than 400 lines or so were also provided with SLC-96 units or Anymedia units. The standalone VCDX has a single Switching Module and no Communications Module. Its Sun Microsystems SPARC workstation runs the UNIX-based Solaris operating system, which executes a 3B20/21D processor MERT OS emulation system, acting as the VCDX's Administrative Module. The VCDX uses the CO's normal telephone power sources (which are very large uninterruptible power supplies), and has connections to the CO digital cross-connect system for T1 access, etc. Signaling The 5ESS has two different signaling architectures: Common Network Interface (CNI) Ring and Packet Switching Unit (PSU)-based SS7 signaling.
Software The development effort for 5ESS required five thousand employees, producing 100 million lines of system source code, mostly in the C language, with 100 million lines of header files and makefiles. Evolution of the system took place over 20 years, while three releases were often being developed simultaneously, each taking about three years to develop. The 5ESS was originally U.S.-only; the international market resulted in a complete development system and team operating in parallel with the U.S. version. The development systems were Unix-based mainframe systems, and around 15 of these systems were active at the peak, including development machines, simulator machines, and build machines. Developers' desktops were multi-window terminals (versions of the Blit developed by Bell Labs) until the mid-1990s, when Sun workstations were deployed. Developers continued to log in to the servers for their work, using the X Window System on their workstations as a multi-window environment. Source code management was based on SCCS and utilized "#feature" lines to separate source code between releases, between features specific to the US or international versions, and the like. Customization around the vi and Emacs text editors allowed developers to work with the appropriate view of a file, hiding the parts that were not applicable to their current project. The change request system used the SCCS MR to create named change sets, tied into the IMR (initial modification request) system, which had purely numeric identifiers. An MR name was created from a subsystem prefix, the IMR number, MR sequence characters, and a character for the release or "load". So, for the gr (generic retrofit) subsystem, the first MR created for the 2371242 IMR, destined for the 'F' load, would be gr2371242aF. The build system used a simple build-configuration mechanism that drove makefile generation. The system always built everything, but used checksum results to decide whether a file had actually changed before updating the build output directory tree. This provided a huge reduction in build time when a core library or header was being edited: a developer could add values to an enum, but if that did not change the build output, then subsequent dependencies on that output would not have to be relinked or have libraries rebuilt. OAMP The system is administered through an assortment of teletypewriter "channels", also called the system console, such as the TEST channel and the Maintenance channel. Typically provisioning is done either through a command line interface (CLI) called RCV:APPTEXT, or through a menu-driven program. RCV stands for Recent Change/Verification, and can be accessed through the Switching Control Center System. Most service orders, however, are administered through the Recent Change Memory Administration Center (RCMAC). In the international market, this terminal interface has localization to provide locale-specific language and command name variations in the screen and printer output. See also PRX (telephony) – an earlier switch acquired by AT&T in Europe References External links Evolution of Switching Architecture to Support Voice Telephony over ATM by Judith R. McGoogan, Joseph E. Merritt, and Yogesh J. Dave. Extending 5ESS–2000. Bell Labs Technical Journal, April–May 2000 Switch Basics 5ESS Scribd.com Alcatel-Lucent Telephone exchange equipment
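The MR naming convention described in the Software section above (subsystem prefix, IMR number, sequence characters, and a load letter) can be sketched in C. Only the cited example gr2371242aF comes from the text; the helper function, its name, and any formatting rules beyond that example are assumptions made for illustration.

```c
/* Assemble an MR name of the form
 *   <subsystem prefix><IMR number><sequence character(s)><load letter>
 * as described above; the rules beyond the cited example are assumed. */
#include <stdio.h>

static void mr_name(char *buf, size_t len, const char *subsys,
                    long imr, const char *seq, char load)
{
    snprintf(buf, len, "%s%ld%s%c", subsys, imr, seq, load);
}

int main(void)
{
    char name[32];
    /* first MR for IMR 2371242 in the gr (generic retrofit)
     * subsystem, destined for the 'F' load */
    mr_name(name, sizeof name, "gr", 2371242L, "a", 'F');
    printf("%s\n", name);   /* prints: gr2371242aF */
    return 0;
}
```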
1185772
https://en.wikipedia.org/wiki/Vampire%3A%20The%20Masquerade%20%E2%80%93%20Redemption
Vampire: The Masquerade – Redemption
Vampire: The Masquerade – Redemption is a 2000 role-playing video game developed by Nihilistic Software and published by Activision. The game is based on White Wolf Publishing's tabletop role-playing game Vampire: The Masquerade, a part of the larger World of Darkness series. It follows Christof Romuald, a 12th-century French crusader who is killed and revived as a vampire. The game depicts Christof's centuries-long journey from the Dark Ages of 12th century Prague and Vienna to late-20th century London and New York City in search of his humanity and his kidnapped love, the nun Anezka. Redemption is presented in the first- and third-person perspectives. The player controls Christof and up to three allies through a linear structure, providing the player with missions to progress through a set narrative. Certain actions committed by Christof throughout the game can raise or lower his humanity, affecting which of the game's three endings the player receives. As a vampire, Christof is imbued with a variety of abilities and powers that can be used to combat or avoid enemies and obstacles. Use of these abilities drains Christof's supply of blood, which can be replenished by drinking from enemies or innocents. The game includes a multiplayer mode called "Storyteller", which allows one player to create a narrative for a group of players with the ability to modify the game dynamically in reaction to the players' actions. Founded in March 1998, Nihilistic's twelve-person team began development of Redemption the following month as their first game. It took the team twenty-four months to complete on a budget of US$1.8 million. The team relied on eight outside contractors to provide elements that the team could not supply, such as music and artwork. The game's development was difficult: late changes to software forced the developers to abandon completed code and assets; a focus on high-quality graphics and sound meant that the game ran poorly on some computer systems; and the original scope of the game exceeded the game's schedule and budget, forcing the team to cancel planned features. Redemption was released for Microsoft Windows on June 7, 2000, with a Mac OS version following in November 2001. The game received a mixed critical response; reviewers praised its graphics and its multiplayer functionality but were polarized by the quality of the story and combat. It received the 1999 Game Critics Awards for Best Role-Playing Game. It was successful enough to merit the production of the indirect sequel Vampire: The Masquerade – Bloodlines (2004), which takes place in the same fictional universe. Gameplay Vampire: The Masquerade – Redemption is a role-playing video game (RPG) presented primarily from the third-person perspective; the playable character is shown on the screen, while an optional first-person mode, used to view the character's immediate environment, is available. The camera can be freely rotated around the character and positioned above it to give a greater overview of the immediate area. The game follows a linear, mission-based structure. Interaction is achieved by using a mouse to click on an enemy or environmental object to attack it or to activate it. Interaction is context based; clicking on an enemy initiates combat, while clicking on a door causes it to open or close. The playable character can lead a group of three additional allies into battle, controlling their actions to attack a single enemy or to use specific powers. Characters can be set to one of three modes: defensive, neutral, or offensive.
In defensive mode, the character remains distant from battles, while offensive mode sends the character directly into battle. The main character and active allies are represented by portraits on screen that reflect their current physical or emotional state, showing sadness, anger, feeding, or the presence of injuries or staking (having been stabbed through the heart and rendered immobile). The player can access various long-range and melee weapons including swords, shields, bows, guns, stakes, and holy water. Some weapons have a secondary, more powerful attack; for example, a sword can be spun to decapitate a foe. Because they are vampires, allies and enemies are susceptible to damage from sunlight. Disciplines (vampiric powers) are used to supplement physical attacks. Each discipline can be upgraded, becoming a more powerful version of itself; alternatively, other in-game benefits can be gained. The game features disciplines that allow the player to enhance the character's physical abilities such as speed, strength, or durability. Disciplines can also allow the player to mesmerize an enemy or a potential feeding victim, render the character invisible to escape detection, turn the character into mist, summon serpents to attack enemies, heal, revive their allies, and teleport to a haven. Each discipline can be upgraded up to five times, affecting the ability's duration, the scale of its damage or effect, and the cost of using it. The characters' health and disciplines are reliant on blood, which can only be replenished by feeding on the living—including other party members—or finding blood containers such as bottles and plasma bags. Drinking an innocent to death and other negative actions reduce the player's humanity, increasing the likelihood of entering a frenzy when injured or low on blood, during which the character indiscriminately attacks friend and foe. Completing objectives and defeating enemies is rewarded with experience points, which are used to unlock or upgrade existing disciplines and improve each character's statistics, such as strength or agility. Weapons, armor, and other accessories can be purchased or upgraded using money or valuable items, which are collected throughout the game. The character's inventory is grid-based; objects occupy an allotted amount of space, requiring the management of the storage space available. A belt allows some items to be selected for immediate use during gameplay, such as healing items, without the need to access them in the main inventory. The first version of the game allows progress to be saved only in the main character's haven or safehouse; it automatically saves other data at specific points. An update to the game enabled players to save their data at any point in the narrative. Redemption features an online multiplayer component which allows players to engage in scenarios together. One player assumes the role of the Storyteller, guiding other players through a scenario using the Storyteller interface. The interface allows the Storyteller to create or modify scenarios by placing items, monsters, and characters across the map. Character statistics, such as experience points, abilities, and disciplines, can also be modified. Finally, the Storyteller can assume the role of any character at any given time. These functions allow the Storyteller to dynamically manipulate the play environment while the other players traverse it.
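The grid-based inventory described in the Gameplay section above lends itself to a small data-structure sketch. This is an illustrative model only, assuming a rectangular grid and rectangular items; the grid dimensions, item sizes, and function names below are invented and are not taken from the game's data.

```c
/* Minimal model of a grid-based inventory: each item occupies a w x h
 * block of cells and can only be stored where every covered cell is free. */
#include <stdio.h>
#include <stdbool.h>

#define ROWS 4
#define COLS 8

static bool grid[ROWS][COLS];  /* true = cell already occupied */

static bool can_place(int row, int col, int h, int w)
{
    if (row < 0 || col < 0 || row + h > ROWS || col + w > COLS)
        return false;                       /* would overhang the grid */
    for (int r = row; r < row + h; r++)
        for (int c = col; c < col + w; c++)
            if (grid[r][c])
                return false;               /* overlaps a stored item */
    return true;
}

static void place(int row, int col, int h, int w)
{
    for (int r = row; r < row + h; r++)
        for (int c = col; c < col + w; c++)
            grid[r][c] = true;
}

int main(void)
{
    place(0, 0, 1, 4);  /* e.g. a hypothetical 1x4 sword along the top row */
    printf("1x1 item at (0,3): %s\n", can_place(0, 3, 1, 1) ? "fits" : "blocked");
    printf("2x2 item at (1,0): %s\n", can_place(1, 0, 2, 2) ? "fits" : "blocked");
    return 0;
}
```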
Synopsis Setting The events depicted in Vampire: The Masquerade – Redemption occur in two time periods: 12th century Prague and Vienna, and late-20th century London and New York City. The game is set in the World of Darkness; it depicts a world in which vampires, werewolves, demons, and other creatures influence human history. The vampires are divided into seven Clans of the Camarilla—the vampire government—each with distinctive traits and abilities. The Toreadors are the closest to humanity—they have a passion for culture; the Ventrue are noble, powerful leaders; the Brujah are idealists who excel at fighting; the Malkavians are either cursed with insanity or blessed with insight; the Gangrel are loners in synchronization with their animalistic nature; the Tremere are secretive, untrustworthy, and wield blood magic; and the monstrous Nosferatu are condemned to remain hidden in the shadows. Redemption also features the Cappadocian clan; the Society of Leopold, modern-day vampire hunters; the Assamite clan of assassin vampires; the Setite clan; the Tzimisce clan; the Giovanni clan; and the Sabbat, vampires who revel in their nature, embracing the beast within. The main character of Redemption is the French crusader Christof Romuald, a once-proud, religious church knight who is transformed into a Brujah vampire. With his religious faith destroyed, Christof is forced to reassess his understanding of good and evil as he acclimates to his new life. Christof's anchor to humanity is the nun Anezka, a human with a pure soul who loves Christof even after his transformation. As a member of the Brujah under Ecaterina the Wise, Christof allies with Wilhem Streicher, the Gangrel Erik, and the Cappadocian Serena during his journeys through 12th century Prague. Other characters in this era include the slaver Count Orsi, the Tremere Etrius, and the Ventrue Prince Brandl. Christof continues his quest into the late-20th century, where he allies with the Brujah Pink, the enslaved Toreador Lily, and the Nosferatu Samuel. Other characters include the 300-year-old human leader of the Society of Leopold, Leo Allatius, who has unnaturally extended his lifespan by consuming vampire blood, and the Setite leader Lucretia. During his journey, Christof comes into conflict with Vukodlak, a powerful Tzimisce vampire intent on usurping the clans' ancestors and taking their power for himself. Trapped in a mystical sleep by those who oppose his plot, Vukodlak commands his followers to help resurrect him. Plot In 1141 in Prague, the crusader Christof Romuald is wounded in battle. He recovers in a church, where he is cared for by a nun called Anezka. The pair instantly fall in love but are restrained by their commitments to God. Christof enters a nearby silver mine to kill a monstrous Tzimisce vampire who is tormenting the city. Christof's victory is noted by the local vampires, one of whom, Ecaterina the Wise, turns him into a vampire to prevent another clan from taking him. Initially defiant, Christof agrees to accompany Ecaterina's servant Wilhem on a mission to master his new vampiric abilities. Afterward, he meets with Anezka and refuses to taint her with his cursed state. At Ecaterina's haven, the Brujah tell Christof about an impending war between the Tremere and Tzimisce clans that will devastate the humans caught up in it. Wilhem and Christof gain the favor of the local Jews and Cappadocians, who devote their member Serena to the Brujah cause.
The Ventrue Prince Brandl tells the group that in Vienna, the Tremere are abducting humans to turn them into ghouls—servitors addicted to and empowered by vampire blood. The group infiltrates the Tremere chantry in Prague and stops the Gangrel Erik from being turned into a Gargoyle; Erik then joins them. Christof learns that Anezka, seeking Christof's redemption, has visited the Tremere and Tzimisce clans, including the Vienna Tremere stronghold, Haus de Hexe. There, the Tremere leader Etrius turns Erik into a Gargoyle, forcing Christof to kill him. Etrius reveals that the Tzimisce abducted Anezka. Returning to Prague, Christof finds that the Tzimisce in nearby Vyšehrad Castle have been revealed to the humans, who have launched an assault on the structure. Christof, Wilhem, and Serena infiltrate the castle and find that the powerful, slumbering Vukodlak has enslaved Anezka as a ghoul. Anezka rejects Christof and prepares to revive Vukodlak, but the outside assault collapses the castle upon them. In 1999, the Society of Leopold excavates the site of Vyšehrad Castle; they recover Christof's body and take it to London, where he is awoken by a female voice. He learns that the events at Vyšehrad and the resulting human uprising divided the vampires into two sects: the Camarilla, who seek to hide from humanity, and the Sabbat, who seek to regain dominion over it. The Society's excavation also enables Vukodlak's followers to recover Vyšehrad. After escaping, Christof meets Pink, who agrees to help him. They learn that the Setite clan has been shipping Vyšehrad contraband to New York City and infiltrate a Setite brothel to gain information. They kill the Setite leader Lucretia and recruit Lily, an enslaved prostitute. Christof, Pink, and Lily travel to New York City aboard a contraband ship, rescue the Nosferatu Samuel from the Sabbat, and infiltrate a warehouse storing the Vyšehrad contraband. There they encounter Wilhem, who is now a Sabbat under Ecaterina following the collapse of their group. Wilhem reveals that Pink is an assassin working for Vukodlak. Pink escapes and Wilhem rejoins Christof, hoping to reclaim the humanity he has sacrificed during the previous 800 years. Together, Christof, Wilhem, Lily, and Samuel discover that Vukodlak is hidden beneath a church within his Cathedral of Flesh and that Anezka is still in his servitude. In the cathedral they find that Vukodlak has awoken; he tries to influence Christof by offering him Anezka, then revealing that she is completely dependent on Vukodlak's blood and will die without him. Christof refuses, and Vukodlak drops the group into tunnels beneath the cathedral. Christof finds the Wall of Memories, which holds Anezka's memories of the intervening centuries, showing she continued to hope as Vukodlak found new ways to defile and torment her. She eventually sacrificed her innocence to gain Vukodlak's trust, using her position to delay his resurrection over hundreds of years until, with no options left, she prayed for Christof's return. The group returns to the Cathedral and battles Vukodlak. The ending of Redemption varies depending upon the quantity of humanity Christof has retained during the game. If the quantity is great, Christof kills Vukodlak, reconciles with Anezka, and turns her into a vampire, sparing her from death. If his humanity is moderate, he surrenders to Vukodlak and becomes a ghoul; Vukodlak betrays Christof and forces him to murder Anezka. A lesser quantity of humanity results in Christof killing Vukodlak by drinking his blood.
Greatly empowered, Christof forsakes his humanity, murders Anezka, and revels in his new power. Development The development of Vampire: The Masquerade – Redemption began at Nihilistic Software in April 1998, shortly after the developer's founding in March that year. Its development was publicly announced in March 1999. Intending to move away from the first-person games the team members had worked on with previous companies, Nihilistic prepared a design and story for a futuristic RPG with similar themes and gothic aesthetics to those of the Vampire: The Masquerade series. After publisher Activision approached the team about using the White Wolf license, they adapted parts of their original design to fit the Vampire series, which became the original design for Redemption. Endorsement by Id Software founder John Carmack helped Nihilistic decide to work with Activision. The Nihilistic team developed Redemption over twenty-four months; the team expanded to twelve members by the end of development. The development team included Nihilistic President and CEO Ray Gesko, lead programmer Rob Huebner, world designer Steve Tietze, level designer Steve Thoms, lead artist Maarten Kraaijvanger, artist Yujin Kiem, art technician Anthony Chiang, and programmers Yves Borckmans and Ingar Shu. Activision provided a budget of US$1.8 million; the amount was intentionally kept low to make the project manageable for Nihilistic and reduce the risk to Activision, which was relatively inexperienced with RPGs at the time. Nihilistic's management was committed to the entire team working in a one-room environment with no walls, doors, or offices, believing this would force the individual groups to communicate and allow each department to respond to queries immediately, saving hours or days of development time. Redemption's story was developed with input from White Wolf; it was co-written by Daniel Greenberg, a writer for the source pen-and-paper RPG. The small size of the team led to Nihilistic relying on eight external contractors to provide elements the team could not supply. Nick Peck was chosen to provide sound effects, ambient loops, and additional voice recordings based on his previous work on Grim Fandango (1998). Kevin Manthei provided the musical score for the game's 12th century sections, while a duo called Youth Engine provided the modern-day sections' score. Some artwork was outsourced; Peter Chan (Day of the Tentacle (1993) and Grim Fandango) developed concept art to establish the look of the game's environments, and Patrick Lambert developed character concepts and full-color drawings for the modelers and animators to use. Huebner considered the most important external relationship to be with a small start-up company called Oholoko, which produced cinematic movies for the game's story elements and endings. Nihilistic met with various computer animation firms, but their prices were too high for the project's budget. Redemption was officially released to manufacturing on May 30, 2000. The game features 300,000 lines of code, with a further 66,000 lines of Java for scripts. In January 2000, it was announced that Nihilistic was seeking a studio to port Redemption to the Sega Dreamcast video game console; however, this version was never released. In February 2001, after the release of the PC version, it was announced that MacSoft was developing a Mac OS version of the game.
Technology Nihilistic initially looked at existing game engines such as the Quake engine and Unreal Engine, but concluded that those engines, which were primarily designed for first-person shooters, would not be sufficient for its point-and-click driven RPG, and decided to create its own engine for the development of Redemption. This was the NOD engine, which the developers could customize for the game's 3D perspective and role-playing mechanics. The team also considered that developing its own engine would allow it to freely reuse code for future projects or to license the engine for profit. NOD was prototyped using the Glide application programming interface (API) because the team believed it would be more stable during the engine's development, intending that once the engine was more complete, it would be moved to a more general API designed to support a wide range of hardware, such as Direct3D. However, once a basic engine was in place in Glide, the programmers turned their attention to gameplay and functionality. By June 1999, Redemption was still running in Glide, which at that point lacked some of the basic features the team needed to demonstrate at that year's Electronic Entertainment Expo. When the team eventually switched to Direct3D, it was forced to abandon some custom code it had built to compensate for Glide's limitations, such as texture and graphic management, which required the re-exporting of hundreds of levels and models for the new software. The late API switch also limited the time available to test the game's compatibility on a wide range of hardware. The team focused on building the game for hardware-accelerated systems to avoid the limitations of supporting a wider range of systems, which had restricted the development of the company founders' previous game, Star Wars Jedi Knight: Dark Forces II (1997). The programmers suggested using 3D Studio Max for art and level design, which would save money by allowing the company to license a single piece of software, but the lead artists successfully lobbied against this plan, believing that allowing the respective teams to choose their software would allow them to work most efficiently. Huebner said this saved the project more time than any other decision made during development. The level designers chose QERadiant to take advantage of their previous experience using the software while working on Id Software's Quake series. Id allowed Nihilistic to license QERadiant and modify it to create a customized tool for its 3D environments. Because QERadiant was a finished, functional tool, it allowed the level designers to begin developing levels from the project's start and then export them into the NOD engine, rather than waiting for up to six months for Nihilistic to develop a custom tool or learning a new 3D level editor. In twenty-four months, the three level designers built over 100 in-game environments for Redemption. They obtained blueprints and sketches of buildings from medieval Prague and Vienna to better represent the period and locations. The four-person art team led by Kraaijvanger used Alias Wavefront Maya to create 3D art. Nihilistic's management wanted Kraaijvanger to use a less expensive tool but relented when the cost was found to be lower than had been thought. Throughout the project, the art team built over 1,500 3D models. At the start of development, Nihilistic wanted to support editing of the game by the user community, having seen the benefits of such communities while working on other games.
Staff who had worked on Jedi Knight remembered the experience of creating a new, customized programming language called COG, which gave the programmers the results they wanted but cost time and significant project resources. With Redemption, they wanted to incorporate an existing scripting engine that would more easily enable users to further develop the game instead of developing their own code again, which would consume months of development time. The team tested various languages, but became aware of another studio, Rebel Boat Rocker, which was receiving attention for its use of the Java language. After speaking to that studio's lead programmer, Billy Zelsnak, Nihilistic decided to experiment with Java, having little prior knowledge of it. Java integrated into the NOD engine without problems, providing a standardized and freely distributable scripting engine. Several designers were trained to use Java to allow them to build the several hundred scripts required to drive the game's storyline. Design The Nihilistic team used their experience adapting an existing property for the Star Wars games to design Redemption. Reasoning that most people would be familiar with vampire tropes, the team wrote the game assuming players would not need an explanation of the genre's common elements, while enabling them to explore White Wolf's additions to the mythos. When translating the pen-and-paper RPG to a video game, the team redesigned some of the disciplines to make them simpler to understand. For example, in the pen-and-paper game, the "Protean" discipline includes the abilities to see in the dark, grow claws, melt into the ground, and change into an animal; in Redemption, however, these were made into individual disciplines to make them instantly accessible, instead of requiring the player to select Protean and then select one of the sub-abilities. Huebner said the team struggled with restraint. From inception, the team had developed its assets for a high-end system to ensure the finished project would have top-of-the-range graphics, and because, if necessary, it could more easily scale the art down than up. However, the art teams were not stopped from producing new assets, resulting in Redemption requiring approximately 1 GB of storage space to install. Additionally, textures were made in 32-bit color, models were extremely detailed, featuring between 1,000 and 2,000 triangles each on average, and levels were illuminated with high-resolution light-maps. Because the game was designed for high-end computer systems, it relied on algorithms to scale down the models; combined with the high-detail art assets, Redemption was taxing to run on low- and mid-range systems. Nihilistic had intended to include both 16-bit and 32-bit versions of the game textures, and different sound quality levels to allow players to choose which versions to install, but the CD-ROM format did not have enough space to accommodate more than one version of the game. The finished product barely fit onto two CD-ROMs; some sound assets were removed to fit the format. This caused the game to use a large amount of computer resources and limited the ability to port it to more constrained console hardware. The programmers identified early on that pathfinding (the ability of the variable-sized characters to navigate through the environment) would be a problem. Huebner cited the difficulty of programming characters to navigate an environment in which level designers were free to add stairs, ramps, and other 3D objects. 
They came up with a temporary solution and planned to improve the pathfinding later in development. By the time they properly addressed the problem, many of the levels were almost complete and featured few markers the programmers could use to control movement. They could identify walkable tiles but not walls, cliffs, and other environmental hazards. Ideal solutions, such as creating zones for characters to walk through, would have taken too much time to add retroactively to the 100 completed levels, so the programmers spent several weeks making small, iterative fixes to conceal the obvious errors in the pathfinding and leave less obvious ones intact. From the outset, the team wanted to make a grand RPG but was restricted by its budget and schedule. They were reluctant to cut any content such as one of the time periods or the multiplayer aspect, and they decided to postpone the original release date from March 2000 to June the same year. They also scaled back the scope of their multiplayer testing and canceled the planned release of an interactive pre-launch demo. The delay allowed Nihilistic to retain most of the intended design, but they were forced to remove the ability to play the entire single-player campaign as a team online, compensating for this by adding two multiplayer scenarios built using levels from the single-player game. Huebner said they did not plan appropriately for multiplayer when building the Java scripts for the single-player game, meaning the scripts did not work effectively in multiplayer mode. The multiplayer "Storyteller" mode was conceived early in the development cycle. Departing from typical deathmatch or co-operative multiplayer modes, Storyteller required Nihilistic to develop an interface that could give one player, the Storyteller, enough control to run a particular scenario and change events in the game in real time, without making it too complex for the average player to understand. Much of the technology was simple to implement, requiring only the typical multiplayer components that allow users to connect with each other; the largest task was the Storyteller interface itself. It had to contain lists of objects, characters, and other resources, and options to manipulate those resources. It had to be mostly accessible using a mouse as input, reserving the keyboard for less common and more advanced commands. The mode was inspired by the text-based Multi-User Dungeon, a multiplayer real-time virtual world in which high-ranking users can manipulate the game's environment and dynamically create adventures. Release Vampire: The Masquerade – Redemption was released for Microsoft Windows on June 7, 2000. The game was available as a standalone copy and as a Collector's Edition containing a copy of the game, a hardbound limited edition of White Wolf's The Book of Nod chronicling the first vampire, a Camarilla pendant, a strategy guide, and an alternative game case cover. The Collector's Edition also included a copy of the game's soundtrack, featuring songs by Type O Negative, Gravity Kills, Ministry, Darling Violetta, Cubanate, Primus, Youth Engine, and Kevin Manthei. Nihilistic also released Embrace, a level editor with access to the game's code to allow users to modify levels and scripts. A Mac OS version was released in November 2001. 
During its first week on sale, Redemption was the third best-selling Windows game in the United States behind The Sims and Who Wants To Be A Millionaire 2nd Edition. Sales of the Collector's Edition were individually tracked; it was the fifth best-selling game that same week. According to the sales-tracking firm PC Data, Redemption had sold approximately 111,193 units across North America by October, earning $4.88 million. Approximately 57,000 units were sold in Germany by March 2001. It spent four months on Germany's list of the 30 top-selling games, peaking at number 5 in July 2000, before leaving the charts in October. Redemption received a digital release on the GOG.com service in February 2010. Redemption achieved enough success to merit the 2004 release of an indirect sequel, Vampire: The Masquerade – Bloodlines, which is set in the same fictional universe and was developed by Troika Games. Reception The review aggregation website Metacritic gives the game a score of 74 out of 100, based on 22 reviews. Reviewers compared it to other successful RPGs, including Diablo II, Deus Ex, Darkstone: Evil Reigns, and the Final Fantasy series. The game's graphics received near-unanimous praise. Game Revolution said its "brilliant" graphics were among "the best in gaming", and Next Generation said the graphics were the best in any PC RPG. Computer Games said it was the most attractive PC game at the time, Ars Technica said it was the best game to look at and watch since The Last Express (1997), and PC Gamer said, "there has never been a more beautifully created RPG". The level design and environments were praised for their "painstaking" detail and for providing a brooding, atmospheric aesthetic. Reviewers also made positive comments about the game's lighting effects. Conversely, Computer Gaming World (CGW) said that while the game was attractive, the visuals were superficial and failed to emphasize the game's horror elements. They were also critical of the third-person in-game camera positioning, claiming that it obscured the area directly in front of the player and did not allow the player to look upwards. Responses to the story were mixed; some reviewers called it strong with good dialog, while others said it was poor. Game Revolution and CGW called the dialog poor, sophomoric, and often overly verbose; in particular, CGW said some speeches became an "agonizingly long filibuster" that only served to delay the return of control to the player. Other sources called it one of the richest, most engrossing stories to be found outside films and novels, and more original than most RPGs. Computer Games criticized the linear storyline, and said the few dialog choices available to the player had no real impact on the storytelling. CGW said the linear story prevented Redemption from being a true RPG because it lacked interaction with many characters, and the lack of player impact on the story made it seem as though players were not building characters but merely moving them between story milestones. According to PC Gamer, while the game's linearity was a negative, it kept the narrative tight and compelling. Reviewers variously appreciated and disliked the voice acting. Game Revolution and Computer Games said the acting ranged from adequate to good, while CGW said the voices were inappropriate, with the 12th-century European characters sounding like modern Americans, though the modern era featured better actors. Ars Technica said the acting was inconsistent but was better than that of Deus Ex. 
The weather effects, background sound, and moody music were said to blend together well and help immerse the player in the game's world. CGW said the sound quality was sometimes poor. Much of the criticism of Redemption focused on technical problems at release, which undermined the game experience or made the game unplayable. Several reviewers noted issues with the initial lack of a function to save game progress at any point, which meant that dying or technical issues with the game could force players to reload a previous save and then repeat up to 30 minutes of gameplay. CGW added that the repetitive gameplay meant that losing progress and having to repeat it was a particular downside. Next Generation, which gave the game a score of 3 out of 5, said that if not for its technical issues, Redemption was only a few patches away from being a 5-out-of-5 game. PC Gamer's review even included recommended cheats that worked around the technical flaws. CGW said the in-game combat became a confusing mess once allies became involved, in part due to poor artificial intelligence (AI) that caused allies to use their powers liberally and run low on blood as a result. The AI was considered to be insufficient for the game; pathfinding failures meant allies would become stuck on environmental objects or on each other during combat, would use up their costliest abilities on enemies regardless of the threat they posed, and were poor at staying alive in battle. Enemies were similarly dismissed for failing to notice the playable character in obvious circumstances or to respond to attacks on themselves. Combat was also criticized; Computer Games called the game "little more than a hack-and-slash adventure", and said the game's focus on combat was counter to the greater focus on political intrigue and social interaction prevalent in the source Vampire: The Masquerade tabletop game. Ars Technica said that combat was initially fun but very repetitive, and it became a chore by the later stages of the game, noting that every enemy dungeon consisted of four levels filled with identical enemies, while Next Generation said the number of enemies and the difficulty of defeating them often meant the playable character would run away or die. The repetitive combat was also criticized by other reviewers, who disliked that it involved repeatedly clicking on enemies until they were dead, and running away from unending waves of enemies whenever the playable character was about to die. Disciplines were considered helpful in adding variety to combat, but battles were too fast-paced to allow the tactical use of a wide range of powers because combat could not be paused to issue orders. Game Revolution said the multiplayer feature was a revelation and alone worth the cost of the game. Computer Games said it was innovative and may serve as an inspiration for future games. PC Gamer said the multiplayer mode was the redeeming factor of the game, though it was still marred by bugs. Others noted that aspects of the multiplayer interface were insufficient, such as the inability to store custom dialog, requiring the Storyteller to type text in real time during gameplay. Accolades At the 1999 Game Critics Awards, Redemption was named Best RPG ahead of the first-person action RPG Deus Ex. 
References Works cited External links 2000 video games Activision games Dark fantasy video games Gothic video games Fantasy video games set in the Middle Ages Classic Mac OS games Multiplayer and single-player video games Role-playing video games Video games about vampires Vampire: The Masquerade Video games scored by Kevin Manthei Video games developed in the United States Video games set in Austria Video games set in the Czech Republic Video games set in London Video games set in New York City Video games with alternate endings Windows games World of Darkness video games
3587057
https://en.wikipedia.org/wiki/Sindh%20Agriculture%20University
Sindh Agriculture University
Sindh Agriculture University (Sindhi: سنڌ زرعي يونيورسٽي ٽنڊو ڄام) is situated in the town of Tando Jam in Hyderabad, on the Hyderabad-Mirpurkhas highway, and is about from Karachi airport, linked with the Super Highway to Hyderabad. Sindh Agriculture University is ranked the third-best university in agriculture by the Higher Education Commission. The university is an academic complex of five faculties (Faculty of Crop Production, Faculty of Crop Protection, Faculty of Agricultural Social Sciences, Faculty of Agricultural Engineering, and Faculty of Animal Husbandry and Veterinary Sciences), two institutes (the Information Technology Centre and the Institute of Food Sciences and Technology), three affiliated colleges (the Sub Campus Umarkot, Shaheed Z. A. Bhutto Agriculture College Dokri, and the Khairpur College of Agricultural Engineering and Technology), and the Directorate of Advanced Studies and Research. The five faculties comprise some 41 departments. Undergraduate degree programmes include the Doctor of Veterinary Medicine (D.V.M.), Bachelor of Engineering in Agriculture (B.E. Agriculture), Bachelor of Science (Agriculture Honours), and Bachelor of Science in Information Technology (BS-IT Honours). The university offers postgraduate programmes leading to the award of M.S./M.E. and MS-IT degrees in Animal Husbandry and Veterinary Sciences, Agricultural Engineering, and Information Technology, and in all the above-mentioned disciplines of agriculture. M.Phil. and Ph.D. degree programmes are also offered in selected subject areas where trained staff and other facilities are available. A modest number of short courses and training programmes are regularly offered to meet the continuing and in-service education needs of agriculture officers, field assistants, bank officials, agricultural technicians, progressive farmers, small farmers, tenants, gardeners, housewives and other clientele groups. The total area covered by the university is , including more than 80 acres occupied by residential and non-residential buildings of the university, the Agricultural Research Institute, the Nuclear Institute of Agriculture, the Rural Academy, the Agricultural Engineering Workshop, the Drainage Research Centre, and the Central Veterinary Diagnostic Laboratory. Faculty of Crop Production Department of Soil Science. Department of Agronomy. Department of Horticulture. Department of Plant Breeding & Genetics. Department of Biotechnology. Department of Crop Physiology. Faculty of Crop Protection Department of Entomology. Department of Plant Pathology. Department of Plant Protection. Faculty of Agriculture Social Sciences Department of Agriculture Economics. Department of Agriculture Extensions. Department of Rural Sociology. Department of Statistics. Department of English. Department of Islamic Studies. Department of Pakistan Studies. Faculty of Agriculture Engineering Department of Irrigation and Drainage. Department of Farm Power and Machinery. Department of Land and Water Management. Department of Farm Structure. Department of Energy and Environment. Department of Basic Engineering. Faculty of Animal Husbandry and Veterinary Sciences (AVHS) Department of Animal and Histology. Department of Animal Nutrition. Department of Animal Breeding and Genetics. Department of Animal Products. Department of Animal Reproduction. Department of Veterinary Parasitology. Department of Veterinary Pathology. Department of Veterinary Pharmacology. Department of Veterinary Medicines. Department of Livestock. Department of Poultry and Husbandry. Department of Surgery and Obstetrics. 
Information Technology Centre Computerisation and Network (MS-IT) Software Engineering and Information System (MS-IT) Institute of Food Sciences and Technology References External links Sindh Agriculture University Tandojam Public universities and colleges in Sindh Universities and colleges in Hyderabad District, Pakistan Agricultural universities and colleges in Pakistan Agriculture in Sindh
11436470
https://en.wikipedia.org/wiki/Syslog-ng
Syslog-ng
syslog-ng is a free and open-source implementation of the syslog protocol for Unix and Unix-like systems. It extends the original syslogd model with content-based filtering, rich filtering capabilities, and flexible configuration options, and adds important features to syslog, such as using TCP for transport. syslog-ng is developed by Balabit IT Security Ltd. It has three editions with a common codebase. The first is syslog-ng Open Source Edition (OSE), licensed under the LGPL. The second is called Premium Edition (PE) and has additional plugins (modules) under a proprietary license. The third is called Store Box (SSB), which comes as an appliance with a web-based UI as well as additional features including ultra-fast text search, unified search, content-based alerting, and premier-tier support. In January 2018, syslog-ng, as part of Balabit, was acquired by One Identity, a global vendor of identity and access management solutions under the Quest Software umbrella. The syslog-ng team remains an independent business within the One Identity organization and continues to develop its open source and commercial solutions under the syslog-ng brand. Protocol syslog-ng uses the standard BSD syslog protocol, specified in RFC 3164. As the text of RFC 3164 is an informational description and not a standard, some incompatible extensions of it emerged. Since version 3.0, syslog-ng also supports the syslog protocol specified in RFC 5424. syslog-ng interoperates with a variety of devices, and the format of relayed messages can be customized. syslog-ng's extensions to the original syslog protocol include: ISO 8601 timestamps with millisecond granularity and time zone information The addition of the name of relays in additional host fields, to make it possible to track the path of a given message Reliable transport using TCP TLS encryption (since 3.0.1 in OSE) History The syslog-ng project began in 1998, when Balázs Scheidler, the primary author of syslog-ng, ported the existing nsyslogd code to Linux. The 1.0.x branch of syslog-ng was still based on the nsyslogd sources and is still available in the syslog-ng source archive. Right after the release of syslog-ng 1.0.x, a reimplementation of the code base was started to address some of the shortcomings of nsyslogd and the licensing concerns of Darren Reed, the original nsyslogd author. This reimplementation was declared stable in October 1999 with the release of version 1.2.0. This time around, syslog-ng depended on some code originally developed for lsh by Niels Möller. Three major releases (1.2, 1.4 and 1.6) used this code base, with the last release of the 1.6.x branch coming in February 2007. Over this period of about eight years, syslog-ng became a popular alternative syslog implementation. In a volunteer-based effort, yet another rewrite was started in 2001, dropping the lsh code and using the more widely available GLib library. This rewrite of the codebase took its time; the first stable release, 2.0.0, came in October 2006. Development efforts were focused on improving the 2.0.x branch; support for 1.6.x was dropped at the end of 2007. Support for 2.x was dropped at the end of 2009, but it is still used in some Linux distributions. Balabit, the company behind syslog-ng, started a parallel, commercial fork of syslog-ng, called syslog-ng Premium Edition. Portions of the commercial income are used to sponsor development of the free version. Syslog-ng version 3.0 was released in the fourth quarter of 2008. 
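To make the protocol details above more concrete, the following is a minimal Python sketch, not taken from syslog-ng's documentation: it hand-formats a single RFC 5424-style message (priority, version, an ISO 8601 timestamp with millisecond granularity and a time zone offset) and sends it over TCP to a syslog-ng listener. The host, port, application name, and newline framing are assumptions; the receiving syslog-ng instance would need a matching TCP source configured, and production code would normally use a syslog library or the local syslog daemon instead.

```python
# Minimal, illustrative sketch: send one RFC 5424-style syslog message over TCP.
# Host, port and app name are assumptions; a matching TCP source must exist in
# the receiving syslog-ng configuration.
import socket
from datetime import datetime, timezone

HOST, PORT = "127.0.0.1", 601  # hypothetical syslog-ng TCP listener

def rfc5424_message(msg, app="demo-app", hostname="client1",
                    facility=1, severity=6):
    """Build an RFC 5424 line: <PRI>VERSION TIMESTAMP HOST APP PROCID MSGID SD MSG."""
    pri = facility * 8 + severity  # <14> = facility "user", severity "info"
    # ISO 8601 timestamp with millisecond granularity and a time zone offset,
    # matching the timestamp extension described above.
    ts = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    return f"<{pri}>1 {ts} {hostname} {app} - - - {msg}\n"

with socket.create_connection((HOST, PORT)) as sock:
    # Newline-delimited (non-transparent) framing; octet counting (RFC 6587)
    # is another common option for syslog over TCP.
    sock.sendall(rfc5424_message("hello from the sketch").encode("utf-8"))
```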
Starting with version 3.0, development efforts proceeded in parallel on the Premium and Open Source Editions. PE efforts were focused on quality, transport reliability, performance and encrypted log storage. The Open Source Edition efforts focused on improving the flexibility of the core infrastructure to support a wider range of non-syslog message sources. Both the OSE and PE forks produced two releases (3.1 and 3.2) in 2010. Features syslog-ng provides a number of features in addition to transporting syslog messages and storing them in plain text log files: The ability to format log messages using Unix shell-like variable expansion (which can break cross-platform log format compatibility) The use of this shell-like variable expansion when naming files, covering multiple destination files with a single statement The ability to send log messages to local applications Support for message flow-control in network transport Logging directly into a database (since syslog-ng OSE 2.1) The ability to rewrite portions of the syslog message with set and substitute primitives (since syslog-ng OSE 3.0) The ability to classify incoming log messages and at the same time extract structured information from the unstructured syslog message (since syslog-ng OSE 3.0) Generic name–value support: each message is just a set of name–value pairs, which can be used to store extra information (since syslog-ng OSE 3.0) The ability to process structured message formats transmitted over syslog, such as extracting columns from CSV-formatted lines (since syslog-ng OSE 3.0) The ability to correlate multiple incoming messages to form a more complex, correlated event (since syslog-ng OSE 3.2) Distributions syslog-ng is available on a number of different Linux and Unix distributions. Some install it as the system default, or provide it as a package that replaces the previous standard syslogd. Several Linux distributions that used syslog-ng have replaced it with rsyslog. openSUSE used it as the default prior to openSUSE 11.2, and it is still available SLES used it prior to SUSE Linux Enterprise Server 12 Debian GNU/Linux used syslogd and klogd prior to 5.0; post-5.0 ("Lenny"), rsyslog is used Gentoo Linux Fedora used it prior to Fedora 10 Arch Linux used it as default prior to the adoption of systemd in 2012 Hewlett-Packard's HP-UX FreeBSD port A Cygwin port is available for Microsoft Windows Portability syslog-ng is highly portable to many Unix systems, old and new alike. A list of the Unix versions currently known to work is found below: Linux on i386, ARM, PowerPC, SPARC and x86-64 CPUs FreeBSD 7.x - 9.x on i386 and x86-64 CPUs AIX 5, 6 and 7 on IBM Power microprocessors HP-UX 11iv1, 11iv2 and 11iv3 on PA-RISC and Itanium CPUs Solaris 8, 9, 10 on SPARC, x86-64 and i386 CPUs Tru64 5.1b on Alpha CPUs The list above is based on Balabit's first-hand experience; other platforms may also work, but your mileage may vary. Related RFCs & working groups - The BSD syslog protocol - The Syslog Protocol - Transport Layer Security (TLS) Transport Mapping for Syslog - Transmission of Syslog Messages over UDP See also NXLog Datadog Syslog Rsyslog journald – incorporates syslog functionality Graylog References External links Official syslog-ng documentation A comparison of syslog-ng web guis lggr.io - The web based syslog gui Michael D. 
Bauer: Linux Server Security, Second Edition published 2005 at O'Reilly: System Log Management and Monitoring (Chapter 12) syslog-ng FAQ Syslog-ng and vlogger meet Free network-related software Internet protocols Internet Standards Linux security software Network management System administration
35746530
https://en.wikipedia.org/wiki/SAS%20Institute%20Inc%20v%20World%20Programming%20Ltd
SAS Institute Inc v World Programming Ltd
The SAS Institute, creators of the SAS System, filed a lawsuit against World Programming Limited, creators of the World Programming System (WPS), in November 2009. The dispute was whether World Programming had infringed copyrights in SAS Institute products and manuals, and whether World Programming had used the SAS Learning Edition to reverse engineer the SAS System in violation of its terms of use. The case is notable because World Programming did not have access to the SAS Institute's source code, and so the court considered the merits of a copyright claim based on observing functionality only. The European Committee for Interoperable Systems says that the case is important to the software industry. Some observers say the case is as important as the Borland versus Lotus case. The EU Court of Justice ruled that copyright protection does not extend to a program's functionality, the programming language used, or the format of the data files used by the program. It stated that there is no copyright infringement when a company which does not have access to the source code of a program studies, observes and tests that program to create another program with the same functionality. High Court of England and Wales On 23 July 2010 Justice Arnold in the High Court of England and Wales referred a number of questions to the Court of Justice of the European Union (CJEU), but expressed his initial views of the main claims via the following observations in the initial judgment ([2010] EWHC 1829 (Ch), [2011] RPC 1). 1. On his preferred interpretation of Article 5(3), WPL's use of the Learning Edition is within Article 5(3), and to the extent that the licence terms prevent this they are null and void, with the result that none of WPL's acts complained of was a breach of contract or an infringement of copyright except perhaps one (see paragraphs 313-315 of the initial judgment). 2. WPL has infringed the copyrights in the SAS Manuals by substantially reproducing them in the WPS Manual (see paragraphs 317-319 of the initial judgment). 3. WPL has not infringed the copyrights in the SAS Manuals by producing the WPS Guides (see paragraphs 320-329 of the initial judgment). 4. On the assumption that Pumfrey J's interpretation of Article 1(2) of the Software Directive (from Navitaire v Easyjet [2004]) was correct, WPL has not infringed SAS Institute's copyrights in the SAS Components by producing WPS (see paragraphs 245-250 of the initial judgment). Justice Arnold quoted (at paragraph 56) from the "SAS language" article of Wikipedia (as at 25 April 2010) in support of his view that SAS is a "programming language" (and thus not protected under the Software Directive): "SAS can be considered a general programming language, though it serves largely as a database programming language and a language with a wide variety of specialized analytic and graphic procedures." First, the decision confirms what WPL has always admitted, namely that it has used the SAS Manuals to emulate functionality of the SAS System in WPS. Secondly, it shows that to some extent WPL has reproduced aspects of the SAS Manuals going beyond that which was strictly necessary in order for WPS to emulate the functions of the SAS System. What it does not show is reproduction of the SAS source code by WPS going beyond the reproduction of its functionality. 
WPL's manual writers did not directly copy from the SAS Manuals in the sense of having one of the SAS Manuals open in front of them when writing the WPS Manual and intentionally either transcribing or paraphrasing the wording. A considerable degree of similarity in both content and language between the SAS Manual entries and the WPS Manual entries is to be expected given that they are describing identical functionality. The degree of resemblance in the language goes beyond that which is attributable to describing identical functionality. Justice Arnold referred certain questions to the CJEU. After the CJEU handed down its decision later in 2012, Justice Arnold in the High Court handed down his final judgement on 25 January 2013, which concluded (as summarised in the final statement of the judgment): "82. For the reasons given above, I dismiss all of SAS Institute's claims except for its claim in respect of the WPS Manual. That claim succeeds to the extent indicated in my first judgment, but no further." Court of Justice of the European Union reference The High Court referred several questions of the interpretation of the Computer Programs Directive and the Information Society Directive to the Court of Justice of the European Union, under the preliminary ruling procedure. Advocate-General Yves Bot gave his Opinion on 29 November 2011. The full judgement was handed down by the European Court of Justice on 2 May 2012. It largely adopted the Advocate-General's Opinion, holding that neither the functionality of a computer program nor the programming language and the format of data files used in a computer program in order to exploit certain of its functions are covered by copyright. The Court concluded that: 1. Article 1(2) of the Computer Programs Directive (Council Directive 91/250/EEC of 14 May 1991) must be interpreted as meaning that neither the functionality of a computer program nor the programming language and the format of data files used in a computer program in order to exploit certain of its functions constitute a form of expression of that program and, as such, are not protected by copyright in computer programs for the purposes of that directive. 2. Article 5(3) of the Computer Programs Directive must be interpreted as meaning that a person who has obtained a copy of a computer program under a licence is entitled, without the authorisation of the owner of the copyright, to observe, study or test the functioning of that program so as to determine the ideas and principles which underlie any element of the program, in the case where that person carries out acts covered by that licence and acts of loading and running necessary for the use of the computer program, and on condition that that person does not infringe the exclusive rights of the owner of the copyright in that program. 3. Article 2(a) of the Information Society Directive (Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001) must be interpreted as meaning that the reproduction, in a computer program or a user manual for that program, of certain elements described in the user manual for another computer program protected by copyright is capable of constituting an infringement of the copyright in the latter manual if – this being a matter for the national court to ascertain – that reproduction constitutes the expression of the intellectual creation of the author of the user manual for the computer program protected by copyright. 
The case returned to the High Court of England and Wales, which provided its final judgment on 25 January 2013, applying the CJEU findings to the particular facts of the case. US lawsuit (initial filing) The initial US case filed by SAS Institute against WPL was dismissed. In its order denying SAS's motion to alter or amend the judgment in SAS Institute Inc. v. World Programming Limited, the district court stated: In their briefing, the parties have raised for the court's consideration a variety of interesting and complex questions of law. But after considering the able arguments of counsel for both sides, the court is unable to conclude that it clearly erred in dismissing this action for forum non conveniens. As such, and for the reasons set forth more particularly above, plaintiff's motion to alter or amend judgment pursuant to Rule 59(e) (DE # 53) is DENIED. SO ORDERED, this the 22nd day of June, 2011. US lawsuit (subsequent filing) A subsequent US case filed by SAS Institute against WPL was won by SAS. After a three-week trial that ended on October 9, 2015, a jury in federal court awarded SAS $79.1 million in damages, after trebling. The jury ruled that WPL had engaged in unfair and deceptive trade practices (specifically, that it had misrepresented its intentions in order to obtain the license to the software and had violated the contract, which allowed only non-commercial use) and that it had infringed the copyright of SAS's manual by copying portions of it into its own manual. However, Judge Flanagan had ruled against SAS at summary judgment on its claim that WPL infringed the copyright in SAS's software. WPL has announced its intention to appeal. Implications The UK case limited which aspects of computer programs are eligible for copyright protection, and was subsequently cited in the case of Oracle Corporation's lawsuit against Google over the latter's use of Java in Android. References External links The Final High Court Judgment The Initial High Court Judgment The ECJ Judgment Copyright case law United Kingdom copyright case law Copyright infringement of software
13818486
https://en.wikipedia.org/wiki/Cassandra%20%28novel%29
Cassandra (novel)
Cassandra (German: Kassandra) is a 1983 novel by the German author Christa Wolf. It has since been translated into a number of languages. Swiss composer Michael Jarrell has adapted the novel for speaker and instrumental ensemble, and his piece has been performed frequently. Plot Cassandra's narrative begins by describing her youth, when she was Priam's favorite daughter and loved to sit with him as he discussed politics and matters of state. Her relationship with her mother, Hecuba, however, was never as intimate, since Hecuba recognized Cassandra's independence. At times their interactions are tense or even cold, notably when Hecuba does not sympathize with Cassandra's fear of the god Apollo's gift of prophecy or her reluctance to accept his love. When she ultimately refuses him, he curses her so that no one will believe what she prophesies. When Cassandra is presented among the city's virgins for deflowering, she is chosen by Aeneas, who makes love to her only later. Nonetheless, she falls in love with him, and is devoted to him despite her liaisons with others, including Panthous; indeed, she imagines Aeneas whenever she is with anyone else. It is Aeneas' father Anchises who tells Cassandra of the mission to bring Hesione, Priam's sister who was taken as a prize by Telamon during the first Trojan War, back from Sparta. Not only do the Trojans fail to secure Hesione, but they also lose the seer Calchas during the voyage; he later aids the Greeks during the war. When Menelaus visits Troy to offer a sacrifice, he rebukes the impertinence of Cassandra's brother Paris, who has recently returned to Troy and been reclaimed as Priam and Hecuba's son, though as a child he was abandoned because of a prophecy. His words provoke Paris, who insists that he will travel to Sparta, and if Hesione is not returned to him, he will take Helen. The tension increases when Cassandra experiences a sort of fit and collapses, having foreseen the war and fall of Troy. By the time she recovers, Paris has sailed to Sparta and returned, bringing Helen, who wears a veil. Cassandra soon begins to suspect, but does not want to believe, that Helen is not in Troy after all. No one is permitted to see her, and Cassandra has seen Paris' former wife Oenone leaving his room. However, she is unable to accept that Troy, and her father, would continue to prepare for a war if its premise were false. When Paris finally tells her explicitly what she already knows, she protests to her father, but he rejects her plea to negotiate peace and orders her to be silent. Thus Cassandra's traditional role, as the seeress who tells the truth but is not believed, is reinterpreted. She knows the truth, but Priam knows it too; she cannot persuade anyone of the truth, but only because she is forbidden to speak of it. Although she feels miserable, she still loves and trusts Priam and cannot betray his secret. Although Priam's political motives ostensibly drive Troy to war, the palace guard Eumelos is the true force behind the conflict. He manipulates Priam and the public until they believe the war is necessary and forget that the stakes are nothing but Helen. Gradually he increases the pressure on the Trojan population, including Cassandra. Anchises explains that Eumelos, by convincing the Trojans that the Greeks were enemies and inciting them to fight, made his own military state necessary and was thus able to rise to power. 
One of Eumelos' guards, Andron, becomes Polyxena's lover, but when Achilles demands her in exchange for Hector's body, Andron does not object; rather, he offers her to Achilles without remorse. Later, Eumelos plans to lure Achilles into a trap by stationing Polyxena in the temple, and for Polyxena's sake Cassandra refuses to comply with his scheme, threatening to reveal it. Priam promptly has her imprisoned in the heroes' graveyard. Eumelos executes his plan after all, and Achilles is killed, requesting as he dies that Odysseus sacrifice Polyxena at his grave for her betrayal. Later, when the Greeks come to take her away, Polyxena asks Cassandra to kill her, but Cassandra has discarded her dagger and cannot spare her sister. When the defeat is imminent, Cassandra meets Aeneas for the last time, and he asks her to leave Troy with him. She refuses because she knows that he will be forced to become a hero, and she cannot love a hero. Themes Cassandra's experience during the Trojan War parallels Christa Wolf's personal experience as a citizen of East Germany during the Cold War, a police state much like Eumelos' Troy. Wolf, too, was familiar with censorship; in fact, Cassandra was censored when it was initially published. The novel, besides criticizing repression, emphasizes issues of marginalization as well. Cassandra is of course a marginalized figure because of her role as seeress, but Wolf focuses more on her role as a woman. It is not until Cassandra lives in a community with other women, literally at the margin of the city, that she identifies with a group and includes herself in it by the pronoun "we." Cassandra is certainly interesting as a reinterpretation of history and literature by an otherwise rather obscure character. However, the novel is truly compelling because Cassandra's individual character and her individual voice are symbolic of all the female characters and voices that have been underrepresented by past writers. The novel is narrated from the perspective of Cassandra, seeress and daughter of King Priam of Troy. Not only is this representation of Cassandra distinct from those in classical works because of her unique narrative voice, but this version of the story of the Trojan War is also distinct through its contradiction or reversal of many of the legends traditionally associated with the war. Cassandra's narration, which is presented as an internal monologue in stream-of-consciousness style, begins on Agamemnon's ship to Mycenae, where, as Cassandra knows, she will soon be murdered by Agamemnon's wife Clytemnestra. As she prepares to face her death, she is overwhelmed by emotions, and both to distract herself from them and to make sense of them, she occupies her thoughts with reflections on the past. Throughout the novel, Cassandra spends a good deal of time in introspection, examining and even critiquing her personality, her perspective, and her motives as she was growing up in Troy. She particularly regrets her naiveté, and more than anything her pride. Although it becomes clear that she was ultimately powerless to oppose the political forces supporting the war and thus to prevent the disaster at Troy, she nonetheless feels that she is to blame: if only indirectly for the war, then quite directly for her sister Polyxena's death. She is also remorseful that her final disagreement with Aeneas ended on an angry note, even though she thinks Aeneas, on a rational level, understood her motives. 
As Cassandra reminisces about Troy, her complex relationships with Aeneas and Polyxena—relationships without precedent in the classical canon—serve not only to contextualize her experience of the Trojan War within the broader tradition, but also to humanize her, as do her interactions with Priam, Aeneas' father Anchises, and Panthous, the Greek priest. Aeneas, though his presence both in Troy and in the novel is scarce, is perhaps the most significant of these, and in several of the brief moments when Cassandra's thoughts return to the present, they are addressed to him. Her narrative finally seems to represent Cassandra's desperate effort to justify, both to Aeneas and to herself, her fate. See also Der geteilte Himmel (Divided Heaven, They Divided the Sky) References Edition Wolf, Christa. Cassandra. Transl. Jan van Heurck. New York: Farrar, Straus and Giroux, 1984. 1983 German novels East German novels Novels by Christa Wolf Novels set in ancient Troy Novels set during the Trojan War German historical novels Adaptations of works by Aeschylus Novels based on the Iliad
5043734
https://en.wikipedia.org/wiki/Wikipedia
Wikipedia
Wikipedia ( or ) is a free-content, multilingual online encyclopedia written and maintained by a community of volunteers through a model of open collaboration, using a wiki-based editing system. Individual contributors, also called editors, are known as Wikipedians. Wikipedia is the largest and most-read reference work in history. It is consistently one of the 15 most popular websites ranked by Alexa; Wikipedia was ranked the 13th most popular site. It is hosted by the Wikimedia Foundation, an American non-profit organization funded mainly through donations. On January 15, 2001, Jimmy Wales and Larry Sanger launched Wikipedia; Sanger coined its name as a portmanteau of "wiki" and "encyclopedia." Wales was influenced by the "spontaneous order" ideas associated with Friedrich Hayek and the Austrian School of economics, after being exposed to these ideas by Austrian economist and Mises Institute Senior Fellow Mark Thornton. Wikipedia was initially available only in English; versions in other languages were quickly developed. Its combined editions comprise more than articles, attracting around 2 billion unique device visits per month and more than 17 million edits per month (1.9 edits per second). In 2006, Time magazine stated that the policy of allowing anyone to edit had made Wikipedia the "biggest (and perhaps best) encyclopedia in the world." Wikipedia has received praise for its enablement of the democratization of knowledge, extent of coverage, unique structure, culture, and reduced amount of commercial bias, but criticism for exhibiting systemic bias, particularly gender bias against women and alleged ideological bias. Its reliability was frequently criticized in the 2000s but has improved over time; it has been generally praised in the late 2010s and early 2020s. Its coverage of controversial topics such as American politics and major events such as the COVID-19 pandemic has received substantial media attention. It has been censored by world governments, ranging from specific pages to the entire site. Nevertheless, it has become an element of popular culture, with references in books, films, and academic studies. In April 2018, Facebook and YouTube announced that they would help users detect fake news by suggesting fact-checking links to related Wikipedia articles. History Nupedia Other collaborative online encyclopedias were attempted before Wikipedia, but none were as successful. Wikipedia began as a complementary project for Nupedia, a free online English-language encyclopedia project whose articles were written by experts and reviewed under a formal process. It was founded on March 9, 2000, under the ownership of Bomis, a web portal company. Its main figures were Bomis CEO Jimmy Wales and Larry Sanger, editor-in-chief for Nupedia and later Wikipedia. Nupedia was initially licensed under its own Nupedia Open Content License, but even before Wikipedia was founded, Nupedia switched to the GNU Free Documentation License at the urging of Richard Stallman. Wales is credited with defining the goal of making a publicly editable encyclopedia, while Sanger is credited with the strategy of using a wiki to reach that goal. On January 10, 2001, Sanger proposed on the Nupedia mailing list to create a wiki as a "feeder" project for Nupedia. 
Launch and growth The domains wikipedia.com (later redirecting to wikipedia.org) and wikipedia.org were registered on January 12, 2001, and January 13, 2001, respectively, and Wikipedia was launched on January 15, 2001 as a single English-language edition at www.wikipedia.com, and announced by Sanger on the Nupedia mailing list. Its policy of "neutral point-of-view" was codified in its first few months. Otherwise, there were initially relatively few rules, and it operated independently of Nupedia. Bomis originally intended it as a business for profit. Wikipedia gained early contributors from Nupedia, Slashdot postings, and web search engine indexing. Language editions were created beginning in March 2003, with a total of 161 in use by the end of 2004. Nupedia and Wikipedia coexisted until the former's servers were taken down permanently in 2003, and its text was incorporated into Wikipedia. The English Wikipedia passed the mark of two million articles on September 9, 2007, making it the largest encyclopedia ever assembled, surpassing the Yongle Encyclopedia made during the Ming Dynasty in 1408, which had held the record for almost 600 years. Citing fears of commercial advertising and lack of control, users of the Spanish Wikipedia forked from Wikipedia to create Enciclopedia Libre in February 2002. Wales then announced that Wikipedia would not display advertisements, and changed Wikipedia's domain from wikipedia.com to wikipedia.org. Though the English Wikipedia reached three million articles in August 2009, the growth of the edition, in terms of the numbers of new articles and of editors, appears to have peaked around early 2007. Around 1,800 articles were added daily to the encyclopedia in 2006; by 2013 that average was roughly 800. A team at the Palo Alto Research Center attributed this slowing of growth to the project's increasing exclusivity and resistance to change. Others suggest that the growth is flattening naturally because articles that could be called "low-hanging fruit"—topics that clearly merit an article—have already been created and built up extensively. In November 2009, a researcher at the Rey Juan Carlos University in Madrid found that the English Wikipedia had lost 49,000 editors during the first three months of 2009; in comparison, it lost only 4,900 editors during the same period in 2008. The Wall Street Journal cited the array of rules applied to editing and disputes related to such content among the reasons for this trend. Wales disputed these claims in 2009, denying the decline and questioning the study's methodology. Two years later, in 2011, he acknowledged a slight decline, noting a decrease from "a little more than 36,000 writers" in June 2010 to 35,800 in June 2011. In the same interview, he also claimed the number of editors was "stable and sustainable". A 2013 MIT Technology Review article, "The Decline of Wikipedia", questioned this claim, revealing that since 2007, Wikipedia had lost a third of its volunteer editors, and that those remaining had focused increasingly on minutiae. In July 2012, The Atlantic reported that the number of administrators was also in decline. In the November 25, 2013, issue of New York magazine, Katherine Ward stated, "Wikipedia, the sixth-most-used website, is facing an internal crisis." The number of active English Wikipedia editors has remained steady after a long period of decline. Milestones In January 2007, Wikipedia first became one of the ten most popular websites in the US, according to Comscore Networks. 
With 42.9 million unique visitors, it was ranked #9, surpassing The New York Times (#10) and Apple (#11). This marked a significant increase over January 2006, when Wikipedia ranked 33rd, with around 18.3 million unique visitors. , it ranked 13th in popularity according to Alexa Internet. In 2014, it received eight billion page views every month. On February 9, 2014, The New York Times reported that Wikipedia had 18 billion page views and nearly 500 million unique visitors a month, "according to the ratings firm comScore". Loveland and Reagle argue that, in process, Wikipedia follows a long tradition of historical encyclopedias that have accumulated improvements piecemeal through "stigmergic accumulation". On January 18, 2012, the English Wikipedia participated in a series of coordinated protests against two proposed laws in the United States Congress—the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA)—by blacking out its pages for 24 hours. More than 162 million people viewed the blackout explanation page that temporarily replaced its content. On January 20, 2014, Subodh Varma reporting for The Economic Times indicated that not only had Wikipedia's growth stalled, it "had lost nearly ten percent of its page views last year. There was a decline of about two billion between December 2012 and December 2013. Its most popular versions are leading the slide: page-views of the English Wikipedia declined by twelve percent, those of German version slid by 17 percent and the Japanese version lost nine percent." Varma added, "While Wikipedia's managers think that this could be due to errors in counting, other experts feel that Google's Knowledge Graphs project launched last year may be gobbling up Wikipedia users." When contacted on this matter, Clay Shirky, associate professor at New York University and fellow at Harvard's Berkman Klein Center for Internet & Society said that he suspected much of the page-view decline was due to Knowledge Graphs, stating, "If you can get your question answered from the search page, you don't need to click [any further]." By the end of December 2016, Wikipedia was ranked the fifth most popular website globally. In January 2013, 274301 Wikipedia, an asteroid, was named after Wikipedia; in October 2014, Wikipedia was honored with the Wikipedia Monument; and, in July 2015, 106 of the 7,473 700-page volumes of Wikipedia became available as Print Wikipedia. In April 2019, an Israeli lunar lander, Beresheet, crash landed on the surface of the Moon carrying a copy of nearly all of the English Wikipedia engraved on thin nickel plates; experts say the plates likely survived the crash. In June 2019, scientists reported that all 16 GB of article text from the English Wikipedia had been encoded into synthetic DNA. Current state On January 23, 2020, the English-language Wikipedia, which is the largest language section of the online encyclopedia, published its six millionth article. By February 2020, Wikipedia ranked eleventh in the world in terms of Internet traffic. As a key resource for disseminating information related to COVID-19, the World Health Organization has partnered with Wikipedia to help combat the spread of misinformation. Wikipedia accepts cryptocurrency donations and Basic Attention Token. Openness Unlike traditional encyclopedias, Wikipedia follows the procrastination principle regarding the security of its content. 
Restrictions Due to Wikipedia's increasing popularity, some editions, including the English version, have introduced editing restrictions for certain cases. For instance, on the English Wikipedia and some other language editions, only registered users may create a new article. On the English Wikipedia, among others, particularly controversial, sensitive or vandalism-prone pages have been protected to varying degrees. A frequently vandalized article can be "semi-protected" or "extended confirmed protected", meaning that only "autoconfirmed" or "extended confirmed" editors can modify it. A particularly contentious article may be locked so that only administrators can make changes. A 2021 article in the Columbia Journalism Review identified Wikipedia's page-protection policies as "[p]erhaps the most important" means at its disposal to "regulate its market of ideas". In certain cases, all editors are allowed to submit modifications, but review is required for some editors, depending on certain conditions. For example, the German Wikipedia maintains "stable versions" of articles which have passed certain reviews. Following protracted trials and community discussion, the English Wikipedia introduced the "pending changes" system in December 2012. Under this system, new and unregistered users' edits to certain controversial or vandalism-prone articles are reviewed by established users before they are published. Review of changes Although changes are not systematically reviewed, the software that powers Wikipedia provides tools allowing anyone to review changes made by others. Each article's History page links to each revision. On most articles, anyone can undo others' changes by clicking a link on the article's History page. Anyone can view the latest changes to articles, and anyone registered may maintain a "watchlist" of articles that interest them so they can be notified of changes. "New pages patrol" is a process where newly created articles are checked for obvious problems. In 2003, economics Ph.D. student Andrea Ciffolilli argued that the low transaction costs of participating in a wiki created a catalyst for collaborative development, and that features such as allowing easy access to past versions of a page favored "creative construction" over "creative destruction". Vandalism Any change or edit that manipulates content in a way that deliberately compromises Wikipedia's integrity is considered vandalism. The most common and obvious types of vandalism include additions of obscenities and crude humor; it can also include advertising and other types of spam. Sometimes editors commit vandalism by removing content or entirely blanking a given page. Less common types of vandalism, such as the deliberate addition of plausible but false information, can be more difficult to detect. Vandals can introduce irrelevant formatting, modify page semantics such as the page's title or categorization, manipulate the article's underlying code, or use images disruptively. Obvious vandalism is generally easy to remove from Wikipedia articles; the median time to detect and fix it is a few minutes. However, some vandalism takes much longer to detect and repair. In the Seigenthaler biography incident, an anonymous editor introduced false information into the biography of American political figure John Seigenthaler in May 2005, falsely presenting him as a suspect in the assassination of John F. Kennedy. It remained uncorrected for four months. 
Seigenthaler, the founding editorial director of USA Today and founder of the Freedom Forum First Amendment Center at Vanderbilt University, called Wikipedia co-founder Jimmy Wales and asked whether he had any way of knowing who contributed the misinformation. Wales said he did not, although the perpetrator was eventually traced. After the incident, Seigenthaler described Wikipedia as "a flawed and irresponsible research tool". The incident led to policy changes at Wikipedia that tightened the verifiability requirements for biographical articles about living people. In 2010, Daniel Tosh encouraged viewers of his show, Tosh.0, to visit the show's Wikipedia article and edit it at will. On a later episode, he commented on the edits to the article, most of them offensive, which had been made by the audience and had prompted the article to be locked from editing. Edit warring Wikipedians often have disputes regarding content, which may result in repeated competing changes to an article, known as "edit warring". It is widely seen as a resource-consuming scenario where no useful knowledge is added, and criticized as creating a competitive and conflict-based editing culture associated with traditional masculine gender roles. Policies and laws Content in Wikipedia is subject to the laws (in particular, copyright laws) of the United States and of the US state of Virginia, where the majority of Wikipedia's servers are located. Beyond legal matters, the editorial principles of Wikipedia are embodied in the "five pillars" and in numerous policies and guidelines intended to appropriately shape content. Even these rules are stored in wiki form, and Wikipedia editors write and revise the website's policies and guidelines. Editors can enforce these rules by deleting or modifying non-compliant material. Originally, rules on the non-English editions of Wikipedia were based on a translation of the rules for the English Wikipedia. They have since diverged to some extent. Content policies and guidelines According to the rules on the English Wikipedia, each entry in Wikipedia must be about a topic that is encyclopedic and is not a dictionary entry or dictionary-style. A topic should also meet Wikipedia's standards of "notability", which generally means that the topic must have been covered in mainstream media or major academic journal sources that are independent of the article's subject. Further, Wikipedia intends to convey only knowledge that is already established and recognized. It must not present original research. A claim that is likely to be challenged requires a reference to a reliable source. Among Wikipedia editors, this is often phrased as "verifiability, not truth" to express the idea that the readers, not the encyclopedia, are ultimately responsible for checking the truthfulness of the articles and making their own interpretations. This can at times lead to the removal of information that, though valid, is not properly sourced. Finally, Wikipedia must not take sides. Governance Wikipedia's initial anarchy integrated democratic and hierarchical elements over time. An article is not considered to be owned by its creator or any other editor, nor by the subject of the article. Administrators Editors in good standing in the community can request extra user rights, granting them the technical ability to perform certain special actions. In particular, editors can choose to run for "adminship", which includes the ability to delete pages or prevent them from being changed in cases of severe vandalism or editorial disputes. 
Administrators are not supposed to enjoy any special privilege in decision-making; instead, their powers are mostly limited to making edits that have project-wide effects and thus are disallowed to ordinary editors, and to implement restrictions intended to prevent disruptive editors from making unproductive edits. By 2012, fewer editors were becoming administrators compared to Wikipedia's earlier years, in part because the process of vetting potential administrators had become more rigorous. Dispute resolution Over time, Wikipedia has developed a semiformal dispute resolution process. To determine community consensus, editors can raise issues at appropriate community forums, seek outside input through third opinion requests, or initiate a more general community discussion known as a "request for comment". Arbitration Committee The Arbitration Committee presides over the ultimate dispute resolution process. Although disputes usually arise from a disagreement between two opposing views on how an article should read, the Arbitration Committee explicitly refuses to directly rule on the specific view that should be adopted. Statistical analyses suggest that the committee ignores the content of disputes and rather focuses on the way disputes are conducted, functioning not so much to resolve disputes and make peace between conflicting editors, but to weed out problematic editors while allowing potentially productive editors back in to participate. Therefore, the committee does not dictate the content of articles, although it sometimes condemns content changes when it deems the new content violates Wikipedia policies (for example, if the new content is considered biased). Its remedies include cautions and probations (used in 63% of cases) and banning editors from articles (43%), subject matters (23%), or Wikipedia (16%). Complete bans from Wikipedia are generally limited to instances of impersonation and anti-social behavior. When conduct is not impersonation or anti-social, but rather anti-consensus or in violation of editing policies, remedies tend to be limited to warnings. Community Each article and each user of Wikipedia has an associated "talk" page. These form the primary communication channel for editors to discuss, coordinate and debate. Wikipedia's community has been described as cultlike, although not always with entirely negative connotations. Its preference for cohesiveness, even if it requires compromise that includes disregard of credentials, has been referred to as "anti-elitism". Wikipedians sometimes award one another "virtual barnstars" for good work. These personalized tokens of appreciation reveal a wide range of valued work extending far beyond simple editing to include social support, administrative actions, and types of articulation work. Wikipedia does not require that its editors and contributors provide identification. As Wikipedia grew, "Who writes Wikipedia?" became one of the questions frequently asked there. Jimmy Wales once argued that only "a community ... a dedicated group of a few hundred volunteers" makes the bulk of contributions to Wikipedia and that the project is therefore "much like any traditional organization". In 2008, a Slate magazine article reported that: "According to researchers in Palo Alto, one percent of Wikipedia users are responsible for about half of the site's edits." 
This method of evaluating contributions was later disputed by Aaron Swartz, who noted that several articles he sampled had large portions of their content (measured by number of characters) contributed by users with low edit counts. The English Wikipedia has articles, registered editors, and active editors. An editor is considered active if they have made one or more edits in the past 30 days. Editors who fail to comply with Wikipedia cultural rituals, such as signing talk page comments, may implicitly signal that they are Wikipedia outsiders, increasing the odds that Wikipedia insiders may target or discount their contributions. Becoming a Wikipedia insider involves non-trivial costs: the contributor is expected to learn Wikipedia-specific technological codes, submit to a sometimes convoluted dispute resolution process, and learn a "baffling culture rich with in-jokes and insider references". Editors who do not log in are in some sense second-class citizens on Wikipedia, as "participants are accredited by members of the wiki community, who have a vested interest in preserving the quality of the work product, on the basis of their ongoing participation", but the contribution histories of anonymous unregistered editors recognized only by their IP addresses cannot be attributed to a particular editor with certainty. Studies A 2007 study by researchers from Dartmouth College found that "anonymous and infrequent contributors to Wikipedia ... are as reliable a source of knowledge as those contributors who register with the site". Jimmy Wales stated in 2009 that "[I]t turns out over 50% of all the edits are done by just .7% of the users... 524 people... And in fact, the most active 2%, which is 1400 people, have done 73.4% of all the edits." However, Business Insider editor and journalist Henry Blodget showed in 2009 that in a random sample of articles, most Wikipedia content (measured by the amount of contributed text that survives to the latest sampled edit) is created by "outsiders", while most editing and formatting is done by "insiders". A 2008 study found that Wikipedians were less agreeable, open, and conscientious than others, although a later commentary pointed out serious flaws, including that the data showed higher openness and that the differences with the control group and the samples were small. According to a 2009 study, there is "evidence of growing resistance from the Wikipedia community to new content". Diversity Several studies have shown that most Wikipedia contributors are male. Notably, the results of a Wikimedia Foundation survey in 2008 showed that only 13 percent of Wikipedia editors were female. Because of this, universities throughout the United States tried to encourage women to become Wikipedia contributors. Similarly, many of these universities, including Yale and Brown, gave college credit to students who create or edit an article relating to women in science or technology. Andrew Lih, a professor and scientist, wrote in The New York Times that the reason he thought the number of male contributors outnumbered the number of females so greatly was because identifying as a woman may expose oneself to "ugly, intimidating behavior". Data has shown that Africans are underrepresented among Wikipedia editors. Language editions There are currently language editions of Wikipedia (also called language versions, or simply Wikipedias). As of , the six largest, in order of article count, are the , , , , , and Wikipedias. 
The and -largest Wikipedias owe their position to the article-creating bot Lsjbot, which had created about half the articles on the Swedish Wikipedia, and most of the articles in the Cebuano and Waray Wikipedias. The latter are both languages of the Philippines. In addition to the top six, twelve other Wikipedias have more than a million articles each (, , , , , , , , , , and ), seven more have over 500,000 articles (, , , , , and ), 44 more have over 100,000, and 82 more have over 10,000. The largest, the English Wikipedia, has over million articles. the English Wikipedia receives 48% of Wikipedia's cumulative traffic, with the remaining split among the other languages. The top 10 editions represent approximately 85% of the total traffic. Since Wikipedia is based on the Web and therefore worldwide, contributors to the same language edition may use different dialects or may come from different countries (as is the case for the English edition). These differences may lead to some conflicts over spelling differences (e.g. colour versus color) or points of view. Though the various language editions are held to global policies such as "neutral point of view", they diverge on some points of policy and practice, most notably on whether images that are not licensed freely may be used under a claim of fair use. Jimmy Wales has described Wikipedia as "an effort to create and distribute a free encyclopedia of the highest possible quality to every single person on the planet in their own language". Though each language edition functions more or less independently, some efforts are made to supervise them all. They are coordinated in part by Meta-Wiki, the Wikimedia Foundation's wiki devoted to maintaining all its projects (Wikipedia and others). For instance, Meta-Wiki provides important statistics on all language editions of Wikipedia, and it maintains a list of articles every Wikipedia should have. The list concerns basic content by subject: biography, history, geography, society, culture, science, technology, and mathematics. It is not rare for articles strongly related to a particular language not to have counterparts in another edition. For example, articles about small towns in the United States might be available only in English, even when they meet the notability criteria of other language Wikipedia projects. Translated articles represent only a small portion of articles in most editions, in part because those editions do not allow fully automated translation of articles. Articles available in more than one language may offer "interwiki links", which link to the counterpart articles in other editions. A study published by PLOS One in 2012 also estimated the share of contributions to different editions of Wikipedia from different regions of the world. It reported that the proportion of the edits made from North America was 51% for the English Wikipedia, and 25% for the simple English Wikipedia. English Wikipedia editor numbers On March 1, 2014, The Economist, in an article titled "The Future of Wikipedia", cited a trend analysis concerning data published by the Wikimedia Foundation stating that "[t]he number of editors for the English-language version has fallen by a third in seven years." The attrition rate for active editors in English Wikipedia was cited by The Economist as substantially in contrast to statistics for Wikipedia in other languages (non-English Wikipedia). 
The Economist reported that the number of contributors with an average of five or more edits per month was relatively constant since 2008 for Wikipedia in other languages at approximately 42,000 editors within narrow seasonal variances of about 2,000 editors up or down. The number of active editors in English Wikipedia, by sharp comparison, was cited as peaking in 2007 at approximately 50,000 and dropping to 30,000 by the start of 2014. In contrast, the trend analysis published in The Economist presents Wikipedia in other languages (non-English Wikipedia) as successful in retaining their active editors on a renewable and sustained basis, with their numbers remaining relatively constant at approximately 42,000. No comment was made concerning which of the differentiated edit policy standards from Wikipedia in other languages (non-English Wikipedia) would provide a possible alternative to English Wikipedia for effectively ameliorating substantial editor attrition rates on the English-language Wikipedia.

Reception

Various Wikipedians have criticized Wikipedia's large and growing regulation, which includes more than fifty policies and nearly 150,000 words. Critics have stated that Wikipedia exhibits systemic bias. In 2010, columnist and journalist Edwin Black described Wikipedia as being a mixture of "truth, half-truth, and some falsehoods". Articles in The Chronicle of Higher Education and The Journal of Academic Librarianship have criticized Wikipedia's "Undue Weight" policy, concluding that because Wikipedia explicitly is not designed to provide correct information about a subject, but rather to present all the major viewpoints on the subject and give less attention to minor ones, it creates omissions that can lead to false beliefs based on incomplete information. Journalists Oliver Kamm and Edwin Black alleged (in 2010 and 2011 respectively) that articles are dominated by the loudest and most persistent voices, usually by a group with an "ax to grind" on the topic. A 2008 article in Education Next Journal concluded that as a resource about controversial topics, Wikipedia is subject to manipulation and spin. In 2020, Omer Benjakob and Stephen Harrison noted that "Media coverage of Wikipedia has radically shifted over the past two decades: once cast as an intellectual frivolity, it is now lauded as the 'last bastion of shared reality' online." In 2006, the Wikipedia Watch criticism website listed dozens of examples of plagiarism in the English Wikipedia.

Accuracy of content

Articles for traditional encyclopedias such as Encyclopædia Britannica are written by experts, lending such encyclopedias a reputation for accuracy. However, a peer review in 2005 of forty-two scientific entries on both Wikipedia and Encyclopædia Britannica by the science journal Nature found few differences in accuracy, and concluded that "the average science entry in Wikipedia contained around four inaccuracies; Britannica, about three." Joseph Reagle suggested that while the study reflects "a topical strength of Wikipedia contributors" in science articles, "Wikipedia may not have fared so well using a random sampling of articles or on humanities subjects." Others raised similar critiques. The findings by Nature were disputed by Encyclopædia Britannica, and in response, Nature gave a rebuttal of the points raised by Britannica.
In addition to the point-for-point disagreement between these two parties, others have examined the sample size and selection method used in the Nature effort, and suggested a "flawed study design" (in Nature's manual selection of articles, in part or in whole, for comparison), absence of statistical analysis (e.g., of reported confidence intervals), and a lack of study "statistical power" (i.e., owing to small sample size, 42 or 4×10¹ articles compared, vs. >10⁵ and >10⁶ set sizes for Britannica and the English Wikipedia, respectively).

As a consequence of the open structure, Wikipedia "makes no guarantee of validity" of its content, since no one is ultimately responsible for any claims appearing in it. Concerns have been raised by PC World in 2009 regarding the lack of accountability that results from users' anonymity, the insertion of false information, vandalism, and similar problems. Economist Tyler Cowen wrote: "If I had to guess whether Wikipedia or the median refereed journal article on economics was more likely to be true after a not so long think I would opt for Wikipedia." He comments that some traditional sources of non-fiction suffer from systemic biases, and that novel results, in his opinion, are over-reported in journal articles while relevant information is omitted from news reports. However, he also cautions that errors are frequently found on Internet sites and that academics and experts must be vigilant in correcting them. Amy Bruckman has argued that, due to the number of reviewers, "the content of a popular Wikipedia page is actually the most reliable form of information ever created".

Critics argue that Wikipedia's open nature and a lack of proper sources for most of the information make it unreliable. Some commentators suggest that Wikipedia may be reliable, but that the reliability of any given article is not clear. Editors of traditional reference works such as the Encyclopædia Britannica have questioned the project's utility and status as an encyclopedia. Wikipedia co-founder Jimmy Wales has claimed that Wikipedia has largely avoided the problem of "fake news" because the Wikipedia community regularly debates the quality of sources in articles.

Wikipedia's open structure inherently makes it an easy target for Internet trolls, spammers, and various forms of paid advocacy seen as counterproductive to the maintenance of a neutral and verifiable online encyclopedia. In response to paid advocacy editing and undisclosed editing issues, Wikipedia was reported in an article in The Wall Street Journal to have strengthened its rules and laws against undisclosed editing. The article stated that: "Beginning Monday [from the date of the article, June 16, 2014], changes in Wikipedia's terms of use will require anyone paid to edit articles to disclose that arrangement. Katherine Maher, the nonprofit Wikimedia Foundation's chief communications officer, said the changes address a sentiment among volunteer editors that, 'we're not an advertising service; we're an encyclopedia.'" These issues, among others, had been parodied since the first decade of Wikipedia, notably by Stephen Colbert on The Colbert Report. A Harvard law textbook, Legal Research in a Nutshell (2011), cites Wikipedia as a "general source" that "can be a real boon" in "coming up to speed in the law governing a situation" and, "while not authoritative, can provide basic facts as well as leads to more in-depth resources".
Discouragement in education Most university lecturers discourage students from citing any encyclopedia in academic work, preferring primary sources; some specifically prohibit Wikipedia citations. Wales stresses that encyclopedias of any type are not usually appropriate to use as citable sources, and should not be relied upon as authoritative. Wales once (2006 or earlier) said he receives about ten emails weekly from students saying they got failing grades on papers because they cited Wikipedia; he told the students they got what they deserved. "For God's sake, you're in college; don't cite the encyclopedia," he said. In February 2007, an article in The Harvard Crimson newspaper reported that a few of the professors at Harvard University were including Wikipedia articles in their syllabi, although without realizing the articles might change. In June 2007, former president of the American Library Association Michael Gorman condemned Wikipedia, along with Google, stating that academics who endorse the use of Wikipedia are "the intellectual equivalent of a dietitian who recommends a steady diet of Big Macs with everything". In contrast, academic writing in Wikipedia has evolved in recent years and has been found to increase student interest, personal connection to the product, creativity in material processing, and international collaboration in the learning process. Medical information On March 5, 2014, Julie Beck writing for The Atlantic magazine in an article titled "Doctors' #1 Source for Healthcare Information: Wikipedia", stated that "Fifty percent of physicians look up conditions on the (Wikipedia) site, and some are editing articles themselves to improve the quality of available information." Beck continued to detail in this article new programs of Amin Azzam at the University of San Francisco to offer medical school courses to medical students for learning to edit and improve Wikipedia articles on health-related issues, as well as internal quality control programs within Wikipedia organized by James Heilman to improve a group of 200 health-related articles of central medical importance up to Wikipedia's highest standard of articles using its Featured Article and Good Article peer-review evaluation process. In a May 7, 2014, follow-up article in The Atlantic titled "Can Wikipedia Ever Be a Definitive Medical Text?", Julie Beck quotes WikiProject Medicine's James Heilman as stating: "Just because a reference is peer-reviewed doesn't mean it's a high-quality reference." Beck added that: "Wikipedia has its own peer review process before articles can be classified as 'good' or 'featured'. Heilman, who has participated in that process before, says 'less than one percent' of Wikipedia's medical articles have passed." Quality of writing In a 2006 mention of Jimmy Wales, Time magazine stated that the policy of allowing anyone to edit had made Wikipedia the "biggest (and perhaps best) encyclopedia in the world". In 2008, researchers at Carnegie Mellon University found that the quality of a Wikipedia article would suffer rather than gain from adding more writers when the article lacked appropriate explicit or implicit coordination. For instance, when contributors rewrite small portions of an entry rather than making full-length revisions, high- and low-quality content may be intermingled within an entry. 
Roy Rosenzweig, a history professor, stated that American National Biography Online outperformed Wikipedia in terms of its "clear and engaging prose", which, he said, was an important aspect of good historical writing. Contrasting Wikipedia's treatment of Abraham Lincoln to that of Civil War historian James McPherson in American National Biography Online, he said that both were essentially accurate and covered the major episodes in Lincoln's life, but praised "McPherson's richer contextualization ... his artful use of quotations to capture Lincoln's voice ... and ... his ability to convey a profound message in a handful of words." By contrast, he gives an example of Wikipedia's prose that he finds "both verbose and dull". Rosenzweig also criticized the "waffling—encouraged by the NPOV policy—[which] means that it is hard to discern any overall interpretive stance in Wikipedia history". While generally praising the article on William Clarke Quantrill, he quoted its conclusion as an example of such "waffling", which then stated: "Some historians ... remember him as an opportunistic, bloodthirsty outlaw, while others continue to view him as a daring soldier and local folk hero." Other critics have made similar charges that, even if Wikipedia articles are factually accurate, they are often written in a poor, almost unreadable style. Frequent Wikipedia critic Andrew Orlowski commented, "Even when a Wikipedia entry is 100 percent factually correct, and those facts have been carefully chosen, it all too often reads as if it has been translated from one language to another then into a third, passing an illiterate translator at each stage." A study of Wikipedia articles on cancer was conducted in 2010 by Yaacov Lawrence of the Kimmel Cancer Center at Thomas Jefferson University. The study was limited to those articles that could be found in the Physician Data Query and excluded those written at the "start" class or "stub" class level. Lawrence found the articles accurate but not very readable, and thought that "Wikipedia's lack of readability (to non-college readers) may reflect its varied origins and haphazard editing". The Economist argued that better-written articles tend to be more reliable: "inelegant or ranting prose usually reflects muddled thoughts and incomplete information". Coverage of topics and systemic bias Wikipedia seeks to create a summary of all human knowledge in the form of an online encyclopedia, with each topic covered encyclopedically in one article. Since it has terabytes of disk space, it can have far more topics than can be covered by any printed encyclopedia. The exact degree and manner of coverage on Wikipedia is under constant review by its editors, and disagreements are not uncommon (see deletionism and inclusionism). Wikipedia contains materials that some people may find objectionable, offensive, or pornographic. The "Wikipedia is not censored" policy has sometimes proved controversial: in 2008, Wikipedia rejected an online petition against the inclusion of images of Muhammad in the English edition of its Muhammad article, citing this policy. The presence of politically, religiously, and pornographically sensitive materials in Wikipedia has led to the censorship of Wikipedia by national authorities in China and Pakistan, amongst other countries. 
A 2008 study conducted by researchers at Carnegie Mellon University and Palo Alto Research Center gave a distribution of topics as well as growth (from July 2006 to January 2008) in each field:

Culture and Arts: 30% (210%)
Biographies and persons: 15% (97%)
Geography and places: 14% (52%)
Society and social sciences: 12% (83%)
History and events: 11% (143%)
Natural and Physical Sciences: 9% (213%)
Technology and Applied Science: 4% (−6%)
Religions and belief systems: 2% (38%)
Health: 2% (42%)
Mathematics and logic: 1% (146%)
Thought and Philosophy: 1% (160%)

These numbers refer only to the number of articles: it is possible for one topic to contain a large number of short articles and another to contain a small number of large ones. Through its "Wikipedia Loves Libraries" program, Wikipedia has partnered with major public libraries such as the New York Public Library for the Performing Arts to expand its coverage of underrepresented subjects and articles. A 2011 study conducted by researchers at the University of Minnesota indicated that male and female editors focus on different coverage topics. There was a greater concentration of females in the "people and arts" category, while males focus more on "geography and science".

Coverage of topics and selection bias

Research conducted by Mark Graham of the Oxford Internet Institute in 2009 indicated that the geographic distribution of article topics is highly uneven. Africa is the most underrepresented. Across 30 language editions of Wikipedia, historical articles and sections are generally Eurocentric and focused on recent events. An editorial in The Guardian in 2014 claimed that more effort went into providing references for a list of female porn actors than a list of women writers. Data has also shown that Africa-related material often faces omission; a knowledge gap that a July 2018 Wikimedia conference in Cape Town sought to address.

Systemic biases

When multiple editors contribute to one topic or set of topics, systemic bias may arise, due to the demographic backgrounds of the editors. In 2011, Wales claimed that the unevenness of coverage is a reflection of the demography of the editors, citing for example "biographies of famous women through history and issues surrounding early childcare". The October 22, 2013, essay by Tom Simonite in MIT's Technology Review titled "The Decline of Wikipedia" discussed the effect of systemic bias and policy creep on the downward trend in the number of editors. Systemic bias on Wikipedia may follow that of culture generally, for example favoring certain nationalities, ethnicities or majority religions. It may more specifically follow the biases of Internet culture, inclining to be young, male, English-speaking, educated, technologically aware, and wealthy enough to spare time for editing. Biases, intrinsically, may include an overemphasis on topics such as pop culture, technology, and current events.

Taha Yasseri of the University of Oxford, in 2013, studied the statistical trends of systemic bias at Wikipedia introduced by editing conflicts and their resolution. His research examined the counterproductive work behavior of edit warring. Yasseri contended that simple reverts or "undo" operations were not the most significant measure of counterproductive behavior at Wikipedia and relied instead on the statistical measurement of detecting "reverting/reverted pairs" or "mutually reverting edit pairs".
Such a "mutually reverting edit pair" is defined where one editor reverts the edit of another editor who then, in sequence, returns to revert the first editor in the "mutually reverting edit pairs". The results were tabulated for several language versions of Wikipedia. The English Wikipedia's three largest conflict rates belonged to the articles George W. Bush, anarchism, and Muhammad. By comparison, for the German Wikipedia, the three largest conflict rates at the time of the Oxford study were for the articles covering Croatia, Scientology, and 9/11 conspiracy theories. Researchers from Washington University developed a statistical model to measure systematic bias in the behavior of Wikipedia's users regarding controversial topics. The authors focused on behavioral changes of the encyclopedia's administrators after assuming the post, writing that systematic bias occurred after the fact. Explicit content Wikipedia has been criticized for allowing information about graphic content. Articles depicting what some critics have called objectionable content (such as feces, cadaver, human penis, vulva, and nudity) contain graphic pictures and detailed information easily available to anyone with access to the internet, including children. The site also includes sexual content such as images and videos of masturbation and ejaculation, illustrations of zoophilia, and photos from hardcore pornographic films in its articles. It also has non-sexual photographs of nude children. The Wikipedia article about Virgin Killer—a 1976 album from the German rock band Scorpions—features a picture of the album's original cover, which depicts a naked prepubescent girl. The original release cover caused controversy and was replaced in some countries. In December 2008, access to the Wikipedia article Virgin Killer was blocked for four days by most Internet service providers in the United Kingdom after the Internet Watch Foundation (IWF) decided the album cover was a potentially illegal indecent image and added the article's URL to a "blacklist" it supplies to British internet service providers. In April 2010, Sanger wrote a letter to the Federal Bureau of Investigation, outlining his concerns that two categories of images on Wikimedia Commons contained child pornography, and were in violation of US federal obscenity law. Sanger later clarified that the images, which were related to pedophilia and one about lolicon, were not of real children, but said that they constituted "obscene visual representations of the sexual abuse of children", under the PROTECT Act of 2003. That law bans photographic child pornography and cartoon images and drawings of children that are obscene under American law. Sanger also expressed concerns about access to the images on Wikipedia in schools. Wikimedia Foundation spokesman Jay Walsh strongly rejected Sanger's accusation, saying that Wikipedia did not have "material we would deem to be illegal. If we did, we would remove it." Following the complaint by Sanger, Wales deleted sexual images without consulting the community. After some editors who volunteer to maintain the site argued that the decision to delete had been made hastily, Wales voluntarily gave up some of the powers he had held up to that time as part of his co-founder status. He wrote in a message to the Wikimedia Foundation mailing-list that this action was "in the interest of encouraging this discussion to be about real philosophical/content issues, rather than be about me and how quickly I acted". 
Critics, including Wikipediocracy, noticed that many of the pornographic images deleted from Wikipedia since 2010 have reappeared. Privacy One privacy concern in the case of Wikipedia is the right of a private citizen to remain a "private citizen" rather than a "public figure" in the eyes of the law. It is a battle between the right to be anonymous in cyberspace and the right to be anonymous in real life ("meatspace"). A particular problem occurs in the case of a relatively unimportant individual and for whom there exists a Wikipedia page against her or his wishes. In January 2006, a German court ordered the German Wikipedia shut down within Germany because it stated the full name of Boris Floricic, aka "Tron", a deceased hacker. On February 9, 2006, the injunction against Wikimedia Deutschland was overturned, with the court rejecting the notion that Tron's right to privacy or that of his parents was being violated. Wikipedia has a "" that uses Znuny, a free and open-source software fork of OTRS to handle queries without having to reveal the identities of the involved parties. This is used, for example, in confirming the permission for using individual images and other media in the project. Sexism Wikipedia was described in 2015 as harboring a battleground culture of sexism and harassment. The perceived toxic attitudes and tolerance of violent and abusive language were reasons put forth in 2013 for the gender gap in Wikipedia editorship. Edit-a-thons have been held to encourage female editors and increase the coverage of women's topics. A comprehensive 2008 survey, published in 2016, found significant gender differences in: confidence in expertise, discomfort with editing, and response to critical feedback. "Women reported less confidence in their expertise, expressed greater discomfort with editing (which typically involves conflict), and reported more negative responses to critical feedback compared to men." Operation Wikimedia Foundation and Wikimedia movement affiliates Wikipedia is hosted and funded by the Wikimedia Foundation, a non-profit organization which also operates Wikipedia-related projects such as Wiktionary and Wikibooks. The foundation relies on public contributions and grants to fund its mission. The foundation's 2013 IRS Form 990 shows revenue of $39.7 million and expenses of almost $29 million, with assets of $37.2 million and liabilities of about $2.3 million. In May 2014, Wikimedia Foundation named Lila Tretikov as its second executive director, taking over for Sue Gardner. The Wall Street Journal reported on May 1, 2014, that Tretikov's information technology background from her years at University of California offers Wikipedia an opportunity to develop in more concentrated directions guided by her often repeated position statement that, "Information, like air, wants to be free." The same Wall Street Journal article reported these directions of development according to an interview with spokesman Jay Walsh of Wikimedia, who "said Tretikov would address that issue (paid advocacy) as a priority. 'We are really pushing toward more transparency... We are reinforcing that paid advocacy is not welcome.' Initiatives to involve greater diversity of contributors, better mobile support of Wikipedia, new geo-location tools to find local content more easily, and more tools for users in the second and third world are also priorities," Walsh said. 
Following the departure of Tretikov from Wikipedia due to issues concerning the use of the "superprotection" feature which some language versions of Wikipedia have adopted, Katherine Maher became the third executive director of the Wikimedia Foundation in June 2016. Maher has stated that one of her priorities would be the issue of editor harassment endemic to Wikipedia as identified by the Wikipedia board in December. Maher stated regarding the harassment issue that: "It establishes a sense within the community that this is a priority... (and that correction requires that) it has to be more than words."

Wikipedia is also supported by many organizations and groups that are affiliated with the Wikimedia Foundation but independently run, called Wikimedia movement affiliates. These include Wikimedia chapters (which are national or sub-national organizations, such as Wikimedia Deutschland and Wikimédia France), thematic organizations (such as Amical Wikimedia for the Catalan language community), and user groups. These affiliates participate in the promotion, development, and funding of Wikipedia.

Software operations and support

The operation of Wikipedia depends on MediaWiki, a custom-made, free and open source wiki software platform written in PHP and built upon the MySQL database system. The software incorporates programming features such as a macro language, variables, a transclusion system for templates, and URL redirection. MediaWiki is licensed under the GNU General Public License (GPL) and it is used by all Wikimedia projects, as well as many other wiki projects. Originally, Wikipedia ran on UseModWiki written in Perl by Clifford Adams (Phase I), which initially required CamelCase for article hyperlinks; the present double bracket style was incorporated later. Starting in January 2002 (Phase II), Wikipedia began running on a PHP wiki engine with a MySQL database; this software was custom-made for Wikipedia by Magnus Manske. The Phase II software was repeatedly modified to accommodate the exponentially increasing demand. In July 2002 (Phase III), Wikipedia shifted to the third-generation software, MediaWiki, originally written by Lee Daniel Crocker.

Several MediaWiki extensions are installed to extend the functionality of the MediaWiki software. In April 2005, a Lucene extension was added to MediaWiki's built-in search and Wikipedia switched from MySQL to Lucene for searching. Lucene was later replaced by CirrusSearch, which is based on Elasticsearch. In July 2013, after extensive beta testing, a WYSIWYG (What You See Is What You Get) extension, VisualEditor, was opened to public use. It was met with much rejection and criticism, and was described as "slow and buggy". The feature was changed from opt-out to opt-in afterward.

Automated editing

Computer programs called bots have often been used to perform simple and repetitive tasks, such as correcting common misspellings and stylistic issues, or to start articles such as geography entries in a standard format from statistical data. One controversial contributor, creating articles with his bot, was reported to create up to 10,000 articles on the Swedish Wikipedia on certain days. Additionally, there are bots designed to automatically notify editors when they make common editing errors (such as unmatched quotes or unmatched parentheses). Edits falsely identified by bots as the work of a banned editor can be restored by other editors. An anti-vandal bot is programmed to detect and revert vandalism quickly.
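As an illustration of the kind of feed such bots consume, the sketch below polls the MediaWiki Action API for recent changes and prints the size delta of each edit, roughly the first step an anti-vandal bot performs before scoring a diff. It is a minimal, hypothetical example rather than the code of any actual Wikipedia bot: the endpoint and parameter names follow the public Action API, the requests library is assumed to be installed, and a real bot would additionally need community approval, rate limiting, and far more sophisticated scoring.

# Minimal sketch: poll the MediaWiki Action API for recent changes,
# the kind of feed an anti-vandal bot typically starts from.
# Assumes the `requests` library; check the live API documentation before use.
import requests

API = "https://en.wikipedia.org/w/api.php"

def recent_changes(limit=10):
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|user|comment|sizes|timestamp",
        "rclimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["query"]["recentchanges"]

if __name__ == "__main__":
    for change in recent_changes():
        # A real bot would score each diff (size delta, editor history, word lists)
        # before deciding whether to flag or revert it.
        delta = change.get("newlen", 0) - change.get("oldlen", 0)
        print(f'{change["timestamp"]}  {change["title"]!r}  {delta:+d} bytes  by {change["user"]}')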
Bots are able to indicate edits from particular accounts or IP address ranges, as occurred at the time of the shooting down of the MH17 jet in July 2014, when it was reported that edits were made via IPs controlled by the Russian government. Bots on Wikipedia must be approved before activation. According to Andrew Lih, the current expansion of Wikipedia to millions of articles would be difficult to envision without the use of such bots.

Hardware operations and support

Wikipedia receives between 25,000 and 60,000 page requests per second, depending on the time of the day. Page requests are first passed to a front-end layer of Varnish caching servers, and back-end layer caching is done by Apache Traffic Server. Further statistics, based on a publicly available 3-month Wikipedia access trace, are available. Requests that cannot be served from the Varnish cache are sent to load-balancing servers running the Linux Virtual Server software, which in turn pass them to one of the Apache web servers for page rendering from the database. The web servers deliver pages as requested, performing page rendering for all the language editions of Wikipedia. To increase speed further, rendered pages are cached in a distributed memory cache until invalidated, allowing page rendering to be skipped entirely for most common page accesses.

Wikipedia currently runs on dedicated clusters of Linux servers with Debian; there were 300 in Florida and 44 in Amsterdam. By January 22, 2013, Wikipedia had migrated its primary data center to an Equinix facility in Ashburn, Virginia. In 2017, Wikipedia installed a caching cluster in an Equinix facility in Singapore, the first of its kind in Asia.

Internal research and operational development

Following growing amounts of incoming donations exceeding seven digits in 2013 as recently reported, the Foundation has reached a threshold of assets which qualify its consideration under the principles of industrial organization economics to indicate the need for the re-investment of donations into the internal research and development of the Foundation. Two of the recent projects of such internal research and development have been the creation of a Visual Editor and a largely under-utilized "Thank" tab, which were developed to ameliorate issues of editor attrition and have met with limited success. The estimates for reinvestment by industrial organizations into internal research and development were studied by Adam Jaffe, who recorded that the range of 4% to 25% annually was to be recommended, with high-end technology requiring the higher level of support for internal reinvestment. At the 2013 level of contributions for Wikimedia presently documented as 45 million dollars, the computed budget level recommended by Jaffe and Caballero for reinvestment into internal research and development is between 1.8 million and 11.3 million dollars annually. In 2016, the level of contributions was reported by Bloomberg News as being at $77 million annually, updating the Jaffe estimates for the higher level of support to between $3.08 million and $19.2 million annually.

Internal news publications

Community-produced news publications include the English Wikipedia's The Signpost, founded in 2005 by Michael Snow, an attorney, Wikipedia administrator, and former chair of the Wikimedia Foundation board of trustees. It covers news and events from the site, as well as major events from other Wikimedia projects, such as Wikimedia Commons.
Similar publications are the German-language Kurier and the Portuguese-language Correio da Wikipédia. Other past and present community news publications on English Wikipedia include the Wikiworld webcomic, the Wikipedia Weekly podcast, and newsletters of specific WikiProjects like The Bugle from WikiProject Military History and the monthly newsletter from The Guild of Copy Editors. There are also several publications from the Wikimedia Foundation and multilingual publications such as Wikimedia Diff and This Month in Education.

The Wikipedia Library

The Wikipedia Library is a resource for Wikipedia editors which provides free access to a wide range of digital publications, so that they can consult and cite these while editing the encyclopedia. Over 60 publishers have partnered with The Wikipedia Library to provide access to their resources: when ICE Publishing joined in 2020, a spokesman said "By enabling free access to our content for Wikipedia editors, we hope to further the research community's resources – creating and updating Wikipedia entries on civil engineering which are read by thousands of monthly readers."

Access to content

Content licensing

When the project was started in 2001, all text in Wikipedia was covered by the GNU Free Documentation License (GFDL), a copyleft license permitting the redistribution, creation of derivative works, and commercial use of content while authors retain copyright of their work. The GFDL was created for software manuals that come with free software programs licensed under the GPL. This made it a poor choice for a general reference work: for example, the GFDL requires the reprints of materials from Wikipedia to come with a full copy of the GFDL text. In December 2002, the Creative Commons license was released: it was specifically designed for creative works in general, not just for software manuals. The license gained popularity among bloggers and others distributing creative works on the Web. The Wikipedia project sought the switch to the Creative Commons. Because the two licenses, GFDL and Creative Commons, were incompatible, in November 2008, following the request of the project, the Free Software Foundation (FSF) released a new version of the GFDL designed specifically to allow Wikipedia to relicense its content by August 1, 2009. (A new version of the GFDL automatically covers Wikipedia contents.) In April 2009, Wikipedia and its sister projects held a community-wide referendum which decided the switch in June 2009.

The handling of media files (e.g. image files) varies across language editions. Some language editions, such as the English Wikipedia, include non-free image files under fair use doctrine, while the others have opted not to, in part because of the lack of fair use doctrines in their home countries (e.g. in Japanese copyright law). Media files covered by free content licenses (e.g. Creative Commons' CC BY-SA) are shared across language editions via the Wikimedia Commons repository, a project operated by the Wikimedia Foundation. Wikipedia's accommodation of varying international copyright laws regarding images has led some to observe that its photographic coverage of topics lags behind the quality of the encyclopedic text.

The Wikimedia Foundation is not a licensor of content, but merely a hosting service for the contributors (and licensors) of the Wikipedia. This position has been successfully defended in court.

Methods of access

Because Wikipedia content is distributed under an open license, anyone can reuse or re-distribute it at no charge.
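To make the idea of reuse concrete, the following sketch fetches the plain-text introduction of an article through the MediaWiki Action API, the sort of programmatic access that mirrors and downstream applications build on. It is a hedged example rather than an official client: it assumes the requests library and the extract query (prop=extracts) offered on Wikimedia sites, and any reuse of the returned text would still need to carry the CC BY-SA attribution described above.

# Minimal sketch: fetch the plain-text lead section of an article for reuse.
# Assumes the `requests` library and the prop=extracts query available on
# Wikimedia wikis -- check the live API documentation before relying on it.
import requests

API = "https://en.wikipedia.org/w/api.php"

def lead_text(title):
    params = {
        "action": "query",
        "prop": "extracts",
        "exintro": 1,          # only the text before the first heading
        "explaintext": 1,      # strip HTML markup
        "titles": title,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    # `pages` is keyed by page ID; take the single entry we asked for.
    return next(iter(pages.values())).get("extract", "")

if __name__ == "__main__":
    text = lead_text("Encyclopedia")
    # Reusers must preserve attribution and the CC BY-SA license terms.
    print(text[:300])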
The content of Wikipedia has been published in many forms, both online and offline, outside the Wikipedia website.

Websites: Thousands of "mirror sites" exist that republish content from Wikipedia: two prominent ones, which also include content from other reference sources, are Reference.com and Answers.com. Another example is Wapedia, which began to display Wikipedia content in a mobile-device-friendly format before Wikipedia itself did.
Mobile apps: A variety of mobile apps provide access to Wikipedia on hand-held devices, including both Android and iOS devices (see Wikipedia apps and Mobile access below).
Search engines: Some web search engines make special use of Wikipedia content when displaying search results: examples include Microsoft Bing (via technology gained from Powerset) and DuckDuckGo.
Compact discs, DVDs: Collections of Wikipedia articles have been published on optical discs. An English version, 2006 Wikipedia CD Selection, contained about 2,000 articles. The Polish-language version contains nearly 240,000 articles. There are German- and Spanish-language versions as well. Also, "Wikipedia for Schools", the Wikipedia series of CDs / DVDs produced by Wikipedians and SOS Children, is a free, hand-checked, non-commercial selection from Wikipedia targeted around the UK National Curriculum and intended to be useful for much of the English-speaking world. The project is available online; an equivalent print encyclopedia would require roughly 20 volumes.
Printed books: There are efforts to put a select subset of Wikipedia's articles into printed book form. Since 2009, tens of thousands of print-on-demand books that reproduced English, German, Russian and French Wikipedia articles have been produced by the American company Books LLC and by three Mauritian subsidiaries of the German publisher VDM.
Semantic Web: The website DBpedia, begun in 2007, extracts data from the infoboxes and category declarations of the English-language Wikipedia. Wikimedia has created the Wikidata project with a similar objective of storing the basic facts from each page of Wikipedia and the other WMF wikis and making them available in a queriable semantic format, RDF. It has 93,337,731 items.

Obtaining the full contents of Wikipedia for reuse presents challenges, since direct cloning via a web crawler is discouraged. Wikipedia publishes "dumps" of its contents, but these are text-only; there was no dump available of Wikipedia's images. Wikimedia Enterprise is a for-profit solution to this. Several languages of Wikipedia also maintain a reference desk, where volunteers answer questions from the general public. According to a study by Pnina Shachaf in the Journal of Documentation, the quality of the Wikipedia reference desk is comparable to a standard library reference desk, with an accuracy of 55 percent.

Mobile access

Wikipedia's original medium was for users to read and edit content using any standard web browser through a fixed Internet connection. Although Wikipedia content has been accessible through the mobile web since July 2013, The New York Times on February 9, 2014, quoted Erik Möller, deputy director of the Wikimedia Foundation, stating that the transition of internet traffic from desktops to mobile devices was significant and a cause for concern and worry.
The article in The New York Times reported the comparison statistics for mobile edits stating that, "Only 20 percent of the readership of the English-language Wikipedia comes via mobile devices, a figure substantially lower than the percentage of mobile traffic for other media sites, many of which approach 50 percent. And the shift to mobile editing has lagged even more." The New York Times reports that Möller has assigned "a team of 10 software developers focused on mobile", out of a total of approximately 200 employees working at the Wikimedia Foundation. One principal concern cited by The New York Times for the "worry" is for Wikipedia to effectively address attrition issues with the number of editors which the online encyclopedia attracts to edit and maintain its content in a mobile access environment. Bloomberg Businessweek reported in July 2014 that Google's Android mobile apps have dominated the largest share of global smartphone shipments for 2013 with 78.6% of market share over their next closest competitor in iOS with 15.2% of the market. At the time of the Tretikov appointment and her posted web interview with Sue Gardner in May 2014, Wikimedia representatives made a technical announcement concerning the number of mobile access systems in the market seeking access to Wikipedia. Directly after the posted web interview, the representatives stated that Wikimedia would be applying an all-inclusive approach to accommodate as many mobile access systems as possible in its efforts for expanding general mobile access, including BlackBerry and the Windows Phone system, making market share a secondary issue. The Android app for Wikipedia was released on July 23, 2014, to generally positive reviews, scoring over four of a possible five in a poll of approximately 200,000 users downloading from Google. The version for iOS was released on April 3, 2013, to similar reviews. Later versions have also been released. Access to Wikipedia from mobile phones was possible as early as 2004, through the Wireless Application Protocol (WAP), via the Wapedia service. In June 2007 Wikipedia launched en.mobile.wikipedia.org, an official website for wireless devices. In 2009 a newer mobile service was officially released, located at en.m.wikipedia.org, which caters to more advanced mobile devices such as the iPhone, Android-based devices or WebOS-based devices. Several other methods of mobile access to Wikipedia have emerged. Many devices and applications optimize or enhance the display of Wikipedia content for mobile devices, while some also incorporate additional features such as use of Wikipedia metadata, such as geoinformation. Wikipedia Zero was an initiative of the Wikimedia Foundation to expand the reach of the encyclopedia to the developing countries. It was discontinued in February 2018. Andrew Lih and Andrew Brown both maintain editing Wikipedia with smartphones is difficult and this discourages new potential contributors. The number of Wikipedia editors has been declining after several years and Tom Simonite of MIT Technology Review claims the bureaucratic structure and rules are a factor in this. Simonite alleges some Wikipedians use the labyrinthine rules and guidelines to dominate others and those editors have a vested interest in keeping the status quo. Lih alleges there is a serious disagreement among existing contributors on how to resolve this. Lih fears for Wikipedia's long-term future while Brown fears problems with Wikipedia will remain and rival encyclopedias will not replace it. 
Chinese access Access to the Chinese Wikipedia has been blocked in mainland China since May 2015. This was done after Wikipedia started to use HTTPS encryption, which made selective censorship more difficult. In 2017, Quartz reported that the Chinese government had begun creating an unofficial version of Wikipedia. However, unlike Wikipedia, the website's contents would only be editable by scholars from state-owned Chinese institutions. The article stated it had been approved by the State Council of the People's Republic of China in 2011. Cultural impact Trusted source to combat fake news In 2017–18, after a barrage of false news reports, both Facebook and YouTube announced they would rely on Wikipedia to help their users evaluate reports and reject false news. Noam Cohen, writing in The Washington Post states, "YouTube's reliance on Wikipedia to set the record straight builds on the thinking of another fact-challenged platform, the Facebook social network, which announced last year that Wikipedia would help its users root out 'fake news'." Alexa records the daily pageviews per visitor as 3.03 and the average daily time on site as 3:46 minutes. Readership In February 2014, The New York Times reported that Wikipedia was ranked fifth globally among all websites, stating "With 18 billion page views and nearly 500 million unique visitors a month ... Wikipedia trails just Yahoo, Facebook, Microsoft and Google, the largest with 1.2 billion unique visitors." However, its ranking dropped to 13th globally by June 2020 due mostly to a rise in popularity of Chinese websites for online shopping. In addition to logistic growth in the number of its articles, Wikipedia has steadily gained status as a general reference website since its inception in 2001. About 50 percent of search engine traffic to Wikipedia comes from Google, a good portion of which is related to academic research. The number of readers of Wikipedia worldwide reached 365 million at the end of 2009. The Pew Internet and American Life project found that one third of US Internet users consulted Wikipedia. In 2011 Business Insider gave Wikipedia a valuation of $4 billion if it ran advertisements. According to "Wikipedia Readership Survey 2011", the average age of Wikipedia readers is 36, with a rough parity between genders. Almost half of Wikipedia readers visit the site more than five times a month, and a similar number of readers specifically look for Wikipedia in search engine results. About 47 percent of Wikipedia readers do not realize that Wikipedia is a non-profit organization. COVID-19 pandemic During the COVID-19 pandemic, Wikipedia's coverage of the pandemic received international media attention, and brought an increase in Wikipedia readership overall. Cultural significance Wikipedia's content has also been used in academic studies, books, conferences, and court cases. The Parliament of Canada's website refers to Wikipedia's article on same-sex marriage in the "related links" section of its "further reading" list for the Civil Marriage Act. The encyclopedia's assertions are increasingly used as a source by organizations such as the US federal courts and the World Intellectual Property Organization—though mainly for supporting information rather than information decisive to a case. Content appearing on Wikipedia has also been cited as a source and referenced in some US intelligence agency reports. 
In December 2008, the scientific journal RNA Biology launched a new section for descriptions of families of RNA molecules and requires authors who contribute to the section to also submit a draft article on the RNA family for publication in Wikipedia. Wikipedia has also been used as a source in journalism, often without attribution, and several reporters have been dismissed for plagiarizing from Wikipedia. In 2006, Time magazine recognized Wikipedia's participation (along with YouTube, Reddit, MySpace, and Facebook) in the rapid growth of online collaboration and interaction by millions of people worldwide. In July 2007, Wikipedia was the focus of a 30-minute documentary on BBC Radio 4 which argued that, with increased usage and awareness, the number of references to Wikipedia in popular culture is such that the word is one of a select group of 21st-century nouns that are so familiar (Google, Facebook, YouTube) that they no longer need explanation. On September 28, 2007, Italian politician Franco Grillini raised a parliamentary question with the minister of cultural resources and activities about the necessity of freedom of panorama. He said that the lack of such freedom forced Wikipedia, "the seventh most consulted website", to forbid all images of modern Italian buildings and art, and claimed this was hugely damaging to tourist revenues. On September 16, 2007, The Washington Post reported that Wikipedia had become a focal point in the 2008 US election campaign, saying: "Type a candidate's name into Google, and among the first results is a Wikipedia page, making those entries arguably as important as any ad in defining a candidate. Already, the presidential entries are being edited, dissected and debated countless times each day." An October 2007 Reuters article, titled "Wikipedia page the latest status symbol", reported the recent phenomenon of how having a Wikipedia article vindicates one's notability. Active participation also has an impact. Law students have been assigned to write Wikipedia articles as an exercise in clear and succinct writing for an uninitiated audience. A working group led by Peter Stone (formed as a part of the Stanford-based project One Hundred Year Study on Artificial Intelligence) in its report called Wikipedia "the best-known example of crowdsourcing... that far exceeds traditionally-compiled information sources, such as encyclopedias and dictionaries, in scale and depth." In a 2017 opinion piece for Wired, Hossein Derakhshan describes Wikipedia as "one of the last remaining pillars of the open and decentralized web" and contrasted its existence as a text-based source of knowledge with social media and social networking services, the latter having "since colonized the web for television's values". For Derakhshan, Wikipedia's goal as an encyclopedia represents the Age of Enlightenment tradition of rationality triumphing over emotions, a trend which he considers "endangered" due to the "gradual shift from a typographic culture to a photographic one, which in turn mean[s] a shift from rationality to emotions, exposition to entertainment". Rather than "" (), social networks have led to a culture of "[d]are not to care to know". This is while Wikipedia faces "a more concerning problem" than funding, namely "a flattening growth rate in the number of contributors to the website". 
Consequently, the challenge for Wikipedia and those who use it is to "save Wikipedia and its promise of a free and open collection of all human knowledge amid the conquest of new and old television—how to collect and preserve knowledge when nobody cares to know." Awards Wikipedia won two major awards in May 2004. The first was a Golden Nica for Digital Communities of the annual Prix Ars Electronica contest; this came with a €10,000 (£6,588; $12,700) grant and an invitation to present at the PAE Cyberarts Festival in Austria later that year. The second was a Judges' Webby Award for the "community" category. In 2007, readers of brandchannel.com voted Wikipedia as the fourth-highest brand ranking, receiving 15 percent of the votes in answer to the question "Which brand had the most impact on our lives in 2006?" In September 2008, Wikipedia received Quadriga A Mission of Enlightenment award of Werkstatt Deutschland along with Boris Tadić, Eckart Höfling, and Peter Gabriel. The award was presented to Wales by David Weinberger. In 2015, Wikipedia was awarded both the annual Erasmus Prize, which recognizes exceptional contributions to culture, society or social sciences, and the Spanish Princess of Asturias Award on International Cooperation. Speaking at the Asturian Parliament in Oviedo, the city that hosts the awards ceremony, Jimmy Wales praised the work of the Asturian language Wikipedia users. Satire Many parodies target Wikipedia's openness and susceptibility to inserted inaccuracies, with characters vandalizing or modifying the online encyclopedia project's articles. Comedian Stephen Colbert has parodied or referenced Wikipedia on numerous episodes of his show The Colbert Report and coined the related term wikiality, meaning "together we can create a reality that we all agree on—the reality we just agreed on". Another example can be found in "Wikipedia Celebrates 750 Years of American Independence", a July 2006 front-page article in The Onion, as well as the 2010 The Onion article "'L.A. Law' Wikipedia Page Viewed 874 Times Today". In an April 2007 episode of the American television comedy The Office, office manager (Michael Scott) is shown relying on a hypothetical Wikipedia article for information on negotiation tactics to assist him in negotiating lesser pay for an employee. Viewers of the show tried to add the episode's mention of the page as a section of the actual Wikipedia article on negotiation, but this effort was prevented by other users on the article's talk page. "My Number One Doctor", a 2007 episode of the television show Scrubs, played on the perception that Wikipedia is an unreliable reference tool with a scene in which Perry Cox reacts to a patient who says that a Wikipedia article indicates that the raw food diet reverses the effects of bone cancer by retorting that the same editor who wrote that article also wrote the Battlestar Galactica episode guide. In 2008, the comedy website CollegeHumor produced a video sketch named "Professor Wikipedia", in which the fictitious Professor Wikipedia instructs a class with a medley of unverifiable and occasionally absurd statements. The Dilbert comic strip from May 8, 2009, features a character supporting an improbable claim by saying "Give me ten minutes and then check Wikipedia." In July 2009, BBC Radio 4 broadcast a comedy series called Bigipedia, which was set on a website which was a parody of Wikipedia. Some of the sketches were directly inspired by Wikipedia and its articles. 
On August 23, 2013, the New Yorker website published a cartoon with this caption: "Dammit, Manning, have you considered the pronoun war that this is going to start on your Wikipedia page?" The cartoon referred to Chelsea Elizabeth Manning (born Bradley Edward Manning), an American activist, politician, and former United States Army soldier and a trans woman. In December 2015, John Julius Norwich stated, in a letter published in The Times newspaper, that as a historian he resorted to Wikipedia "at least a dozen times a day", and had never yet caught it out. He described it as "a work of reference as useful as any in existence", with so wide a range that it is almost impossible to find a person, place, or thing that it has left uncovered and that he could never have written his last two books without it. Sister projectsWikimedia Wikipedia has spawned several sister projects, which are also wikis run by the Wikimedia Foundation. These other Wikimedia projects include Wiktionary, a dictionary project launched in December 2002, Wikiquote, a collection of quotations created a week after Wikimedia launched, Wikibooks, a collection of collaboratively written free textbooks and annotated texts, Wikimedia Commons, a site devoted to free-knowledge multimedia, Wikinews, for citizen journalism, and Wikiversity, a project for the creation of free learning materials and the provision of online learning activities. Another sister project of Wikipedia, Wikispecies, is a catalogue of species. In 2012 Wikivoyage, an editable travel guide, and Wikidata, an editable knowledge base, launched. Publishing The most obvious economic effect of Wikipedia has been the death of commercial encyclopedias, especially the printed versions, e.g. Encyclopædia Britannica, which were unable to compete with a product that is essentially free. Nicholas Carr wrote a 2005 essay, "The amorality of Web 2.0", that criticized websites with user-generated content, like Wikipedia, for possibly leading to professional (and, in his view, superior) content producers' going out of business, because "free trumps quality all the time". Carr wrote: "Implicit in the ecstatic visions of Web 2.0 is the hegemony of the amateur. I for one can't imagine anything more frightening." Others dispute the notion that Wikipedia, or similar efforts, will entirely displace traditional publications. For instance, Chris Anderson, the editor-in-chief of Wired Magazine, wrote in Nature that the "wisdom of crowds" approach of Wikipedia will not displace top scientific journals, with their rigorous peer review process. There is also an ongoing debate about the influence of Wikipedia on the biography publishing business. "The worry is that, if you can get all that information from Wikipedia, what's left for biography?" said Kathryn Hughes, professor of life writing at the University of East Anglia and author of The Short Life and Long Times of Mrs Beeton and George Eliot: the Last Victorian. Research use Wikipedia has been widely used as a corpus for linguistic research in computational linguistics, information retrieval and natural language processing. In particular, it commonly serves as a target knowledge base for the entity linking problem, which is then called "wikification", and to the related problem of word-sense disambiguation. Methods similar to wikification can in turn be used to find "missing" links in Wikipedia. 
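The "wikification" task mentioned above is usually approached by collecting statistics over Wikipedia's own link anchor texts: a surface string is linked to whichever article it most often points to elsewhere in the encyclopedia. The snippet below is a minimal, illustrative sketch of that idea; the anchor-text counts and the wikify helper are invented for the example rather than taken from any published system.

```python
# Toy illustration of anchor-text-based entity linking ("wikification").
# Real systems derive these counts from a full dump of Wikipedia's link graph;
# the numbers below are made up for demonstration purposes only.

from collections import Counter
from typing import Optional

anchor_stats = {
    "jaguar": Counter({"Jaguar": 520, "Jaguar Cars": 310, "Jacksonville Jaguars": 90}),
    "laserdisc": Counter({"LaserDisc": 980}),
}

def wikify(mention: str) -> Optional[str]:
    """Return the most frequent link target for a mention, if any is known."""
    candidates = anchor_stats.get(mention.lower())
    if not candidates:
        return None
    title, _count = candidates.most_common(1)[0]
    return title

print(wikify("Jaguar"))  # -> "Jaguar": the most frequent target wins
```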
In 2015, French researchers José Lages of the University of Franche-Comté in Besançon and Dima Shepelyansky of Paul Sabatier University in Toulouse published a global university ranking based on Wikipedia scholarly citations. They used PageRank, CheiRank and similar algorithms "followed by the number of appearances in the 24 different language editions of Wikipedia (descending order) and the century in which they were founded (ascending order)". The study was updated in 2019. A 2017 MIT study suggests that words used in Wikipedia articles end up in scientific publications. Studies related to Wikipedia have used machine learning and artificial intelligence to support various operations; one of the most important areas is the automatic detection of vandalism and the assessment of data quality in Wikipedia. In February 2022, civil servants from the UK's Department for Levelling Up, Housing and Communities were found to have used Wikipedia for research in the drafting of the Levelling Up White Paper after journalists at The Independent noted that parts of the document had been lifted directly from Wikipedia articles on Constantinople and the list of largest cities throughout history. Related projects Several interactive multimedia encyclopedias incorporating entries written by the public existed long before Wikipedia was founded. The first of these was the 1986 BBC Domesday Project, which included text (entered on BBC Micro computers) and photographs from more than a million contributors in the UK, and covered the geography, art, and culture of the UK. This was the first interactive multimedia encyclopedia (and was also the first major multimedia document connected through internal links), with the majority of articles being accessible through an interactive map of the UK. The user interface and part of the content of the Domesday Project were emulated on a website until 2008. Several free-content, collaborative encyclopedias were created around the same period as Wikipedia (e.g. Everything2), with many later being merged into the project (e.g. GNE). One of the most successful early online encyclopedias incorporating entries by the public was h2g2, which was created by Douglas Adams in 1999. The h2g2 encyclopedia is relatively lighthearted, focusing on articles which are both witty and informative. Subsequent collaborative knowledge websites have drawn inspiration from Wikipedia. Others use more traditional peer review, such as Encyclopedia of Life and the online wiki encyclopedias Scholarpedia and Citizendium. The latter was started by Sanger in an attempt to create a reliable alternative to Wikipedia. See also Democratization of knowledge Interpedia, an early proposal for a collaborative Internet encyclopedia List of online encyclopedias List of Wikipedia controversies Network effect Outline of Wikipedia – guide to the subject of Wikipedia presented as a tree-structured list of its subtopics; for an outline of the contents of Wikipedia, see Portal:Contents/Outlines QRpedia – multilingual, mobile interface to Wikipedia Wikipedia Review Recursion Notes References Further reading Academic studies Rosenzweig, Roy. Can History be Open Source? Wikipedia and the Future of the Past. (Originally published in The Journal of American History 93.1 (June 2006): 117–146.) Books (Substantial criticisms of Wikipedia and other web 2.0 projects.) Listen to: The NPR interview with A. Keen, Weekend Edition Saturday, June 16, 2007. (See book review by Baker, as listed hereafter.) 
Book review-related articles Baker, Nicholson. "The Charms of Wikipedia". The New York Review of Books, March 20, 2008. Retrieved December 17, 2008. (Book rev. of The Missing Manual, by John Broughton, as listed previously.) Crovitz, L. Gordon. "Wikipedia's Old-Fashioned Revolution: The online encyclopedia is fast becoming the best." (Originally published in Wall Street Journal online, April 6, 2009.) Postrel, Virginia, "Who Killed Wikipedia?: A hardened corps of volunteer editors is the only force protecting Wikipedia. They might also be killing it", Pacific Standard magazine, November/December 2014 issue. Learning resources Wikiversity list of learning resources. (Includes related courses, Web-based seminars, slides, lecture notes, textbooks, quizzes, glossaries, etc.) The Great Book of Knowledge, Part 1: A Wiki is a Kind of Bus, Ideas, with Paul Kennedy, CBC Radio One, originally broadcast January 15, 2014. The webpage includes a link to the archived audio program. The radio documentary discusses Wikipedia's history, development, and its place within the broader scope of the trend to democratized knowledge. It also includes interviews with several key Wikipedia staff and contributors, including Kat Walsh and Sue Gardner (audio, 53:58, Flash required). Other media coverage General articles "Wikipedia probe into paid-for 'sockpuppet' entries", BBC News, October 21, 2013. "The Decline of Wikipedia", MIT Technology Review, October 22, 2013 "Edits to Wikipedia pages on Bell, Garner, Diallo traced to 1 Police Plaza" (March 2015), Capital Angola's Wikipedia Pirates Are Exposing Problems (March 2016), Motherboard Full Measure with Sharyl Attkisson, April 17, 2016. (Includes video.) Articles re Wikipedia usage patterns Wikipedia's Year-End List Shows What the Internet Needed to Know in 2019. Alyse Stanley, December 27, 2019, Gizmodo. "Is Wikipedia Cracking Up?" The Independent, February 3, 2009. External links Wikipedia.org – multilingual portal (contains links to all language editions) (wikipedia.com still redirects here) Wikipedia topic page at The New York Times Video of TED talk by Jimmy Wales on the birth of Wikipedia 2001 establishments in the United States Advertising-free websites Articles containing video clips Creative Commons-licensed websites Free-content websites Internet properties established in 2001 Jimmy Wales Multilingual websites Online encyclopedias Social information processing Wikimedia projects
255849
https://en.wikipedia.org/wiki/LaserDisc
LaserDisc
The LaserDisc (LD) is a home video format and the first commercial optical disc storage medium, initially licensed, sold and marketed as MCA DiscoVision (also known simply as "DiscoVision") in the United States in 1978. Its diameter typically spans 30 cm (12 in). Unlike most optical disc standards, LaserDisc is not fully digital and instead requires the use of analog video signals. Although the format was capable of offering higher-quality video and audio than its consumer rivals, VHS and Betamax videotape, LaserDisc never managed to gain widespread use in North America, largely due to high costs for the players and video titles themselves and the inability to record TV programs. It eventually did gain some traction in that region and became somewhat popular in the 1990s. It was not a popular format in Europe and Australia. By contrast, the format was much more popular in Japan and in the more affluent regions of Southeast Asia, such as Hong Kong, Singapore and Malaysia, and was the prevalent rental video medium in Hong Kong during the 1990s. Its superior video and audio quality made it a popular choice among videophiles and film enthusiasts during its lifespan. The technologies and concepts behind LaserDisc were the foundation for later optical disc formats, including Compact Disc (CD), DVD and Blu-ray (BD). History Optical video recording technology, using a transparent disc, was invented by David Paul Gregg and James Russell in 1963 (and patented in 1970 and 1990). The Gregg patents were purchased by MCA in 1968. By 1969, Philips had developed a videodisc in reflective mode, which has advantages over the transparent mode. MCA and Philips then decided to combine their efforts and first publicly demonstrated the videodisc in 1972. LaserDisc was first available on the market in Atlanta, Georgia, on December 11, 1978, two years after the introduction of the VHS VCR, and four years before the introduction of the CD (which is based on laser disc technology). Initially licensed, sold, and marketed as MCA DiscoVision (also known as simply DiscoVision) in 1978, the technology was previously referred to internally as Optical Videodisc System, Reflective Optical Videodisc, Laser Optical Videodisc, and Disco-Vision (with a hyphen), with the first players referring to the format as Video Long Play. Pioneer Electronics later purchased the majority stake in the format and marketed it as both LaserVision (format name) and LaserDisc (brand name) in 1980, with some releases unofficially referring to the medium as Laser Videodisc. Philips produced the players while MCA produced the discs. The Philips-MCA collaboration was unsuccessful and was discontinued after a few years. Several of the scientists responsible for the early research (Richard Wilkinson, Ray Dakin and John Winslow) founded Optical Disc Corporation (now ODC Nimbus). In 1979, the Museum of Science and Industry in Chicago opened its "Newspaper" exhibit which used interactive LaserDiscs to allow visitors to search the front page of any Chicago Tribune newspaper. This was a very early example of public access to electronically stored information in a museum. LaserDisc was launched in Japan in October 1981, and a total of approximately 3.6 million LaserDisc players had been sold before its discontinuation in 2009. In 1984, Sony introduced a LaserDisc format that could store any form of digital data, as a data storage device similar to CD-ROM, with a large 3.28 GiB storage capacity, comparable to the later DVD-ROM format. 
The first LaserDisc title marketed in North America was the MCA DiscoVision release of Jaws on December 15, 1978. The last title released in North America was Paramount's Bringing Out the Dead on October 3, 2000. A dozen or so more titles continued to be released in Japan until September 21, 2001, with the last Japanese movie released being the Hong Kong film Tokyo Raiders from Golden Harvest. Production of LaserDisc players continued until January 14, 2009, when Pioneer stopped making them. It was estimated that in 1998, LaserDisc players were in approximately 2% of U.S. households (roughly two million). By comparison, in 1999, players were in 10% of Japanese households. A total of 16.8 million LaserDisc players were sold worldwide, of which 9.5 million were sold by Pioneer. By 2001, LaserDisc had been completely replaced by DVD in the North American retail marketplace, as media was no longer being produced. Players were still exported to North America from Japan until the end of 2001. The format has retained some popularity among American collectors, and to a greater degree in Japan, where the format was better supported and more prevalent during its lifespan. In Europe, LaserDisc always remained an obscure format. It was chosen by the British Broadcasting Corporation (BBC) for the BBC Domesday Project in the mid-1980s, a school-based project to commemorate the 900 years since the original Domesday Book in England. From 1991 until the late 1990s, the BBC also used LaserDisc technology (specifically Sony CRVdisc) to play out their channel idents. Design A standard home video LaserDisc is 30 cm (12 in) in diameter and made up of two single-sided aluminum discs layered in plastic. Although similar in appearance to compact discs or DVDs, early LaserDiscs used analog video stored in the composite domain (having a video bandwidth and resolution approximately equivalent to the Type C videotape format) with analog FM stereo sound and PCM digital audio. Later discs used D-2 instead of Type C videotape for mastering. The LaserDisc at its most fundamental level was still recorded as a series of pits and lands much like CDs, DVDs, and even Blu-ray Discs are today. In true digital media the pits, or their edges, directly represent 1s and 0s of a binary digital information stream. On a LaserDisc, the information is encoded as analog frequency modulation and is contained in the lengths and spacing of the pits. A carrier frequency is modulated by the baseband video signal (and analog soundtracks). In a simplified view, positive parts of this variable frequency signal can produce lands and negative parts can be pits, which results in a projection of the FM signal along the track on the disc. When reading, the FM carrier can be reconstructed from the succession of pit edges, and demodulated to extract the original video signal (in practice, selection between pit and land parts uses the intersection of the FM carrier with a horizontal line offset from the zero axis, for noise considerations). If PCM sound is present, its waveform, considered as an analog signal, can be added to the FM carrier, which modulates the width of the intersection with the horizontal threshold. As a result, the spacing between pit centers essentially represents the video (as frequency), and pit lengths encode the PCM sound information. 
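The following snippet is a toy illustration of the encoding idea just described: a carrier is frequency-modulated by a baseband signal and then sliced against an offset threshold, so that the runs above and below the threshold play the role of lands and pits. The sample rate, carrier frequency, deviation and threshold are arbitrary placeholder values for the sketch, not actual LaserDisc parameters.

```python
# Minimal sketch of carrying an analog signal in the lengths and spacing of
# pits: frequency-modulate a carrier, then slice it at an offset threshold.
# All numeric parameters here are illustrative, not LaserDisc specifications.

import numpy as np

SAMPLE_RATE = 1_000_000   # simulation samples per second (assumed)
CARRIER_HZ = 50_000       # illustrative carrier frequency
DEVIATION_HZ = 20_000     # illustrative peak frequency deviation
THRESHOLD = 0.2           # slicing level offset from the zero axis

def fm_modulate(baseband: np.ndarray) -> np.ndarray:
    """Frequency-modulate the carrier with a baseband signal in the range -1..1."""
    inst_freq = CARRIER_HZ + DEVIATION_HZ * baseband
    phase = 2 * np.pi * np.cumsum(inst_freq) / SAMPLE_RATE
    return np.sin(phase)

def slice_to_pits(carrier: np.ndarray, threshold: float = THRESHOLD) -> np.ndarray:
    """Return run lengths (in samples) of alternating land/pit regions.

    Samples above the threshold are treated as lands and samples below as
    pits, mirroring the simplified "cut the FM waveform with an offset
    horizontal line" description above."""
    land_mask = carrier > threshold
    edges = np.flatnonzero(np.diff(land_mask.astype(np.int8))) + 1
    boundaries = np.concatenate(([0], edges, [len(carrier)]))
    return np.diff(boundaries)

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE // 100) / SAMPLE_RATE        # 10 ms of signal
    baseband = 0.5 * np.sin(2 * np.pi * 1_000 * t)          # a 1 kHz "video" tone
    runs = slice_to_pits(fm_modulate(baseband))
    print(f"{len(runs)} pit/land runs; mean length {runs.mean():.1f} samples")
```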
Early LaserDiscs featured in 1978 were entirely analog but the format evolved to incorporate digital stereo sound in CD format (sometimes with a TOSlink or coax output to feed an external DAC), and later multi-channel formats such as Dolby Digital and DTS. Since digital encoding and compression schemes were either unavailable or impractical in 1978, three encoding formats based on the rotation speed were used: CAV Constant angular velocity or Standard Play discs supported several unique features such as freeze frame, variable slow motion and reverse. CAV discs were spun at a constant rotational speed (1800 rpm for 525 line and Hi-Vision, and 1500 rpm for 625 line discs) during playback, with one video frame read per revolution. In this mode, 54,000 individual frames (30 minutes of audio/video for NTSC and Hi-Vision, 36 minutes for PAL) could be stored on a single side of a CAV disc. Another unique attribute to CAV was to reduce the visibility of crosstalk from adjacent tracks, since on CAV discs any crosstalk at a specific point in a frame is simply from the same point in the next or previous frame. CAV was used less frequently than CLV, and reserved for special editions of feature films to highlight bonus material and special effects. One of the most intriguing advantages of this format was the ability to reference every frame of a film directly by number, a feature of particular interest to film buffs, students and others intrigued by the study of errors in staging, continuity and so on. CLV Constant linear velocity or Extended Play discs do not have the "trick play" features of CAV, offering only simple playback on all but the high-end LaserDisc players incorporating a digital frame store. These high-end LaserDisc players could add features not normally available to CLV discs such as variable forward and reverse, and a VCR-like "pause". By gradually slowing down their rotational speed (1800–600 rpm for NTSC and 2470–935 rpm for Hi-Vision) CLV encoded discs could store 60 minutes of audio/video per side for NTSC and Hi-Vision (64 minutes for PAL), or two hours per disc. For films with a run-time less than 120 minutes, this meant they could fit on one disc, lowering the cost of the title and eliminating the distracting exercise of "getting up to change the disc", at least for those who owned a dual-sided player. The majority of titles were only available in CLV (a few titles were released partly CLV, partly CAV. For example, a 140-minute movie could fit on two CLV sides and one CAV side, thus allowing for the CAV-only features during the climax of the film). CAA In the early 1980s, due to problems with crosstalk distortion on CLV extended play LaserDiscs, Pioneer Video introduced constant angular acceleration (CAA) formatting for extended play discs. CAA is very similar to CLV, save for the fact that CAA varies the angular rotation of the disc in controlled steps instead of gradually slowing down in a steady linear pace as a CLV disc is read. With the exception of 3M/Imation, all LaserDisc manufacturers adopted the CAA encoding scheme, although the term was rarely (if ever) used on any consumer packaging. CAA encoding noticeably improved picture quality and greatly reduced crosstalk and other tracking problems while being fully compatible with existing players. As Pioneer introduced digital audio to LaserDisc in 1985, it further refined the CAA format. 
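As a quick cross-check of the CAV playing times quoted above, the figures follow directly from the rotational speeds, since a CAV disc stores exactly one video frame per revolution. A minimal sketch:

```python
# Cross-check of the CAV capacity figures: one frame per revolution means
# playing time per side is simply frames / (rpm / 60), expressed in minutes.

def cav_minutes(frames_per_side: int, rpm: int) -> float:
    frames_per_second = rpm / 60          # one frame per revolution
    return frames_per_side / frames_per_second / 60

print(cav_minutes(54_000, 1800))   # NTSC / Hi-Vision: 30.0 minutes per side
print(cav_minutes(54_000, 1500))   # PAL: 36.0 minutes per side
```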
CAA55 was introduced in 1985 with a total playback capacity per side of 55 minutes 5 seconds, reducing the video capacity to resolve bandwidth issues with the inclusion of digital audio. Several titles released between 1985 and 1987 were analog audio only due to the length of the title and the desire to keep the film on one disc (e.g., Back to the Future). By 1987, Pioneer had overcome the technical challenges and was able to once again encode in CAA60, allowing a total of 60 minutes 5 seconds. Pioneer further refined CAA, offering CAA45, encoding 45 minutes of material, but filling the entire playback surface of the side. Used on only a handful of titles, CAA65 offered 65 minutes 5 seconds of playback time per side. There are a handful of titles pressed by Technidisc that used CAA50. The final variant of CAA is CAA70, which could accommodate 70 minutes of playback time per side. There are no known uses of this format on the consumer market. Audio Sound could be stored in either analog or digital format and in a variety of surround sound formats; NTSC discs could carry a stereo analog audio track, plus a stereo CD-quality uncompressed PCM digital audio track, which were (EFM, CIRC, 16-bit and 44.1 kHz sample rate). PAL discs could carry one pair of audio tracks, either analog or digital and the digital tracks on a PAL disc were 16-bit 44.1 kHz as on a CD; in the UK, the term "LaserVision" is used to refer to discs with analog sound, while "LaserDisc" is used for those with digital audio. The digital sound signal in both formats are EFM-encoded as in CD. Dolby Digital (also called AC-3) and DTS, which are now common on DVD releases, first became available on LaserDisc, and Star Wars: Episode I – The Phantom Menace (1999) which was released on LaserDisc in Japan, is among the first home video releases ever to include 6.1 channel Dolby Digital EX Surround; along with a few other late-life releases from 1999 to 2001. Unlike DVDs, which carry Dolby Digital audio in digital form, LaserDiscs store Dolby Digital in a frequency modulated form within a track normally used for analog audio. Extracting Dolby Digital from a LaserDisc required a player equipped with a special "AC-3 RF" output and an external demodulator in addition to an AC-3 decoder. The demodulator was necessary to convert the 2.88 MHz modulated AC-3 information on the disc into a 384 kbit/s signal that the decoder could handle. In the mid to late 1990s many higher-end AV receivers included the demodulator circuit specifically for the LaserDisc players RF modulated Dolby Digital AC-3 signal. By the late 1990s with LaserDisc players and disc sales declining due to DVD's growing popularity the AV receiver manufacturers removed the demodulator circuit. Although DVD players were capable of playing Dolby Digital tracks, the signals out of DVD players were not in a modulated form and not compatible with the inputs designed for LaserDisc AC-3. Outboard demodulators were available for a period that converted the AC-3 signal to standard Dolby Digital signal that was compatible with the standard Dolby Digital/PCM inputs on capable AV receivers. Another type marketed by Onkyo and Marantz converted the RF AC-3 signal to 6-channel analog audio. The two FM audio channels occupied the disc spectrum at 2.3 and 2.8 MHz on NTSC formatted discs and each channel had a 100 kHz FM deviation. 
The FM audio carrier frequencies were chosen to minimize their visibility in the video image, so that even with a poorly mastered disc, audio carrier beats in the video will be at least ‑35 dB down, and thus, invisible. Due to the frequencies chosen, the 2.8 MHz audio carrier (Right Channel) and the lower edge of the chroma signal are very close together and if filters are not carefully set during mastering, there can be interference between the two. In addition, high audio levels combined with high chroma levels can cause mutual interference, leading to beats becoming visible in highly saturated areas of the image. To help deal with this, Pioneer decided to implement the CX Noise Reduction System on the analog tracks. By reducing the dynamic range and peak levels of the audio signals stored on the disc, filtering requirements were relaxed and visible beats greatly reduced or eliminated. The CX system gives a total NR effect of 20 dB, but in the interest of better compatibility for non-decoded playback, Pioneer reduced this to only 14 dB of noise reduction (the RCA CED system used the "original" 20 dB CX system). This also relaxed calibration tolerances in players and helped reduce audible pumping if the CX decoder was not calibrated correctly. At least where the digital audio tracks were concerned, the sound quality was unsurpassed at the time compared to consumer videotape, but the quality of the analog soundtracks varied greatly depending on the disc and, sometimes, the player. Many early and lower-end LD players had poor analog audio components, and in turn many early discs had poorly mastered analog audio tracks, making digital soundtracks in any form desirable to serious enthusiasts. Early DiscoVision and LaserDisc titles lacked the digital audio option, but many of those movies received digital sound in later re-issues by Universal, and the quality of analog audio tracks generally got far better as time went on. Many discs that had originally carried old analog stereo tracks received new Dolby Stereo and Dolby Surround tracks instead, often in addition to digital tracks, helping boost sound quality. Later analog discs also applied CX noise reduction, which improved the signal-noise ratio of their audio. DTS audio, when available on a disc, replaced the digital audio tracks; hearing DTS sound required only an S/PDIF compliant digital connection to a DTS decoder. On a DTS disc, digital PCM audio was not available, so if a DTS decoder was also not available, the only option is to fall back to the analog Dolby Surround or stereo audio tracks. In some cases, the analog audio tracks were further made unavailable through replacement with supplementary audio such as isolated scores or audio commentary. This effectively reduced playback of a DTS disc on a non-DTS equipped system to mono audio, or in a handful of cases, no film soundtrack at all. Only one 5.1 surround sound option exists on a given LaserDisc (either Dolby Digital or DTS), so if surround sound is desired, the disc must be matched to the capabilities of the playback equipment (LD player and receiver/decoder) by the purchaser. A fully capable LaserDisc playback system includes a newer LaserDisc player that is capable of playing digital tracks, has a digital optical output for digital PCM and DTS audio, is aware of AC-3 audio tracks, and has an AC-3 coaxial output; an external or internal AC-3 RF demodulator and AC-3 decoder; and a DTS decoder. 
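The fallback behaviour described above can be summarized as a small decision procedure. The sketch below is purely illustrative: its capability flags, track descriptions and the best_audio_path helper are assumptions made up for this example, not a model of any particular player's firmware.

```python
# Toy decision helper for which audio path a given LaserDisc and playback
# chain can actually deliver, following the fallback rules described above.

from dataclasses import dataclass

@dataclass
class Disc:
    has_dts: bool = False          # DTS replaces the PCM digital tracks
    has_ac3_rf: bool = False       # AC-3 carried on one analog channel as RF
    has_pcm: bool = True
    analog_tracks: str = "stereo"  # "stereo", "commentary-only" or "none"

@dataclass
class System:
    dts_decoder: bool = False
    ac3_rf_demodulator: bool = False
    ac3_decoder: bool = False

def best_audio_path(disc: Disc, system: System) -> str:
    if disc.has_dts and system.dts_decoder:
        return "DTS 5.1 via S/PDIF"
    if disc.has_ac3_rf and system.ac3_rf_demodulator and system.ac3_decoder:
        return "Dolby Digital 5.1 via AC-3 RF demodulation"
    if disc.has_pcm and not disc.has_dts:
        return "stereo PCM (possibly Dolby Surround encoded)"
    if disc.analog_tracks == "stereo":
        return "analog stereo / Dolby Surround"
    if disc.analog_tracks == "commentary-only":
        return "mono analog (supplementary audio only)"
    return "no usable film soundtrack"

# Example: a DTS disc played without a DTS decoder falls back to whatever
# analog tracks remain, as described in the text above.
print(best_audio_path(Disc(has_dts=True, has_pcm=False), System()))
```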
Many 1990s A/V receivers combined the AC-3 decoder and DTS decoder logic, but an integrated AC-3 demodulator is rare both in LaserDisc players and in later A/V receivers. PAL LaserDiscs have a slightly longer playing time than NTSC discs, but have fewer audio options. PAL discs only have two audio tracks, consisting of either two analog-only tracks on older PAL LDs, or two digital-only tracks on newer discs. In comparison, later NTSC LDs are capable of carrying four tracks (two analog and two digital). On certain releases, one of the analog tracks is used to carry a modulated AC-3 signal for 5.1 channel audio (for decoding and playback by newer LD players with an "AC-3 RF" output). Older NTSC LDs made before 1984 (such as the original DiscoVision discs) only have two analog audio tracks. LaserDisc players The earliest players employed gas helium–neon laser tubes to read discs and had a red-orange light with a wavelength of 632.8 nm, while later solid-state players used infrared semiconductor laser diodes with a wavelength of 780 nm. In March 1984, Pioneer introduced the first consumer player with a solid-state laser, the LD-700. It was also the first LD player to load from the front and not the top. One year earlier, Hitachi introduced an expensive industrial player with a laser diode, but the player, which had poor picture quality due to an inadequate dropout compensator, was made only in limited quantities. After Pioneer released the LD-700, gas lasers were no longer used in consumer players, despite their advantages, although Philips continued to use gas lasers in their industrial units until 1985. Most LaserDisc players required the user to manually turn the disc over to play the other side. A number of players (all diode laser based) were made that were capable of playing both sides of the disc automatically, using a mechanism to physically flip a single laser pickup. Pioneer produced some multi-disc models that hold more than 50 LaserDiscs. One company offered, for a short time in 1984, a "LaserStack" unit that added multi-disc capability to existing players: the Pioneer LD-600, LD-1100 or the Sylvania/Magnavox clones. It required the user to physically remove the player lid for installation and attached to the top of the player. LaserStack held up to 10 discs and could automatically load or remove them from the player or change sides in around 15 seconds. The first mass-produced industrial LaserDisc player was the MCA DiscoVision PR-7820, later rebranded the Pioneer PR7820. In North America, this unit was used in many General Motors dealerships as a source of training videos and presentation of GM's new line of cars and trucks in the late 1970s and early 1980s. Most players made after the mid-1980s are capable of also playing Compact Discs. These players include a 12 cm (4.7 in) indentation in the loading tray, where the CD is placed for playback. At least two Pioneer models (the CLD-M301 and the CLD-M90) also operate as a CD changer, with several 4.7 in indentations around the circumference of the main tray. The Pioneer DVL-9, introduced in 1996, is both Pioneer's first consumer DVD player and the first combination DVD/LD player. The first high-definition video player is the Pioneer HLD-X0. A later model, the HLD-X9, features a superior comb filter, and laser diodes on both sides of the disc. Notable players Pioneer PR7820, first industrial LaserDisc player, capable of being controlled by an external computer, was used in the first US LaserDisc arcade game Dragon's Lair. 
Pioneer CLD-1010, first player capable of playing CD-Video discs. Released in 1987. Pioneer CLD-D703, a 1994 model with digital audio playback. Pioneer LaserActive players: The Pioneer CLD-A100 and NEC PCE-LD1 provided the ability to play Sega Genesis (Mega Drive) and TurboGrafx16 (PC Engine) video games when used in conjunction with additional components. Pioneer DVL series, capable of playing both LaserDiscs and DVDs Branding During its development, MCA, which co-owned the technology, referred to it as the Optical Videodisc System, "Reflective Optical Videodisc" or "Laser Optical Videodisc", depending on the document; changing the name once in 1969 to Disco-Vision and then again in 1978 to DiscoVision (without the hyphen), which became the official spelling. Technical documents and brochures produced by MCA Disco-Vision during the early and mid-'70s also used the term "Disco-Vision Records" to refer to the pressed discs. MCA owned the rights to the largest catalog of films in the world during this time, and they manufactured and distributed the DiscoVision releases of those films under the "MCA DiscoVision" software and manufacturing label; consumer sale of those titles began on December 11, 1978, with the aforementioned Jaws. Philips' preferred name for the format was "VLP", after the Dutch words Video Langspeel-Plaat ("Video long-play disc"), which in English-speaking countries stood for Video Long-Play. The first consumer player, the Magnavox VH-8000 even had the VLP logo on the player. For a while in the early and mid-1970s, Philips also discussed a compatible audio-only format they called "ALP", but that was soon dropped as the Compact Disc system became a non-compatible project in the Philips corporation. Until early 1980, the format had no "official" name. The LaserVision Association, made up of MCA, Universal-Pioneer, IBM, and Philips/Magnavox, was formed to standardize the technical specifications of the format (which had been causing problems for the consumer market) and finally named the system officially as "LaserVision". After its introduction in Japan in 1981, the format was introduced in Europe in 1983 with the LaserVision name although Philips used "VLP" in model designations, such as VLP-600. Following lacklustre sales there (around 12–15,000 units Europe-wide), Philips tried relaunching the entire format as "CD-Video" in 1987, with the name appearing not just on the new hybrid 12 cm discs, but also on standard 20 and 30 cm LaserDiscs with digital audio. While this name and logo appeared on players and labels for years, the 'official' name of the format remained LaserVision. In the early 1990s, the format's name was changed again to LaserDisc. Pioneer Pioneer Electronics also entered the optical disc market in 1977 as a 50/50 joint-venture with MCA called Universal-Pioneer and manufacturing MCA designed industrial players under the MCA DiscoVision name (the PR-7800 and PR-7820). For the 1980 launch of the first Universal-Pioneer player, the VP-1000 was noted as a "laser disc player", although the "LaserDisc" logo displayed clearly on the device. In 1981, "LaserDisc" was used exclusively for the medium itself, although the official name was "LaserVision" (as seen at the beginning of many LaserDisc releases just before the start of the film). Pioneer reminded numerous video magazines and stores in 1984 that LaserDisc was a trademarked word, standing only for LaserVision products manufactured for sale by Pioneer Video or Pioneer Electronics. 
A 1984 Ray Charles ad for the LD-700 player bore the term "Pioneer LaserDisc brand videodisc player". From 1981 until the early 1990s, all properly licensed discs carried the LaserVision name and logo, even Pioneer Artists titles. On single sided LaserDiscs mastered by Pioneer, playing the wrong side will cause a still screen to appear with a happy, upside down turtle that has a LaserDisc for a stomach (nicknamed the "LaserDisc Turtle"). The words "Program material is recorded on the other side of this disc" are below the turtle. Other manufacturers used a regular text message without graphics. MCA During the early years, MCA also manufactured discs for other companies including Paramount, Disney and Warner Bros. Some of them added their own names to the disc jacket to signify that the movie was not owned by MCA. After Discovision Associates shut down in early 1982, Universal Studio's videodisc software label, called MCA Videodisc until 1984, began reissuing many DiscoVision titles. Unfortunately, quite a few, such as Battlestar Galactica and Jaws, were time-compressed versions of their CAV or CLV Disco Vision originals. The time-compressed CLV re-issue of Jaws no longer had the original soundtrack, having had incidental background music replaced for the videodisc version due to high licensing costs (the original music would not be available until the THX LaserDisc box set was released in 1995). One Universal/Columbia co-production issued by MCA Disco Vision in both CAV and CLV versions, The Electric Horseman, is still not available in any other home video format with its original score intact; even the most recent DVD release has had substantial music replacements of both instrumental score and Willie Nelson's songs. An MCA release of Universal's Howard the Duck sees only the start credits shown in widescreen before changing to 4:3 for the rest of the film. For many years this was the only disc-based release of the film, until widescreen DVD formats were released with extras. Also, the 1989 and 1996 LaserDisc releases of E.T. the Extra-Terrestrial are the only formats to include the cut scene of Harrison Ford, in the role of the school principal, telling off Elliott for letting the frogs free in the biology class. Comparison with other formats VHS LaserDisc had several advantages over VHS. It featured a far sharper picture with a horizontal resolution of 425 TVL lines for NTSC and 440 TVL lines for PAL discs, while VHS featured only 240 TVL lines with NTSC. Super VHS, released in 1987, reduced the quality gap having horizontal luma resolution comparable to LaserDisc, but horizontal chroma resolution of Super VHS remained as low as of standard VHS, about 40 TVL, while LaserDisc offered about 70 TVL of chroma resolution. LaserDisc could handle analog and digital audio where VHS was mostly analog only (VHS can have PCM audio in professional applications but is uncommon), and the NTSC discs could store multiple audio tracks. This allowed for extras such as director's commentary tracks and other features to be added onto a film, creating "Special Edition" releases that would not have been possible with VHS. Disc access was random and chapter based, like the DVD format, meaning that one could jump to any point on a given disc very quickly. By comparison, VHS would require tedious rewinding and fast-forwarding to get to specific points. 
LaserDiscs were initially cheaper than videocassettes to manufacture, because they lacked the moving parts and plastic outer shell that are necessary for VHS tapes to work, and the duplication process was much simpler. A VHS cassette has at least 14 parts including the actual tape while LaserDisc has one part with five or six layers. A disc can be stamped out in a matter of seconds whereas duplicating videotape required a complex bulk tape duplication mechanism and was a time-consuming process. By the end of the 1980s, average disc-pressing prices were over $5.00 per two-sided disc, due to the large amount of plastic material and the costly glass-mastering process needed to make the metal stamper mechanisms. Due to the larger volume of demand, videocassettes quickly became much cheaper to duplicate, costing as little as $1.00 by the beginning of the 1990s. LaserDiscs potentially had a much longer lifespan than videocassettes. Because the discs were read optically instead of magnetically, no physical contact needs to be made between the player and the disc, except for the player's clamp that holds the disc at its center as it is spun and read. As a result, playback would not wear the information-bearing part of the discs, and properly manufactured LDs would theoretically last beyond a lifetime. By contrast, a VHS tape held all of its picture and sound information on the tape in a magnetic coating which is in contact with the spinning heads on the head drum, causing progressive wear with each use (though later in VHS's lifespan, engineering improvements allowed tapes to be made and played back without contact). The tape was also thin and delicate, and it was easy for a player mechanism, especially on a low quality or malfunctioning model, to mishandle the tape and damage it by creasing it, frilling (stretching) its edges, or even breaking it. DVD By the advent of DVD, LaserDisc had declined considerably in popularity, so the two formats never directly competed with each other. LaserDisc is a composite video format: the luminance (black and white) and chrominance (color) information were transmitted in one signal, separated by the receiver. While good comb filters can do so adequately, these two signals cannot be completely separated. On DVDs, data is stored in the form of digital blocks which make up each independent frame. The signal produced is dependent on the equipment used to master the disc. Signals range from composite and split, to YUV and RGB. Depending upon which format is used, this can result in far higher fidelity, particularly at strong color borders or regions of high detail (especially if there is moderate movement in the picture) and low-contrast details such as skin tones, where comb filters almost inevitably smudge some detail. In contrast to the entirely digital DVD, LaserDiscs use only analog video. As the LaserDisc format is not digitally encoded and does not make use of compression techniques, it is immune to video macroblocking (most visible as blockiness during high motion sequences) or contrast banding (subtle visible lines in gradient areas, such as out-of-focus backgrounds, skies, or light casts from spotlights) that can be caused by the MPEG-2 encoding process as video is prepared for DVD. Early DVD releases held the potential to surpass their LaserDisc counterparts, but often managed only to match them for image quality, and in some cases, the LaserDisc version was preferred. 
Proprietary human-assisted encoders manually operated by specialists can vastly reduce the incidence of artifacts, depending on playing time and image complexity. By the end of LaserDisc's run, DVDs were living up to their potential as a superior format. DVDs use compressed audio formats such as Dolby Digital and DTS for multichannel sound. Most LaserDiscs were encoded with stereo (often Dolby Surround) CD quality audio 16bit/44.1 kHz tracks as well as analog audio tracks. DTS-encoded LaserDiscs have DTS soundtracks of 1,235 kbit/s instead of the reduced bitrate of 768 kbit/s commonly employed on DVDs with optional DTS audio. Advantages LaserDisc players can provide a great degree of control over the playback process. Unlike many DVD players, the transport mechanism always obeys commands from the user: pause, fast-forward, and fast-reverse commands are always accepted (barring, of course, malfunctions). There were no "User Prohibited Options" where content protection code instructs the player to refuse commands to skip a specific part (such as fast forwarding through copyright warnings). (Some DVD players, particularly higher-end units, do have the ability to ignore the blocking code and play the video without restrictions, but this feature is not common in the usual consumer market.) With CAV LaserDiscs, the user can jump directly to any individual frame of a video simply by entering the frame number on the remote keypad, a feature not common among DVD players. Some DVD players have cache features which stores a certain amount of the video in RAM which allows the player to index a DVD as quickly as an LD, even down to the frame in some players. Damaged spots on a LaserDisc can be played through or skipped over, while a DVD will often become unplayable past the damage. Some newer DVD players feature a repair+skip algorithm, which alleviates this problem by continuing to play the disc, filling in unreadable areas of the picture with blank space or a frozen frame of the last readable image and sound. The success of this feature depends upon the amount of damage. LaserDisc players, when working in full analog, recover from such errors faster than DVD players. Similar to the CD versus LP sound quality debates common in the audiophile community, some videophiles argue that LaserDisc maintains a "smoother", more "film-like", natural image while DVD still looks slightly more artificial. Early DVD demo discs often had compression or encoding problems, lending additional support to such claims at the time. The video signal-to-noise ratio and bandwidth of LaserDisc are substantially less than that of DVDs, making DVDs appear sharper and clearer to most viewers. Another advantage, at least to some consumers, was the lack of any sort of anti-piracy technology. It was claimed that Macrovision's Copyguard protection could not be applied to LaserDisc, due to the format's design. The vertical blanking interval, where the Macrovision signal would be implemented, was also used for timecode and/or frame coding as well as player control codes on LaserDisc players, so test discs with Macrovision would not play at all. There was never a push to redesign the format despite the obvious potential for piracy due to its relatively small market share. The industry simply decided to engineer it into the DVD specification. 
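As a small illustration of the CAV frame-addressing convenience described earlier in this section, the helper below converts between an NTSC frame number (1 to 54,000 per side) and an approximate elapsed time, assuming a nominal 30 frames per second (the true NTSC rate is 29.97 fps, which this sketch ignores).

```python
# Frame-number <-> elapsed-time helpers for a CAV NTSC side, assuming a
# nominal 30 fps (one frame per revolution at 1800 rpm).

NTSC_FPS = 30

def frame_to_time(frame: int) -> str:
    seconds, frames = divmod(frame - 1, NTSC_FPS)
    minutes, seconds = divmod(seconds, 60)
    return f"{minutes:02d}:{seconds:02d} + {frames} frames"

def time_to_frame(minutes: int, seconds: int) -> int:
    return (minutes * 60 + seconds) * NTSC_FPS + 1

print(frame_to_time(54_000))   # last frame of a full CAV side (about 30 minutes)
print(time_to_frame(15, 0))    # frame number at the 15-minute mark
```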
LaserDisc's support for multiple audio tracks allowed for vast supplemental materials to be included on-disc and made it the first available format for "Special Edition" releases; the 1984 Criterion Collection edition of Citizen Kane is generally credited as being the first "Special Edition" release to home video (King Kong being the first release to have an audio commentary track included), and for setting the standard by which future SE discs were measured. The disc provided interviews, commentary tracks, documentaries, still photographs, and other features for historians and collectors. Disadvantages Despite the advantages over competing technology at the time (namely VHS and Betamax), the discs are heavy and cumbersome, were more prone than a VHS tape to damage if mishandled, and manufacturers did not market LD units with recording capabilities to consumers. Also, because of their size, greater mechanical effort was required to spin the discs at the proper speed, resulting in much more noise than with other media. The space-consuming analog video signal of a LaserDisc limited playback duration to 30/36 minutes (CAV NTSC/PAL) or 60/64 minutes (CLV NTSC/PAL) per side, because of the hardware manufacturers' refusal to reduce line count and bandwidth for increased playtime (as is done in VHS; VHS tapes have a 3 MHz video bandwidth, while LaserDisc preserves the full 6 MHz bandwidth and resolution used in NTSC broadcasts). After one side is finished playing, the disc has to be flipped over to continue watching a movie, and some titles fill two or more discs, depending on the film's runtime and whether or not special features are included. Many players, especially units built after the mid-1980s, can "flip" discs automatically by rotating the optical pickup to the other side of the disc, but this is accompanied by a pause in the movie during the side change. In the event the movie is longer than what could be stored on two sides of a single disc, manually swapping to a second disc is required at some point during the film (one exception to this rule is the Pioneer LD-W1, which features the ability to load two discs and to play each side of one disc and then to switch to playing each side of the other disc). In addition, perfect still frames and random access to individual still frames are limited to the more expensive CAV discs, which only had a playing time of approximately 30 minutes per side. In later years, Pioneer and other manufacturers overcame this limitation by incorporating a digital memory buffer, which "grabbed" a single field or frame from a CLV disc. The analog information encoded on LaserDiscs also does not include any form of built-in checksum or error correction. Because of this, slight dust and scratches on the disc surface can result in read-errors which cause various video quality problems: glitches, streaks, bursts of static, or momentary picture interruptions. In contrast, the digital MPEG-2 format information used on DVDs has built-in error correction which ensures that the signal from a damaged disc will remain identical to that from a perfect disc right up until the damage to the disc surface prevents the laser from being able to identify usable data. In addition, LaserDisc videos sometimes exhibit a problem known as "crosstalk". 
The issue can arise when the laser optical pickup assembly within the player is out of alignment or because the disc is damaged and/or excessively warped, but it could also occur even with a properly functioning player and a factory-new disc, depending on electrical and mechanical alignment problems. In these instances, the issue arose due to the fact that CLV discs require subtle changes in rotating speed at various points during playback. During a change in speed, the optical pickup inside the player might read video information from a track adjacent to the intended one, causing data from the two tracks to "cross"; the extra video information picked up from that second track shows up as distortion in the picture which looks reminiscent of swirling "barber poles" or rolling lines of static. Assuming the player's optical pickup is in proper working order, crosstalk distortion normally does not occur during playback of CAV format LaserDiscs, as the rotational speed never varies. If the player calibration is out of order or if the CAV disc is faulty or damaged, other problems affecting tracking accuracy can occur. One such problem is "laser lock", where the player reads the same two fields for a given frame over and over, causing the picture to look frozen as if the movie were paused. Another significant issue unique to LaserDisc is one involving the inconsistency of playback quality between different makers and models of player. On the majority of televisions, a given DVD player will produce a picture that is visually indistinguishable from other units; differences in image quality between players only becomes easily apparent on larger televisions, and substantial leaps in image quality are generally only obtained with expensive, high-end players that allow for post-processing of the MPEG-2 stream during playback. In contrast, LaserDisc playback quality is highly dependent on hardware quality, and major variances in picture quality appear between different makers and models of LD players, even when tested on a low to mid-range television. The obvious benefits of using high quality equipment has helped keep demand for some players high, thus also keeping pricing for those units comparably high: in the 1990s, notable players sold for anywhere from US$200 to well over $1,000, while older and less desirable players could be purchased in working condition for as little as $25. Laser rot Many early LDs were not manufactured properly; the adhesive used contained impurities that were able to penetrate the lacquer seal layer and chemically attack the metallized reflective aluminum layer, altering its reflective characteristics which, in turn, deteriorated the recorded signal. This was a problem that was termed "laser rot" among LD enthusiasts (also called "color flash" internally by LaserDisc pressing plants). Some forms of laser rot could appear as black spots that looked like mold or burned plastic which cause the disc to skip and the movie to exhibit excessive speckling noise. But, for the most part, rotted discs could actually appear perfectly fine to the naked eye. Later optical standards have also been known to suffer similar problems, including a notorious batch of defective CDs manufactured by Philips-DuPont Optical at their Blackburn, Lancashire facility in England during the late 1980s/early 1990s. 
Impact and decline LaserDisc did not have high market penetration in North America due to the high cost of the players and discs, which were far more expensive than VHS players and tapes, and due to marketplace confusion with the technologically inferior CED, which also went by the name Videodisc. While the format was not widely adopted by North American consumers, it was well received among videophiles due to the superior audio and video quality compared to VHS and Betamax tapes, finding a place in nearly one million American homes by the end of 1990. The format was more popular in Japan than in North America because prices were kept low to ensure adoption, resulting in minimal price differences between VHS tapes and the higher quality LaserDiscs, helping ensure that it quickly became the dominant consumer video format in Japan. Anime collectors in every country in which the LD format was released, which included both North America and Japan, also quickly became familiar with this format, and sought the higher video and sound quality of LaserDisc and the availability of numerous titles not available on VHS (encouraged by Pioneer's in-house production of anime which made titles specifically with the format in mind). LaserDiscs were also popular alternatives to videocassettes among movie enthusiasts in the more affluent regions of South East Asia, such as Singapore, due to their high integration with the Japanese export market and the disc-based media's superior longevity compared to videocassette, especially in the humid conditions endemic to that area of the world. The format also became quite popular in Hong Kong during the 1990s before the introduction of VCDs and DVD; although people rarely bought the discs (because each LD was priced around US$100), high rental activity helped the video rental business in the city grow larger than it had ever been previously. Due to integration with the Japanese export market, NTSC LaserDiscs were used in the Hong Kong market, in contrast to the PAL standard used for broadcast (this anomaly also exists for DVD). This created a market for multi-system TVs and multi-system VCRs which could display or play both PAL and NTSC materials in addition to SECAM materials (which were never popular in Hong Kong). Some LD players could convert NTSC signals to PAL so that most TVs used in Hong Kong could display the LD materials. Despite the relative popularity, manufacturers refused to market recordable LaserDisc devices on the consumer market, even though the competing VCR devices could record onto cassette, which hurt sales worldwide. The inconvenient disc size, the high cost of both the players and the media and the inability to record onto the discs combined to take a serious toll on sales, and contributed to the format's poor adoption figures. Although the LaserDisc format was supplanted by DVD by the late 1990s, many LD titles are still highly coveted by movie enthusiasts (for example, Disney's Song of the South which is unavailable in the US in any format, but was issued in Japan on LD). This is largely because there are many films that are still only available on LD and many other LD releases contain supplementary material not available on subsequent DVD versions of those films. Until the end of 2001, many titles were released on VHS, LD and DVD in Japan. Further developments and applications Computer control In the early 1980s, Philips produced a LaserDisc player model adapted for a computer interface, dubbed "professional". 
In 1985, Jasmine Multimedia created LaserDisc jukeboxes featuring music videos from Michael Jackson, Duran Duran, and Cyndi Lauper. When connected to a PC, such a player–computer combination could be used to display images or information for educational or archival purposes, for example thousands of scanned medieval manuscripts. Such a setup could be considered a very early equivalent of a CD-ROM. In the mid-1980s, Lucasfilm pioneered the EditDroid non-linear editing system for film and television based on computer-controlled LaserDisc players. Instead of printing dailies out on film, processed negatives from the day's shoot would be sent to a mastering plant to be assembled from their 10-minute camera elements into 20-minute film segments. These were then mastered onto single-sided blank LaserDiscs, just as a DVD would be burnt at home today, allowing for much easier selection and preparation of an edit decision list (EDL). In the days before video assist was available in cinematography, this was the only other way a film crew could see their work. The EDL went to the negative cutter, who then cut the camera negative accordingly and assembled the finished film. Only 24 EditDroid systems were ever built, even though the ideas and technology are still in use today. Later EditDroid experiments borrowed the hard-drive concept of having multiple discs on the same spindle, and added numerous playback heads and electronics to the basic jukebox design so that any point on each of the discs would be accessible within seconds. This eliminated the need for racks and racks of industrial LaserDisc players, since EditDroid discs were only single-sided. In 1986, a SCSI-equipped LaserDisc player attached to a BBC Master computer was used for the BBC Domesday Project. The player was referred to as an LV-ROM (LaserVision Read Only Memory), as the discs contained the driving software as well as the video frames. The discs used the CAV format and encoded data as a binary signal represented by the analog audio recording. Each CAV frame on these discs could contain either video/audio or video/binary data, but not both. "Data" frames would appear blank when played as video. It was typical for each disc to start with the disc catalog (a few blank frames), then the video introduction, before the rest of the data. Because the format (based on the ADFS hard disc format) used a starting sector for each file, the data layout effectively skipped over any video frames. If all 54,000 frames are used for data storage, an LV-ROM disc can contain 324 MB of data per side (a back-of-the-envelope check of these figures appears below). The Domesday Project systems also included a genlock, allowing video frames, clips and audio to be mixed with graphics generated by the BBC Master; this was used to great effect for displaying high-resolution photographs and maps, which could then be zoomed into. During the 1980s in the United States, Digital Equipment Corporation developed the standalone, PC-controlled IVIS (Interactive VideoDisc Information System) for training and education. One of the most influential programs developed at DEC was Decision Point, a management gaming simulation, which won the Nebraska Video Disc Award for Best of Show in 1985. Apple's HyperCard scripting language provided Macintosh computer users with a means to design databases of slides, animation, video and sounds from LaserDiscs and then to create interfaces for users to play specific content from the disc through software called LaserStacks.
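As a back-of-the-envelope check of the LV-ROM figures quoted above (54,000 CAV frames and 324 MB per side) and of the CAV playing times given later in this article, the arithmetic can be written out as a short script. The 30 and 25 frames-per-second figures are the nominal NTSC and PAL frame rates, and the megabyte is taken here to mean 10^6 bytes; both are assumptions about how the published figures were derived.

```python
# Back-of-the-envelope arithmetic for the LV-ROM and CAV figures quoted in this article.
FRAMES_PER_SIDE = 54_000        # CAV frames per LaserDisc side (one frame per rotation)
LV_ROM_BYTES_PER_SIDE = 324e6   # 324 MB, assuming 1 MB = 10**6 bytes

print(f"Data per CAV frame: {LV_ROM_BYTES_PER_SIDE / FRAMES_PER_SIDE:.0f} bytes")  # 6000 bytes

for system, fps in (("NTSC", 30), ("PAL", 25)):
    minutes = FRAMES_PER_SIDE / fps / 60
    print(f"{system} CAV playing time per side: {minutes:.0f} minutes")  # 30 min NTSC, 36 min PAL
```

The result matches the 30/36-minute CAV playing times given in the LaserDisc sizes section below.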
User-created "stacks" were shared and were especially popular in education where teacher-generated stacks were used to access discs ranging from art collections to basic biological processes. Commercially available stacks were also popular with the Voyager company being possibly the most successful distributor. Commodore International's 1992 multimedia presentation system for the Amiga, AmigaVision, included device drivers for controlling a number of LaserDisc players through a serial port. Coupled with the Amiga's ability to use a Genlock, this allowed for the LaserDisc video to be overlaid with computer graphics and integrated into presentations and multimedia displays, years before such practice was commonplace. Pioneer also made computer-controlled units such as the LD-V2000. It had a back-panel RS-232 serial connection through a five-pin DIN connector, and no front-panel controls except Open/Close. (The disc would be played automatically upon insertion.) Under contract from the U.S. military, Matrox produced a combination computer/LaserDisc player for instructional purposes. The computer was a 286, the LaserDisc player only capable of reading the analog audio tracks. Together they weighed and sturdy handles were provided in case two people were required to lift the unit. The computer controlled the player via a 25-pin serial port at the back of the player and a ribbon cable connected to a proprietary port on the motherboard. Many of these were sold as surplus by the military during the 1990s, often without the controller software. Nevertheless, it is possible to control the unit by removing the ribbon cable and connecting a serial cable directly from the computer's serial port to the port on the LaserDisc player. Video games The format's instant-access capability made it possible for a new breed of LaserDisc-based video arcade games and several companies saw potential in using LaserDiscs for video games in the 1980s and 1990s, beginning in 1983 with Sega's Astron Belt. Cinematronics and American Laser Games produced elaborate arcade games that used the random-access features to create interactive movies such as Dragon's Lair and Space Ace. Similarly, the Pioneer Laseractive and Halcyon were introduced as home video game consoles that used LaserDisc media for their software. MUSE LD In 1991, several manufacturers announced specifications for what would become known as MUSE LaserDisc, representing a span of almost 15 years until the feats of this HD analog optical disc system would finally be duplicated digitally by HD DVD and Blu-ray Disc. Encoded using NHK's MUSE "Hi-Vision" analogue HDTV system, MUSE discs would operate like standard LaserDiscs but would contain high-definition 1,125-line (1,035 visible lines; Sony HDVS) video with a 5:3 aspect ratio. The MUSE players were also capable of playing standard NTSC format discs and are superior in performance to non-MUSE players even with these NTSC discs. The MUSE-capable players had several noteworthy advantages over standard LaserDisc players, including a red laser with a much narrower wavelength than the lasers found in standard players. The red laser was capable of reading through disc defects such as scratches and even mild disc rot that would cause most other players to stop, stutter or drop-out. Crosstalk was not an issue with MUSE discs, and the narrow wavelength of the laser allowed for the virtual elimination of crosstalk with normal discs. 
To view MUSE encoded discs, it was necessary to have a MUSE decoder in addition to a compatible player. There were televisions with MUSE decoding built in, and set-top tuners with decoders that could provide the proper MUSE input. Equipment prices were high, especially for early HDTVs, which generally exceeded US$10,000, and even in Japan the market for MUSE was tiny. Players and discs were never officially sold in North America, although several distributors imported MUSE discs along with other import titles. Terminator 2: Judgment Day, Lawrence of Arabia, A League of Their Own, Bugsy, Close Encounters of the Third Kind, Bram Stoker's Dracula and Chaplin were among the theatrical releases available on MUSE LDs. Several documentaries, including one about Formula One at Japan's Suzuka Circuit, were also released. LaserDisc players and LaserDiscs that worked with the competing European HD-MAC HDTV standard were also made. Picture discs Picture discs have artistic etching on one side of the disc to make the disc more visually attractive than the standard shiny silver surface. This etching might look like a movie character, logo, or other promotional material. Sometimes that side of the LD would be made with colored plastic, rather than the clear material used for the data side. Picture disc LDs only had video material on one side, as the "picture" side could not contain any data. Picture discs are rare in North America. LD-G Pioneer Electronics—one of the format's largest supporters/investors—was also deeply involved in the karaoke business in Japan, and used LaserDiscs as the storage medium for music and additional content such as graphics. This format was generally called LD-G. While several other karaoke labels manufactured LaserDiscs, there was nothing like the breadth of competition in that industry that exists now, as almost all manufacturers have transitioned to CD+G discs. Anamorphic LaserDiscs With the release of 16:9 televisions in the early 1990s, Pioneer and Toshiba decided that it was time to take advantage of this aspect ratio. Squeeze LDs were enhanced 16:9-ratio widescreen LaserDiscs. During the video transfer stage, the movie was stored in an anamorphic "squeezed" format. The widescreen movie image was stretched to fill the entire video frame, with little or none of the video resolution wasted on letterbox bars. The advantage was a 33% greater vertical resolution compared to letterboxed widescreen LaserDisc. The same procedure was used for anamorphic DVDs, but unlike DVD players, very few LD players had the ability to unsqueeze the image for 4:3 sets; if the discs were played on a standard 4:3 television, the image would be distorted. Some 4:3 sets (such as the Sony WEGA series) could be set to unsqueeze the image. Since very few people outside of Japan owned 16:9 displays, the marketability of these special discs was very limited. There were no anamorphic LaserDisc titles available in the US except for promotional purposes. Upon purchase of a Toshiba 16:9 television, viewers had the option of selecting a number of Warner Bros. 16:9 films. Titles included Unforgiven, Grumpy Old Men, The Fugitive, and Free Willy. The Japanese lineup of titles was different. A series of releases under the banner "Squeeze LD" from Pioneer, of mostly Carolco titles, included Basic Instinct, Stargate, Terminator 2: Judgment Day, Showgirls, Cutthroat Island, and Cliffhanger. Terminator 2 was released twice in Squeeze LD, the second release being THX-certified and a notable improvement over the first.
Recordable formats Another type of video medium, the CRVdisc, or "Component Recordable Video Disc", was available for a short time, mostly to professionals. Developed by Sony, CRVdiscs resemble early PC CD-ROM caddies with a disc inside resembling a full-sized LD. CRVdiscs were blank, write-once, read-many media that could be recorded once on each side. CRVdiscs were used largely for backup storage in professional and commercial applications. Another form of recordable LaserDisc that is completely playback-compatible with the LaserDisc format (unlike CRVdisc with its caddy enclosure) is the RLV, or Recordable Laser Videodisc. It was developed and first marketed by the Optical Disc Corporation (ODC, now ODC Nimbus) in 1984. RLV discs, like CRVdiscs, are also a WORM technology, and function much like a CD-R disc. RLV discs look almost exactly like standard LaserDiscs and can play in any standard LaserDisc player after they have been recorded. The only cosmetic difference between an RLV disc and a regular factory-pressed LaserDisc is its reflective red color (showing up in photos as purple-violet or blue with some RLV discs), resulting from the dye embedded in the reflective layer of the disc to make it recordable, as opposed to the silver mirror appearance of regular LDs. The reddish color of RLVs is very similar to that of DVD-R and DVD+R discs. RLVs were popular for making short-run quantities of LaserDiscs for specialized applications such as interactive kiosks and flight simulators. Another, single-sided form of RLV exists, with the silver side covered in small bumps. Blank RLV discs show a standard test card when played in a LaserDisc player. Pioneer also produced a rewritable LaserDisc system, the VDR-V1000 "LaserRecorder", for which the discs had a claimed erase/record potential of 1,000,000 cycles. These recordable LD systems were never marketed toward the general public, and are so little known as to create the misconception that home recording for LaserDiscs was impossible, and thus a perceived "weakness" of the LaserDisc format. LaserDisc sizes 30 cm (Full-size) The most common size of LaserDisc was 30 cm, approximately the size of LP vinyl records. These discs allowed for 30/36 minutes per side (CAV NTSC/PAL) or 60/64 minutes per side (CLV NTSC/PAL). The vast majority of programming for the LaserDisc format was produced on these discs. 20 cm ("EP"-size) A number of 20 cm LaserDiscs were also published. These smaller "EP"-sized LDs allowed for 20 minutes per side (CLV). They are much rarer than the full-size LDs, especially in North America, and roughly approximate the size of 45 rpm vinyl singles. These discs were often used for music video compilations (e.g. Bon Jovi's "Breakout", Bananarama's "Video Singles" or T'Pau's "View from a Bridge"), as well as Japanese karaoke machines. 12 cm (CD Video and Video Single Disc) There were also 12 cm (CD size) "single"-style discs produced that were playable on LaserDisc players. These were referred to as CD Video (CD-V) discs and Video Single Discs (VSD). CD-V was a hybrid format launched in the late 1980s, and carried up to five minutes of analog LaserDisc-type video content with a digital soundtrack (usually a music video), plus up to 20 minutes of digital audio CD tracks. The original 1989 release of David Bowie's retrospective Sound + Vision CD box set prominently featured a CD-V video of "Ashes to Ashes", and standalone promo CD-Vs featured the video, plus three audio tracks: "John, I'm Only Dancing", "Changes", and "The Supermen".
Despite the similar name, CD Video is entirely incompatible with the later all-digital Video CD (VCD) format, and can only be played back on LaserDisc players with CD-V capability or one of the players dedicated to the smaller discs. CD-Vs were somewhat popular for a brief time worldwide but soon faded from view. (In Europe, Philips also used the "CD Video" name as part of a short-lived attempt in the late 1980s to relaunch and rebrand the entire LaserDisc system. Some 20 and 30 cm discs were also branded "CD Video", but unlike the 12 cm discs, these were essentially just standard LaserDiscs with digital soundtracks and no audio-only CD content). The VSD format was announced in 1990, and was essentially the same as the 12 cm CD-V, but without the audio CD tracks, and intended to sell at a lower price. VSDs were popular only in Japan and other parts of Asia and were never fully introduced to the rest of the world. See also Blu-ray Ultra HD Blu-ray CED SelectaVision Holographic Versatile Disc VHD Videodisc Laser turntable Footnotes References Further reading Isailovic, Jordan. Videodisc and Optical Memory Systems. Vol. 1, Boston: Prentice Hall, 1984. Lenk, John D. Complete Guide to Laser/VideoDisc Player Troubleshooting and Repair. Englewood Cliffs, N.J.: Prentice-Hall, 1985. External links CED Magic LaserDisc Database LaserDisc Player Archive LaserDisc Technical Page TotalRewind.org vintage format site BLAM Entertainment Group comprehensive LaserDisc site Laserdisc Player Formats and Features eBay guide (archived) Digital Audio Modulation in the PAL and NTSC Laservision Video Disc Coding Formats Audiovisual introductions in 1978 Products introduced in 1978 Products and services discontinued in 2001 Digital media Discontinued media formats Film and video technology Pioneer Corporation products Philips products Video game distribution Video storage Home video
46145
https://en.wikipedia.org/wiki/Oracle%20Solaris
Oracle Solaris
Solaris is a proprietary Unix operating system originally developed by Sun Microsystems. After the Sun acquisition by Oracle in 2010, it was renamed Oracle Solaris. Solaris superseded the company's earlier SunOS in 1993, and became known for its scalability, especially on SPARC systems, and for originating many innovative features such as DTrace, ZFS and Time Slider. Solaris supports SPARC and x86-64 workstations and servers from Oracle and other vendors. Solaris was registered as compliant with UNIX 03 until 29 April 2019. Historically, Solaris was developed as proprietary software. In June 2005, Sun Microsystems released most of the codebase under the CDDL license, and founded the OpenSolaris open-source project. With OpenSolaris, Sun wanted to build a developer and user community around the software. After the acquisition of Sun Microsystems in January 2010, Oracle decided to discontinue the OpenSolaris distribution and the development model. In August 2010, Oracle discontinued providing public updates to the source code of the Solaris kernel, effectively turning Solaris 11 back into a closed source proprietary operating system. Following that, OpenSolaris was forked as illumos and is alive through several illumos distributions. In 2011, the Solaris 11 kernel source code leaked to BitTorrent. Through the Oracle Technology Network (OTN), industry partners can gain access to the in-development Solaris source code. Solaris is developed under a proprietary development model, and only the source for open-source components of Solaris 11 is available for download from Oracle. History In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix. This became Unix System V Release 4 (SVR4). On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix, SunOS 4, with one based on SVR4. This was identified internally as SunOS 5, but a new marketing name was introduced at the same time: Solaris 2. The justification for this new overbrand was that it encompassed not only SunOS, but also the OpenWindows graphical user interface and Open Network Computing (ONC) functionality. Although SunOS 4.1.x micro releases were retroactively named Solaris 1 by Sun, the Solaris name is used almost exclusively to refer only to the releases based on SVR4-derived SunOS 5.0 and later. For releases based on SunOS 5, the SunOS minor version is included in the Solaris release number. For example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the 2. was dropped from the release name, so Solaris 7 incorporates SunOS 5.7, and the latest release SunOS 5.11 forms the core of Solaris 11.4. Although SunSoft stated in its initial Solaris 2 press release their intent to eventually support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about 6 months after the SPARC version, as a desktop and uniprocessor workgroup server operating system. It included the Wabi emulator to support Windows applications. At the time, Sun also offered the Interactive Unix system that it had acquired from Interactive Systems Corporation. In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base. 
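The SunOS/Solaris numbering convention described above (SunOS 5.x marketed as Solaris 2.x, with the "2." dropped from Solaris 7 onward) can be illustrated with a short helper function. This is only an illustrative sketch based on the releases named in this article; it assumes the input is the kernel release string reported by the standard uname -r command (for example, "5.11" on Solaris 11), and it cannot distinguish update releases such as 11.4, which all report SunOS 5.11.

```python
# Illustrative mapping from a SunOS kernel release string (as reported by
# `uname -r`) to the corresponding Solaris marketing name, following the
# numbering convention described above. Not an official or exhaustive table.
def solaris_name(sunos_release: str) -> str:
    major, minor = sunos_release.split(".")[:2]
    if major != "5":
        # SunOS 4.1.x was only retroactively branded "Solaris 1".
        return f"SunOS {sunos_release}"
    minor_num = int(minor)
    if minor_num <= 6:
        return f"Solaris 2.{minor_num}"   # e.g. SunOS 5.4 -> Solaris 2.4
    return f"Solaris {minor_num}"         # e.g. SunOS 5.7 -> Solaris 7, 5.11 -> Solaris 11

assert solaris_name("5.4") == "Solaris 2.4"
assert solaris_name("5.7") == "Solaris 7"
assert solaris_name("5.11") == "Solaris 11"
```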
On September 2, 2017, Simon Phipps, a former Sun Microsystems employee not hired by Oracle in the acquisition, reported on Twitter that Oracle had laid off the Solaris core development staff, which many interpreted as sign that Oracle no longer intended to support future development of the platform. While Oracle did have a large layoff of Solaris development engineering staff, development continued and Solaris 11.4 was released in 2018. Supported architectures Solaris uses a common code base for the platforms it supports: SPARC and i86pc (which includes both x86 and x86-64). Solaris has a reputation for being well-suited to symmetric multiprocessing, supporting a large number of CPUs. It has historically been tightly integrated with Sun's SPARC hardware (including support for 64-bit SPARC applications since Solaris 7), with which it is marketed as a combined package. This has led to more reliable systems, but at a cost premium compared to commodity PC hardware. However, it has supported x86 systems since Solaris 2.1 and 64-bit x86 applications since Solaris 10, allowing Sun to capitalize on the availability of commodity 64-bit CPUs based on the x86-64 architecture. Sun has heavily marketed Solaris for use with both its own "x64" workstations and servers based on AMD Opteron and Intel Xeon processors, as well as x86 systems manufactured by companies such as Dell, Hewlett-Packard, and IBM. As of 2009, the following vendors support Solaris for their x86 server systems: Dell – will "test, certify, and optimize Solaris and OpenSolaris on its rack and blade servers and offer them as one of several choices in the overall Dell software menu" Intel Hewlett Packard Enterprise – distributes and provides software technical support for Solaris on BL, DL, and SL platforms Fujitsu Siemens Other platforms Solaris 2.5.1 included support for the PowerPC platform (PowerPC Reference Platform), but the port was canceled before the Solaris 2.6 release. In January 2006, a community of developers at Blastwave began work on a PowerPC port which they named Polaris. In October 2006, an OpenSolaris community project based on the Blastwave efforts and Sun Labs' Project Pulsar, which re-integrated the relevant parts from Solaris 2.5.1 into OpenSolaris, announced its first official source code release. A port of Solaris to the Intel Itanium architecture was announced in 1997 but never brought to market. On November 28, 2007, IBM, Sun, and Sine Nomine Associates demonstrated a preview of OpenSolaris for System z running on an IBM System z mainframe under z/VM, called Sirius (in analogy to the Polaris project, and also due to the primary developer's Australian nationality: HMS Sirius of 1786 was a ship of the First Fleet to Australia). On October 17, 2008, a prototype release of Sirius was made available and on November 19 the same year, IBM authorized the use of Sirius on System z Integrated Facility for Linux (IFL) processors. Solaris also supports the Linux platform application binary interface (ABI), allowing Solaris to run native Linux binaries on x86 systems. This feature is called Solaris Containers for Linux Applications (SCLA), based on the branded zones functionality introduced in Solaris 10 8/07. Installation and usage options Solaris can be installed from various pre-packaged software groups, ranging from a minimalistic Reduced Network Support to a complete Entire Plus OEM. Installation of Solaris is not necessary for an individual to use the system. Additional software, like Apache, MySQL, etc. 
can be installed as well in a packaged form from sunfreeware and OpenCSW. Solaris can be installed from physical media or a network for use on a desktop or server, or be used without installing on a desktop or server. Desktop environments Early releases of Solaris used OpenWindows as the standard desktop environment. In Solaris 2.0 to 2.2, OpenWindows supported both NeWS and X applications, and provided backward compatibility for SunView applications from Sun's older desktop environment. NeWS allowed applications to be built in an object-oriented way using PostScript, a common printing language released in 1982. The X Window System originated from MIT's Project Athena in 1984 and allowed for the display of an application to be disconnected from the machine where the application was running, separated by a network connection. Sun's original bundled SunView application suite was ported to X. Sun later dropped support for legacy SunView applications and NeWS with OpenWindows 3.3, which shipped with Solaris 2.3, and switched to X11R5 with Display Postscript support. The graphical look and feel remained based upon OPEN LOOK. OpenWindows 3.6.2 was the last release under Solaris 8. The OPEN LOOK Window Manager (olwm) with other OPEN LOOK specific applications were dropped in Solaris 9, but support libraries were still bundled, providing long term binary backwards compatibility with existing applications. The OPEN LOOK Virtual Window Manager (olvwm) can still be downloaded for Solaris from sunfreeware and works on releases as recent as Solaris 10. Sun and other Unix vendors created an industry alliance to standardize Unix desktops. As a member of the Common Open Software Environment (COSE) initiative, Sun helped co-develop the Common Desktop Environment (CDE). This was an initiative to create a standard Unix desktop environment. Each vendor contributed different components: Hewlett-Packard contributed the window manager, IBM provided the file manager, and Sun provided the e-mail and calendar facilities as well as drag-and-drop support (ToolTalk). This new desktop environment was based upon the Motif look and feel and the old OPEN LOOK desktop environment was considered legacy. CDE unified Unix desktops across multiple open system vendors. CDE was available as an unbundled add-on for Solaris 2.4 and 2.5, and was included in Solaris 2.6 through 10. In 2001, Sun issued a preview release of the open-source desktop environment GNOME 1.4, based on the GTK+ toolkit, for Solaris 8. Solaris 9 8/03 introduced GNOME 2.0 as an alternative to CDE. Solaris 10 includes Sun's Java Desktop System (JDS), which is based on GNOME and comes with a large set of applications, including StarOffice, Sun's office suite. Sun describes JDS as a "major component" of Solaris 10. The Java Desktop System is not included in Solaris 11 which instead ships with a stock version of GNOME. Likewise, CDE applications are no longer included in Solaris 11, but many libraries remain for binary backwards compatibility. The open source desktop environments KDE and Xfce, along with numerous other window managers, also compile and run on recent versions of Solaris. Sun was investing in a new desktop environment called Project Looking Glass since 2003. The project has been inactive since late 2006. License Traditional operating system license (1982 to 2004) For versions up to 2005 (Solaris 9), Solaris was licensed under a license that permitted a customer to buy licenses in bulk, and install the software on any machine up to a maximum number. 
The key license grant was: In addition, the license provided a "License to Develop" granting rights to create derivative works, restricted copying to only a single archival copy, disclaimer of warranties, and the like. The license varied only little through 2004. Open source (2005 until March 2010) From 2005–10, Sun began to release the source code for development builds of Solaris under the Common Development and Distribution License (CDDL) via the OpenSolaris project. This code was based on the work being done for the post-Solaris 10 release (code-named "Nevada"; eventually released as Oracle Solaris 11). As the project progressed, it grew to encompass most of the necessary code to compile an entire release, with a few exceptions. Post-Oracle closed source (March 2010 to present) When Sun was acquired by Oracle in 2010, the OpenSolaris project was discontinued after the board became unhappy with Oracle's stance on the project. In March 2010, the previously freely available Solaris 10 was placed under a restrictive license that limited the use, modification and redistribution of the operating system. The license allowed the user to download the operating system free of charge, through the Oracle Technology Network, and use it for a 90-day trial period. After that trial period had expired the user would then have to purchase a support contract from Oracle to continue using the operating system. With the release of Solaris 11 in 2011, the license terms changed again. The new license allows Solaris 10 and Solaris 11 to be downloaded free of charge from the Oracle Technology Network and used without a support contract indefinitely; however, the license only expressly permits the user to use Solaris as a development platform and expressly forbids commercial and "production" use. Educational use is permitted in some circumstances. From the OTN license: When Solaris is used without a support contract it can be upgraded to each new "point release"; however, a support contract is required for access to patches and updates that are released monthly. Version history Notable features of Solaris include DTrace, Doors, Service Management Facility, Solaris Containers, Solaris Multiplexed I/O, Solaris Volume Manager, ZFS, and Solaris Trusted Extensions. Updates to Solaris versions are periodically issued. In the past, these were named after the month and year of their release, such as "Solaris 10 1/13"; as of Solaris 11, sequential update numbers are appended to the release name with a period, such as "Oracle Solaris 11.4". In ascending order, the following versions of Solaris have been released: A more comprehensive summary of some Solaris versions is also available. Solaris releases are also described in the Solaris 2 FAQ. Development release The underlying Solaris codebase has been under continuous development since work began in the late 1980s on what was eventually released as Solaris 2.0. Each version such as Solaris 10 is based on a snapshot of this development codebase, taken near the time of its release, which is then maintained as a derived project. Updates to that project are built and delivered several times a year until the next official release comes out. The Solaris version under development by Sun since the release of Solaris 10 in 2005, was codenamed Nevada, and is derived from what is now the OpenSolaris codebase. In 2003, an addition to the Solaris development process was initiated. 
Under the program name Software Express for Solaris (or just Solaris Express), a binary release based on the current development basis was made available for download on a monthly basis, allowing anyone to try out new features and test the quality and stability of the OS as it progressed to the release of the next official Solaris version. A later change to this program introduced a quarterly release model with support available, renamed Solaris Express Developer Edition (SXDE). In 2007, Sun announced Project Indiana with several goals, including providing an open source binary distribution of the OpenSolaris project, replacing SXDE. The first release of this distribution was OpenSolaris 2008.05. The Solaris Express Community Edition (SXCE) was intended specifically for OpenSolaris developers. It was updated every two weeks until it was discontinued in January 2010, with a recommendation that users migrate to the OpenSolaris distribution. Although the download license seen when downloading the image files indicates its use is limited to personal, educational and evaluation purposes, the license acceptance form displayed when the user actually installs from these images lists additional uses including commercial and production environments. SXCE releases terminated with build 130 and OpenSolaris releases terminated with build 134 a few weeks later. The next release of OpenSolaris based on build 134 was due in March 2010, but it was never fully released, though the packages were made available on the package repository. Instead, Oracle renamed the binary distribution Solaris 11 Express, changed the license terms and released build 151a as 2010.11 in November 2010. Open source derivatives Current illumos – A fully open source fork of the project, started in 2010 by a community of Sun OpenSolaris engineers and Nexenta OS. Note that OpenSolaris was not 100% open source: Some drivers and some libraries were property of other companies that Sun (now Oracle) licensed and was not able to release. OpenIndiana – A project under the illumos umbrella aiming "... to become the de facto OpenSolaris distribution installed on production servers where security and bug fixes are required free of charge." SchilliX – The first LiveCD released after OpenSolaris code was opened to public. napp-it – A webmanaged ZFS storage appliance based on Solaris and the free forks like OmniOS with a Free and Pro edition. NexentaStor – Optimized for storage workloads, based on Nexenta OS. Dyson – illumos kernel with GNU userland and packages from Debian. Project is no longer active and the website is offline. SmartOS – Virtualization centered derivative from Joyent. Discontinued OpenSolaris – A project initiated by Sun Microsystems, discontinued after the acquisition by Oracle. Nexenta OS (discontinued October 31, 2012) – First distribution based on Ubuntu userland with Solaris-derived kernel. StormOS (discontinued September 14, 2012) – A lightweight desktop OS based on Nexenta OS and Xfce. MartUX – The first SPARC distribution of OpenSolaris, with an alpha prototype released by Martin Bochnig in April 2006. It was distributed as a Live CD but is later available only on DVD as it has had the Blastwave community software added. Its goal was to become a desktop operating system. The first SPARC release was a small Live CD, released as marTux_0.2 Live CD in summer of 2006, the first straight OpenSolaris distribution for SPARC (not to be confused with GNOME metacity theme). 
It was later re-branded as MartUX and the next releases included full SPARC installers in addition to the Live media. Much later, MartUX was re-branded as OpenSXCE when it moved to the first OpenSolaris release to support both SPARC and Intel architectures after Sun was acquired by Oracle. MilaX – A small Live CD/Live USB with minimal set of packages to fit a 90 MB image. EON ZFS Storage – A NAS implementation targeted at embedded systems. Jaris OS – Live DVD and also installable. Pronounced according to the IPA but in English as Yah-Rees. This distribution has been heavily modified to fully support a version of Wine called Madoris that can install and run Windows programs at native speed. Jaris stands for "Japanese Solaris". Madoris is a combination of the Japanese word for Windows "mado" and Solaris. OpenSXCE – An OpenSolaris distribution release for both 32-bit and 64-bit x86 platforms and SPARC microprocessors, initially produced from OpenSolaris source code repository, ported to the illumos source code repository to form OpenIndiana's first SPARC distribution. Notably, the first OpenSolaris distribution with illumos source for SPARC based upon OpenIndiana, OpenSXCE finally moved to a new source code repository, based upon DilOS. Reception Robert Lipschutz and Gregg Harrington from PC Magazine reviewed Solaris 9 in 2002: Robert Lipschutz also reviewed Solaris 10: Tom Henderson reviewed Solaris 10 for Network World: Robert Escue for OSNews: Thomas Greene for The Register: See also IBM AIX HP-UX illumos Trusted Solaris Oracle VM Server for SPARC References External links Solaris Documentation Lifetime Support Policy: Oracle and Sun System Software and Operating Systems SunHELP – Sun/Solaris News, References, and Information Nikolai Bezroukov. Solaris vs. Linux: Ecosystem-based Approach and Framework for the Comparison in Large Enterprise Environments – Large Softpanorama article comparing Solaris 10 and Linux 2.6 – Solaris information site by Michael Holve 1993 software Formerly free software OpenSolaris Oracle Corporation Oracle software Proprietary operating systems Sun Microsystems software UNIX System V X86 operating systems
47049424
https://en.wikipedia.org/wiki/BQ%20Aquaris%20E5
BQ Aquaris E5
The Aquaris E5 and Aquaris E5 FHD are dual-SIM Android smartphones from the Spanish manufacturer BQ that were released to market in July 2014. The devices shipped with Android 4.4 (KitKat). BQ elected not to skin the operating system and as such it retains the unmodified "Google Experience", such as found on the Google Nexus. On 9 June 2015 BQ in partnership with Canonical launched the Aquaris E5 HD Ubuntu Edition. The phone is based on the Aquaris E5 hardware (with the 1280×720 screen) and was sold in the European Union only. Notably, this is only the third phone to be sold with the Ubuntu Touch mobile operating system, after the BQ Aquaris E4.5 and the Meizu MX4. See also BQ Aquaris E4.5 Comparison of smartphones References External links Android (operating system) devices Ubuntu Touch devices Aquaris E5 Mobile phones introduced in 2014 Discontinued smartphones
16331866
https://en.wikipedia.org/wiki/Security%20of%20automated%20teller%20machines
Security of automated teller machines
Automated teller machines (ATMs) are targets for fraud, robberies and other security breaches. In the past, the main purpose of ATMs was to deliver cash in the form of banknotes and to debit a corresponding bank account. However, ATMs are becoming more complicated, and they now serve numerous functions, thus becoming a high-priority target for robbers and hackers. Introduction Modern ATMs are implemented with high-security protection measures. They work under complex systems and networks to perform transactions. The data processed by ATMs is usually encrypted, but hackers can employ discreet hacking devices to hack accounts and withdraw the account's balance. As an alternative, unskilled robbers threaten bank patrons with a weapon in order to loot their withdrawn money or account. Methods of looting ATMs ATM vandals can either physically tamper with the ATM to obtain cash, or employ credit card skimming methods to acquire control of the user's credit card account. Credit card fraud can be carried out by inserting discreet skimming devices over the keypad or credit card reader. An alternative approach is to identify the PIN directly with devices such as cameras concealed near the keypad. Security measures of ATMs PIN validation schemes for local transactions On-Line PIN validation On-line PIN validation occurs when the terminal in question is connected to the central database. The PIN supplied by the customer is always compared with the reference PIN recorded at the financial institution. However, one disadvantage is that any malfunction of the network renders the ATM unusable until it is fixed. Off-Line PIN validation In off-line PIN validation, the ATM is not connected to the central database. A condition for off-line PIN validation is that the ATM should be able to compare the customer's entered PIN against the reference PIN: the terminal must be able to perform cryptographic operations, and it must have the required encryption keys at its disposal. The off-line validation scheme is extremely slow and inefficient, and off-line PIN validation is now obsolete, as ATMs are connected to the central server over protected networks. PIN validation for interchange transactions There are three PIN procedures for the operation of a high-security interchange transaction. First, the supplied PIN is encrypted at the entry terminal; during this step, a secret cryptographic key is used. In addition to other transaction elements, the encrypted PIN is transmitted to the acquirer's system. Second, the encrypted PIN is routed from the acquirer's system to a hardware security module, within which the PIN is decrypted. The decrypted PIN is immediately re-encrypted with a cryptographic key used for interchange and is routed to the issuer's system over normal communications channels. Lastly, the routed PIN is decrypted in the issuer's security module and then validated on the basis of the techniques for on-line local PIN validation. Shared ATMs Different transaction methods are used in shared ATMs with regard to the encipherment of the PIN and message authentication; among them is so-called "zone encryption". In this method, a trusted authority is appointed to operate on behalf of a group of banks so that they can interchange messages for ATM payment approvals. Hardware security module For successful communication between banks and ATMs, the incorporation of a cryptographic module, usually called a security module, is a critical component in maintaining proper connections between banks and the machines.
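One of those functions, the PIN-translation step in the interchange procedure described above, can be sketched in outline as follows. This is a deliberately simplified illustration rather than real ISO 9564 / ANSI X9.8 PIN-block processing: the choice of 3DES in ECB mode, the key handling and the function names are assumptions made for the sketch, and an actual security module keeps the zone keys and the intermediate clear PIN block entirely inside tamper-resistant hardware.

```python
# Simplified sketch of HSM-style "PIN translation": decrypt an incoming
# encrypted PIN block under the acquirer-side zone key, then re-encrypt it
# under the interchange (issuer-facing) zone key. Toy illustration only.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

PIN_BLOCK_LEN = 8  # a PIN block is a single 64-bit cipher block


def _tdes_ecb(key: bytes) -> Cipher:
    return Cipher(algorithms.TripleDES(key), modes.ECB())


def translate_pin_block(encrypted_block: bytes, zone_key_in: bytes, zone_key_out: bytes) -> bytes:
    assert len(encrypted_block) == PIN_BLOCK_LEN
    decryptor = _tdes_ecb(zone_key_in).decryptor()
    clear_block = decryptor.update(encrypted_block) + decryptor.finalize()
    # In a real HSM the clear PIN block above never leaves the hardware boundary.
    encryptor = _tdes_ecb(zone_key_out).encryptor()
    return encryptor.update(clear_block) + encryptor.finalize()
```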
The security module is designed to be tamper-resistant. It performs a number of functions, among them PIN verification, PIN translation in interchange, key management and message authentication. The use of the PIN in interchange raises security concerns, as the PIN is translated by the security module into the format used for interchange. Moreover, the security module generates, protects and maintains all keys associated with the user's network. Authentication and data integrity The personal verification process begins with the user's supply of personal verification information. This information includes a PIN and customer information that is recorded on the bank account. In cases where a cryptographic key is stored on the bank card, it is called a personal key (PK). Personal identification can be performed by an authentication parameter (AP), which is capable of operating in two ways. The first option is an AP that is time-invariant. The second option is an AP that is time-variant. There is also the case where the AP is based both on time-variant information and on the transaction request message. In such a case, the AP can be used as a message authentication code (MAC); message authentication is then used to detect stale or bogus messages that might be routed into the communication path, and to detect fraudulent modified messages that can traverse non-secure communication systems. In such cases, the AP serves two purposes. Security Security breaches in electronic funds transfer systems can be examined by delimiting the systems' components. Electronic funds transfer systems have three components: communication links, computers, and terminals (ATMs). First, communication links are prone to attacks. Data can be exposed by passive means, or by direct means in which a device is inserted to retrieve the data. The second component is computer security. Different techniques can be used to gain access to a computer, such as accessing it via a remote terminal or another peripheral device such as the card reader. A hacker who has gained unauthorized access to the system can then manipulate or alter programs and data. Terminal security is a significant component in cases where cipher keys reside in terminals. In the absence of physical security, an abuser may probe for a key or substitute its value. See also ATMIA (ATM Industry Association) References External links https://www.lightbluetouchpaper.org/ - Security Research, Computer Laboratory University of Cambridge Data security Automated teller machines
65192405
https://en.wikipedia.org/wiki/1964%20USC%20Trojans%20baseball%20team
1964 USC Trojans baseball team
The 1964 USC Trojans baseball team represented the University of Southern California in the 1964 NCAA University Division baseball season. The Trojans played their home games at Bovard Field. The team was coached by Rod Dedeaux in his 23rd year at USC. The Trojans won the California Intercollegiate Baseball Association championship and the District VIII Playoff to advance to the College World Series, where they were defeated by the Maine Black Bears. Roster Schedule ! style="" | Regular Season |- valign="top" |- align="center" bgcolor="#ccffcc" | 1 || February 25 || at || Matador Field • Northridge, California || 12–4 || 1–0 || 0–0 |- |- align="center" bgcolor="#ffcccc" | 2 || March 2 || || Bovard Field • Los Angeles, California || 0–6 || 1–1 || 0–0 |- align="center" bgcolor="#ccffcc" | 3 || March 3 || || Bovard Field • Los Angeles, California || 5–3 || 2–1 || 0–0 |- align="center" bgcolor="#ccffcc" | 4 || March 6 || || Bovard Field • Los Angeles, California || 15–4 || 3–1 || 0–0 |- align="center" bgcolor="#ffcccc" | 5 || March 7 || San Fernando Valley State || Bovard Field • Los Angeles, California || 5–9 || 3–2 || 0–0 |- align="center" bgcolor="#ccffcc" | 6 || March 9 || || Bovard Field • Los Angeles, California || 5–1 || 4–2 || 0–0 |- align="center" bgcolor="#ccffcc" | 7 || March 13 || at || Campus Diamond • Santa Barbara, California || 7–0 || 5–2 || 1–0 |- align="center" bgcolor="#ccffcc" | 8 || March 14 || Santa Barbara || Bovard Field • Los Angeles, California || 8–2 || 6–2 || 2–0 |- align="center" bgcolor="#ccffcc" | 9 || March 14 || || Bovard Field • Los Angeles, California || 5–0 || 7–2 || 2–0 |- align="center" bgcolor="#ffcccc" | 10 || March 17 || || Bovard Field • Los Angeles, California || 2–8 || 7–3 || 2–0 |- align="center" bgcolor="#ccffcc" | 11 || March 20 || at || Unknown • Fresno, California || 14–7 || 8–3 || 2–0 |- align="center" bgcolor="#ffcccc" | 12 || March 20 || at Fresno State || Unknown • Fresno, California || 3–7 || 8–4 || 2–0 |- align="center" bgcolor="#ccffcc" | 13 || March 30 || || Bovard Field • Los Angeles, California || 3–2 || 9–4 || 2–0 |- align="center" bgcolor="#ccffcc" | 14 || March 31 || || Bovard Field • Los Angeles, California || 8–6 || 10–4 || 2–0 |- |- align="center" bgcolor="#ffcccc" | 15 || April 3 || at || Sawtelle Field • Los Angeles, California || 2–3 || 10–5 || 2–1 |- align="center" bgcolor="#ccffcc" | 16 || April 4 || at UCLA || Sawtelle Field • Los Angeles, California || 8–4 || 11–5 || 3–1 |- align="center" bgcolor="#ccffcc" | 17 || April 6 || Cal State Los Angeles || Bovard Field • Los Angeles, California || 4–3 || 12–5 || 3–1 |- align="center" bgcolor="#ffcccc" | 18 || April 7 || || Bovard Field • Los Angeles, California || 7–2 || 12–6 || 3–1 |- align="center" bgcolor="#ffcccc" | 19 || April 11 || at || Buck Shaw Stadium • Santa Clara, California || 2–7 || 12–7 || 3–2 |- align="center" bgcolor="#ccffcc" | 20 || April 11 || at Santa Clara || Buck Shaw Stadium • Santa Clara, California || 3–0 || 13–7 || 4–2 |- align="center" bgcolor="#ccffcc" | 21 || April 13 || at || Sunken Diamond • Stanford, California || 3–2 || 14–7 || 5–2 |- align="center" bgcolor="#ffcccc" | 22 || April 14 || at Long Beach State || Blair Field • Long Beach, California || 8–9 || 14–8 || 5–2 |- align="center" bgcolor="#ccffcc" | 23 || April 17 || Santa Clara || Bovard Field • Los Angeles, California || 7–3 || 15–8 || 6–2 |- align="center" bgcolor="#ccffcc" | 24 || April 18 || || Bovard Field • Los Angeles, California || 10–7 || 16–8 || 7–2 |- align="center" 
bgcolor="#ccffcc" | 25 || April 18 || California || Bovard Field • Los Angeles, California || 4–2 || 17–8 || 8–2 |- align="center" bgcolor="#ccffcc" | 26 || April 20 || Cal Poly Pomona || Bovard Field • Los Angeles, California || 14–1 || 18–8 || 8–2 |- align="center" bgcolor="#ccffcc" | 27 || April 24 || Santa Clara || Bovard Field • Los Angeles, California || 8–3 || 19–8 || 9–2 |- align="center" bgcolor="#ccffcc" | 28 || April 25 || Stanford || Bovard Field • Los Angeles, California || 5–4 || 20–8 || 10–2 |- align="center" bgcolor="#ccffcc" | 29 || April 25 || Stanford || Bovard Field • Los Angeles, California || 115–4 || 21–8 || 11–2 |- align="center" bgcolor="#ccffcc" | 30 || April 28 || Santa Barbara || Bovard Field • Los Angeles, California || 10–1 || 22–8 || 12–2 |- |- align="center" bgcolor="#ffcccc" | 31 || May 1 || UCLA || Bovard Field • Los Angeles, California || 2–7 || 22–9 || 12–3 |- align="center" bgcolor="#ccffcc" | 32 || May 2 || UCLA || Bovard Field • Los Angeles, California || 8–7 || 23–9 || 13–3 |- align="center" bgcolor="#ccffcc" | 33 || May 5 || at Santa Barbara || Campus Diamond • Santa Barbara, California || 13–5 || 24–9 || 14–3 |- align="center" bgcolor="#ccffcc" | 34 || May 8 || at Stanford || Sunken Diamond • Stanford, California || 9–2 || 25–9 || 15–3 |- align="center" bgcolor="#ccffcc" | 35 || May 9 || at California || Edwards Field • Berkeley, California || 8–2 || 26–9 || 16–3 |- align="center" bgcolor="#ccffcc" | 36 || May 9 || at California || Edwards Field • Berkeley, California || 4–3 || 27–9 || 17–3 |- align="center" bgcolor="#ccffcc" | 37 || May 15 || Pepperdine || Bovard Field • Los Angeles, California || 3–2 || 28–9 || 17–3 |- |- ! style="" | Postseason |- valign="top" |- align="center" bgcolor="#ccffcc" | 38 || May 22 || Cal Poly Pomona || Bovard Field • Los Angeles, California || 12–3 || 29–9 || 17–3 |- align="center" bgcolor="#ccffcc" | 39 || May 23 || Cal Poly Pomona || Bovard Field • Los Angeles, California || 5–2 || 30–9 || 17–3 |- align="center" bgcolor="#ccffcc" | 40 || May 29 || || Bovard Field • Los Angeles, California || 5–0 || 31–9 || 17–3 |- align="center" bgcolor="#ccffcc" | 41 || May 30 || Oregon || Bovard Field • Los Angeles, California || 9–3 || 32–9 || 17–3 |- |- align="center" bgcolor="#ccffcc" | 42 || June 9 || vs Ole Miss || Omaha Municipal Stadium • Omaha, Nebraska || 3–2 || 33–9 || 17–3 |- align="center" bgcolor="#ccffcc" | 43 || June 12 || vs Missouri || Omaha Municipal Stadium • Omaha, Nebraska || 3–2 || 34–9 || 17–3 |- align="center" bgcolor="#ffcccc" | 44 || June 13 || vs Minnesota || Omaha Municipal Stadium • Omaha, Nebraska || 5–6 || 34–10 || 17–3 |- align="center" bgcolor="#ffcccc" | 45 || June 15 || vs || Omaha Municipal Stadium • Omaha, Nebraska || 1–2 || 34–11 || 17–3 |- Awards and honors Joe Austin First Team All-CIBA Willie Brown College World Series All-Tournament Team Second Team All-CIBA Bud Hollowell Second Team All-American American Baseball Coaches Association Second Team All-CIBA Marty Piscovich Honorable Mention All-CIBA Walt Peterson First Team All-American American Baseball Coaches Association First Team All-American The Sports Network First Team All-CIBA Larry Sandel Second Team All-CIBA Gary Sutherland Third Team All-American American Baseball Coaches Association College World Series All-Tournament Team First Team All-CIBA References USC Trojans baseball seasons USC Trojans baseball College World Series seasons USC Pac-12 Conference baseball champion seasons
66486669
https://en.wikipedia.org/wiki/Samsung%20Wave%20III%20s8600
Samsung Wave III s8600
The Samsung Wave III S8600 (or "Samsung Wave III") is a smartphone running the Bada 2.0 operating system designed by Samsung, which was commercially released in August 2011. The Wave III is a slim touchscreen phone powered by a 1.4 GHz "Scorpion" CPU with a powerful graphics engine, a "Super LCD" screen and 720p high-definition video capture capabilities. A shortage of Super AMOLED screens was one of the primary reasons for the release of this model. Hardware features Design The phone is made mostly of metal alloy and measures 12.9 mm thick. In terms of form factor, it is a slate style featuring three physical buttons on the front: call, reject/shutdown, and main menu. Screen The screen is a capacitive touchscreen Super LCD with an anti-smudge oleophobic coating on top of the scratch-resistant tempered-glass (Gorilla Glass) touch panel, which has been shown to be capable of resisting extreme friction. The screen resolution is 800×480 (WVGA) over an area of 45.5 cm² (~233 ppi). Processor The phone features a Qualcomm S2 MSM8255T SoC clocked at 1.4 GHz, whose "Scorpion" CPU core is broadly comparable to an ARM Cortex-A8. Camera The phone features a 5-megapixel camera, which supports images up to 2592 × 1944 pixels, along with autofocus, LED flash, geotagging, face and blink detection, image stabilization, touch focus, etc. It also offers various shooting modes such as beauty shot, smile shot, continuous, panorama and vintage shot. As a camcorder, it can record 720p HD video (1280×720) at 30 fps with flash. Other features Other features include A-GPS and 2 GB/8 GB of internal storage with a microSDHC slot for an additional 32 GB. It also has a magnetometer, a proximity sensor, an accelerometer, 5.1-channel surround sound Mobile Theater, music recognition, a fake call service, smart search, Social Hub, and it is the first phone to support Bluetooth version 3.0. In addition to Bluetooth 3.0, the phone also features Wi-Fi 802.11 b/g/n, HSDPA 3.2 Mbit/s and HSUPA 2 Mbit/s. The phone is available with both European/Asian 3G bands and North American 3G bands. Software features User interface The phone is one of the few smartphones to feature the Samsung Bada 2.0 operating system platform. Applications By default, the phone comes with Picsel Viewer, which is capable of reading .pdf and Microsoft Office file formats. Through the Samsung Apps store, users can also download applications, games and widgets. Media support Audio MP3, AAC, AAC+, e-AAC+, WMA, AMR, WAV, MP4, FLAC Video MPEG4, H.263, H.264, WMV, AVI, DivX, XviD, MKV Android porting There are many Android custom ROMs for this phone; they go up to Android 4.4. See also Exynos - Samsung Amoled - Samsung Wave Y - Samsung References Bada (operating system) S-8530 Mobile phones introduced in 2010 Discontinued smartphones
2231257
https://en.wikipedia.org/wiki/Jim%20Hacker
Jim Hacker
James George Hacker, Baron Hacker of Islington, , BSc (Lond.), Hon. D. Phil (Oxon.) is a fictional character in the 1980s British sitcom Yes Minister and its sequel, Yes, Prime Minister. He is the Minister of the (fictional) Department of Administrative Affairs, and later the Prime Minister. He was portrayed by Paul Eddington in the original show. In the 2013 revival a new version of Hacker was portrayed by David Haig. Fictional biography Before Yes Minister Hacker attended the London School of Economics (around 25 years before his appointment to the Cabinet) and graduated with a third class honours degree. He had a career in political research, university lecturing and journalism - including editorship of a publication named Reform - and was elected as a Member of Parliament, initially serving as a backbencher. While his party was in opposition, Hacker served for seven years as Shadow Minister of Agriculture. During an internal contest for leadership of his party, Hacker ran the campaign of his colleague Martin Walker, but this was unsuccessful, leaving Hacker with a strained relationship with the party leader. Yes Minister When Hacker was in his late 40s, his party won a general election victory, with Hacker himself being re-elected in the Birmingham East constituency with an increased majority. Hacker expected to be appointed Minister of Agriculture, due to his extensive knowledge of the subject, but the Civil Service, for the same reason, encouraged the new Prime Minister to appoint him elsewhere. Hacker was appointed Minister of Administrative Affairs. The Department of Administrative Affairs was described by a commentator as 'a political graveyard', implying the Prime Minister may have chosen it as an act of revenge. Hacker worked with the ministry's Permanent Secretary, Sir Humphrey Appleby, who as a senior civil servant tries to control the ministry and the minister himself, and his own Principal Private Secretary, Bernard Woolley. Hacker had been helped in his re-election by political adviser Frank Wiesel, saying of him, "I depend on him more than anyone." Initially Hacker brought Wiesel with him to the DAA, but his presence was resented by the Civil Servants, who referred to him as "the weasel". Eventually Hacker and Wiesel came to conflict when Wiesel proposed reforming the quango system, as he put it, "ending the scandal of ministerial patronage". Sir Humphrey arranged a situation where Hacker could avoid a scandal only by appointing an unqualified candidate to chair such a quango. When Hacker agreed, Wiesel was disgusted and threatened to go to the press, but instead accepted Hacker's offer of heading a well compensated "super-quango" on the abolition of quangos. This left Hacker to be advised entirely by civil servants for the remainder of his time at the DAA. Hacker hoped for promotion to a more prestigious Cabinet post such as Foreign Secretary. He considered the 'top jobs' to be Foreign Secretary, Chancellor of the Exchequer and Home Secretary, and dreaded the prospect of being made Secretary of State for Northern Ireland or Minister with General Responsibility for Industrial Harmony. The Prime Minister still saw Hacker as a supporter of his rival, Martin Walker, and at one point almost abolished the Department of Administrative Affairs, in which case Hacker may have been 'kicked upstairs' (appointed to the House of Lords). Hacker was able to blackmail the Prime Minister into abandoning the idea. 
However, fearing demotion in an upcoming Cabinet reshuffle, he seriously considered accepting an offer to become an EEC Commissioner, a move he considered to be "curtains as far as British politics is concerned. It's worse than a peerage... You're reduced to forming a new party if ever you want to get back." Sir Humphrey persuaded Hacker to refuse the offer, and Hacker remained Minister of Administrative Affairs. Hacker was pleased to take on additional responsibilities while remaining at the same department, including the role of 'Transport Supremo', responsible for an integrated transport policy. Following a cabinet reshuffle, his department absorbed the Local Authority Directorate. Hacker was awarded an honorary doctorate of Law from Baillie College, Oxford (a possible reference to Balliol College), in return for allowing them to continue taking overseas students. Hacker was appointed Chair of his party. When he had held this position for less than a year, and been a minister for two, the Prime Minister unexpectedly retired. Sir Humphrey, who was now Cabinet Secretary, encouraged and assisted Hacker in using the position of Chair to his advantage, resulting in Hacker becoming party leader and Prime Minister. Yes, Prime Minister Although Hacker had believed the Prime Minister had more freedom to act than Cabinet ministers, he found that in his new role Sir Humphrey was still able to prevent him implementing many of his ideas. Early in his Premiership, Hacker intended to implement what he called his 'Grand Design' - actually the idea of the Chief Scientific Adviser - which involved cancelling the Trident missile programme, enlarging the armed forces and reintroducing conscription. Sir Humphrey, through the Permanent Secretaries of the various departments, was able to persuade the Cabinet to oppose the scheme. The former Prime Minister posed a problem for Hacker by describing him unflatteringly in his memoirs. Hacker was delighted by his sudden death, not only because the memoirs would not be finished, but because the funeral offered the opportunity for him to host an unofficial summit of world leaders. After Yes, Prime Minister The original television series ended in 1988 with Hacker still in office as Prime Minister; however, both before and after this, writers Antony Jay and Jonathan Lynn made references to Hacker's career in print which reveal his life after the series ended. In 1981, Jay and Lynn adapted the first series of Yes Minister into book form, presenting it in the form of Hacker's diaries, ostensibly edited by Jay and Lynn more than 30 years later in 2017. In summarizing his career, they say that he "failed upwards from one senior cabinet post to the next, culminating with his ultimate failure at Number Ten and his final demise on his elevation to the House of Lords (as it then was)." This would be partly contradicted by the 1984 episode "Party Games", as Hacker does not hold any other cabinet post between being Minister for Administrative Affairs and becoming Prime Minister. The foreword to the third volume of the book series (published 1983, but dated September 2019) makes clear that Hacker has actually died, not merely suffered a political demise. All five volumes of the book series are supposedly written at 'Hacker College, Oxford', an institution apparently named after him. In 2003, Jay and Lynn wrote an obituary for Hacker for The Politico's Book of the Dead. 
It gives his dates of birth and death as 18 June 1927 and 4 November 1995, the same as Paul Eddington, the actor who portrayed him. The obituary states that Hacker was Minister of Administrative Affairs (the events of the three series of 'Yes, Minister') for a period of two years. His time as Prime Minister is also described as brief, finishing in a general election defeat for his party. Despite re-election being Hacker's main motivation throughout the series, it appears that he was in government for only a single term, a maximum of five years. The obituary confirms that Hacker was elevated to the House of Lords, taking the title Lord Hacker of Islington, and also reveals that he was made a Knight Companion of the Order of the Garter. Both are customary retirement honours for former Prime Ministers. Hacker is described as an 'Hon. D. Phil', indicating that his honorary law degree from Baillie college was as a Doctor of Philosophy rather than Doctor of Civil Law. Personal life Hacker and his wife, Annie, are seen to have one daughter, Lucy, a left-wing activist and sociology student at the University of Sussex. Hacker mentions having more than one child, saying, "Our children are reaching the age where Annie and I are hoping to spend much more time with each other." Character Jim Hacker first appears in Yes Minister having been recently re-elected as Member of Parliament for Birmingham East, soundly defeating his opponents. His early character is that of a very gung-ho, albeit naive, politician, ready to bring sweeping change into his department, unaware that Sir Humphrey and the civil service are out to stop any semblance of change, despite their insistence that they are his allies. Hacker is also noted as having challenged Humphrey while he was a member of the Opposition by asking difficult questions when Sir Humphrey was testifying to a Parliamentary committee: Sir Humphrey stated that Hacker had asked "...all the questions I hoped nobody would ask," showing his new Minister to be at least a reasonably capable politician. Before long, Hacker begins to notice that the Civil Service has been preventing any of his changes from actually being put into practice. Bernard is sympathetic to Hacker's plight and tries to enlighten his Minister as to the tricks and techniques employed by government staff, but his ability to help is limited by his own loyalties in the Civil Service. Hacker soon learns and becomes more sly and cynical, using some of these ploys himself. While Sir Humphrey nearly always gets the upper hand, Hacker now and again plays a trump card, and on even fewer occasions, the two of them work towards a common goal. Hacker also learns that his efforts to change the government or Britain are all really for naught, as he discovers in the episode "The Whisky Priest", when he attempts to stop the export of British-made munitions to Italian terrorists. Throughout Yes Minister, there are many occasions when Hacker is portrayed as a publicity-mad bungler, incapable of making a firm decision, and prone to blunders that embarrass him or his party, eliciting bad press and stern lectures from the party apparatus, particularly the Chief Whip. He is continually concerned with what the newspapers of the day will have to say about him, and is always hoping to be promoted by the Prime Minister. 
(Hacker ran the unsuccessful campaign for a political ally during the party's last leadership election – his man lost, becoming Foreign Secretary, and leaving Hacker nervous about his prospects under the winner, now Prime Minister.) He is equally afraid of either staying at his current level of Cabinet seniority, or being demoted. Just prior to the start of Yes, Prime Minister, Hacker shows a zeal for making speeches and presents himself as a viable party leader after the Prime Minister announces his resignation in the episode "Party Games". He is given embarrassing information about the two front-runner candidates, and manages to persuade them (by insinuating that secret information pertaining to both may be revealed to the public) to drop out of the race, and lend their support to him. With help from the recently promoted Sir Humphrey and other senior civil servants, Hacker emerges as a compromise candidate and becomes head of his party unopposed – and Prime Minister. In Yes, Prime Minister Hacker strives to perfect all the skills needed by a statesman, giving more grandiose speeches, dreaming up "courageous" political programmes, and honing his diplomatic craft, nearly all of these attempts landing him in trouble at some point. In a Radio Times interview to promote the latter series, Paul Eddington stated, "He's beginning to find his feet as a man of power, and he's begun to confound those who thought they'd be able to manipulate him out of hand." Hacker becomes a more competent politician by the end. Though primarily interested in his personal career survival and advancement, he, unlike Sir Humphrey, views government as a means rather than an end in itself. Interests and habits Hacker has many prominent habits that feature throughout the series: Drinking. Hacker enjoys various alcoholic beverages, particularly harder liquors, including Scotch whisky: "the odd drinkie", as he likes to call them. He is seen drunk on more than one occasion and was caught drinking and driving in the episode "Party Games". He used his political immunity to escape charges. He even went as far as to smuggle alcohol into a diplomatic function in Qumran (a dry Islamic oil sheikhdom) by establishing a false diplomatic communications room in The Moral Dimension. Disdain for certain types of culture. Sir Humphrey thinks Hacker to be a cultural philistine who is unaware of the importance of protecting Britain's artistic heritage. Hacker believes it only important to the upper-class snobs (such as Humphrey himself), and several other "wet, long-haired, scruffy art lovers", arguing that operas created by Italians and Germans are not representative of Britain's cultural heritage. However, upon becoming Minister for the Arts (in "The Middle-Class Rip-Off"), Hacker asks Humphrey if he could tag along on a gala night at the Royal Opera House. Humphrey is delighted by the volte-face and declares, "Yes, Minister" enthusiastically. But Hacker and his wife enjoy seeing foreign films, and in the same episode Hacker demonstrates some grasp of art, enough to make a strong case that a disputed art gallery in his constituency is not worth saving. (See also "Football" below.) Pomposity. Hacker is often seen going off into sentimental, overly pretentious speeches either to himself or to Bernard and Sir Humphrey, holding his lapel on his suit jacket in a very royal manner. 
He also mimicked Napoleon by slipping his hand in the front of his suit jacket upon hearing he was selected by the party to become party leader and hence Prime Minister. However, it appears that Hacker's political idol is Winston Churchill: he occasionally speaks in the statesman's gruff style, on several occasions imitating or paraphrasing Churchill's "We shall fight on the beaches" speech, and is seen reading biographies of him. Football. Hacker believes that sport is of great cultural importance and is even willing to sacrifice a local art gallery in order to bail out his constituency's football team, the fictional Aston Wanderers, that was being threatened with bankruptcy. He didn't support the team though, and was mentioned as being an Aston Villa supporter in the first episode. Political affiliation Hacker's political party is never explicitly stated – a deliberate ploy by the series' creators to prevent the show from having a partisan affiliation. This begins in the very first scene of the 'Yes Minister' pilot episode, where the victorious Hacker's party rosette is white, as opposed to the red (for Labour) and blue (for the Conservatives) rosettes worn by the other candidates. The party that formed the previous government, which is now the opposition, is not explicitly identified either. In Yes Minister the prime minister was unseen and unnamed, but was always referred to as male, whereas the real Prime Minister of the day was Margaret Thatcher. The Labour and Conservative parties are eventually compared in The National Education Service, when Sir Humphrey tells Bernard, "When there is a Labour government, the education authorities tell them that comprehensives abolish the class system and when there's a Tory government we tell them that it's the cheapest way of providing mass education; to Labour we explain that selective education is divisive and to the Tories we explain that it is expensive." but Sir Humphrey then goes on to tell Hacker neither of these things, forgoing any suggestion that Hacker is from either party. Throughout the show, Hacker's political opinions tend towards reform of administration and are neither left nor right wing. On first becoming a minister, Hacker intends to implement his party's manifesto commitment to 'open government', but backs down when he is shown the dangers of the policy. He is known as 'a good European', a believer in 'the European ideal' embodied in the European Economic Community, but a critic of the bureaucracy in Brussels. Throughout the series, the party is mentioned as having constituencies in the West Midlands (such as Hacker's seat of Birmingham East), Merseyside, Glasgow, Nottingham and (oddly for a major British political party) Northern Ireland. Most of these are described as marginal seats, often mentioned when a potentially unpopular decision is under consideration, such as revitalising a nationalised chemical plant by producing propanol using metadioxin (a chemical linked to the Seveso disaster and purported to cause foetal damage) or introducing ameliorative measures to deter smoking (to the detriment of tax revenue, jobs in the tobacco industry and patronage for culture and sports). The party is also mentioned as controlling the South Derbyshire Council as well as contesting a by-election in Newcastle. 
Other media In a radio broadcast spoof of Yes Minister performed by both Eddington and Nigel Hawthorne, both of whom played their respective parts from the show, Hacker is a Minister in the government of the day, that of Margaret Thatcher, who also played herself as Prime Minister. In the sketch, she asks that Hacker and Sir Humphrey abolish economists. In the 2010 stage production of Yes, Prime Minister, the role was played by David Haig; Graham Seed took the role in a touring production of the play. Notes 1.In "The Economy Drive", Annie tells Hacker, "For twenty years you've complained that as a backbencher you had no facilities." Annie must mean that he has been complaining for 20 years, not that he has been a backbencher for all that time. In "Open Government" it is said that he had been a shadow minister - not a backbencher - for "many years"; the book adaptation specifically says seven years. Hacker is in his late forties (Open Government) and graduated from LSE 25 years earlier (The Official Visit), so he must have been elected five years after graduation, aged in his late 20s. This suggests Hacker must have had his various other careers, such as editing Reform, at the same time as being an MP, or that he left parliament and later returned. In the Yes, Prime Minister episode A Real Partnership, Annie says, "You were a backbench MP only five years ago." In the previous episode, Dorothy Wainwright, adviser to Hacker and to the previous Prime Minister, says she has been working at No. 10 Downing Street for three years, so Hacker's party won the general election at least three years earlier. This would mean Hacker was in the Shadow Cabinet for two years rather than seven. 2.Hacker's obituary says that he was a minister for two years. In the Yes, Prime Minister episode The Key, Dorothy has been at No. 10 for three years. As a political adviser, she cannot have been there before Hacker's party won the election. However, how much time has passed between Party Games and The Key is not specified. In the later episode Official Secrets, Hacker refers to events during his time as a cabinet minister as having taken place five years earlier. Assuming that the Parliament Act 1911 applies in the world of the show, there is a maximum of five years between general elections, so either Hacker has won an election between episodes, or an election is imminent. The obituary implies the latter, stating that Hacker fought and lost a general election after only a short time as Prime Minister. 3.This implies that the first episode, Open Government, in which Hacker is said to be in his late 40s, takes place no later than 1976, whereas the first book adaptation says it is in 'the 1980s'. References Yes Minister characters Television characters introduced in 1980 Fictional prime ministers of the United Kingdom British male characters in television
12778412
https://en.wikipedia.org/wiki/Multinational%20Corps%20Northeast
Multinational Corps Northeast
Multinational Corps Northeast was formed on 18 September 1999 at Szczecin, Poland, which became its headquarters. It evolved from what was for many years the only multinational corps in NATO, Allied Land Forces Schleswig-Holstein and Jutland (LANDJUT) (in its turn, a part of Allied Forces Northern Europe). From 1962 LANDJUT had been responsible for the defence of the Baltic Approaches from a headquarters at Rendsburg, Germany. It comprised the 6th Panzergrenadier Division and the Danish Jutland Division. History A tri-national working group was established following the July 1997 decision that Poland was to be admitted to NATO, with the aim of establishing the corps as part of NATO's Main Defence Forces. Its missions are threefold: to participate in the collective defence of NATO territory under Article 5 of the North Atlantic Treaty; to contribute to multinational crisis management, including peace support operations; and to provide command and control for humanitarian, rescue, and disaster relief operations. In July 1997, the Ministers of Defence of Denmark, Germany and Poland decided to establish a Danish-German-Polish Corps. This corps was to be named Multinational Corps Northeast, with its headquarters located in Szczecin, Poland. The Headquarters Allied Land Forces Schleswig-Holstein and Jutland (LANDJUT) from Rendsburg in Germany was to form the nucleus of this new command. The Ministers of Defence of Denmark, Germany and Poland signed the Corps Convention in 1998, when Poland was not yet a member of NATO, but the date of the country's accession (12 March 1999) had already been set. On 18 September 1999, the three Framework Nations – Denmark, Germany, Poland – hoisted their flags in the Baltic Barracks, the seat of the Corps in Szczecin. The Corps has developed significantly since that time. Though it is a NATO-affiliated formation, the Corps Convention is a trilateral agreement between the three nations. The positions of commander, deputy commander, and chief of staff rotate between the three nations. For common practice and training purposes, the corps was assigned to Joint Sub-Regional Command Northeast (JSRC NE), at Karup, Denmark. For Article 5 common defence purposes, the Corps was to have been assigned either to JSRC NE or the JSRC Centre at Heidelberg, Germany. Following the latest reorganisation, it might report to Allied Force Command Heidelberg if designated for operations in Central Europe. The 14th Panzergrenadier Division of the German Army used to be part of the Corps, but was disbanded at the end of 2008. Due to its geographical location, as the only NATO HQ east of the former Iron Curtain, Multinational Corps Northeast has a key function in the integration of new NATO member states. This is reflected in the structure of its personnel. Officers and NCOs from the Czech Republic, Estonia, Latvia, Lithuania and Slovakia serve at Multinational Corps Northeast. Since April 2004, the flags of Estonia, Latvia and Lithuania have flown at the Headquarters. In January 2005, Slovakia joined Multinational Corps Northeast, followed by the Czech Republic in October 2005. The US flag was hoisted in November 2006, marking US membership in the Corps. In July 2008, the first Romanian officers arrived to serve at the HQ. In August 2009, Slovenia entered the MNC NE family. In January 2012, Croatia officially became the twelfth nation of the Corps. In July 2013, the flag of Hungary was hoisted in the Baltic Barracks.
Sweden, a non-NATO member, sent its representative to the Baltic Barracks in autumn 2014. In 2015 Turkish, British, French and Dutch officers started their tours of duty in Szczecin. Canada, Iceland, Belgium, Norway and Greece joined the Corps in 2016. In 2005, during the Compact Eagle exercise, the headquarters achieved full operational capability. From January to August 2007, a considerable number of personnel from Multinational Corps Northeast were put at the disposal of the International Security Assistance Force's headquarters in Kabul, Afghanistan. On 5 February 2015, a trilateral statement by the Corps Convention countries stated, in part, that: 'At the NATO Summit in September 2014 the Ministers of Defence from Germany, Poland and Denmark informed their colleagues and signed a statement that they had decided to raise the level of readiness of the Headquarters MNC NE from a Forces of Lower Readiness Headquarters to a High Readiness Force Headquarters and to enhance its capability to address future threats and challenges'. '...the level of readiness [of the corps will be raised] and fulfil a joint and regional role within the framework of NATO's Readiness Action Plan, for both Assurance and Adaptation Measures in order to exercise command and control in the full range of Alliance missions in the north-eastern region (Estonia, Latvia, Lithuania and Poland) of the Alliance with the emphasis on Article 5 operations including command and control over the Very High Readiness Joint Task Force (VJTF). Additionally, MNC NE will execute command and control over the NATO Force Integration Units (NFIUs) in Estonia, Latvia, Lithuania and Poland.' In June 2016, during the exercise Brilliant Capability 16, the Corps became operationally capable of assuming command of the Very High Readiness Joint Task Force, also referred to as the "spearhead force". Mission in Afghanistan The MNC NE staff was assigned to the International Security Assistance Force (ISAF) mission and officially took over its duties for the first time on 4 February 2007. Nearly 160 officers and non-commissioned officers spent over six months in Kabul. The majority of the MNC NE staff filled posts in a newly established composite ISAF Headquarters in Kabul. From February to August 2010, the personnel of the Corps participated in the ISAF mission for the second time. The majority of approximately 130 officers and non-commissioned officers filled posts at the ISAF Joint Command, a tactical-level headquarters. Serving in different branches, they gained valuable mission experience and improved their skills. The third deployment, involving more than 120 soldiers from the Corps and partnering formations, started in January 2014 and ended in January 2015. When the ISAF mandate expired, the Resolute Support Mission commenced in January 2015.
Mission: International Security Assistance Force, Afghanistan February - August 2007 February - August 2010 January 2014 - January 2015 Affiliated Forces HQ-Coy (Szczecin) Command Support Brigade (HQ Szczecin) 12th Infantry Division (Poland) (HQ Szczecin) 2nd Legion Mechanized Brigade in Złocieniec 7th Coastal Defense Brigade in Słupsk 12th Mechanised Brigade in Szczecin 5th Artillery Regiment in Sulechów 8th Air Defence Regiment in Koszalin 12th Command Battalion in Szczecin Commanders 1999-2001 – LTG Henrik Ekmann Deputy Corps Commander – MG Edward Pietrzyk (since 2000 BG Zdzisław Goral) Chief of Staff – BG Joachim Sachau 2001-2003 – LTG Zygmunt Sadowski Deputy Corps Commander – MG Rolf Schneider Chief of Staff – BG Karl Nielsen 2004-2006 – LTG Egon Ramms Deputy Corps Commander – MG Rolf Schneider (since 2004 MG Jan Andersen) Chief of Staff – BG Karl Nielsen (since 2004 BG Henryk Skarżyński) 2006-2009 – LTG Zdzisław Goral Deputy Corps Commander – MG Jan Andersen (since 2008 MG Ole Køppen) Chief of Staff – BG Josef Heinrichs (since 2008 BG Josef Heinrichs) 2009-2012 – LTG Rainer Korff Deputy Corps Commander – MG Ryszard Sorokosz (since 2012 MG Bogusław Samol) Chief of Staff – BG Morten Danielsson 2013-2015 – LTG Bogusław Samol (since December 2012) Deputy Corps Commander – BG Morten Danielsson (since May 2013 MG Agner Rokos) Chief of Staff – BG Lutz Niemann 2015-2018 – LTG Manfred Hofmann Deputy Corps Commander – BG Krzysztof Król (till January 2016 MG Agner Rokos) Chief of Staff – BG Per Orluff Knudsen (till January 2016 BG Lutz Niemann) 2018-2021 – LTG Sławomir Wojciechowski Deputy Corps Commander – MG Ulrich Hellebjerg Chief of Staff – BG Wolf-Jürgen Stahl 2021 - present – LTG Jürgen-Joachim von Sandrart Deputy Corps Commander – MG Ulrich Hellebjerg Chief of Staff – BG Wolf-Jürgen Stahl References External links Official website of Multinational Corps Northeast Multinational Corps Northeast on Facebook MNC NE on twitter MNC NE on flickr Official profile HQ MNC NE on YouTube Army corps of the Bundeswehr Corps of Poland Army units and formations of Denmark Military units and formations established in 1999 1999 establishments in Poland Multinational army units and formations
4444327
https://en.wikipedia.org/wiki/Pro/DESKTOP
Pro/DESKTOP
Pro/DESKTOP (commonly referred to as Pro/D and formerly known as DesignWave) is a discontinued computer-aided design (CAD) program from Parametric Technology Corporation (PTC) that allowed users to design and model in 3D and create 2D drawings. It can transfer a 3D design into a 2D engineering drawing format and also create photo-realistic views using Album Views. It is part-compatible with Pro/ENGINEER, and uses the Granite kernel, but otherwise is a freestanding CAD system. Initially written by David Taylor in Cambridge, England, the original software is registered to Cyril Slug of the fictitious company Bimorcad Ltd. Dialog box 356 shows a picture of the author, wearing a red and black checked jacket, sitting on a wall and drinking a cup of tea outside his house. The software was built using Microsoft Visual C++ (Microsoft Foundation Class 4.X) and also software modules from D-Cubed to implement geometric constraint solving as well as parametric sketching and other part features required for file support with Pro/ENGINEER. The Pro/DESKTOP interface was available in English, French, German, Chinese (simplified and traditional) and Japanese. Areas Drawing View This area allows the user to generate technical drawings from their design. The drawing view supports a number of industry standards including: AFNOR ANSI (Inch) ANSI (mm) BS308 DIN ISO JIS This view allows full linear, radial and angular dimensioning of designs and also supports part lists for assemblies of 2 or more components. Album Views This area allows the user to create photo-realistic views of their design. There is an array of different options available to alter this view. Options available are: A large library of different materials (sometimes called textures), with the option to add more, which can be applied to whole components or individual faces. A choice of five different lighting options (Default, Room Lighting, Day Lighting, Flood Lighting and Spot Lighting) A choice of eight different camera lenses; these digitally alter the view to give the illusion of the chosen camera lens. These are: Fisheye, 16 mm; Ultra Wideangle, 20 mm; Wideangle, 28 mm; Moderate Wideangle, 35 mm; Standard, 50 mm; Moderate Telephoto, 70 mm; Telephoto, 110 mm; Reflex, 1000000 mm System Requirements Computer Intel x86 or compatible (e.g. Pentium 166 MHz) microprocessor 64 MB of random-access memory Display 16-bit colour graphics capability (65,536 colours) Recommended but not required: Size 1024 x 768 pixels 70 Hz refresh rate non-interlaced 17" monitor for optimal use OpenGL hardware support recommended for optimal performance Hard Disk 40 MB (2001i), 80 MB (8.0) Variants There were multiple versions of the software release: 2001i 2001i Express 8.0 8.0 Express The Express versions did not support some features of the full software, such as the use of bitmaps for rendering images. Threads were represented as cosmetics rather than geometry on the part. This limited use with, say, STL exports, as the resultant exported surface in the data would be a smooth rod rather than a thread form. The 8.0 Express release also removed support for Visual Basic for Applications (VBA), which was available in the 2001i version. The resolution of STL exports was also limited in the Express versions, e.g. external diameters were limited to 0.1° resolution in the export dialog box. Licensing License keys were required for the Express variants; these were obtained from the PTC website without charge.
The returned keys (8 groups of 4 numeric digits) had expiry dates encoded to prevent use of the software after a certain date. The key obtained was based on data from the computer, which limited use of the key to the computer generating the license request. The key was based on the computer's hard disk Volume Serial Number (8 characters) and additional data about the computer (5 bytes). The number was passed to the web server obfuscated by an alphabetic-shift rearrangement technique in order to make decoding of the transferred data more difficult; e.g. a hard disk Volume ID of 0000-0000 would be encoded as petssssssssscs, where the s is equal to the 0. Educational licensing Pro/DESKTOP had particularly distinguished itself in the education market. From a start in the CAD/CAM in Schools Initiative in the UK, Pro/DESKTOP has been donated free of charge to over 14,000 secondary schools worldwide, and is used by nearly 4 million students each school year. Its main educational rival was AutoCAD, which is widely used in secondary schools to teach 2D drafting and technical drawing. The use of Pro/DESKTOP in schools for 3D design far exceeded that of AutoCAD, which is generally considered too technical for the education market. Its minimal system requirements further cemented the software as the program of choice for those just starting out in CAD. However, Pro/DESKTOP's user interface presents difficulties due to the lack of a multiple string undo function and the program's grouping of entities into work-planes that must have the same characteristics. Pro/DESKTOP is no longer available for download or activation, but PTC instead now offers up to 300 seats of Pro/ENGINEER Schools Edition to high schools free of charge. In 2012 PTC continued its support of schools by making its latest Creo 3D Modelling software available free to schools. Creo represents a major update of Pro/ENGINEER with ribbon-style menus, streamlined workflows and the addition of several major new features, including direct modelling, flexible modelling and freestyle subdivision surfacing tools. The use of a common data model supports seamless movement between the different modelling tools with no loss of data. PTC Schools Programme. See also Parametric Technology Corporation Creo Pro/ENGINEER Creo Elements/View References External links Pro/DESKTOP product page on the Parametric Technology Corporation website (archived on 11 December 2004) Computer-aided design software
63734590
https://en.wikipedia.org/wiki/IPadOS%2014
IPadOS 14
iPadOS 14 is the second major release of the iPadOS operating system developed by Apple for its iPad line of tablet computers. It was announced on June 22, 2020, at the company's Worldwide Developers Conference (WWDC) as the successor to iPadOS 13, making it the second version of the iPadOS fork from iOS. It was released to the public on September 16, 2020. It was succeeded by iPadOS 15 on September 20, 2021. History Updates The first developer beta of iPadOS 14 was released on June 22, 2020, and the first public beta was released on July 9, 2020. iPadOS 14 was officially released on September 16, 2020. There was no public beta testing of 14.1. Features Home screen Widgets To the left of the first page, the Today View now has redesigned widgets. Widgets may be added in small, medium, or large sizes, but they can no longer collapse or expand. Widgets of the same size may be stacked over each other and swiped between for convenience; a Smart Stack may be placed which automatically shows the most relevant widget to the user based on the time of day. Unlike in iOS 14, widgets cannot be placed directly onto the home screen in iPadOS 14; this was only allowed starting in iPadOS 15. Compact UI A series of changes were made in iPadOS 14 to reduce the visual space taken by previously full-screen interfaces; such interfaces now appear and hover in front of an app, allowing for touch (and therefore multitasking) on the app behind. Voice calling interfaces, including Phone and third-party apps such as Skype, are made substantially thinner, taking approximately as much space as a notification. Siri's interface is now also compact. Search and Siri Improvements to the Search feature on the home screen were made, including a refined UI, a quick launcher for apps, more detailed web search, shortcuts to in-app search, and improved as-you-type search suggestions. The search function now appears and functions more like the Spotlight Search feature of macOS. In addition to being made compact, Siri can now answer a broader set of questions and translate more languages. Users can also share their ETA with contacts and ask for cycling directions. Storage iPadOS 14 gains the ability to mount encrypted external drives. However, this capability is limited to APFS-encrypted drives. Upon connecting an APFS-encrypted external drive to the USB-C port on the iPad, the Files app will present the external drive in the sidebar. Selecting the drive will prompt the user to enter the password to unlock the drive. Supported devices All the devices that supported iPadOS 13 also support iPadOS 14. Devices include: iPad Air 2 iPad Air (3rd generation) iPad Air (4th generation) iPad (5th generation) iPad (6th generation) iPad (7th generation) iPad (8th generation) iPad Mini 4 iPad Mini (5th generation) iPad Pro (all models) References External links – official site – official developer site iOS Reference Library at the Apple Developer site IPadOS IPad Apple Inc. operating systems Mach (kernel) Mobile operating systems Products introduced in 2020 Tablet operating systems
156512
https://en.wikipedia.org/wiki/University%20of%20Liverpool
University of Liverpool
The University of Liverpool is a public research university based in the city of Liverpool, England. Founded as a college in 1881, it gained its Royal Charter in 1903 with the ability to award degrees and is also known to be one of the six 'red brick' civic universities, the first to be referred to as The Original Red Brick. It comprises three faculties organised into 35 departments and schools. It is a founding member of the Russell Group, the N8 Group for research collaboration and the university management school is triple crown accredited. Nine Nobel Prize winners are amongst its alumni and past faculty and the university offers more than 230 first degree courses across 103 subjects. Its alumni include the CEOs of GlobalFoundries, ARM Holdings, Tesco, Motorola and The Coca-Cola Company. It was the world's first university to establish departments in oceanography, civic design, architecture, and biochemistry at the Johnston Laboratories. In 2006 the university became the first in the UK to establish an independent university in China, Xi'an Jiaotong-Liverpool University, making it the world's first Sino-British university. For 2020–21, Liverpool had a turnover of £597.4 million, including £112.5 million from research grants and contracts. It has the seventh largest endowment of any university in England. Graduates of the university are styled with the post-nominal letters Lpool, to indicate the institution. History University College Liverpool The university was established in 1881 as University College Liverpool, admitting its first students in 1882. In 1884, it became part of the federal Victoria University. In 1894 Oliver Lodge, a professor at the university, made the world's first public radio transmission and two years later took the first surgical X-ray in the United Kingdom. The Liverpool University Press was founded in 1899, making it the third oldest university press in England. Students in this period were awarded external degrees by the University of London. University status Following a royal charter and act of Parliament in 1903, it became an independent university (the University of Liverpool) with the right to confer its own degrees. The next few years saw major developments at the university, including Sir Charles Sherrington's discovery of the synapse and William Blair-Bell's work on chemotherapy in the treatment of cancer. In the 1930s to 1940s Sir James Chadwick and Sir Joseph Rotblat made major contributions to the development of the atomic bomb. From 1943 to 1966 Allan Downie, Professor of Bacteriology, was involved in the eradication of smallpox. In 1994 the university was a founding member of the Russell Group, a collaboration of twenty leading research-intensive universities, as well as a founding member of the N8 Group in 2004. In the 21st century physicists, engineers and technicians from the University of Liverpool were involved in the construction of the Large Hadron Collider at CERN, working on two of the four detectors in the LHC. In 2004, Sylvan Learning, later known as Laureate International Universities, became the worldwide partner for University of Liverpool online. In 2019, it was announced that Kaplan Open Learning, part of Kaplan, Inc, would be the new partner for the University of Liverpool's online programmes. Laureate will continue providing some teaching provision for existing students until 2021. The university has produced ten Nobel Prize winners, from the fields of science, medicine, economics and peace. 
The Nobel laureates include the physician Sir Ronald Ross, physicist Charles Barkla, physicist Martin Lewis Perl, the physiologist Sir Charles Sherrington, physicist Sir James Chadwick, chemist Sir Robert Robinson, chemist Har Gobind Khorana, physiologist Rodney Porter, economist Ronald Coase and physicist Joseph Rotblat. Sir Ronald Ross was also the first British Nobel laureate in 1902. The university is also associated with Professors Ronald Finn and Sir Cyril Clarke, who jointly won the Lasker-DeBakey Clinical Medical Research Award in 1980, and Sir David Weatherall, who won the Lasker-Koshland Special Achievement Award in Medical Science in 2010. These Lasker Awards are popularly known as America's Nobels. Over the 2013/2014 academic year, members of staff took part in numerous strikes after staff were offered a pay rise of 1%, which unions equated to a 13% pay cut since 2008. The strikes were supported by both the university's Guild of Students and the National Union of Students. Some students at the university supported the strike, occupying buildings on campus. Campus and facilities The university is mainly based around a single urban campus approximately five minutes' walk from Liverpool City Centre, at the top of Brownlow Hill and Mount Pleasant. Occupying 100 acres, it contains 192 non-residential buildings that house 69 lecture theatres, 114 teaching areas and research facilities. The main site is divided into three faculties: Health and Life Sciences; Humanities and Social Sciences; and Science and Engineering. The Veterinary Teaching Hospital (Leahurst) and Ness Botanical Gardens are based on the Wirral Peninsula. There was formerly a marine biology research station at Port Erin on the Isle of Man until it closed in 2006. Fifty-one residential buildings, on or near the campus, provide 3,385 rooms for students, on a catered or self-catering basis. The centrepiece of the campus remains the university's original red brick building, the Victoria Building. Opened in 1892, it has recently been restored as the Victoria Gallery and Museum, complete with a cafe and activities for school visits. In 2011 the university made a commitment to invest £660m into the 'Student Experience', £250m of which will reportedly be spent on student accommodation. Announced so far have been two large on-campus halls of residence (the first of which, Vine Court, opened in September 2012), new Veterinary Science facilities, and a £10m refurbishment of the Liverpool Guild of Students. New Central Teaching Laboratories for physics, earth sciences, chemistry and archaeology were opened in autumn 2012. In 2013, the University of Liverpool opened a satellite campus in Finsbury Square in London, offering a range of professionally focussed master's programmes. Central Teaching Hub The Central Teaching Hub is a large multi-use building that houses a recently refurbished Lecture Theatre Block (LTB) and teaching facilities (Central Teaching Labs, CTL) for the Departments of Chemistry, Physics and Environmental Sciences, within the university's Central City Centre Campus. It was completed and officially opened in September 2012 with an estimated project cost of £23m. The main building, the 'Central Teaching Laboratory', is built around a large atrium and houses seven separate laboratories that can accommodate 1,600 students at a time. A flexible teaching space, computing centre, multi-departmental teaching spaces and communal work spaces can also be found inside.
The adjoining University Lecture Block building contains four lecture rooms and further social spaces. Sustainability In 2008 the University of Liverpool was voted joint seventeenth greenest university in Britain by the WWF-supported Green League. This represented an improvement after finishing 55th in the league table the previous year. The position of the university is determined by point allocation in areas such as transport, waste management, sustainable procurement and emissions, among other categories; these are then translated into various awards. Liverpool was awarded the highest achievement possible in Environmental policy, Environmental staff, Environmental audit, Fair trade status, Ethical investment policy and Waste recycled, while also scoring points in Carbon emissions, Water recycling and Energy source. Liverpool was the first among UK universities to develop its own desktop computer power management solution, which has been widely adopted by other institutions. The university has subsequently piloted other advanced software approaches, further increasing savings. The university has also been at the forefront of using the Condor HTC computing platform in a power-saving environment. This software, which makes use of unused computer time for computationally intensive tasks, usually results in computers being left turned on. The university has demonstrated an effective solution for this problem using a mixture of Wake-on-LAN and commercial power management software. Organisation and structure The university is ranked in the top 1% of universities worldwide according to the Academic Ranking of World Universities and has previously been ranked within the top 150 universities globally by the guide. It is also a founding member of the Russell Group and a founding member of the Northern Consortium. The university is a research-based university with 33,000 students pursuing over 450 programmes spanning 54 subject areas. It has a broad range of teaching and research in both arts and sciences, and the University of Liverpool School of Medicine, established in 1835, is today one of the largest medical schools in the UK. It also has strong links to the neighbouring Royal Liverpool University Hospital. In September 2008, Sir Howard Newby took up the post of Vice Chancellor of the university, following the retirement of Sir Drummond Bone. The university has a students' union to represent students' interests, known as the Liverpool Guild of Students. The university previously had a strategic partnership with Laureate International Universities, a for-profit college collective, for University of Liverpool online degrees. In 2019 the university announced a new partnership with Kaplan Open Learning for delivery of its online degrees. Senior leadership The figurehead of the university is the chancellor. The following have served in that role: 1996–2009: David Owen, Baron Owen 2010–2013: Sir David King 2017–present: Colm Tóibín The professional head of the university is the vice-chancellor. The following have served in that role: 1919–1926: John George Adami 1927–1936: Hector Hetherington 1936–1937: John Leofric Stocks 1986–1991: Graeme Davies 2002–2008: Sir Drummond Bone 2008–2014: Sir Howard Newby 2015–present: Dame Janet Beer Faculties Since 2009, teaching departments of the university have been divided into three faculties: Science and Engineering, Health and Life Sciences, and Humanities and Social Sciences.
Each faculty is headed by an Executive Pro-Vice-Chancellor, who is responsible for all schools in the faculty. Faculty of Health & Life Sciences School of Dentistry School of Health Sciences School of Life Sciences School of Medicine School of Psychology School of Veterinary Science Faculty of Humanities & Social Sciences School of the Arts School of Histories, Languages & Cultures School of Law & Social Justice Management School Faculty of Science & Engineering School of Engineering School of Physical Sciences School of Electrical Engineering, Electronics and Computer Science School of Environmental Sciences Academic profile Rankings and reputation In the Complete University Guide 2013, published in The Independent, the University of Liverpool was ranked 31st out of 124, based on nine measures, while The Times Good University Guide 2008 ranked Liverpool 34th out of 113 universities. The Sunday Times university guide recently ranked the University of Liverpool 27th out of 123. In 2010, The Sunday Times has ranked University of Liverpool 29th of 122 institutions nationwide. In 2008 the THE-QS World University Rankings rated University of Liverpool 99th best in the world, and 137th best worldwide in 2009. In 2011 the QS World University Rankings ranked the university in 123rd place, up 14. In the Times Good University Guide 2013, the University of Liverpool was ranked 29th. Liverpool is ranked 122nd in the world (and 15th in the UK) in the 2016 Round University Ranking. The 2018 U.S. News & World Report ranks Liverpool 129th in the world. In 2019, it ranked 178th among the universities around the world by SCImago Institutions Rankings. The Research Excellence Framework for 2014 has confirmed the University of Liverpool's reputation for internationally outstanding research. Chemistry, Computer Science, General Engineering, Archaeology, Agriculture, Veterinary & Food Science, Architecture, Clinical Medicine, and English, are ranked in the top 10 in the UK for research excellence rated as 4* (world-leading) or 3* (internationally excellent), and also performed particularly well in terms of the impact of their research. The Computer Science department was ranked 1st in UK for 4* and 3* research, with 97% of the research being rated as world-leading or internationally excellent – the highest proportion of any computer science department in the UK. The Chemistry department was also ranked 1st in the UK with 99% of its research rated as 4* world leading or 3* internationally excellent Admissions In terms of average UCAS points of entrants, Liverpool ranked 40th in Britain in 2014. The university gives offers of admission to 83.1% of its applicants, the 7th highest amongst the Russell Group. According to the 2017 Times and Sunday Times Good University Guide, approximately 12% of Liverpool's undergraduates come from independent schools. In the 2016–17 academic year, the university had a domicile breakdown of 72:3:25 of UK:EU:non-EU students respectively with a female to male ratio of 55:45. Xi'an Jiaotong-Liverpool University In 2006 the university became the first in the UK to establish an independent university in China, making it the world's first Sino-British university. Resulting from a partnership between the University of Liverpool and Xi'an Jiaotong University, Xi'an Jiaotong-Liverpool University is the first Sino-British university between research-led universities, exploring new educational models for China. 
The campus is situated in Suzhou Industrial Park in the eastern part of Suzhou in the province of Jiangsu, 90 km west of Shanghai. It is a science and engineering university with a second focus in English, recognised by the Chinese Ministry of Education as a "not for profit" educational institution. The university offers undergraduate degree programmes in the fields of Science, Engineering, and Management. Students are awarded a University of Liverpool degree as well as a degree from XJTLU. The teaching language is English. Student life University halls The university offers a wide selection of accommodation on campus as well as student villages off campus. As part of a £660 million investment in campus facilities and student experience, the university has built three new on-campus halls, while refurbishing existing accommodation. The accommodation currently offered by the university for the 2019/2020 academic year is listed below: On-campus Crown Place Philharmonic Court Vine Court Dover Court Tudor Close Melville Grove Off-campus Greenbank Student Village Derby & Rathbone Halls Roscoe & Dorothy Kuya Halls In 2018, the university faced strong criticism from the student body, led by the Cut the Rent campaign, that university-provided halls were too expensive. Privately owned accommodation Apollo Court ranked 3rd and Myrtle Court ranked 4th in the UK for value for money on the university review platform StudentCrowd. In 2021 "Gladstone Halls" was renamed after the prominent communist and anti-racist leader Dorothy Kuya. Sport The University of Liverpool has a proud sporting tradition and has many premier teams in a variety of sports. The current sporting project comes under the title of Sport Liverpool and offers over 50 different sports ranging from football, rugby, cricket and hockey to others such as windsurfing, lacrosse and cheerleading. Many of the sports have both male and female teams and most are involved in competition on a national scale. BUCS is the body which organises national university competitions involving 154 institutions in 47 sports. Most sports involve travelling to various locations across the country, mainly on Wednesday afternoons. Two other prominent competitions are the Christie Championships and the Varsity Cup. The Christie Cup is an inter-university competition between Liverpool, Leeds and Manchester. The Varsity Cup is a popular "derby" event between Liverpool John Moores University and the University of Liverpool. Notable alumni Nobel Prize winners There have been nine Nobel Prize laureates who were based at the university at a significant point in their careers. Sir Ronald Ross (awarded the Nobel Prize in Medicine in 1902) for his work with malaria. Charles Barkla (awarded the Nobel Prize in Physics in 1917) for discovering the electromagnetic properties of X-rays. Sir Charles Sherrington (awarded the Nobel Prize in Physiology/Medicine in 1932) for his research into neurons. Sir James Chadwick (awarded the Nobel Prize in Physics in 1935) for discovering neutrons. Sir Robert Robinson (awarded the Nobel Prize in Chemistry in 1947) for his research into anthocyanins and alkaloids. Har Gobind Khorana (awarded the Nobel Prize in Physiology/Medicine in 1968) for his work on the interpretation of the genetic code and its function in protein synthesis. Rodney Porter (awarded the Nobel Prize in Physiology/Medicine in 1972) for his discovery of the structure of antibodies.
Ronald Coase (awarded the Nobel Prize in Economics in 1991) for his discovery and clarification of the significance of transaction costs and property rights for the institutional structure and functioning of the economy. Joseph Rotblat (awarded the Nobel Peace Prize in 1995) for his efforts with nuclear disarmament. See also Liverpool Knowledge Quarter Liverpool School of Tropical Medicine Royal Liverpool University Hospital Liverpool University School of Architecture List of modern universities in Europe (1801–1945) Cayman Islands Law School Liverpool Life Sciences UTC Notes References Further reading Rigg, J. Anthony (1968) "A comparative history of the libraries of Manchester and Liverpool Universities up to 1903", in: Saunders, W. L., ed. University and Research Library Studies: some contributions from the University of Sheffield Post-graduate School of Librarianship and Information Science. Oxford: Pergamon Press, 1968 External links University of Liverpool University of Liverpool in London Liverpool Guild of Students' Educational institutions established in 1881 Russell Group 1881 establishments in England Universities UK
25686223
https://en.wikipedia.org/wiki/Symbian
Symbian
Symbian is a discontinued mobile operating system (OS) and computing platform designed for smartphones. Symbian was originally developed as a proprietary software OS for PDAs in 1998 by the Symbian Ltd. consortium. Symbian OS is a descendant of Psion's EPOC, and was released exclusively on ARM processors, although an unreleased x86 port existed. Symbian was used by many major mobile phone brands, such as Samsung, Motorola, Sony Ericsson, and above all Nokia. It was also prevalent in Japan, used by brands including Fujitsu, Sharp and Mitsubishi. A pioneer that established the smartphone industry, it was the most popular smartphone OS on a worldwide average until the end of 2010 (a time when smartphones were still in limited use), when it was overtaken by iOS and Android. It was notably less popular in North America. The Symbian OS platform is formed of two components: one being the microkernel-based operating system with its associated libraries, and the other being the user interface (as middleware), which provides the graphical shell atop the OS. The most prominent user interface was the S60 (formerly Series 60) platform built by Nokia, first released in 2002 and powering most Nokia Symbian devices. UIQ was a competing user interface mostly used by Motorola and Sony Ericsson that focused on pen-based devices, rather than the traditional keyboard-driven interface of S60. Another interface was the MOAP(S) platform from carrier NTT DoCoMo in the Japanese market. Applications of these different interfaces were not compatible with each other, despite each being built atop Symbian OS. Nokia became the largest shareholder of Symbian Ltd. in 2004 and purchased the entire company in 2008. The non-profit Symbian Foundation was then created to make a royalty-free successor to Symbian OS. In an effort to unify the platform, S60 became the Foundation's favoured interface and UIQ development stopped. The touchscreen-focused Symbian^1 (or S60 5th Edition) was created as a result in 2009. Symbian^2 (based on MOAP) was used by NTT DoCoMo, one of the members of the Foundation, for the Japanese market. Symbian^3 was released in 2010 as the successor to S60 5th Edition, by which time it became fully free software. The transition from a proprietary operating system to a free software project is believed to be one of the largest in history. Symbian^3 received the Anna and Belle updates in 2011. The Symbian Foundation disintegrated in late 2010 and Nokia took back control of the OS development. In February 2011, Nokia, by now the only remaining company still supporting Symbian outside Japan, announced that it would use Microsoft's Windows Phone 7 as its primary smartphone platform, while Symbian would be gradually wound down. Two months later, Nokia moved the OS to proprietary licensing, collaborating only with the Japanese OEMs, and later outsourced Symbian development to Accenture. Although support was promised until 2016, including two major planned updates, by 2012 Nokia had mostly abandoned development and most Symbian developers had already left Accenture, and in January 2014 Nokia stopped accepting new or changed Symbian software from developers. The Nokia 808 PureView in 2012 was officially the last Symbian smartphone from Nokia. NTT DoCoMo continued releasing devices in Japan running OPP(S) (Operator Pack Symbian, the successor of MOAP), which still acts as middleware on top of Symbian. Phones running this include 2014 models from Fujitsu and Sharp.
History Symbian originated from EPOC32, an operating system created by Psion in the 1990s. In June 1998, Psion Software became Symbian Ltd., a major joint venture between Psion and phone manufacturers Ericsson, Motorola, and Nokia. Afterwards, different software platforms were created for Symbian, backed by different groups of mobile phone manufacturers. They include S60 (Nokia, Samsung and LG), UIQ (Sony Ericsson and Motorola) and MOAP(S) (Japanese only such as Fujitsu, Sharp etc.). With no major competition in the smartphone OS then (Palm OS and Windows Mobile were comparatively small players), Symbian reached as high as 67% of the global smartphone market share in 2006. Despite its sizable market share then, Symbian was at various stages difficult to develop for: First (at around early-to-mid-2000s) due to the complexity of then the only native programming languages Open Programming Language (OPL) and Symbian C++, and of the OS; then the stubborn developer bureaucracy, along with high prices of various integrated development environments (IDEs) and software development kits (SDKs), which were prohibitive for independent or very small developers; and then the subsequent fragmentation, which was in part caused by infighting among and within manufacturers, each of which also had their own IDEs and SDKs. All of this discouraged third-party developers, and served to cause the native app ecosystem for Symbian not to evolve to a scale later reached by Apple's App Store or Android's Google Play. By contrast, iPhone OS (renamed iOS in 2010) and Android had comparatively simpler design, provided easier and much more centralized infrastructure to create and obtain third-party apps, offered certain developer tools and programming languages with a manageable level of complexity, and having abilities such as multitasking and graphics to meet future consumer demands. Although Symbian was difficult to program for, this issue could be worked around by creating Java Mobile Edition apps, ostensibly under a "write once, run anywhere" slogan. This wasn't always the case because of fragmentation due to different device screen sizes and differences in levels of Java ME support on various devices. In June 2008, Nokia announced the acquisition of Symbian Ltd., and a new independent non-profit organization called the Symbian Foundation was established. Symbian OS and its associated user interfaces S60, UIQ, and MOAP(S) were contributed by their owners Nokia, NTT DoCoMo, Sony Ericsson, and Symbian Ltd., to the foundation with the objective of creating the Symbian platform as a royalty-free, Free software, under the Free Software Foundation (FSF) and Open Source Initiative (OSI) approved Eclipse Public License (EPL). The platform was designated as the successor to Symbian OS, following the official launch of the Symbian Foundation in April 2009. The Symbian platform was officially made available as Free software in February 2010. Nokia became the major contributor to Symbian's code, since it then possessed the development resources for both the Symbian OS core and the user interface. Since then Nokia maintained its own code repository for the platform development, regularly releasing its development to the public repository. Symbian was intended to be developed by a community led by the Symbian Foundation, which was first announced in June 2008 and which officially launched in April 2009. Its objective was to publish the source code for the entire Symbian platform under the OSI and FSF approved EPL). 
The code was published under EPL on 4 February 2010; Symbian Foundation reported this event to be the largest codebase moved to Free software in history. However, some important components within Symbian OS were licensed from third parties, which prevented the foundation from publishing the full source under EPL immediately; instead much of the source was published under a more restrictive Symbian Foundation License (SFL) and access to the full source code was limited to member companies only, although membership was open to any organisation. Also, the Free software Qt framework was introduced to Symbian in 2010, as the primary upgrade path to MeeGo, which was to be the next mobile operating system to replace and supplant Symbian on high-end devices; Qt was by its nature free and very convenient to develop with. Several other frameworks were deployed to the platform, among them Standard C and C++, Python, Ruby, and Adobe Flash Lite. IDEs and SDKs were developed and then released for free, and application software (app) development for Symbian picked up. In November 2010, the Symbian Foundation announced that due to changes in global economic and market conditions (and also a lack of support from members such as Samsung and Sony Ericsson), it would transition to a licensing-only organisation; Nokia announced it would take over the stewardship of the Symbian platform. Symbian Foundation would remain the trademark holder and licensing entity and would only have non-executive directors involved. With market share sliding from 39% in Q32010 to 31% in Q42010, Symbian was losing ground to iOS and Android quickly, eventually falling behind Android in Q42010. Stephen Elop was appointed the CEO of Nokia in September 2010, and on 11 February 2011, he announced a partnership with Microsoft that would see Nokia adopt Windows Phone as its primary smartphone platform, and Symbian would be gradually phased out, together with MeeGo. As a consequence, Symbian's market share fell, and application developers for Symbian dropped out rapidly. Research in June 2011 indicated that over 39% of mobile developers using Symbian at the time of publication were planning to abandon the platform. By 5 April 2011, Nokia ceased to make free any portion of the Symbian software and reduced its collaboration to a small group of preselected partners in Japan. Source code released under the original EPL remains available in third party repositories, including a full set of all public code from the project as of 7 December 2010. On 22 June 2011, Nokia made an agreement with Accenture for an outsourcing program. Accenture will provide Symbian-based software development and support services to Nokia through 2016; about 2,800 Nokia employees became Accenture employees as of October 2011. The transfer was completed on 30 September 2011. Nokia terminated its support of software development and maintenance for Symbian with effect from 1 January 2014, thereafter refusing to publish new or changed Symbian applications or content in the Nokia Store and terminating its 'Symbian Signed' program for software certification. Features User interface Symbian has had a native graphics toolkit since its inception, known as AVKON (formerly known as Series 60). S60 was designed to be manipulated by a keyboard-like interface metaphor, such as the ~15-key augmented telephone keypad, or the mini-QWERTY keyboards. AVKON-based software is binary-compatible with Symbian versions up to and including Symbian^3. 
Symbian^3 includes the Qt framework, which is now the recommended user interface toolkit for new applications. Qt can also be installed on older Symbian devices. Symbian^4 was planned to introduce a new GUI library framework specifically designed for a touch-based interface, known as "UI Extensions for Mobile" or UIEMO (internal project name "Orbit"), which was built on top of Qt Widget; a preview was released in January 2010, but in October 2010 Nokia announced that Orbit/UIEMO had been cancelled. Nokia later recommended that developers use Qt Quick with QML, the new high-level declarative UI and scripting framework for creating visually rich touchscreen interfaces that allowed development for both Symbian and MeeGo; it would be delivered to existing Symbian^3 devices as a Qt update. As more applications gradually featured a user interface reworked in Qt, the legacy S60 framework (AVKON) was to be deprecated and eventually no longer included with new devices, thus breaking binary compatibility with older S60 applications.

Browser
Symbian^3 and earlier have a built-in WebKit-based browser. Symbian was the first mobile platform to make use of WebKit (in June 2005). Some older Symbian models have Opera Mobile as their default browser. Nokia released a new browser with the release of Symbian Anna, with improved speed and an improved user interface.

Multiple language support
Symbian had strong localization support, enabling manufacturers and third-party application developers to localize Symbian-based products to support global distribution. Nokia made languages available on devices in language packs: sets of languages covering those commonly spoken in the area where a device variant was to be sold. All language packs include English, or a locally relevant dialect of it. The last release, Symbian Belle, supports 48 languages, with regional dialects and scripts. Symbian Belle marks the introduction of Kazakh, while Korean is no longer supported. Japanese is only fully available on Symbian^2 devices, as they are made in Japan; on other Symbian devices Japanese is still supported with limitations.

Application development
From 2010, Symbian switched to using standard C++ with Qt as the main SDK, which can be used with either Qt Creator or Carbide.c++. Qt supports the older Symbian/S60 3rd (starting with Feature Pack 1, a.k.a. S60 3.1) and Symbian/S60 5th Edition (a.k.a. S60 5.01b) releases, as well as the new Symbian platform. It also supports Maemo and MeeGo, Windows, Linux and Mac OS X. Alternative application development can be done using Python (see Python for S60), Adobe Flash Lite or Java ME. Symbian OS previously used a Symbian-specific C++ dialect, along with the CodeWarrior and later Carbide.c++ integrated development environments (IDEs), as the native application development environment. Web Runtime (WRT) is a portable application framework that allows creating widgets on the S60 platform; it is an extension to the S60 WebKit-based browser that allows launching multiple browser instances as separate JavaScript applications.

Application development
Qt
As of 2010, the SDK for Symbian is standard C++, using Qt. It can be used with either Qt Creator or Carbide (the older IDE previously used for Symbian development). A phone simulator allows testing of Qt apps. Apps compiled for the simulator are compiled to native code for the development platform, rather than having to be emulated. Application development can either use C++ or QML.
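For illustration, a basic Qt application for a Symbian device can be written in a few lines of standard C++. The following is a minimal sketch against the Qt 4 API of the era; it is not taken from Nokia's SDK documentation, and the label text is invented for the example:

#include <QApplication>
#include <QLabel>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);              // one application/event-loop object per process
    QLabel label("Hello from Qt on Symbian");  // any QWidget can serve as a top-level window
    label.showFullScreen();                    // mobile Qt apps are typically shown full-screen
    return app.exec();                         // run the event loop until the application exits
}

In a typical workflow such a project would be built with qmake and packaged into a SIS file for installation on the phone, with Qt Creator or Carbide.c++ automating those steps.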
Symbian C++ As Symbian OS is written in C++ using Symbian Software's coding standards, it is possible to develop using Symbian C++, although it is not a standard implementation. Before the release of the Qt SDK, this was the standard development environment. There were multiple platforms based on Symbian OS that provided software development kits (SDKs) for application developers wishing to target Symbian OS devices, the main ones being UIQ and S60. Individual phone products, or families, often had SDKs or SDK extensions downloadable from the maker's website too. The SDKs contain documentation, the header files and library files needed to build Symbian OS software, and a Windows-based emulator ("WINS"). Up until Symbian OS version 8, the SDKs also included a version of the GNU Compiler Collection (GCC) compiler (a cross-compiler) needed to build software to work on the device. Symbian OS 9 and the Symbian platform use a new application binary interface (ABI) and needed a different compiler. A choice of compilers is available including a newer version of GCC (see external links below). Unfortunately, Symbian C++ programming has a steep learning curve, as Symbian C++ requires the use of special techniques such as descriptors, active objects and the cleanup stack. This can make even relatively simple programs initially harder to implement than in other environments. It is possible that the techniques, developed for the much more restricted mobile hardware and compilers of the 1990s, caused extra complexity in source code because programmers are required to concentrate on low-level details instead of more application-specific features. As of 2010, these issues are no longer the case when using standard C++, with the Qt SDK. Symbian C++ programming is commonly done with an integrated development environment (IDE). For earlier versions of Symbian OS, the commercial IDE CodeWarrior for Symbian OS was favoured. The CodeWarrior tools were replaced during 2006 by Carbide.c++, an Eclipse-based IDE developed by Nokia. Carbide.c++ is offered in four different versions: Express, Developer, Professional, and OEM, with increasing levels of capability. Fully featured software can be created and released with the Express edition, which is free. Features such as UI design, crash debugging etc. are available in the other, charged-for, editions. Microsoft Visual Studio 2003 and 2005 are also supported via the Carbide.vs plugin. Other languages Symbian devices can also be programmed using Python, Java ME, Flash Lite, Ruby, .NET, Web Runtime (WRT) Widgets and Standard C/C++. Visual Basic programmers can use NS Basic to develop apps for S60 3rd Edition and UIQ 3 devices. In the past, Visual Basic, Visual Basic .NET, and C# development for Symbian were possible through AppForge Crossfire, a plugin for Microsoft Visual Studio. On 13 March 2007 AppForge ceased operations; Oracle purchased the intellectual property, but announced that they did not plan to sell or provide support for former AppForge products. Net60, a .NET compact framework for Symbian, which is developed by redFIVElabs, is sold as a commercial product. With Net60, VB.NET, and C# (and other) source code is compiled into an intermediate language (IL) which is executed within the Symbian OS using a just-in-time compiler. 
(As of 18 January 2010, RedFiveLabs has ceased development of Net60 with this announcement on their landing page: "At this stage we are pursuing some options to sell the IP so that Net60 may continue to have a future.") There is also a version of a Borland IDE for Symbian OS. Symbian development is also possible on Linux and macOS using tools and methods developed by the community, partly enabled by Symbian releasing the source code for key tools. A plugin that allows development of Symbian OS applications in Apple's Xcode IDE for Mac OS X was available. Java ME applications for Symbian OS are developed using standard techniques and tools such as the Sun Java Wireless Toolkit (formerly the J2ME Wireless Toolkit). They are packaged as JAR (and possibly JAD) files. Both CLDC and CDC applications can be created with NetBeans. Other tools include SuperWaba, which can be used to build Symbian 7.0 and 7.0s programs using Java. Nokia S60 phones can also run Python scripts when the interpreter Python for S60 is installed, with a custom made API that allows for Bluetooth support and such. There is also an interactive console to allow the user to write Python scripts directly from the phone. Deployment Once developed, Symbian applications need to find a route to customers' mobile phones. They are packaged in SIS files which may be installed over-the-air, via PC connect, Bluetooth or on a memory card. An alternative is to partner with a phone manufacturer and have the software included on the phone itself. Applications must be Symbian Signed for Symbian OS 9.x to make use of certain capabilities (system capabilities, restricted capabilities and device manufacturer capabilities). Applications could be signed for free in 2010. Architecture Technology domains and packages Symbian's design is subdivided into technology domains, each of which comprises a set of software packages. Each technology domain has its own roadmap, and the Symbian Foundation has a team of technology managers who manage these technology domain roadmaps. Every package is allocated to exactly one technology domain, based on the general functional area to which the package contributes and by which it may be influenced. By grouping related packages by themes, the Symbian Foundation hopes to encourage a strong community to form around them and to generate discussion and review. The Symbian System Model illustrates the scope of each of the technology domains across the platform packages. Packages are owned and maintained by a package owner, a named individual from an organization member of the Symbian Foundation, who accepts code contributions from the wider Symbian community and is responsible for package. Symbian kernel The Symbian kernel (EKA2) supports sufficiently fast real-time response to build a single-core phone around it – that is, a phone in which a single processor core executes both the user applications and the signalling stack. The real-time kernel has a microkernel architecture containing only the minimum, most basic primitives and functionality, for maximum robustness, availability and responsiveness. It has been termed a nanokernel, because it needs an extended kernel to implement any other abstractions. It contains a scheduler, memory management and device drivers, with networking, telephony, and file system support services in the OS Services Layer or the Base Services Layer. The inclusion of device drivers means the kernel is not a true microkernel. 
Design
Symbian features pre-emptive multitasking and memory protection, like other operating systems (especially those created for use on desktop computers). EPOC's approach to multitasking was inspired by VMS and is based on asynchronous server-based events.

Symbian OS was created with three systems design principles in mind:
the integrity and security of user data is paramount
user time must not be wasted
all resources are scarce

To best follow these principles, Symbian uses a microkernel, has a request-and-callback approach to services, and maintains separation between user interface and engine. The OS is optimised for low-power battery-based devices and for read-only memory (ROM)-based systems (e.g. features like XIP and re-entrancy in shared libraries). The OS and application software follow an object-oriented design based on the model–view–controller (MVC) pattern. Later OS iterations diluted this approach in response to market demands, notably with the introduction of a real-time kernel and a platform security model in versions 8 and 9.

There is a strong emphasis on conserving resources, which is exemplified by Symbian-specific programming idioms such as descriptors and a cleanup stack (a brief illustrative sketch of these idioms appears at the end of this section). Similar methods exist to conserve storage space. Further, all Symbian programming is event-based, and the central processing unit (CPU) is switched into a low-power mode when applications are not directly dealing with an event. This is done via a programming idiom called active objects. Similarly, the Symbian approach to threads and processes is driven by reducing overheads.

Operating system
The All over Model contains the following layers, from top to bottom:
UI Framework Layer
Application Services Layer
 Java ME
OS Services Layer
 generic OS services
 communications services
 multimedia and graphics services
 connectivity services
Base Services Layer
Kernel Services & Hardware Interface Layer

The Base Services Layer is the lowest level reachable by user-side operations; it includes the File Server and User Library, a Plug-In Framework which manages all plug-ins, Store, Central Repository, DBMS and cryptographic services. It also includes the Text Window Server and the Text Shell: the two basic services from which a completely functional port can be created without the need for any higher-layer services.

Symbian has a microkernel architecture, which means that only the minimum necessary is within the kernel, to maximise robustness, availability and responsiveness. It contains a scheduler, memory management and device drivers, but other services like networking, telephony and file system support are placed in the OS Services Layer or the Base Services Layer. The inclusion of device drivers means the kernel is not a true microkernel. The EKA2 real-time kernel, which has been termed a nanokernel, contains only the most basic primitives and requires an extended kernel to implement any other abstractions.

Symbian is designed to emphasise compatibility with other devices, especially removable media file systems. Early development of EPOC led to adopting File Allocation Table (FAT) as the internal file system, and this remains, but an object-oriented persistence model was placed over the underlying FAT to provide a POSIX-style interface and a streaming model. The internal data formats rely on using the same APIs that create the data to run all file manipulations. This has resulted in data-dependence and associated difficulties with changes and data migration.
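As a brief illustration of the descriptor and cleanup-stack idioms mentioned under Design above, the following hedged sketch shows a "leaving" function in Symbian C++. The function name and string are invented for the example; the classes and macros used (_LIT, HBufC, CleanupStack) belong to the standard Symbian C++ API:

#include <e32base.h>   // CleanupStack and base classes
#include <e32std.h>    // descriptors and the _LIT macro

// The trailing 'L' marks a function that may "leave" (Symbian's lightweight exception).
void BuildGreetingL()
    {
    _LIT(KGreeting, "Hello");           // compile-time literal descriptor
    HBufC* buf = HBufC::NewLC(32);      // heap descriptor; NewLC also pushes it onto the cleanup stack
    *buf = KGreeting;                   // copy the literal into the heap buffer
    // ... code that may leave; if it does, the cleanup stack deletes buf automatically ...
    CleanupStack::PopAndDestroy(buf);   // normal exit: pop and delete in one call
    }

In a complete program such a function would be invoked under a TRAP harness or from an active object's RunL() method, so that a leave is handled by the framework rather than terminating the thread; active objects are also what let a single-threaded Symbian application wait on many asynchronous events while the CPU idles.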
There is a large networking and communication subsystem, which has three main servers: ETEL (EPOC telephony), ESOCK (EPOC sockets) and C32 (responsible for serial communication). Each of these has a plug-in scheme. For example, ESOCK allows different ".PRT" protocol modules to implement various networking protocol schemes. The subsystem also contains code that supports short-range communication links, such as Bluetooth, IrDA and USB.

There is also a large volume of user interface (UI) code. Only the base classes and substructure were contained in Symbian OS, while most of the actual user interfaces were maintained by third parties. This is no longer the case. The three major UIs – S60, UIQ and MOAP – were contributed to Symbian in 2009. Symbian also contains graphics, text layout and font rendering libraries.

All native Symbian C++ applications are built up from three framework classes defined by the application architecture: an application class, a document class and an application user interface class. These classes create the fundamental application behaviour. The remaining needed functions, the application view, data model and data interface, are created independently and interact solely through their APIs with the other classes.

Many other things do not yet fit into this model – for example, SyncML; Java ME, which provides another set of APIs on top of most of the OS; and multimedia. Many of these are frameworks, and vendors are expected to supply plug-ins to these frameworks from third parties (for example, Helix Player for multimedia codecs). This has the advantage that the APIs to such areas of functionality are the same on many phone models, and that vendors get a lot of flexibility. But it means that phone vendors needed to do a great deal of integration work to make a Symbian OS phone.

Symbian includes a reference user interface called "TechView". It provides a basis for starting customisation and is the environment in which much Symbian test and example code runs. It is very similar to the user interface of the Psion Series 5 personal organiser and is not used for any production phone user interface.

Symbian UI variants, platforms
Symbian, as it advanced to OS version 7.0, spun off into several different graphical user interfaces, each backed by a certain company or group of companies. Unlike Android's largely cosmetic GUI skins, Symbian GUIs are referred to as "platforms" because of the more significant modifications and integration involved. Matters were further complicated because applications developed for different Symbian GUI platforms were not compatible with each other, which led to OS fragmentation. User interface platforms that run on or are based on Symbian OS include:

S60 (later renamed Symbian), also called Series 60. It was backed mainly by Nokia. There are several editions of this platform, appearing first as S60 (1st Edition) on the Nokia 7650. It was followed by S60 2nd Edition (e.g. Nokia N70), S60 3rd Edition (e.g. Nokia N73) and S60 5th Edition (which introduced a touch UI, e.g. Nokia N97). The name S60 was changed to just Symbian after the formation of the Symbian Foundation, and subsequent releases were called Symbian^1, 2 and 3.

Series 80, used by Nokia Communicators such as the Nokia 9300i.

Series 90, touch and button based. The only phone using this platform is the Nokia 7710.

UIQ, backed mainly by Sony Ericsson and then Motorola. It is compatible with both button and touch/stylus based inputs. The last major release was UIQ 3.1 in 2008, on the Sony Ericsson G900. It was discontinued after the formation of the Symbian Foundation, when the decision to consolidate the different Symbian UI variants into one led to the adoption of S60 as the version going forward.

MOAP (Mobile Oriented Applications Platform) [Japan only], used by Fujitsu-, Mitsubishi-, Sony Ericsson- and Sharp-developed phones for NTT DoCoMo. It uses an interface developed specifically for DoCoMo's FOMA ("Freedom of Mobile Access") network brand and is based on the UI from earlier Fujitsu FOMA models. The user cannot install new C++ applications.

OPP [Japan only], the successor of MOAP, used on NTT DoCoMo's FOMA phones.

Version comparison
Software update service for Nokia Belle and Symbian (S60) phones was discontinued at the end of December 2015.

Market share and competition
In Q1 2004, 2.4 million Symbian phones were shipped, double the number shipped in Q1 2003. Symbian Ltd. was particularly impressed by progress made in Japan. 3.7 million devices were shipped in Q3 2004, a growth of 201% compared to Q3 2003, with market share growing from 30.5% to 50.2%. However, in the United States it was much less popular, with a 6% market share in Q3 2004, well behind Palm OS (43%) and Windows Mobile (25%). This has been attributed to North American customers preferring wireless PDAs over smartphones, as well as Nokia's low popularity there. On 16 November 2006, the 100 millionth smartphone running the OS was shipped. As of 21 July 2009, more than 250 million devices running Symbian OS had been produced. In 2006, Symbian had 73% of the smartphone market, compared with 22.1% of the market in the second quarter of 2011. By the end of May 2006, 10 million Symbian-powered phones had been sold in Japan, representing 11% of Symbian's total worldwide shipments of 89 million. By November 2007 the figure was 30 million, achieving a market share of 65% by June 2007 in the Japanese market.

Symbian lost market share over the years as the market grew dramatically and new competing platforms entered it, though its unit sales increased during the same timeframe. For example, although Symbian's share of the global smartphone market dropped from 52.4% in 2008 to 47.2% in 2009, shipments of Symbian devices grew 4.8%, from 74.9 million units to 78.5 million units. From Q2 2009 to Q2 2010, shipments of Symbian devices grew 41.5%, by 8.0 million units, from 19,178,910 to 27,129,340 units; compared to an increase of 9.6 million units for Android, 3.3 million units for RIM, and 3.2 million units for Apple. Prior reports on device shipments, published in February 2010, showed that Symbian devices formed a 47.2% share of the smart mobile devices shipped in 2009, with RIM having 20.8%, Apple having 15.1% (via iOS), Microsoft having 8.8% (via Windows CE and Windows Mobile) and Android having 4.7%. By number of "smart mobile device" sales, Symbian devices were the market leader for 2010. Statistics showed that Symbian devices formed a 37.6% share of smart mobile devices sold, with Android having 22.7%, RIM having 16%, and Apple having 15.7% (via iOS). Some estimates indicate that the number of mobile devices shipped with the Symbian OS up to the end of Q2 2010 was 385 million. Over the course of 2009–10, Motorola, Samsung, LG, and Sony Ericsson announced their withdrawal from Symbian in favour of alternative platforms, including Google's Android and Microsoft's Windows Phone. In Q2 2012, according to IDC, Symbian's worldwide market share dropped to an all-time low of 4.4%.
Criticism
Users of Symbian in countries with non-Latin alphabets (such as Russia and Ukraine) criticized the complicated method of language switching for many years. For example, if a user wants to type a Latin letter, they must open the menu, select the languages item, use the arrow keys to choose, for example, English from among many other languages, and then press the 'OK' button. After typing the Latin letter, the user must repeat the procedure to return to their native keyboard. This method slows down typing significantly. On touch-phones and QWERTY phones the procedure is slightly different but remains time-consuming. All other mobile operating systems, as well as Nokia's S40 phones, enable switching between two initially selected languages with one click or a single gesture.

Early versions of the firmware for the original Nokia N97, running Symbian^1/Series 60 5th Edition, were heavily criticized as buggy, a problem compounded by the small amount of RAM installed in the phone. In November 2010, the smartphone blog All About Symbian criticized the performance of Symbian's default web browser and recommended the alternative browser Opera Mobile. Nokia's Senior Vice President Jo Harlow promised an updated browser in the first quarter of 2011.

There are many different versions and editions of Symbian, which led to fragmentation; apps and software may be incompatible when installed across different versions of Symbian.

Malware
Symbian OS is subject to a variety of viruses, the best known of which is Cabir. Usually these send themselves from phone to phone by Bluetooth. So far, none have exploited any flaws in Symbian OS. Instead, they have all asked the user whether they want to install the software, with somewhat prominent warnings that it cannot be trusted, although some rely on social engineering, often in the form of messages that come with the malware: rogue software purporting to be a utility, game, or some other application for Symbian.

However, with a view that the average mobile phone user should not have to worry about security, Symbian OS 9.x adopted a Unix-style capability model (permissions per process, not per object). Installed software is theoretically unable to do damaging things (such as costing the user money by sending network data) without being digitally signed – thus making it traceable. Commercial developers who can afford the cost can apply to have their software signed via the Symbian Signed program. Developers also have the option of self-signing their programs. However, the set of available features does not include access to Bluetooth, IrDA, GSM CellID, voice calls, GPS and a few others. Some operators opted to disable all certificates other than the Symbian Signed certificates.

Some other hostile programs are listed below, but all of them still require input from the user to run.

Drever.A is a malicious SIS file trojan that attempts to disable the automatic startup of the SimWorks and Kaspersky Symbian anti-virus applications.

Locknut.B is a malicious SIS file trojan that pretends to be a patch for Symbian S60 mobile phones. When installed, it drops a binary that crashes a critical system service component, preventing any application from being launched on the phone.

Mabir.A is basically Cabir with added MMS functionality. The two are written by the same author, and the code shares many similarities. It spreads using Bluetooth via the same routine as early variants of Cabir. When Mabir.A activates, it searches for the first phone it finds and starts sending copies of itself to that phone.

Fontal.A is an SIS file trojan that installs a corrupted file which causes the phone to fail at reboot. If the user tries to reboot the infected phone, it will be permanently stuck on the reboot screen and cannot be used without disinfection – that is, the use of the reformat key combination, which causes the phone to lose all data. Being a trojan, Fontal cannot spread by itself – the most likely way for the user to become infected is to acquire the file from an untrusted source and then install it on the phone, inadvertently or otherwise.

A new type of threat to Symbian OS, in the form of 'cooked firmware', was demonstrated at the International Malware Conference, Malcon, in December 2010 by Indian hacker Atul Alex.

Bypassing platform security
Symbian OS 9.x devices can be hacked to remove the platform security introduced in OS 9.1 onwards, allowing users to execute unsigned code. This allows system files to be altered and gives access to previously locked areas of the OS. The hack was criticised by Nokia for potentially increasing the threat posed by mobile viruses, as unsigned code can be executed.

Version history
List of devices
See also
General
Bada
Nokia Ovi suite
Nokia PC Suite, software package used to establish an interface between Nokia mobile devices and computers running the Microsoft Windows operating system; not limited to Symbian
Nokia Software Updater
Ovi store, Nokia's application store on the Internet, not limited to Symbian
Development-related
Accredited Symbian Developer
Carbide.c++, alternative application and OS development IDE
Cleanup stack
P.I.P.S. Is POSIX on Symbian
Python for S60, alternative application development language
Qt, preferred development tool, both for the OS and applications, not limited to Symbian
Qt Creator IDE
Qt Quick
QML, JavaScript based language
MBM (file format)
References
Bibliography
External links
Symbian foundation blog (which the homepage redirects to)
Symbian on Ohloh
Symbian^3 EPL source
Most complete Symbian Open Source archive
wildducks – Beagleboard port of Symbian S^3
Symaptic – C-Make build system
Symbian Mercurial Repository (Windows platform)
Discontinued operating systems Discontinued software Accenture ARM operating systems Embedded operating systems History of software Microkernel-based operating systems Mobile operating systems Nokia platforms Real-time operating systems Smartphones Microkernels
53525
https://en.wikipedia.org/wiki/GameCube
GameCube
The is a home video game console developed and released by Nintendo in Japan on September 14, 2001, in North America on November 18, 2001, and in PAL territories in 2002. It is the successor to the Nintendo 64, which released in 1996, and predecessor of the Wii, which released in 2006. As Nintendo's entry in the sixth generation of video game consoles, the GameCube competed with Sony's PlayStation 2 and Microsoft's original Xbox. Its earliest development began with the 1997 formation of ArtX, a computer graphics company later acquired by ATI, which would go on to produce the console's GPUs. Nintendo publicly announced the console under the code name "Project Dolphin" in a May 1999 press conference. Upon its release in 2001, the GameCube became Nintendo's first console to use optical discs, specifically a miniDVD-based format, as its primary storage medium instead of ROM cartridges. Unlike its competitors, the system is solely focused on gaming and does not support DVD, CDs, or other optical media. The console supports limited online gaming for a small number of games via a GameCube broadband or modem adapter and can connect to a Game Boy Advance with a link cable, which allows players to access exclusive in-game features using the handheld as a second screen and controller. The GameCube supports cards for unlocking special features in a few games. Saved game data can be stored exclusively on memory cards due to the read-only optical disc format. The Game Boy Player add-on runs Game Boy, Game Boy Color, and Game Boy Advance cartridge games. Reception of the GameCube was mixed. It was praised for its controller, extensive software library, and high-quality games, but was criticized for its exterior design and lack of multimedia features. Nintendo sold 21.74 million GameCube units worldwide, much less than anticipated, and discontinued it in 2007. Its successor, the Wii, launched in November 2006 and features full backward compatibility with GameCube games, storage, and controllers. History Background In 1997, a graphics hardware design company called ArtX was launched, staffed by twenty engineers who had previously worked at SGI on the design of the Nintendo 64's graphics hardware. The team was led by Dr. Wei Yen, who had been SGI's head of Nintendo Operations, the department responsible for the Nintendo 64's fundamental architectural design. Development Partnering with Nintendo in 1998, ArtX began the complete design of the system logic and of the graphics processor (codenamed "Flipper") of Nintendo's sixth-generation video game console. The console project had a succession of codenames: N2000, Star Cube, and Nintendo Advance. At Nintendo's press conference in May 1999, the console was first publicly announced as "Project Dolphin", the successor to the Nintendo 64. Subsequently, Nintendo began providing development kits to game developers such as Rare and Retro Studios. Nintendo also formed a strategic partnership with IBM, who created the Dolphin's CPU, named "Gekko". ArtX was acquired by ATI in April 2000, whereupon the Flipper graphics processor design had already been mostly completed by ArtX and was not overtly influenced by ATI. In total, ArtX team cofounder Greg Buchner recalled that their portion of the console's hardware design timeline had arced from inception in 1998 to completion in 2000. Of ATI's acquisition of ArtX, an ATI spokesperson said, "ATI now becomes a major supplier to the game console market via Nintendo. 
The Dolphin platform is reputed to be king of the hill in terms of graphics and video performance with 128-bit architecture." The console was announced as the GameCube at a press conference in Japan on August 25, 2000, abbreviated as "NGC" in Japan and "GCN" in North America. Nintendo unveiled its software lineup for the sixth-generation console at E3 2001, focusing on fifteen launch games, including Luigi's Mansion and Star Wars Rogue Squadron II: Rogue Leader. Several games originally scheduled to launch with the console were delayed. It is also the first Nintendo home console since the Famicom not to be accompanied by a Super Mario platform game at launch.

Long before the console's launch, Nintendo had developed and patented an early prototype of motion controls for the GameCube, with which developer Factor 5 had experimented for its launch games. An interview quoted Greg Thomas, Sega of America's VP of Development, as saying, "What does worry me is Dolphin's sensory controllers [which are rumored to include microphones and headphone jacks] because there's an example of someone thinking about something different." These motion control concepts would not be deployed to consumers for several years, until the Wii Remote.

Prior to the GameCube's release, Nintendo focused resources on the launch of the Game Boy Advance, a handheld game console and successor to the original Game Boy and Game Boy Color. As a result, several games originally destined for the Nintendo 64 console were postponed in favor of becoming early releases on the GameCube. The last first-party game in 2001 for the Nintendo 64 was released in May, a month before the Game Boy Advance's launch and six months before the GameCube's, emphasizing the company's shift in resources. Concurrently, Nintendo was developing software for the GameCube which would provide future connectivity between it and the Game Boy Advance. Certain games, such as The Legend of Zelda: Four Swords Adventures and Final Fantasy Crystal Chronicles, can use the handheld as a secondary screen and controller when connected to the console via a link cable.

Nintendo began its marketing campaign with the catchphrase "The Nintendo Difference" at its E3 2001 reveal. The goal was to distinguish itself from the competition as an entertainment company. Later advertisements used the slogan "Born to Play", and game ads featured a rotating cube animation that morphed into a GameCube logo and ended with a voice whispering, "GameCube". On May 21, 2001, the console's launch price of US$199 was announced, lower than that of the PlayStation 2 and Xbox.

In September 2020, leaked documents included Nintendo's plans for a GameCube model that would be both portable with a built-in display and dockable to a TV, similar to its later console, the Nintendo Switch. Other leaks suggested plans for a GameCube successor, codenamed "Tako", with HD graphics and slots for SD and memory cards, apparently resulting from a partnership with ATI (now AMD) and scheduled for release in 2005.

Release
The GameCube was launched in Japan on September 14, 2001. Approximately 500,000 units were shipped to retailers in time for launch. The console was scheduled to launch two months later in North America on November 5, 2001, but the date was pushed back in an effort to increase the number of available units. The console eventually launched in North America on November 18, 2001, with over 700,000 units shipped to the region. Other regions followed in 2002, beginning with Europe in the second quarter.
On April 22, 2002, veteran third-party Nintendo console developer Factor 5 announced its 3D audio software development kit titled MusyX. In collaboration with Dolby Laboratories, MusyX provides motion-based surround sound encoded as Dolby Pro Logic II.

The Triforce arcade board is a joint development between Nintendo, Namco, and Sega, based on the GameCube's design. Its games include Mario Kart Arcade GP and F-Zero AX.

Discontinuation
In February 2007, Nintendo announced that it had ceased first-party support for the GameCube and that the console had been discontinued, as it was shifting its manufacturing and development efforts towards the Wii and Nintendo DS.

Hardware
Howard Cheng, technical director of Nintendo technology development, said the company's goal was to select a "simple RISC architecture" to help speed the development of games by making it easier on software developers. IGN reported that the system was "designed from the get-go to attract third-party developers by offering more power at a cheaper price. Nintendo's design doc for the console specifies that cost is of utmost importance, followed by space." Hardware partner ArtX's Vice President Greg Buchner stated that their guiding thought on the console's hardware design was to target the developers rather than the players, and to "look into a crystal ball" and discern "what's going to allow the Miyamoto-sans of the world to develop the best games".

Initiating the GameCube's design in 1998, Nintendo partnered with ArtX (which was acquired by ATI Technologies during development) for the system logic and the GPU, and with IBM for the CPU. IBM designed a PowerPC-based processor with custom architectural extensions for the next-generation console, known as Gekko, which runs at 486 MHz and features a floating point unit (FPU) capable of a total throughput of 1.9 GFLOPS and a peak of 10.5 GFLOPS. Described as "an extension of the IBM PowerPC architecture", the Gekko CPU is based on the PowerPC 750CXe with IBM's 0.18 μm CMOS technology, which features copper interconnects. Codenamed "Flipper", the GPU runs at 162 MHz and, in addition to graphics, manages other tasks through its audio and input/output (I/O) processors.

The GameCube introduced a proprietary miniDVD optical disc format as the storage medium for the console, capable of storing up to 1.5 GB of data. The technology was designed by Matsushita Electric Industrial (now Panasonic Corporation) and utilizes a proprietary copy-protection scheme, different from the Content Scramble System (CSS) found in standard DVDs, to prevent unauthorized reproduction. The Famicom Data Recorder, Famicom Disk System, SNES-CD, and 64DD represent explorations of complementary storage technologies, but the GameCube is Nintendo's first console not to use cartridges as its primary media. The GameCube's 1.5 GB mini-discs have sufficient room for most games, although a few games require an extra disc, higher video compression, or removal of content present in versions on other consoles. By comparison, the PlayStation 2 and Xbox, also sixth-generation consoles, both use CDs and DVDs with sizes of up to 8.5 GB.

Like the Nintendo 64 before it, the GameCube was produced in several different color motifs. The system launched in "Indigo", the primary color shown in advertising and on the logo, and in "Jet Black". A year later, Nintendo released a "Platinum" limited-edition GameCube, which uses a silver color scheme for both the console and controller.
A "Spice" orange-colored console was eventually released as well, though only in Japan; the color scheme could be found on controllers released in other countries.

Nintendo developed stereoscopic 3D technology for the GameCube, and one launch game, Luigi's Mansion, supports it. However, the feature was never enabled outside of development. 3D televisions were not widespread at the time, and it was deemed that compatible displays and crystals for the add-on accessories would be cost-prohibitive for the consumer. Another unofficial feature is a pair of audio Easter eggs that can be invoked when the console is turned on. When the power is activated with the "Z" button on the Player 1 controller held down, a more whimsical startup sound is heard in place of the standard one. With four controllers connected, holding down the "Z" button on all four simultaneously produces a kabuki-style tune at startup.

Storage
The GameCube features two memory card ports for saving game data. Nintendo released three memory card options: Memory Card 59 in gray (512 KB), Memory Card 251 in black (2 MB), and Memory Card 1019 in white (8 MB). These are often advertised in megabits instead: 4 Mb, 16 Mb, and 64 Mb, respectively. A few games have compatibility issues with the Memory Card 1019, and at least two games have save problems with memory cards of any size. Memory cards with larger capacities were released by third-party manufacturers.

Controller
Nintendo learned from its experiences, both positive and negative, with the Nintendo 64's three-handled controller design and went with a two-handled, "handlebar" design for the GameCube. The shape was made popular by Sony's PlayStation controller released in 1994 and its follow-up DualShock series of gamepads introduced in 1997. In addition to vibration feedback, the DualShock series was well known for having two analog sticks to improve the 3D experience in games. Nintendo and Microsoft designed similar features in the controllers for their sixth-generation consoles, but instead of having the analog sticks parallel to each other, they chose to stagger them by swapping the positions of the directional pad (d-pad) and left analog stick.

The GameCube controller features a total of eight buttons, two analog sticks, a d-pad, and an internal rumble motor. The primary analog stick is on the left, with the d-pad located below and closer to the center. On the right are four buttons: a large, green "A" button in the center, a smaller red "B" button to the left, an "X" button to the right, and a "Y" button at the top. Below and to the inside is a yellow "C" analog stick, which often serves a variety of in-game functions, such as controlling the camera angle. The Start/Pause button is located in the middle, and the rumble motor is encased within the center of the controller. On the top of the controller are two "pressure-sensitive" trigger buttons marked "L" and "R". Each essentially provides two functions: one analog and one digital. As the trigger is depressed, it emits an analog signal which increases the more it is pressed in. Once fully depressed, the trigger "clicks", registering a digital signal that can be used for a separate function within a game. There is also a purple, digital button on the right side marked "Z".

Unique to the GameCube is the prominent size and placement of the controller's A button. Having been the primary action button in past Nintendo controller designs, it was given a larger size and more centralized placement for the GameCube.
The rubberized analog stick, in combination with the controller's overall button orientation, was intended to reduce incidences of "Nintendo thumb" or pain in any part of the hands, wrists, forearms, and shoulders as a result of long-term play. In 2002, Nintendo introduced the WaveBird Wireless Controller, the first wireless gamepad developed by a first-party console manufacturer. The RF-based wireless controller is similar in design to the standard controller. It communicates with the GameCube by way of a wireless receiver dongle connected to one of the console's controller ports. Powered by two AA batteries, which are housed in a compartment on the underside of the controller, the WaveBird lacks the vibration functionality of the standard controller. In addition to the standard inputs, the WaveBird features a channel selection dial—also found on the receiver—and an on/off switch. An orange LED on the face of the controller indicates when it is powered on. The controller is available in light grey and platinum color schemes. Compatibility The GameCube is unable to play games from other Nintendo home consoles, but with the Game Boy Player attachment, it is able to play Game Boy, Game Boy Color, and Game Boy Advance games. The GameCube's successor, the Wii, supports backward compatibility with GameCube controllers, memory cards, and games but not the Game Boy Player or other hardware attachments. However, later revisions of the Wii—including the "Family Edition" released in 2011 and the Wii Mini released in 2012—do not support any GameCube hardware or software. Panasonic Q The is a hybrid version of the GameCube with a commercial DVD player, developed by Panasonic as part of the deal with Nintendo to develop the optical drive for the original GameCube hardware. It features a stainless steel case that is completely revised to accommodate the DVD capabilities, with Panasonic including a DVD-sized front-loading tray and a backlit LCD screen with playback controls among other hardware revisions; a carrying handle was also included as a nod to the one on the GameCube. Announced by Panasonic on October 19, 2001, it was released exclusively in Japan on December 14 at a suggested retail price of ¥39,800; however, low sales resulted in Panasonic announcing the discontinuation of the Q on December 18, 2003. The Q supports CDs in addition to DVDs and GameCube discs; however, there is little-to-no integration between the GameCube and DVD player hardware, emphasized by the "Game" button used to switch between the two modes. As a result, Dolby Digital 5.1 and DTS are only supported by the DVD player via a digital optical output; a bass boost system called Bass Plus is also supported via a dedicated subwoofer jack. Meanwhile, component video (YPbPr) output is only supported by the GameCube hardware, made possible by the "Digital AV Out" port that was present on early GameCube models. Virtually all GameCube peripherals are compatible with the Q; however, the standard Game Boy Player is physically incompatible with the Q due to the latter's legs, resulting in Panasonic producing a version of the former for the latter. A remote control was included with the console along with a Panasonic-branded GameCube controller. Games In its lifespan from 2001 to 2007, Nintendo released over 600 GameCube titles. 
Known for releasing recognized and innovative first-party games, such as the Super Mario and The Legend of Zelda series, Nintendo continued its trend of releases on the GameCube, which bolstered the console's popularity. As a publisher, Nintendo also focused on creating new franchises, such as Pikmin and Animal Crossing, and renewing some that skipped the Nintendo 64 platform, most notably the Metroid series with the release of Metroid Prime. The console also saw success with the critically acclaimed The Legend of Zelda: The Wind Waker and Super Mario Sunshine, and its best-selling game, Super Smash Bros. Melee, at 7 million copies worldwide. Though committed to its software library, however, Nintendo was still criticized for not featuring enough games during the console's launch window—a sentiment compounded by the release of Luigi's Mansion instead of a 3D Mario game. Early in Nintendo's history, the company had achieved considerable success with third-party developer support on the Nintendo Entertainment System and Super NES. Competition from the Sega Genesis and Sony's PlayStation in the 1990s changed the market's landscape, however, and reduced Nintendo's ability to obtain exclusive, third-party support on the Nintendo 64. The console's cartridge-based media was also increasing the cost to manufacture software, as opposed to the cheaper, higher-capacity optical discs used by the PlayStation. With the GameCube, Nintendo intended to reverse the trend as evidenced by the number of third-party games available at launch. The new optical disc format introduced with the GameCube increased the capacity significantly and reduced production costs. The strategy mostly worked. High-profile exclusives such as Star Wars Rogue Squadron II: Rogue Leader from Factor 5, Resident Evil 4 from Capcom, and Metal Gear Solid: The Twin Snakes from Konami were successful. Sega, which became a third-party developer after discontinuing its Dreamcast console, ported Dreamcast games such as Crazy Taxi and Sonic Adventure 2, and developed new franchises, such as Super Monkey Ball. Several third-party developers were contracted to work on new games for Nintendo franchises, including Star Fox Assault and Donkey Konga by Namco and Wario World from Treasure. Some third-party developers, such as Ubisoft, THQ, Disney Interactive Studios, Humongous Entertainment and EA Sports, continued to release GameCube games into 2007. Online gaming Eight GameCube games support network connectivity, five with Internet support and three with local area network (LAN) support. The only Internet capable games released in western territories are three role-playing games (RPGs) in Sega's Phantasy Star series: Phantasy Star Online Episode I & II, Phantasy Star Online Episode I & II Plus, and Phantasy Star Online Episode III: C.A.R.D. Revolution. The official servers were decommissioned in 2007, but players can still connect to fan maintained private servers. Japan received two additional games with Internet capabilities, a cooperative RPG, Homeland and a baseball game with downloadable content, Jikkyō Powerful Pro Yakyū 10. Lastly, three racing games have LAN multiplayer modes: 1080° Avalanche, Kirby Air Ride, and Mario Kart: Double Dash. These three games can be forced over the Internet with third-party PC software capable of tunneling the GameCube's network traffic. To play online, players must install an official broadband or modem adapter in their system since the GameCube does not have out of the box network capabilities. 
Nintendo never commissioned any servers or Internet services to interface with the console, but allowed other publishers to do so and made them responsible for managing the online experiences for their games. Reception The GameCube received mixed reviews following its launch. PC Magazine praised the overall hardware design and quality of games available at launch. CNET gave an average review rating, noting that while the console lacks a few features offered by its competition, it is relatively inexpensive, has a great controller design, and launched a decent lineup of games. In later reviews, criticism mounted against the console often centering on its overall look and feel, describing it as "toy-ish." In the midst of poor sales figures and the associated financial harm to Nintendo, a Time International article called the GameCube an "unmitigated disaster." Retrospectively, Joystiq compared the GameCube's launch window to its successor, the Wii, noting that the GameCube's "lack of games" resulted in a subpar launch, and the console's limited selection of online games damaged its market share in the long run. Time International concluded that the system had low sales figures, because it lacked "technical innovations". Sales In Japan, between 280,000 and 300,000 GameCube consoles were sold during the first three days of its sale, out of an initial shipment of 450,000 units. During its launch weekend, $100 million worth of GameCube products were sold in North America. The console was sold out in several stores, faster than initial sales of both of its competitors, the Xbox and the PlayStation 2. Nintendo reported that the most popular launch game is Luigi's Mansion, with more sales at its launch than Super Mario 64 had. Other popular games include Star Wars Rogue Squadron II: Rogue Leader and Wave Race: Blue Storm. By early December 2001, 600,000 units had been sold in the US. Nintendo sold 22 million GameCube units worldwide during its lifespan, placing it slightly behind the Xbox's 24 million, and well behind the PlayStation 2's 155 million. The GameCube's predecessor, the Nintendo 64, outperformed it as well, selling nearly 33 million units. The console was able to outsell the short-lived Dreamcast, however, which yielded 9.13 million unit sales. In September 2009, IGN ranked the GameCube 16th in its list of best gaming consoles of all time, placing it behind all three of its sixth-generation competitors: the PlayStation 2 (3rd), the Dreamcast (8th), and the Xbox (11th). As of March 31, 2003, 9.55 million GameCube units had been sold worldwide, falling short of Nintendo's initial goal of 10 million consoles. Many of Nintendo's own first-party games, such as Super Smash Bros. Melee and Mario Kart: Double Dash, saw strong sales, though this did not typically benefit third-party developers or directly drive sales of their games. Many cross-platform games—such as sports franchises released by Electronic Arts—were sold in numbers far below their PlayStation 2 and Xbox counterparts, eventually prompting some developers to scale back or completely cease support for the GameCube. Exceptions include Sega's family friendly Sonic Adventure 2 and Super Monkey Ball, which reportedly yielded more sales on GameCube than most of the company's games on the PlayStation 2 and Xbox. In June 2003, Acclaim Entertainment CEO Rod Cousens said that the company would no longer support the GameCube, and criticised it as a system "that don't deliver profits". 
Acclaim later walked back these claims, saying it would maintain support for the system and would decide whether to release more titles beyond those already in development. The matter became moot when the company filed for bankruptcy in August 2004. In September 2003, Eidos Interactive announced it would end support for the GameCube, as the publisher was losing money developing for Nintendo's console. This led to several games in development being cancelled for the system. Eidos' CEO Mike McGravey called the GameCube a "declining business". However, after the company's purchase by the SCi Entertainment Group in 2005, Eidos resumed development for the system and released Lego Star Wars: The Video Game and Tomb Raider: Legend.

Several third-party games originally intended to be GameCube exclusives, most notably Capcom's Viewtiful Joe and Resident Evil 4, were eventually ported to other systems in an attempt to maximize profits following lackluster sales of the original GameCube versions. In March 2003, now-defunct UK retailer Dixons removed all GameCube consoles, accessories and games from its stores. That same month, another UK retailer, Argos, cut the price of the GameCube in its stores to £78.99, more than £50 below Nintendo's SRP for the console at the time.

With sales sagging and millions of unsold consoles in stock, Nintendo halted GameCube production for the first nine months of 2003 to reduce surplus units. Sales rebounded slightly after a price drop to US$99.99 on September 24, 2003 and the release of The Legend of Zelda: Collector's Edition bundle. A demo disc, the GameCube Preview Disc, was also released in a bundle in 2003. Beginning with this period, GameCube sales continued to be steady, particularly in Japan, but the GameCube remained in third place in worldwide sales during the sixth-generation era because of weaker sales performance elsewhere. Nintendo president Satoru Iwata forecast to investors that the company would sell 50 million GameCube units worldwide by March 2005, but by the end of 2006 it had sold only 21.7 million, fewer than half that figure.

Market share
With the GameCube, Nintendo failed to reclaim the market share lost by its predecessor, the Nintendo 64. Throughout the lifespan of its console generation, GameCube hardware sales remained far behind its direct competitor, the PlayStation 2, and slightly behind the Xbox. The console's "family-friendly" appeal and lack of support from certain third-party developers skewed the GameCube toward a younger market, which was a minority demographic of the gaming population during the sixth generation. Many third-party games popular with teenagers or adults, such as the blockbuster Grand Theft Auto series and several key first-person shooters, skipped the GameCube entirely in favor of the PlayStation 2 and Xbox. Overall, the GameCube had a 13% market share, tying with the Xbox in sales but far below the PlayStation 2's 60%.

Legacy
Many games that debuted on the GameCube, including Pikmin, Chibi-Robo!, Metroid Prime, and Luigi's Mansion, became popular Nintendo franchises or subseries. GameCube controllers have limited support on the Wii U and Nintendo Switch, for playing Super Smash Bros. for Wii U and Super Smash Bros. Ultimate respectively, via a USB adapter.
See also Dolphin (emulator) GameCube accessories Notes References External links 2000s toys Discontinued products Home video game consoles Products introduced in 2001 Products and services discontinued in 2007 GameCube Sixth-generation video game consoles
2821858
https://en.wikipedia.org/wiki/NILFS
NILFS
NILFS or NILFS2 (New Implementation of a Log-structured File System) is a log-structured file system implementation for the Linux kernel. It is being developed by Nippon Telegraph and Telephone Corporation (NTT) CyberSpace Laboratories and a community from all over the world. NILFS was released under the terms of the GNU General Public License (GPL).

Design
"NILFS is a log-structured filesystem, in that the storage medium is treated like a circular buffer and new blocks are always written to the end. […] Log-structured filesystems are often used for flash media since they will naturally perform wear-leveling; […] NILFS emphasizes snapshots. The log-structured approach is a specific form of copy-on-write behavior, so it naturally lends itself to the creation of filesystem snapshots. The NILFS developers talk about the creation of "continuous snapshots" which can be used to recover from user-initiated filesystem problems […]."

Using a copy-on-write technique known as "nothing in life is free", NILFS records all data in a continuous log-like format that is only appended to, never overwritten, an approach that is designed to reduce seek times, as well as minimize the kind of data loss that occurs after a crash with conventional file systems. For example, data loss occurs on ext3 file systems when the system crashes during a write operation. When the system reboots, the journal notes that the write did not complete, and any partial data writes are lost. Some file systems, like the UFS-derived file systems used by the Solaris operating system and the BSDs, provide a snapshot feature that prevents such data loss, but the snapshot configuration can be lengthy on large file systems. NILFS, in contrast, can "continuously and automatically [save] instantaneous states of the file system without interrupting service", according to NTT Labs.

The "instantaneous states" that NILFS continuously saves can actually be mounted, read-only, at the same time that the actual file system is mounted read-write, a capability useful for data recovery after hardware failures and other system crashes. The "lscp" (list checkpoint) command of the interactive NILFS "inspect" utility is first used to find the checkpoint's address, in this case "2048":

# inspect /dev/sda2
...
nilfs> listcp
1 6 Tue Jul 12 14:55:57 2005 MajorCP|LogiBegin|LogiEnd
2048 2352 Tue Jul 12 14:55:58 2005 MajorCP|LogiEnd
...
nilfs> quit

The checkpoint address is then used to mount the checkpoint:

# mount -t nilfs -r -o cp=2048 /dev/sda2 /nilfs-cp
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda2             70332412   8044540  62283776  12% /nilfs
/dev/sda2             70332412   8044540  62283776  12% /nilfs-cp

Features
NILFS provides continuous snapshotting. In addition to versioning of the entire file system, users can even restore files mistakenly overwritten or deleted at any recent point in time. Since NILFS can keep consistency like a conventional LFS, it achieves quick recovery after system crashes. Continuous snapshotting is not provided by most filesystems, including those supporting point-in-time snapshotting (e.g. Btrfs). NILFS creates a number of checkpoints every few seconds or on a per-synchronous-write basis (unless there is no change). Users can select significant versions among continuously created checkpoints, and can change them into snapshots which will be preserved until they are changed back to checkpoints. There is no limit on the number of snapshots until the volume gets full. Each snapshot is mountable as a read-only file system.
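As a short, hedged sketch of how a checkpoint can be preserved with the nilfs-utils command-line tools, the following continues the example above; the checkpoint number 2048, the device name and the mount point are illustrative:

# lscp /dev/sda2                  # list checkpoints and their creation times
# chcp ss /dev/sda2 2048          # turn checkpoint 2048 into a snapshot ("ss")
# mount -t nilfs2 -r -o cp=2048 /dev/sda2 /mnt/snapshot
...
# umount /mnt/snapshot
# chcp cp /dev/sda2 2048          # demote it back to a plain checkpoint

Only checkpoints that have been marked as snapshots in this way are protected from the garbage collector and can be mounted; changing a snapshot back into a checkpoint allows its space to be reclaimed eventually.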
It is mountable concurrently with a writable mount and other snapshots, and this feature is convenient for making consistent backups during use. Possible uses of NILFS include versioning, tamper detection, SOX compliance logging, and data loss recovery. The current major version of NILFS is version 2, which is referred to as NILFS2. NILFS2 implements online garbage collection to reclaim disk space while keeping multiple snapshots. Other NILFS features include: B-tree-based file and inode management. Immediate recovery after system crash. 64-bit data structures; support for many files, large files and disks. 64-bit on-disk timestamps which are free of the year 2038 problem. Current status Supported features Basic POSIX file system features Snapshots Automatically and continuously taken No limit on the number of snapshots until the volume gets full Mountable as read-only file systems Mountable concurrently with the writable mount (convenient for making consistent backups during use) Quick listing Background Garbage Collection (GC) Can maintain multiple snapshots Selectable GC policy, which is given by a userland daemon. Quick crash recovery on mount Read-ahead for metadata files as well as data files Block sizes smaller than page size (e.g. 1 KB or 2 KB) Online resizing (since Linux 3.x and nilfs-utils 2.1) Related utilities (by contribution of Jiro SEKIBA) grub2 util-linux (blkid, libblkid, uuid mount) udisks, palimpsest Filesystem label (nilfs-tune) Additional features Fast write and recovery times Minimal damage to file data and system consistency on hardware failure 32-bit checksums (CRC32) on data and metadata for integrity assurance (per block group, in segment summary) Correctly ordered data and metadata writes Redundant superblock Internal data is processed in 64-bit wide words Can create and store huge files (8 EiB) OS compatibility NILFS was merged into the Linux kernel 2.6.30. On distributions where NILFS is available out-of-the-box, the user needs to install the nilfs-utils (or nilfs-tools) package and follow its instructions. A separate, BSD-licensed implementation, currently with read-only support, is included in NetBSD. Relative performance In the January 2015 presentation SD cards and filesystems for embedded systems at Linux.conf.au, it was stated: License The NILFS2 filesystem utilities are made available under the GNU General Public License version 2, with the exception of the lib/nilfs libraries and their header files, which are made available under the GNU Lesser General Public License version 2.1. Developers The Japanese primary authors and major contributors to the nilfs-utils, who worked or are working at laboratories of NTT Corporation, are: Ryusuke Konishi (primary maintainer, 02/2008-present) Koji Sato Naruhiko Kamimura Seiji Kihara Yoshiji Amagai Hisashi Hifumi and Satoshi Moriai. Other major contributors are: Andreas Rohner Dan McGee David Arendt David Smid dexen deVries Dmitry Smirnov Eric Sandeen Jiro SEKIBA Matteo Frigo Hitoshi Mitake Takashi Iwai Vyacheslav Dubeyko See also ZFS Btrfs F2FS, another log-structured file system implementation List of file systems Comparison of file systems Log-structured File System (BSD) Sprite operating system References External links NILFS: A File System to Make SSDs Scream Manjaro tutorial NILFS: A filesystem designed to minimize the likelyhood [sic] of data loss Disk file systems File systems supported by the Linux kernel Persistence
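The checkpoint-listing and checkpoint-mounting workflow shown above with the interactive inspect utility can also be scripted. The following is a minimal Python sketch, assuming the lscp command from nilfs-utils, a nilfs2-type filesystem, sufficient privileges to mount, and the placeholder device and mount-point paths from the example above; the exact column layout of lscp output can vary between nilfs-utils versions, so the parsing here is illustrative rather than definitive.

import subprocess

def list_checkpoints(device):
    # Run `lscp` and return (checkpoint_number, timestamp, mode) tuples.
    # Mode is normally "cp" for a plain checkpoint or "ss" for a snapshot.
    out = subprocess.run(["lscp", device], capture_output=True, text=True,
                         check=True).stdout
    rows = []
    for line in out.splitlines()[1:]:        # skip the header row
        fields = line.split()
        if fields:
            rows.append((int(fields[0]), " ".join(fields[1:3]), fields[3]))
    return rows

def mount_checkpoint(device, cno, mountpoint):
    # Read-only mount of one checkpoint, mirroring the command used above:
    # mount -t nilfs2 -r -o cp=<checkpoint number> <device> <mountpoint>
    subprocess.run(["mount", "-t", "nilfs2", "-r", "-o", f"cp={cno}",
                    device, mountpoint], check=True)

if __name__ == "__main__":
    device, mountpoint = "/dev/sda2", "/nilfs-cp"   # placeholders from the example above
    for cno, stamp, mode in list_checkpoints(device):
        print(cno, stamp, mode)
    mount_checkpoint(device, 2048, mountpoint)      # 2048 is the checkpoint used above

Because the checkpoint is mounted read-only, the live filesystem can stay mounted read-write at the same time, as described in the article.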
92074
https://en.wikipedia.org/wiki/Love%20County%2C%20Oklahoma
Love County, Oklahoma
Love County is a county on the southern border of the U.S. state of Oklahoma. As of the 2010 census, the population was 9,423. Its county seat is Marietta. The county was created at statehood in 1907 and named for Overton Love, a prominent Chickasaw farmer, entrepreneur and politician. For tourism purposes, the Oklahoma Department of Tourism includes Love County in 'Chickasaw Country'. Love County is also part of the Texoma region. History The Louisiana Purchase, effected in 1803, included all of the present state of Oklahoma except the Panhandle. Explorers and traders began travelling extensively through the area, intending to find trade routes to Santa Fe. The Quapaw were the principal Native Americans living south of the Canadian River. The Quapaws ceded their land to the American government in 1818, and were replaced by the Choctaws in the early 1830s. The Chickasaws were assigned land in the middle of Choctaw territory during 1837–8. Overton Love was one of the earliest Chickasaws who settled in present-day Love County. He was twenty years old when he arrived in Indian Territory from Mississippi in 1843. His settlement became known as Love's Valley (about east of the present town of Marietta). He later became one of the largest Chickasaw landowners and cattle raisers in the area, working of Red River Bottomland. Eventually, he became a member of both houses of the Chickasaw National Council, a county and district judge, and a member of the Dawes Commission. Prior to statehood, the area now known as Love County was part of Pickens County, Chickasaw Nation, Indian Territory. It had three incorporated towns: Marietta (the county seat, founded in 1887), Leon (established 1883) and Thackerville (established 1882). It also contained two unincorporated postal areas: Burneyville (post office established 1879) and Overbrook (post office established 1887). The settlement of Courtney at the mouth of Mud Creek was settled ca. 1872 by Henry D. Courtney. Geography According to the U.S. Census Bureau, the county has an area of , of which is land and (3.5%) is water. It is the fifth-smallest county in Oklahoma by land area. Love County is within the Red River Plains physiographic region, with a rolling to hilly topography. The Red River and its tributaries Simon Creek, Walnut Bayou, Hickory Creek and Mud Creek drain the county. Lake Murray is on the northeastern border and Lake Texoma is on the southern border. Adjacent counties Carter County (north) Marshall County (east) Cooke County, Texas (south) Montague County, Texas (southwest) Jefferson County (northwest) Demographics As of the census of 2000, there were 8,831 people, 3,442 households, and 2,557 families residing in the county. The population density was 17 people per square mile (7/km2). There were 4,066 housing units at an average density of 8 per square mile (3/km2). The racial makeup of the county was 84.15% White, 2.19% Black or African American, 6.41% Native American, 0.26% Asian, 0.01% Pacific Islander, 3.58% from other races, and 3.41% from two or more races. 7.01% of the population were Hispanic or Latino of any race. There were 3,442 households, out of which 31.70% had children under the age of 18 living with them, 60.40% were married couples living together, 10.00% had a female householder with no husband present, and 25.70% were non-families. 22.90% of all households were made up of individuals, and 12.00% had someone living alone who was 65 years of age or older. 
The average household size was 2.54 and the average family size was 2.97. In the county, the population was spread out, with 25.70% under the age of 18, 7.00% from 18 to 24, 25.40% from 25 to 44, 25.70% from 45 to 64, and 16.20% who were 65 years of age or older. The median age was 39 years. For every 100 females, there were 98.20 males. For every 100 females age 18 and over, there were 94.90 males. The county's median household income was $32,558, and the median family income was $38,212. Males had a median income of $30,024 versus $20,578 for females. The county's per capita income was $16,648. About 8.80% of families and 11.80% of the population were below the poverty line, including 14.40% of those under age 18 and 13.80% of those age 65 or over. Politics Economy Love County is home to Winstar World Casino, across the Red River from the Texas-Oklahoma border. The casino is operated by the Chickasaw Nation, and is the county's largest private employer. Agriculture and ranching have been important to the county economy since its inception. Leading non-agricultural employers include the Marietta Bakery, Murray Biscuit Company, Marietta Sportswear, Robertson Hams, Rapistan Systems, Earth Energy Systems, and the Joe Brown Company. The county also produces natural gas and its co-products propane and butanes. Education The following school districts are in Love County: Turner Public Schools Marietta Public Schools Thackerville Public Schools Greenville Public Schools Transportation Major highways Interstate 35 U.S. Highway 77 State Highway 32 State Highway 76 State Highway 77S State Highway 89 State Highway 96 Airports Public-use airports in Love County: Falconhead Airport (37K) in Burneyville McGehee Catfish Restaurant Airport (T40) in Marietta (closed) McGehee Catfish Restaurant Airport (4O2) in Marietta (closed) Communities City Marietta (county seat) Towns Leon Thackerville Census-designated places Burneyville Greenville Other unincorporated places Courtney Enville Jimtown Orr Overbrook Rubottom See also National Register of Historic Places listings in Love County, Oklahoma References External links Pictures of Marietta Oklahoma Digital Maps: Digital Collections of Oklahoma and Indian Territory Ardmore, Oklahoma micropolitan area 1907 establishments in Oklahoma Populated places established in 1907
29762278
https://en.wikipedia.org/wiki/Hindawi%20Programming%20System
Hindawi Programming System
Hindawi Programming System (hereafter referred to as HPS) is a suite of open-source programming languages. It allows people who are literate in languages other than English to learn and write computer programs. It is a scalable system which supports many programming paradigms. Shaili Prathmik, or Indic BASIC, and Indic LOGO are for beginners who want to start with computer programming. On the higher end it supports Shaili Guru (Indic C), Shaili Shraeni (Indic C++), Shaili Yantrik (Indic Assembly), Shaili Shabda (Indic Lex), Shaili Vyaakaran (Shaili Vyaaka/Indic Yacc), and Shaili Kritrim, which is an Indic programming language targeting the JVM. Mechanism and algorithms HPS uses Romenagri transliteration to first convert the high-level source code into a form an existing compiler can accept, and then uses that compiler to produce machine code; a schematic sketch of this pipeline appears at the end of this entry. History The original contributor to HPS is Abhishek Choudhary, who also developed APCISR and Romenagri. Initial public release - 15 August 2004 Release of version 2 by the former education minister of Bihar, Dr. Ram Prakash Mahto - 15 August 2005 Release of Linux port under a Sarai fellowship - 16 August 2006 Awards and recognition Computer Society of India's National Young IT Professional Award 2005 Sarai / CSDS FLOSS fellowship Hindawi is recognised by TDIL, Government of India. Hindawi was shortlisted for the Manthan Award 2007. References External links Hindawi Project on Sourceforge An independent review of the Linux port of Hindawi An article on the need for an Indic programming language that refers to Hindawi Hindawi Linux (port) home page with training videos Indic computing Non-English-based programming languages BASIC programming language family Software industry in India
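The transliterate-then-reuse-a-compiler approach described under "Mechanism and algorithms" can be illustrated with a minimal Python sketch. The keyword table, the Devanagari words, and the helper names below are invented for illustration only; they are not taken from the Hindawi sources, and real Romenagri transliteration is a full romanization scheme that covers identifiers and literals, not a small keyword dictionary. The example also assumes a C compiler (gcc) is installed.

import subprocess, tempfile, pathlib

# Hypothetical keyword table: a real system transliterates identifiers and
# literals as well, using the full Romenagri scheme rather than a small dict.
KEYWORDS = {
    "यदि": "if",
    "अन्यथा": "else",
    "जबतक": "while",
    "वापस": "return",
    "पूर्णांक": "int",
}

def transliterate(source: str) -> str:
    # Rewrite Indic keywords into the host language (here, C).
    for indic, english in KEYWORDS.items():
        source = source.replace(indic, english)
    return source

def compile_with_existing_compiler(source: str, out="a.out"):
    # Hand the transliterated program to an ordinary, pre-existing C compiler.
    c_file = pathlib.Path(tempfile.mkdtemp()) / "program.c"
    c_file.write_text(transliterate(source), encoding="utf-8")
    subprocess.run(["gcc", str(c_file), "-o", out], check=True)

if __name__ == "__main__":
    program = "पूर्णांक main(void) { वापस 0; }"   # becomes: int main(void) { return 0; }
    compile_with_existing_compiler(program)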
65263955
https://en.wikipedia.org/wiki/List%20of%202020%20cyberattacks%20on%20U.S.%20schools
List of 2020 cyberattacks on U.S. schools
List of cyberattacks on schools From 2016 to 2019, there were 855 cyberattacks on U.S. school districts. Microsoft Security Intelligence has said there are more attacks on schools and school districts than on any other industry. There were 348 reported cyberattacks on school districts in 2019. School districts are allocating millions of dollars for their computer systems to support virtual learning in the wake of the COVID-19 pandemic. The Miami-Dade Public Schools invested in a $15.3 million online learning system. In 2020 their system was hit with a denial-of-service cyberattack. The two main types of cyberattacks on schools are distributed denial of service (DDoS), an attack which overwhelms the target's internet bandwidth, and ransomware, where the hacker takes control of the target's computer system and demands money. In 2020, because of the reliance on distance learning, schools braced for cyberattacks. The average cost for organizations that did not pay ransomware demands was $730,000. Legislation U.S. Representative Josh Harder introduced a bill in Congress entitled the Protecting Students from Cybercrimes Act. The bill's goal is to give schools $25 million in grants to implement cybersecurity. In 2019 U.S. Senators Gary Peters and Rick Scott also authored a bill to safeguard school computer systems. The bill is called the K-12 Cybersecurity Act. Cyberattacks on schools alphabetical A Allegheny County Schools (NC) ransomware attack Athens Independent School District (Texas) ransomware attack B Burke County Public Schools (NC) ransomware attack Baugo Community Schools (Indiana) cyberattack C Conejo Valley Unified School District (California) DDoS Cherry Hill School District Philadelphia malware attack D E F G Gadsden Independent School District (Sunland Park, NM) ransomware attack H Hartford Public Schools ransomware attack Hamden school district (Connecticut) malware attack Haywood County Schools ransomware attack Humble Independent School District (Texas) DDoS attack Huntington Beach Unified High School District (California) ransomware attack I J Jackson Public School District (Mississippi) malware attack Jay Public School District (Oklahoma) virus K King George County Schools ransomware attack L Lumberton Township Public Schools in Burlington County (New Jersey) Zoom malicious pornographic intrusion M Madison Public Schools (Connecticut) Zoombombing attack Miami-Dade Public Schools System cyberattack Mitchell County Schools (North Carolina) ransomware attack The Mountain View-Los Altos High School District (California) ransomware attack N Community School Corporation of New Palestine, Indiana (DDoS) cyberattack O P Penncrest School District (Pennsylvania) ransomware attack (paid $10,000) Pittsburg Unified School District of Pennsylvania ransomware attack Ponca City Public Schools (Oklahoma) ransomware attack Q R Richmond school district (Michigan) ransomware attack S Surry County Schools ransomware attack South Adams Schools (Indiana) ransomware attack Southern Hancock School District (Indiana) DDoS attack St. Landry Parish schools (Louisiana) malware attack T Toledo Public School district (Ohio) cyberattack U V Ventura Unified school district (California) DDoS W X Y Z Colleges Capital University Law School (Columbus, Ohio) Columbia College Chicago ransomware attack Michigan State University ransomware attack New Mexico State University and the school's foundation virus Regis University (Denver, Colorado) ransomware attack, paid ransom.
University of California at San Francisco School of Medicine ransomware attack, paid $1.14 million University of New Mexico School of Law (Albuquerque, NM) Wallace State Community College (Alabama) virus See also References Cyberattacks Lists of school-related attacks
294131
https://en.wikipedia.org/wiki/Avinash%20Kak
Avinash Kak
Avinash C. Kak (born 1944) is a professor of Electrical and Computer Engineering at Purdue University who has conducted pioneering research in several areas of information processing. His most noteworthy contributions deal with algorithms, languages, and systems related to networks (including sensor networks), robotics, and computer vision. Born in Srinagar, Kashmir, he earned his BE at the University of Madras and his PhD at the Indian Institute of Technology Delhi. He joined the faculty of Purdue University in 1971. His brother is the computer scientist Subhash Kak, and his sister is the literary theorist Jaishree Odin. Robotics and computer vision His contributions include 3D-POLY, which is the fastest algorithm for recognizing 3D objects in depth maps. In 1992, Kosaka and Kak published FINALE, which is considered to be a computationally efficient and highly robust approach to vision-based navigation by indoor mobile robots. In 2003, a group of researchers that included Kak developed a tool for content-based image retrieval that was demonstrated by clinical trials to improve the performance of radiologists. This remains the only clinically evaluated system for content-based image retrieval for radiologists. His book Digital Picture Processing, co-authored with Azriel Rosenfeld, is also considered a classic and has been one of the most widely referenced sources in the literature on digital image processing and computer vision. Kak is not a believer in strong AI, as evidenced by his provocative and amusing essay Why Robots Will Never Have Sex. This essay is a rejoinder to those who believe that robots and computers will someday take over the world. Image reconstruction algorithms The SART algorithm (Simultaneous Algebraic Reconstruction Technique), proposed by Andersen and Kak in 1984, has had a major impact on CT imaging applications where the projection data is limited. As a measure of its popularity, researchers have proposed various extensions to SART: OS-SART, FA-SART, VW-OS-SART, SARTF, etc. Researchers have also studied how SART can best be implemented on different parallel processing architectures. SART and its proposed extensions are used in emission CT in nuclear medicine, dynamic CT, holographic tomography, and other reconstruction applications. Convergence of the SART algorithm was theoretically established in 2004 by Jiang and Wang; a simplified numerical sketch of the SART-style update appears at the end of this entry. His book Principles of Computerized Tomographic Imaging, now re-published as a classic in applied mathematics by SIAM (the Society for Industrial and Applied Mathematics), is widely used in courses dealing with modern medical imaging. It is one of the most frequently cited books in the literature on image reconstruction. Software engineering and open source The three books written by Kak in the course of his 17-year-long Objects Trilogy Project cover object-oriented programming, object-oriented scripting, and object-oriented design. The first of these, Programming with Objects, presents a comparative approach to the teaching and learning of two large object-oriented languages, C++ and Java. This book is now used in several universities for teaching object-oriented programming with C++ and Java simultaneously. The second book, Scripting with Objects, does the same with Perl and Python. The last book of the trilogy is Designing with Objects. Regarding the teaching of programming languages in universities, Kak is critical of programs that start students off with relatively easy-to-learn languages like Java.
Over the years, Kak has also contributed to several open-source projects. The software modules developed through these projects are widely used for data analytics and computer security. In addition, during the last decade, Kak has collaborated with people in industry and developed metrics for measuring the quality of large software systems and the usability of APIs (application programming interfaces). Computer and network security In computer security research, together with Padmini Jaikumar, he has presented a robust solution to the difficult problem of botnet detection in computer networks. He has authored popular online lecture notes that are updated regularly. These notes provide a comprehensive overview of computer and network security. References External links Avi Kak's articles on Google Scholar Avi Kak's Personal Homepage Living people 1944 births American computer scientists Artificial intelligence researchers Indian computer scientists Theoretical computer scientists Indian emigrants to the United States 21st-century American engineers Purdue University faculty American technology writers American textbook writers American male non-fiction writers IIT Delhi alumni Modern cryptographers American academics of Indian descent 20th-century Indian mathematicians 20th-century American mathematicians Scientists from Jammu and Kashmir People from Srinagar
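As noted above, here is a simplified numerical sketch of the SART-style simultaneous update, written for this entry from the general textbook formulation; it is not Kak's own code, and it omits the ray-tracing system matrix, ordered-subset handling, and relaxation schedules used in practical CT reconstruction.

import numpy as np

def sart_style_update(x, A, b, relaxation=0.5):
    # One simultaneous algebraic-reconstruction update: adjust the image x so
    # that its projections A @ x move toward the measured projections b.
    # A[i, j] is the (assumed nonnegative) weight of pixel j in ray i.
    residual = b - A @ x
    row_sums = A.sum(axis=1) + 1e-12      # per-ray normalization
    col_sums = A.sum(axis=0) + 1e-12      # per-pixel normalization
    correction = A.T @ (residual / row_sums) / col_sums
    return x + relaxation * correction

# Toy problem: recover a 4-pixel "image" from 6 consistent ray sums.
rng = np.random.default_rng(0)
true_image = np.array([1.0, 0.0, 0.5, 2.0])
A = rng.random((6, 4))                    # stand-in system matrix
b = A @ true_image                        # simulated projection data
x = np.zeros(4)
for _ in range(200):
    x = sart_style_update(x, A, b)
print(np.round(x, 3))                     # converges toward true_image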
10247714
https://en.wikipedia.org/wiki/Liu%20Chuanzhi
Liu Chuanzhi
Liu Chuanzhi (; born 29 April 1944) is a Chinese businessman and entrepreneur. Liu is the founder of Lenovo, the largest computer maker in the world. He remains one of the leaders of the company. Business activities Lenovo By the early 1980s, Liu had achieved relative success as a computer scientist but still felt frustrated with his career. While his work on magnetic data storage was important, it lacked direct practical applications. He said, "We were the top computer technology research organization in China. We developed the first electron-tube computer and the first transistor computer. But we only produced one of each. Then we went on to develop something different. The work was just filed away." Liu was also anxious about his economic circumstances; in 1984, Liu had a growing family but an income of only 100RMB per month. Liu founded Lenovo (originally called Legend), in 1984 with a group of ten other engineers in Beijing with 200,000 yuan and an office roughly 20 square yards in size. Liu came up with the idea to start Lenovo in response to a lack of funding at the Chinese Academy of Sciences (CAS). Liu's superior arranged for the academy to loan him and the other co-founders the afore-mentioned 200,000 yuan. Of this time, Liu said, "It wasn't easy. The lowest thing you could do in the early '80s, as a scientist, was to go into business. China had a strict planned economy and there was barely room for a freewheeling company like ours." Liu emphasized developing an effective working relationship with his superiors at the CAS from the very start. Despite its rhetoric of market-oriented reform, the Chinese government was reluctant to relax state control of the economy. Liu feared that his company might fail due to government micro-management. Liu also worried about dealing with local government officials and party cadres. He said, "We were totally immersed in the environment of a planned economy. I didn't care that the investment was small, but I knew I must have control over finances, human resources and decision-making." Liu's superiors immediately granted his request for autonomy. Lenovo's founders, all scientists and engineers, faced difficulty from their lack of familiarity with market-oriented business practices, traditional Chinese ambivalence towards commerce, and anti-capitalist communist ideology. During this period many Chinese intellectuals felt that commerce was immoral and degrading. The fact that in the 1980s entrepreneurs were drawn from lower classes, and often dishonest as well, made the private sector even more unattractive. This was readily apparent to Liu and his collaborators due to their proximity to Zhongguancun, where the proliferation of fly-by-night electronics traders lead to the area being dubbed "Swindlers Valley." Their first significant transaction, an attempt to import televisions, failed. The group rebuilt itself within a year by conducting quality checks on computers for new buyers. Lenovo soon invested money in developing a circuit board that would allow IBM PCs to process Chinese characters. This product was Lenovo's first major success. In 1990, Lenovo started to assemble and sell computers under its original brand name, Legend. Lenovo also tried and failed to market a digital watch. Liu said, "Our management team often differed on which commercial road to travel. This led to big discussions, especially between the engineering chief and myself. He felt that if the quality of the product was good, then it would sell itself. 
But I knew this was not true, that marketing and other factors were part of the eventual success of a product." Lenovo's early difficulties were compounded by the fact that its staff had little business experience. "We were mainly scientists and didn't understand the market," Liu said. "We just learned by trial-and-error, which was very interesting—but also very dangerous," said Liu. Liu received government permission to open a subsidiary in Hong Kong and was allowed to move there along with five other employees. Liu's father, already in Hong Kong, supported his son's ambitions through mentoring and facilitating loans. Liu moved to Hong Kong in 1988. In order to save money during this period, Liu and his co-workers walked instead of taking public transportation. In order to keep up appearances they rented hotel rooms for meetings. Lenovo became a publicly traded company after listing in Hong Kong in 1994, raising nearly US$30 million. Prior to Lenovo's IPO, many analysts were optimistic. The company was praised for its good management, strong brand recognition, and growth potential. Analysts also worried about Lenovo's profitability. Lenovo's IPO was massively over-subscribed. During the first day of trading the company's stock price hit a high of HK$2.07 and closed at HK$2.00. Proceeds from the offering were used to finance sales offices in Europe, North America, and Australia; expand and improve production and research and development; and increase working capital. Lenovo's Hong Kong and Mainland China business units conducted a merger and secondary offering in 1997. When Lenovo was first listed, its managers thought the only purpose of going public was to raise capital. They had little understanding of the rules and responsibilities that went along with running a public company. Before Lenovo conducted its first secondary offering in 1997, Liu proudly announced the company's intent to mainland newspapers only to have its stock halted for two days by regulators to punish his statement. This occurred several times until Liu learned that he had to choose his words carefully in public. The first time Liu traveled to Europe on a "roadshow" to discuss his company's stock he was shocked by the skeptical questions he was subjected to and felt offended. Liu later came to understand that he was accountable to shareholders. He said, "Before I only had one boss, but CAS never asked me anything. I relied on my own initiative to do things. We began to think about issues of credibility. Legend began to learn how to become a truly international company." Liu claims Hewlett-Packard as a key source of inspiration for Lenovo. In an interview with The Economist he said, "Our earliest and best teacher was Hewlett-Packard." For more than ten years, Lenovo served as Hewlett-Packard's distributor in China. Speaking about Lenovo's later acquisition of IBM's personal computer unit Liu said, "I remember the first time I took part in a meeting of IBM agents. I was wearing an old business suit of my father's and I sat in the back row. Even in my dreams, I never imagined that one day we could buy the IBM PC business. It was unthinkable. Impossible." Lenovo's later takeover of IBM's personal computing business made him the first Chinese CEO to lead the takeover of a major American firm. During an interview, Liu acknowledged the major risks involved with the IBM deal. "We had three serious risks. Number one: After the acquisition, would clients buy from the new owner? 
Number two: Would employees continue to work for the new owner? Number three: Would there be potential conflicts between the Chinese management and the Western management?" Business ethics were a key challenge for Liu in establishing and expanding Lenovo. Liu says that at first he behaved "like a kind of dictator" and spent much time yelling. He had five corrupt executives imprisoned. Being late for a meeting could be punished by having to stand in silence before the group, a punishment that Liu accepted three times himself. Lenovo's culture gradually changed and Liu was able to relax his authoritarian style. Lenovo became an employer of choice for Chinese engineers and managers with overseas education. Legend Holdings In June 2012, Liu stepped down as chairman of Legend Holdings, the parent company of Lenovo. In the years just prior to his resignation, Liu focused on improving Legend's growth, building-up its core assets, and conducting a public stock offering between 2014 and 2016. Legend's major assets include Lenovo, Legend Capital, a real-estate venture called Raycom, Digital China, and Hony Capital. In an interview with the BBC, Liu said that his vision for Legend was for it "to become an industry in its own right" and that he wants it "to become the top enterprise across industries...not only in China, but in the whole world." Liu said that he realized that his vision was extremely ambitious and may have to be left to his successors to implement. Liu said in an interview with Forbes that he hoped to diversify away from IT-related businesses and that Legend Holding's assets would be mainly concentrated in information technology, real estate, services, coal processing, and agriculture. He said agriculture "will become the next strategic development area for Legend Holdings." Liu said agriculture is just one part of what he sees as "huge" opportunities in sectors that cater to Chinese consumers. In the same interview, Liu said that Legend's focus would continue to remain on China. "Whatever we do has to fit into something we are doing in China," said Liu when referring to plans for overseas investment. Liu said in July 2012 that he plans to take Legend Holdings public with a listing in Hong Kong. Forbes speculated that Liu may be trying to "create a Chinese version of General Electric." Liu led an initial public offering for Legend on the Hong Kong Stock Exchange in 2015. The offering received regulatory approval in June of that year. Liu said in an interview that he planned to float Legend's stock on a domestic Chinese exchange before his retirement. In 2019, Liu Chuanzhi, the 75-year-old founder of Lenovo Group, has resigned as chairman of Legend Holdings, the parent company of the world's largest personal computer supplier. Joyvio Joyvio is Legend Holdings' vehicle for investment in the food industry. Legend has invested more than 1 billion yuan in Joyvio. Joyvio will pursue complete vertical integration from farming all the way to retailing. Joyvio's first product was blueberries. In November 2013, its product range was expanded to kiwifruit. The kiwifruit has been named "Liu Kiwi," in honor of Liu Chuanzhi. Liu is responsible for Legend's move into food. Since 2011, Joyvio has acquired farms around the world including blueberry fields in Shandong and Chile and kiwifruit plantations in Sichuan, Shaanxi, Henan, and Chile. The firm had plans to start selling cherries and grapes from the United States, Australia, and Chile by the end of 2013. 
Joyvio took advantage of Legend's experience in information technology to set up a tracking system to ensure the quality and safety of its products. Customers can scan codes on its packages in order to find out what farm the product came from, who supervised production, what tests were performed, and information on soil and water quality. The company is also applying the OEM business model common in the technology industry in order to work effectively with small family farms and agricultural cooperatives. Joyvio holds training exercises with farmers in order to prepare them to deal quickly and effectively with bad weather. Joyvio entered the imported wine business in late 2013. Joyvio announced on 30 March 2014 that it had acquired a 60 percent share of Hangzhou Longguan Tea Industrial Company for about 30 million yuan or about US$4.83 million. Longguan was previously a fully state-owned enterprise operating under the jurisdiction of the Chinese Academy of Agricultural Sciences. Longguan produces Hangzhou's famous Longjing tea. The company had sales of 32.45 million yuan in 2012. According to the company's annual report it has been growing 40 to 50 percent per year. Joyvio has taken over day-to-day control of Longguan. Liu Chuanzhi said he aims to bring innovation and commercial methods to the tea industry. Liu said that Joyvio would invest in other well-known tea brands and related products. Other positions As of 2013, Liu served as a senior advisor at Kohlberg Kravis Roberts & Company. Education and early career After graduating from high school in 1962, Liu applied to be a military pilot and passed all the associated exams. Despite his father's revolutionary credentials, Liu was declared unfit for military service because a relative had been denounced as a rightist. In autumn of the same year, Liu entered the People's Liberation Army Institute of Telecommunication Engineering, now known as Xidian University. Due to his political and class background, Liu was deemed unsuitable for such sensitive subjects and assigned to study radar. During his studies Liu received an introduction to computing. Liu was labeled an "intellectual element" during the Cultural Revolution. In 1966, he told his classmates that the revolution was a terrible idea and was sent to a state-owned rice farm near Macau in Guangdong as a result. From there he was sent to a farm in Hunan dedicated to reform through hard labor. Liu returned to Beijing where he took up a post in 1970 as an engineer-administrator at the Computer Institute that had earlier developed the Number 104, Number 109, and Number 111 mainframe computers. Liu worked on the development of the Number 757 mainframe computer. In 1984, he resigned to become a cadre in the personnel office of the Chinese Academy of Sciences. He remained there until he co-founded Legend in 1984. Political opinions Unlike many other Chinese business leaders, Liu has often spoken out on political issues. Liu has criticized others' tendency for silence. He said, "when it comes to the inappropriate actions the government takes, businessmen don't have the courage or capacity to go against them; second, they lack the social responsibility to care for the whole world and what they do is to 'mind their own business.'" During an interview with Caijing in October 2012, he advocated for what he called "elite selection" of political leaders. He said, "I'm not sure how practical it would be to have universal voting in the near future. 
I hope the country's leaders could be elected by the elites of the society." Liu is also deeply concerned with the rule of law. He said, "My biggest worry is an unlawful world. I tell my employees to be careful all the time; don't disrespect the government or bribe anyone - even so, I'm still not at peace because there are always some corrupt individuals coming along to cause problems." In December 2013 Liu expressed optimism about economic reform in China. He said, "Now if the market is to play a central role and the government only plays the referee, only you will decide if you are going to win the race." Public service and awards As of 2002, Liu served as vice-chairman of the All-China Federation of Industry and Commerce. Since 2004, Liu has served as director of the Computer Technology Research Institute of the Chinese Academy of Sciences. Liu was a delegate to the 16th National Congress of Communist Party of China and a deputy to the 9th and 10th sessions of the National People's Congress, China's highest legislative body. Liu has been named "Asian Businessman," one of the "Global 25 Most Influential Business Leaders," and many other awards. He received Second National Technology Entrepreneurs Gold Award in 1990. He was named a Model Worker and Man of Reform in China, both in 1995. He was named one of the "Ten Most Influential Men of the Commercial Sector" in China in 1996. Liu was named one of the "Stars of Asia" by BusinessWeek magazine in 2000. In 2001, Liu was selected by Time Magazine as one of the "25 Most Influential Global Executives." In 2012, Forbes named Liu one of the 10 most important business leaders in China. Liu is a member of the China Entrepreneurs Club (CEC). The CEC was founded in 2006. Its purpose is to strengthen the sense of social responsibility among China's young entrepreneurs. Family Liu was born in 1945 in Zhenjiang, Jiangsu, where his paternal grandfather was head of a traditional Chinese bank. Liu's grandfather sent his father, Liu Gushu (), to study in Shanghai. Liu Gushu abandoned scholarship and passed an exam for employment with the Bank of China. Liu Gushu was a "patriotic capitalist" who worked secretly for the Communist Party before the revolution of 1949. He became a senior executive with the Bank of China and later became a patent lawyer and chairman of the China Technology Licensing Company. Liu Chuanzhi's maternal grandfather served as finance minister for the warlord Sun Chuanfang. After the Communist victory in 1949, Liu's family moved to Beijing, where they lived in a traditional courtyard home located on a hutong in the Wangfujing area. Liu's father continued his work with the Bank of China and joined the Chinese Communist Party. Liu's father developed a reputation as an honest and skilled banker. Liu is married and has three children. Liu's daughter Liu Qing () is an alumna of Peking University. Liu Qing joined Didi Chuxing as chief operating officer in July 2014 after working at Goldman Sachs for 12 years. See also Mary Ma, former CFO of Lenovo References External links Biography at China Vitae 1944 births Living people 20th-century Chinese businesspeople 21st-century Chinese businesspeople Businesspeople from Jiangsu Chinese computer scientists Chinese technology company founders Delegates to the 11th National People's Congress Lenovo people Scientists from Zhenjiang Zhongguancun Xidian University alumni
54247838
https://en.wikipedia.org/wiki/The%20Plot%20to%20Hack%20America
The Plot to Hack America
The Plot to Hack America: How Putin's Cyberspies and WikiLeaks Tried to Steal the 2016 Election is a non-fiction book by Malcolm Nance about what the author describes as Russian interference in the 2016 United States elections. It was published in paperback, audiobook, and e-book formats in 2016 by Skyhorse Publishing. A second edition was also published the same year, and a third edition in 2017. Nance researched Russian intelligence, working as a Russian interpreter and studying KGB history. Nance described the black propaganda warfare known as active measures by RT (Russia Today) and Sputnik News. He recounts Vladimir Putin's KGB rise, and details Trump campaign ties to Russia. Nance concludes that Putin managed the cyberattack by hacker groups Cozy Bear and Fancy Bear. The Wall Street Journal placed the book in its list of "Best-Selling Books" for the week of February 19, 2017, at seventh place in the category "Nonfiction E-Books". New York Journal of Books called it "an essential primer for anyone wanting to be fully informed about the unprecedented events surrounding the 2016 U.S. presidential election." Napa Valley Register described Nance's work as "the best book on the subject". The Huffington Post remarked Putin had played a Game of Thrones with the election. Newsweek wrote that the problem with disinformation tactics is that by the time they are debunked, the public has already consumed the falsehoods. Summary The book is dedicated to U.S. Army officer Humayun Khan and begins with a foreword by Spencer Ackerman. Nance details Russian interference in the 2016 U.S. elections and describes how, in March 2016, Democratic National Committee (DNC) servers were hacked by someone seeking opposition research on Donald Trump. Nance learnt of a hacker, Guccifer 2.0, who would release hacked DNC materials. Nance gives context including Trump's motivations to run for President after being made fun of at the 2011 White House Correspondents' Association Dinner, his criticism of Barack Obama, and his entry into the 2016 race for the White House. Nance discusses black propaganda techniques used by the Russian Federation, and characterizes RT (formerly Russia Today) and Sputnik News as agencies of disinformation. He asserts that President Vladimir Putin was intimately involved in the Russian intelligence operation to elect Trump, directing the entire covert operation himself. In "Trump's Agents, Putin's Assets", Nance delves further into links between Trump associates and Russian officials, asserting that multiple agents of Trump were assets for Putin, providing access to Trump. Nance identifies Putin's strategy for electing Trump as American president, referred to as "Operation Lucky-7: The Kremlin Plan to Elect a President", and describes this as a multitask effort involving hacking into the DNC to acquire the personal information of their members, as well as to seek out compromising material known as kompromat. "Battles of the CYBER BEARS" describes the two hacker entities tied to Russian intelligence: Cozy Bear and Fancy Bear. Cozy Bear is believed to be linked to the Russian Federal Security Service (FSB) or Foreign Intelligence Service (SVR), while Fancy Bear is associated with Russian military intelligence agency GRU. Nance describes how Russian intelligence attempted to make their releases of leaked DNC emails appear deniable. In "WikiLeaks: Russia's Intelligence Laundromat", he likens use of the whistleblower website WikiLeaks to money laundering. 
Nance asserts WikiLeaks willingly collaborated in the operation. "When CYBER BEARS Attack" describes the impact of Podesta emails and DNC email leaks on the 2016 Clinton campaign. Finally, in "Cyberwar to Defend Democracy", Nance reiterates that the U.S. was the target of cyberwarfare by Russian intelligence agencies GRU and FSB, as directly ordered by Putin. Nance writes that Russia succeeded in casting doubt of citizens in the strength of U.S. democracy. He posits that, were the U.S. populace at large to internalize future acts of cyberwarfare as dangerous attempts to subvert daily life, they could lead to actual war itself. Composition and publication Before beginning research for The Plot to Hack America, Nance gained counter-intelligence experience as a U.S. Navy Senior Chief Petty Officer in naval cryptology, where he served from 1981–2001. He garnered expertise within the fields of intelligence and counterterrorism. The author learned about Russian history as an interpreter for Russian, and began working in the intelligence field through research into the history of the Soviet Union and its spying agency the KGB. He devoted years of research to analyzing foreign relations of Russia. Prior to analyzing the Russian interference in the 2016 U.S. elections, Nance's background in counter-intelligence analysis included management of a think tank called Terror Asymmetrics Project on Strategy, Tactics and Radical Ideologies, consisting of Central Intelligence Agency and military intelligence officers with direct prior field experience. Nance's books on counter-terrorism include An End to al-Qaeda, Terrorist Recognition Handbook, The Terrorists of Iraq, Defeating ISIS, and Hacking ISIS. Nance began work on The Plot to Hack America incidentally, while already engaged in writing Hacking ISIS. During the course of research for Hacking ISIS, he discovered computer hacking of Germany's legislative body, the Bundestag, and French television station TV5Monde. At the time, the hacks were thought to be caused by ISIS, but instead they were traced back to Russian hacking group, the Cyber Bears. Nance knew this was a Russian intelligence GRU operated group, and realized the attribution to ISIS was a false flag operation to throw investigators off the trail. This gave Nance prior knowledge of Russian intelligence tactics, through the Cyber Bears, to infiltrate servers for purposes of disrupting government in the case of Germany, and injecting propaganda in the case of France. After the 2016 hack on the DNC, it was apparent to Nance that the identical foreign agency had carried out the attack, GRU. Nance's suspicions were borne out as accurate when security firm CrowdStrike determined Cozy Bear and Fancy Bear were behind the attack. Nance saw this as akin to the Watergate scandal, albeit a virtual attack instead of a physical break-in to Democratic facilities. Nance told C-SPAN that for the majority of his working life he has identified as a member of the U.S. Republican Party, describing himself as being from the "Colin Powell School of Republicanism", and The Plot to Hack America was written out of a desire as an intelligence expert to document the background behind the attack by a foreign power on U.S. democratic institutions. Nance realized the gravity of the attack because he considered that such an operation must have been sanctioned and managed by former KGB officer Vladimir Putin himself. Nance is a member of the board of directors for the International Spy Museum in Washington, D.C. 
Through this work at the museum, Nance befriended former KGB general Oleg Kalugin, who advised him "once KGB always KGB". Nance considered that Putin's objectives would not have been simply to harm Hillary Clinton but actually to attempt to achieve the ascendancy of Donald Trump to U.S. president. The Plot to Hack America was first published in an online format on September 23, 2016, the same day United States Intelligence Community assessments about Russian interference in the 2016 United States elections were delivered to President Barack Obama. The appendix to the book notes this timing, and points out, "Many of the conclusions that were included in the consensus opinion of the principal three intelligence agencies, the NSA, the CIA, and the FBI, are identical to The Plot to Hack America". Its first paperback format was published on October 10, 2016. A second edition was released the same year, in addition to an eBook format. Another edition was published in 2017, along with an audiobook narrated by Gregory Itzin. The author was the subject of hecklers when he appeared at an event to discuss his work at Books & Books in Miami, Florida in 2017. Reception The book was a commercial success, and The Wall Street Journal placed The Plot to Hack America in its list of "Best-Selling Books" for the week of February 19, 2017, at 7th place in the category "NonFiction E-Books". The book was included for reading in a course on political science at Pasadena City College. In a review for the New York Journal of Books, Michael Lipkin was effusive, writing: "Malcolm Nance's The Plot to Hack America is an essential primer for anyone wanting to be fully informed about the unprecedented events surrounding the 2016 U.S. presidential election." Lipkin wrote of the author's expertise on the subject matter: "He is a patriot and a highly experienced and respected intelligence expert bringing to bear his own deep and extensive knowledge and conclusions in perhaps one of the most important developments in American history." Lawrence Swaim gave Nance's work a favorable reception, in a book review for the Napa Valley Register. He wrote, "It's a quick read, and at present easily the best book on the subject." Swaim recommended resources at the back of the book, writing, "But what’s really killer about the Nance book is the appendix, which contains extremely revealing assessments made by American intelligence agencies, all presented in an unclassified format." Kenneth J. Bernstein wrote for Daily Kos "to convince you to read this important book", he echoed the warning in its conclusion about the dangers posed by cyberwarfare. Bernstein wrote that the book's argumentation was strengthened because, "Every single assertion Nance offers is backed by material ... clearly documented in end notes". Bernstein wrote favorably in addition of the book's foreword by The Guardian editor for national security, Spencer Ackerman. Italian language newspaper La Stampa called the book "molto bello". Writing for The Independent, Andy Martin, commented, "I suppose the only weak spot in the subtitle is the word 'tried'. Surely they did more than 'try'?" Maclean's wrote that The Plot to Hack America, "was prescient about Russia’s meddling in the 2016 U.S. election." Brian Lamb, founder and retired CEO of C-SPAN, commented that the book's titled choice seemed political in nature. Strategic Finance noted "Nance focuses on a new hybrid cyber warfare, Kompromat, which uses cyber assets", as a way to attack political enemies. 
TechGenix journalist Michael Adams wrote that Nance provides an in-depth analysis of an issue characterized by multiple commentators as a national controversy rivalling the Watergate scandal. Adams called the book an engaging tale of espionage, including context on Russian intelligence and Vladimir Putin's background in the KGB. Voice of America commented that Nance capably "outlined his evidence" in the book about his fears of Russian foreign manipulation in the 2016 election. Bob Burnett wrote for The Huffington Post that Nance described a Game of Thrones stratagem by Vladimir Putin, using Donald Trump as a tool to embarrass Hillary Clinton and Barack Obama. Burnett observed that Nance posited Trump was won over by Putin through a play to Trump's avarice and narcissism. Jeff Stein of Newsweek wrote of the power of the disinformation tactics described in Nance's book: "The genius of the technique is that the correction takes days, or weeks, to catch up to the fiction. By then, gullible masses have digested the fabrications as truth." Aftermath After The Plot to Hack America was published in October 2016, Nance was interviewed in April 2017 on C-SPAN about his book and the impact of media operations on American society. He argued that Russia Today's actions back up the notion that black propaganda operations are effective, referencing their impact on disinformation operations. Nance cited research by the Senate Intelligence Committee, the House Intelligence Committee, and the Director of National Intelligence on Russia Today's methods of publishing propaganda by propagating fake news. He traced a larger problem of echo chambers, wherein a false invented story by Sputnik News traveled through bloggers to Breitbart News, came to be believed as factual by Trump Administration officials, and was eventually re-reported by Russia Today, which falsely stated it was simply reporting events created by the White House itself. The author recalled to C-SPAN the days of the Soviet Union, when Soviet intelligence practice was to infiltrate and manage reporting agencies of the Communist Party, in addition to political figures from both the right and the left, in order to denigrate U.S. democratic interests. Nance warned that Russia under the control of Vladimir Putin was motivated by the identical initiative, armed with greater tools and funding than the Soviet Union of the past. He lamented that prior to Putin's appointment as Prime Minister of Russia by Boris Yeltsin, the country had been taking steps towards democracy. Nance traced Putin's rise alongside the descent of democracy in Russia in favor of an oligarchic ruling class of wealthy individuals managing an autocratic society. Nance said U.S. citizens become agents of Russia through the employ of Russia Today due to naïveté about the nature of Russian propaganda operations geared to harm U.S. values of civil liberties. Nance placed the utilization of propaganda by Russian intelligence agencies through Russia Today and other outlets, including social media, as part of a larger effort at global cyberwarfare. He characterized this as a form of hybrid warfare blending traditional propaganda with computer tools and the subversion of media organizations. As a case study he cited Aleksandr Dugin, a Russian neofascist political activist with views favored by Putin, whose tweets expound the view that U.S. democratic institutions have not been successful.
See also The Case for Impeachment Dezinformatsia: Active Measures in Soviet Strategy Disinformation The KGB and Soviet Disinformation Timeline of Russian interference in the 2016 United States elections and Timeline of Russian interference in the 2016 United States elections (July 2016–election day) Trump: The Kremlin Candidate? Trump–Russia dossier References External links 2016 non-fiction books American non-fiction books Books about computer hacking Books about Donald Trump Books about the 2016 United States presidential election Books by Malcolm Nance Computer security books Hacker culture Hillary Clinton 2016 presidential campaign Internet manipulation and propaganda Non-fiction books about war Books about Russian interference in the 2016 United States elections Books about the Federal Security Service Works about computer hacking
49683572
https://en.wikipedia.org/wiki/Luo%20Tianyi
Luo Tianyi
Luo Tianyi (Chinese: 洛天依, Pinyin: Luò Tiānyī) is a Chinese Vocaloid released by Shanghai Henian Information Technology Co. Ltd. in 2012. She is the first Vocaloid Chinese singer. She was developed formerly by Bplats, Inc. under the Yamaha Corporation, but she has now been transferred to Shanghai Henian, which is already operating independently, the current brand is Vsinger. Set as a 15-year-old girl with a healing voice line, she has a certain amount of fan and popularity within the ACG since 2012, and is known for her songs. With one of the songs covered in the 2015 HunanTV New Year's concert by a well-known singer Li Yuchun, she was successfully promoted to the public's vision, and then began to appear on the mainstream platform many times, with a number of different fields of artists to perform, in the crowd has a wide range of visibility and influence, but also a number of endorsements of commercial brands, has a number of major concerts, so far there are seven official albums. Her image was adapted by ideolo and she is voiced by Shan Xin (山新) and Kano. The Vocaloid 3 software was developed by G.K. and released on July 12, 2012, and five years later, the Vocaloid 4 software was developed by Renxingtu, released on December 30, 2017, and the Japanese version was released on May 21, 2018. Background Vocaloid is a song synthesis technology developed by Yamaha, and its application products. Since the release of products such as Hatsune Miku and Megpoid, songs or related works written by enthusiasts have been published on Niconico and YouTube, causing a boom in music and its derived culture, and the scene has attracted the attention of Japanese cultural enthusiasts, who have gradually rebroadcast them in China through translation, especially among young people. At this time, more and more people were looking forward to the introduction of Vocaloid software in China, the provision of localization services and the Chinese version of the voice library. At that time, the Vocaloid marketing business was operated by Yamaha-directed Bplats, and in order to expand the Chinese market, Ren Li(任力), general manager of the company's business development department, was sent to Shanghai to set up a local sales company, Shanghai Henian, and obtained Yamaha's permission to carry out business in the name of Vocaloid™ China. Vocaloid™ China announced the launch of The China Project on November 20, 2011. The nature of the business is China-Japanese cooperation, the agent Shanghai Henian in the later role image and setting are integrated with the elements and connotations of traditional Chinese music and culture. In order to produce Chinese products that can be loved by the Chinese people, in December for the Chinese masses and professional voice actors held a collection of image and voice actors. Vocaloid™ China launched a product in Shanghai in January 2012, announcing the shortlist of five image works and the election of Chinese voice actor Shan Xin. Then Vocaloid ™ China adapted five official images based on the shortlisted works: Luo Tianyi, Yuezheng Ling, Yuezheng Longya, Zhiyu Moke and Mo Qingxian, and developed the voice library. She was influenced by the Hatsune Miku in design, but also to attract the music creators of Chinese with its large fan base. On July 12, 2012, Vocaloid ™ China released she's first Vocaloid software at the 8th China International Animation and Game Expo. 
The original plan was to launch the Japanese version of the software in the fall, however, it was later confirmed that it had been cancelled. The public's awareness of software copyright was weak, and the motivation and content of music creation were lacking, and the environmental differences between China and Japan and other factors, resulting in slow business development with little success. In 2013, the career manager, then general manager of Mercury Shanghai, Guidao Zechong(龟岛则充) began investing in the business out of a bullish outlook for virtual singers in China. On June 30, Ren Li conducted a management buyout, and Shanghai Henian operated as an independent company after July 1, Ten days later, under the planning of Mercury Shanghai, Vocaloid™ China released its second Vocaloid Chinese software Yanhe at the 9th China International Animation and Game Expo, and later established the brand Vocanese. At this time, the business had not yet started to develop, Yamaha withdrew from investment participation, Vocaloid™ China business ended on January 31, 2014, Guidao paid for the copyright of the main and related roles and officially moved from Yamaha to Shanghai Henian on February 1. After acquiring the copyright, Vocanese began planning to cash in, deal with the infringement of surrounding goods, and tried to be a commercial brand endorsement. At this time with the lovers of the sharp contradictions, Ren Li hoped that Luo Tianyi idolization and real mode of operation, to completely separate from UGC music. In May of the same year, Ren Li set up Tianshi Culture, and in June Guidao set up Shanghai Zelishi. Ren Li left his post on February 2, 2015, citing internal inadequacies and dissatisfaction with investors, after announcing the development plan for the Zhanyin Lorra voice library in collaboration with Netease Junyuan Studio on July 15, 2014. At this time, the development of the business was still fatigued, the company's financial problems were increasingly serious, Guidao sought the help of a friend Cao Pu(曹璞), through her contacts to make Zhou Xingchi film investment opportunities, and in the friendship introduction pulled to the investment of Ofei Entertainment, after Cao Pu at the request of investors to take over the company's chairman and general manager, and then set up the brand Vsinger, maintain UGC music and looked for operational breakthroughs. There was an opportunity in the new year 2015, when Bilibili gradually moved from a niche to a mainstream perspective as it grew in size. At that time, a song Ordinary Disco(普通Disco) by singer Li Yuchun in the 2015 HunanTV New Year's Concert after the adaptation caused a lot of enthusiasm, but also let the masses realized that as the original singer, Luo Tianyi had existence of the creative culture behind her. TV station saw this hot invitation to let her and the singer Yang Yuying to appear in the program 2016 HunanTV Spring Festival Gala, the broadcast attracted more attention, after the company let her gradually began to be active in the public media, in the visibility of the subsequent increase, she and the company's business development gradually turned bright, it has also made investments in companies such as Shanghai Huandian, a subsidiary of Bilibili, which favors the virtual idol industry. Vsinger then set out to develop a new voice library for virtual singers, with Luo Tianyi Vocaloid 4 and Japanese version software released on December 30, 2017 and May 21, 2018, respectively. 
Yuezheng Longya, Zhiyu Moke and Mo Qingxian also received software releases. In March 2017, Shanghai Zelishi was renamed Tianshi Culture. From the beginning of 2019, Shanghai Huandian, seeking to expand Bilibili's virtual idol business, increased its holdings several times and became the largest shareholder in Shanghai Henian. At the end of 2019, Shanghai Huandian acquired Chaodian Culture, the company responsible for Bilibili's offline activities and creator brokerage business, which took on Bilibili's virtual idol business in addition to its original operations; Shanghai Henian was ultimately incorporated into Chaodian Culture as a wholly owned subsidiary. Character Character Design The original design was MOTH's painting Yayin Gongyu (雅音宫羽), named the winning work in the image collection event held in 2011. It is a highly recognizable Chinese-style image, with a Chinese hairstyle and clothing based on the girls' school uniforms of 1920s-1930s China; by the end of the event the work had received 12 votes and an approval rating of 83.3% from netizens. Although the most popular work in the event, Yu Caiyin, was also officially selected, it was not launched first for marketing reasons. The final design was completed by ideolo, an artist known for Touhou Project fan works, who drew inspiration from the way the sea-dragon monster swims through the sea in the game Monster Hunter. Unlike the loose settings given to earlier Japanese Vocaloid characters, Luo Tianyi was given a detailed backstory and worldview by Vocaloid™ China: Tianyi is a fifteen-year-old girl born far from the human world who comes to it on a mission and meets partners such as Yuezheng Ling. Since childhood Tianyi has had an elf companion called Tiandian (天钿), set as ten years old and thirty centimeters long, which can turn into a microphone and speaks in Tianyi's place when she is unable to express her emotions. Tianyi's symbol is 羽 (pinyin: yǔ, literally "feather"), a note of China's traditional pentatonic scale (五声音阶), represented by the Sanskrit sign "प"; the ART COLLECTIONS set contains the first draft of the design and further details. The final design caused controversy among fans when it was announced, and its differences from the original entry left some fans of Yayin Gongyu disappointed; Vocaloid™ China later issued a statement describing the design as an attempt to promote Vocaloid culture with Chinese cultural characteristics and asking for fans' understanding and support. The official worldview influenced fan creation, and after the brand changed to Vsinger the operator said it would no longer be enforced. Vsinger unveiled a new image at the 2016 Firefly Anime Game Carnival, which fans generally rejected as a far cry from the original style; in the end it was announced that the design would be used only as a costume in a later concert. The official new image for the Vocaloid 4 software and the Japanese version was announced later and was designed by TID, whose first collaboration with the company had been the illustrations he was invited to draw for ART COLLECTIONS. Before the first official concert, it was announced that Luo Tianyi's official color is the sky blue #66CCFF. Meaning of the Name Project producer jarryestel (Wang Gao, 王杲) cites the line "Yu (羽) is the sound of water, dwells in the north, and its season is winter" (羽为水声,居北方,时序为冬) from the Book of Han to reflect Luo Tianyi's character.
He also explained the meaning of the name to fans on Baidu Tieba: 洛 (pinyin: Luò) is a character tied to Chinese geography (Luoyang and the Luo River) and Chinese culture (Luo Shen, the goddess of the Luo River), and is also a common Chinese family name. 天 (pinyin: Tiān, "the sky") is commonly used by Chinese people in references to their own country (as in the Celestial Empire, 天朝), and Luo Tianyi herself is set as a singer who comes from heaven. 依 (pinyin: yī) is pronounced the same as 一 ("one"), signifying that she is the first Chinese virtual singer; it also carries the sense of relying on and supporting one another, and echoes 伊人 (yiren), a term for a very beautiful girl in Chinese culture. Software The software was originally developed on the Vocaloid 3 engine, which was released in 2011. Vocaloid uses a concatenative (fragment-splicing) synthesis technique: a song is synthesized by selecting fragment data taken from recordings of real singing and stitching the fragments together in the frequency domain. The Vocaloid system consists of three parts: a score editor in which users enter lyrics and melody, a singing database containing the fragment data, and a synthesis engine that joins the fragments into a song. The score information entered by the user in the editor is passed to the synthesis engine, which selects the required fragments from the singing database, connects them and outputs the result as a song; the score editor is bundled with the synthesis engine and acts as host software for the singing database. Initially only Japanese and English were supported, with Chinese support added in Vocaloid 3. With a Chinese singing database installed, the score editor uses a Chinese dictionary to convert the pinyin and phonetic information entered by the user into the corresponding X-SAMPA symbols, but it does not recognize Chinese characters or bare initials. To bring the result closer to a real performance, users can add expressive parameters in the editor such as dynamics, vibrato and breathiness. The available editors are the Vocaloid Editor, the Cubase plug-in version Vocaloid Editor for Cubase, and Crypton's Piapro Studio.
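The three-part pipeline described above can be illustrated with a minimal, hypothetical Python sketch. It is not Vocaloid's actual file format, API or synthesis algorithm; the class names, the pinyin-to-phoneme table and the toy fragment database below are invented purely for illustration.

# Toy illustration of the score-editor / singing-database / synthesis-engine
# split described above. Nothing here is the real Vocaloid API or real
# X-SAMPA data; every name and value is a placeholder.
from dataclasses import dataclass

# Hypothetical stand-in for the Chinese dictionary that maps pinyin input
# to phoneme symbols in the real score editor.
PINYIN_TO_PHONEMES = {
    "luo": ["l", "uo"],
    "tian": ["t", "ian"],
    "yi": ["i"],
}

@dataclass
class Note:
    """One entry in the 'score editor': lyric, pitch and expression."""
    lyric_pinyin: str
    midi_pitch: int        # e.g. 69 corresponds to A4
    beats: float
    dynamics: float = 1.0  # expression parameters the user can tweak
    vibrato: float = 0.0

class SingingDatabase:
    """Stand-in for the fragment database recorded from a real singer."""
    def fragment(self, phoneme, midi_pitch):
        # A real database would return recorded audio fragments; here we
        # just return a label describing what would be looked up.
        return "<{}@{}>".format(phoneme, midi_pitch)

def synthesize(score, db):
    """Toy 'synthesis engine': look up each note's fragments and join them."""
    output = []
    for note in score:
        for phoneme in PINYIN_TO_PHONEMES.get(note.lyric_pinyin, []):
            output.append(db.fragment(phoneme, note.midi_pitch))
    return "".join(output)

score = [Note("luo", 67, 0.5), Note("tian", 69, 0.5), Note("yi", 71, 1.0, vibrato=0.3)]
print(synthesize(score, SingingDatabase()))  # "<l@67><uo@67><t@69><ian@69><i@71>"

In the real system the database holds recorded voice fragments and the engine splices them in the frequency domain, but the division of labour is the same: the editor captures lyrics, pitch and expression; the database supplies the fragments; the engine connects them into a song.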
VOCALOID™3 Library Luo Tianyi Launched on July 12, 2012, produced and released by the Bplats subsidiary Shanghai Henian. According to the official description it has a "healing" voice. Its code number is V3LB0011CN, and the main developer was G.K., a former director of music production at Shanghai Henian, assisted by one other person; G.K. was responsible for most of the recording, sample editing and packaging work. The singing data was provided by the Chinese voice actress Shan Xin, who was selected through the voice audition. When asked to describe the voice she imagined for the first Vocaloid™ China character, she replied, "I hope it's a voice of emptiness, serenity, healing, strength and courage", and she was chosen from among roughly 100 candidates after review and discussion by the staff. During recording she restrained most of her emotion so that the finished product could adapt to a variety of musical styles, while hoping, as a professional, to give it a part of her soul that would quietly convey feelings of "strength", "courage" and "love" to listeners. Jarryestel has pointed out that, because of the wide variety of dialects in China, many people feel slightly uncomfortable with the way Luo Tianyi enunciates Modern Standard Chinese (Beijing pronunciation). Voice library Tianyi_CHN: BPM 80~170, range A2~D4. VOCALOID4 Library Luo Tianyi Launched on December 30, 2017, produced and released by Shanghai Henian. It includes two voicebanks, Meng (萌) and Ning (凝), described on the product packaging as "warm and healing" and "bright and powerful" respectively. Its code number is LTYV4CN, the voice is still provided by Shan Xin, and the main developer is Renxingtu (Pan Jian), the current music director of Shanghai Henian. Voice library Luotianyi_CHN_Meng: BPM 80~170, range A2~D4; Luotianyi_CHN_Ning: BPM 70~160, range C3~F4; both belong to the cross-synthesis (XSY) group Luotianyi_CHN. VOCALOID4 Library Luo Tianyi Japanese Launched on May 21, 2018, produced and released by Shanghai Henian. It includes two voicebanks, Normal and Sweet, described on the product packaging as "healing" and "sweet". Its code number is LTYV4JPN and it was developed by Renxingtu; unlike the Chinese libraries voiced by Shan Xin, the voice data was provided by the Japanese singer Kano. Voice library Luotianyi_JPN_Normal: BPM 80~170, range A2~D4; Luotianyi_JPN_Sweet: BPM 80~170, range A2~D4; both belong to the XSY group Luotianyi_JPN. VOCALOID Luo Tianyi Announced on December 31, 2020; in addition to an engine upgrade for the existing database, a new database is to be added. Fan Works More than a thousand works have appeared since her first release in 2012, and Bilibili is today the main home of her music creators. She is best known for the song Ordinary Disco, which ilem was inspired to write by a teacher's words in class; its magical melody and funny lyrics attracted millions of plays and led several well-known singers to perform it in variety shows and concerts. After the software was officially released, a user with the account name "Nightmare of Christmas Eve" (平安夜的噩梦) released a music video for H.K. Jun's song "Millennium Recipes Carol" (千年食谱颂), which featured many traditional Chinese foods, and Tianyi's fans began calling her "the world's No. 1 foodaholic, Her Highness". Official Album Sing Sing Sing A commemorative album released with the first-run limited edition of the VOCALOID3 Library Luo Tianyi. On July 22, 2012, the Vocaloid Store uploaded a promotional video to Niconico and YouTube, and fan works were included in the accompanying ART COLLECTIONS. Dance Dance Dance Released at the 11th Comic Park show on December 23, 2012. On December 18, 2012, Vanguard Sound uploaded an album crossfade preview video to Bilibili. Unlike the previous album, this one adopts an overall electronic style with a rhythmic electronic-dance feel, and voice library developer G.K. produced two of its songs. The Seven Sides of the Dream (梦的七次方) A cover of the SV01 SeeU's Compilation Album, a fan-made collection for the Korean virtual singer SeeU, released a day after Dance Dance Dance. Star (星) A cover of the Korean creator Dr. Yun's album Dr. Yun's 1st Album★, released on March 23, 2013 at the Fantasia 04 anime exhibition. Fantasy Wonderland (虚拟游乐场) Released on the same day as the Vsinger Live Luo Tianyi 2017 holographic concert and produced in collaboration between Tianshi Henian and Taihe Music. Lost in Tianyi Released on March 1, 2019 in ordinary, gift and deluxe editions; the package contains tracks sung with the Japanese version of the voicebank. Moments Released on July 12, 2020; its theme song One in Ten Thousand Light was also one of the performance tracks in that day's Luo Tianyi Birthday 2020 program.
Game Vocanova Vocaloid™ China's interactive Chinese music game for the iOS platform, featuring the virtual idol Luo Tianyi, in which players can follow her song and dance. Vocanova was launched on the App Store on July 13, 2012 and currently supports only the iPad 2 and the new iPad. In the game, players tap, slide and drag the rhythm icons falling on the screen in time with the song. Concert Bilibili Macro Link 2016 On July 23, 2016, appeared as a guest and sang 66CCFF, CONNECT~Connection to the Heart~ (CONNECT~心的连接~) and Night Dance (夜舞). Xu Song 2017 Shanghai Concert On March 18, 2017, appeared as a guest, sang Light-Chasing Messenger, and sang Late Night Bookstore together with the singer. Vsinger Live 2017 Luo Tianyi Holographic Concert Held at the Shanghai Mercedes-Benz Arena on June 17, 2017; the concert consumed all of the profits Tianshi Culture had earned from its investment in the film The Mermaid. Luo Tianyi & Yanhe 2018 Birthday Held on July 14, 2018 at the Yangpu Grand Theater in Shanghai. Bilibili Macro Link-Visual Release × Vsinger Live On July 20, 2018, held as a merged event with Bilibili Macro Link-Visual Release. Nation Wind Bliss Night (国风极乐夜) On November 3, 2018, at a concert held by Netease Music at the National Stadium in Beijing, she performed the song Rule the World (权御天下), written by Turtle sui, with Japan's Lagtera providing the visual technology. Luo Tianyi & Lang Lang Holographic Concert Held on February 23, 2019 at the Shanghai Mercedes-Benz Arena; it was the first crossover collaboration between Luo Tianyi and the famous pianist Lang Lang. 2019 Luo Tianyi Birthday Held on July 12, 2019 on the 94th floor of the Shanghai World Financial Center. Bilibili Macro Link-Visual Release 2019 On July 19, 2019, performed as a guest; it was the first time she appeared on the same stage as Hatsune Miku. DIVE XR FESTIVAL On September 22, 2019, her first public performance in Japan, at which she sang 66CCFF, Millennium Recipes Carol, Draw (ドロー) and T.A.O written by Jin. Luo Tianyi Birthday 2020 Held on July 12, 2020 via Bilibili Live. 2020 Vsinger Live Held on August 21, 2020; because of the COVID-19 pandemic it was postponed and changed to an online live stream in cooperation with Youku. References Vocaloids introduced in 2012 Fictional singers
326107
https://en.wikipedia.org/wiki/Ashton-Tate
Ashton-Tate
Ashton-Tate (Ashton-Tate Corporation) was a US-based software company best known for developing the popular dBASE database application. Ashton-Tate grew from a small garage-based company to become a multinational corporation. Once one of the "Big Three" software companies, which included Microsoft and Lotus, the company stumbled in the late 1980s and was sold to Borland in September 1991. History The history of Ashton-Tate and dBASE are intertwined and as such, must be discussed in parallel. Early history: dBASE II (1981–1983) In 1978, Martin Marietta programmer Wayne Ratliff wrote Vulcan, a database application, to help him make picks for football pools. Written in Intel 8080 assembly language, it ran on the CP/M operating system and was modeled on JPLDIS, a Univac 1108 program used at JPL and written by fellow programmer Jeb Long. Ashton-Tate was launched as a result of George Tate and Hal Lashlee having discovered Vulcan from Ratliff in 1981 and licensing it (there never was any Ashton). The original agreement was written on one page, and called for simple, generous royalty payments to Ratliff. Tate and Lashlee had already built three successful start-up companies by this time: Discount Software (whose president was Ron Dennis, and was one of the first companies to sell PC software programs through the mail to consumers), Software Distributors (CEO Linda Johnson / Mark Vidovich) and Software Center International, the first U.S. software store retail chain, with stores in 32 states. (Glenn Johnson was co-founder along with Tate & Lashlee. SCI was later sold to Wayne Green Publishing.) Vulcan was sold by SCDP Systems. The founders needed to change the name of the software, because Harris Corporation already had an operating system called Vulcan. Hal Pawluk, who worked for their advertising agency, suggested "dBASE", including the capitalisation. He also suggested that the first release of the product "II" would imply that it was already in its second version, and therefore would be perceived as being more reliable than a first release. The original manual was too complex from Pawluk's perspective, so he wrote a second manual, which was duly included in the package along with the first. Pawluk created the name for the new publishing company by combining George's last name with the fictional Ashton surname, purportedly because it was felt that "Ashton-Tate" sounded better, or was easier to pronounce, than "Lashlee-Tate". Contrary to rumor at the time, George Edwin Tate did not have a pet parrot named Ashton until after Hal Pawluk named the company. Because people kept calling the company asking to speak to Mr Ashton, this hidden tidbit of information became a PC industry insider joke. dBASE II had an unusual guarantee. Customers received a crippleware version of the software and a separate, sealed disk with the full version; they could return the unopened disk for a refund within 30 days. The guarantee likely persuaded many to risk purchasing the $700 application. In 1981 the founders hired David C. Cole to be the chairman, president and CEO of their group of companies. The group was called "Software Plus." It did not trade under its own name, but was a holding company for the three startups: Discount Software, Software Distributors, and Ashton-Tate. Cole was given free rein to run the businesses, while George Tate primarily remained involved in Ashton-Tate. 
Lashlee was somewhat less involved on a day-to-day basis in Ashton-Tate by this time, although he was always aware of and up to speed on all three of the businesses, and was an active board member and officer of SPI. In June 1982 Cole hired Rod Turner as the director of OEM sales for Ashton-Tate. In a few weeks Turner solved a sales commission plan issue, that had been bothering George Tate for some time, with the top performing salesperson (Barbara Weingarten-Guerra), and Tate and Cole promoted Turner to be Vice President of world-wide sales three weeks after his initial hire. Turner was approximately the 12th employee of Ashton Tate. Since the company was truly boot-strapped, using no external venture capital, the founders did not make a practice of hiring experienced veterans, and most of the team at Ashton-Tate were young and enthusiastic, but inexperienced. Jim Taylor was responsible for product management in the early days, and worked closely with Wayne Ratliff and the other key developers on dBASE II. In 1982 Perry Lawrence and Nelson Tso were the two developers who were employed at Ashton-Tate, while Wayne Ratliff employed Jeb Long from his royalty stream. IBM PC dBASE II was ported to the IBM PC (i.e. the MS-DOS operating system) and shipped in September 1982. Pawluk ran advertisements promoting dBASE II for the IBM PC for months before it shipped. When dBASE II for the IBM PC shipped, it was one of few major applications available on the PC, and that fact, combined with good promotion and sales in the US and internationally, caused dBASE II sales to grow rapidly. Turner expanded Ashton-Tate's international distribution efforts and encouraged exclusive distributors in major markets to translate dBASE II from English to non-English versions. The early presence of dBASE II in international markets, as IBM rolled out the PC in those markets, facilitated rapid growth in sales and market share for dBASE. At one point in 1983, the company's French distributor "La Commande Electronique" (whose owner was Hughes LeBlanc) claimed that "one in ten buyers of a PC in France is buying dBASE II." In the winter of 1982, Turner recruited the managing director (David Imberg, now David Inbar) for Ashton-Tate's first subsidiary, Ashton-Tate UK. Turner set a goal for Inbar of achieving 15% per month compound revenue growth in the first 18 months (using the prior UK distributor's volume as a starting point), which Inbar accomplished. He subsequently expanded Ashton-Tate's operations across Europe with subsidiaries in Germany and the Netherlands. When Turner brought Inbar to the Culver City, California, corporate headquarters of Ashton-Tate to be trained, the offices were so crowded that the only space available for Inbar was a small desk beside a large photocopier, with no phone line; the offices were so crowded that when Turner needed to conduct a confidential meeting, he would have it standing up in the nearby restroom. With the growing popularity of ever-larger hard drives on personal computers, dBASE II turned out to be a huge seller. For its time, dBASE was extremely advanced. It was one of the first database products that ran on a microcomputer, and its programming environment (the dBASE language) allowed it to be used to build a wide variety of custom applications. Although microcomputers had limited memory and storage at the time, dBASE nevertheless allowed a huge number of small-to-medium-sized tasks to be automated. 
The value-added resellers (VARs) who developed applications using dBASE became an important early sales channel for dBASE. By the end of the fiscal year ending in January 1982, the firm had revenues of almost $3.7 million with an operating loss of $313,000 dollars. Among Cole's early acts was to hire an accountant to set up a financial system, install a management structure, and introduce processes to manage operations and orders. Cole's mission was "to shift the balance of power from those who understand how computers work to those who need what computers can do." Cole licensed two products in 1982, building on his publishing background. These two unsuccessful products were launched in October 1982: The Financial Planner and The Bottom Line Strategist. The Financial Planner was a sophisticated financial modeling system that used its own internal language - but it was not as widely appealing as spreadsheets like SuperCalc. The Bottom Line Strategist was a template financial analysis system that had very limited flexibility and function. Both were released at the same price as dBASE II, but neither product was aggressively marketed, and both were put into a benign-neglect mode by Turner when it became clear that they did not have sizable potential. Ashton-Tate: IPO and dBASE III (1983–1985) By the end of January 1983, the company was profitable. In February 1983 the company released dBASE II RunTime, which allowed developers to write dBASE applications and then distribute them to customers without them needing to purchase the "full" version of dBASE. The growth in revenue was matched by a growth in employees. The company hired its first Human Resources manager, put together its first benefits package, and moved headquarters to 10150 West Jefferson Boulevard in Culver City. In May 1983 Cole changed the name of the SPI holding company to be Ashton Tate, which put the company in the position of having a mail order company "Discount Software" and "Software Distributors" as subsidiaries. The newly renamed holding company promptly sold Discount Software and Software Distributors. Cole negotiated an agreement with Wayne Ratliff in which Ratliff exchanged his future royalty stream on dBASE into equity in Ashton Tate, thereby significantly increasing the profitability of the company. Cole also took steps to control its technology by creating an in-house development organization (headed by Harvey Jean, formerly of JPL, as VP engineering), and to diversify by funding two outside development teams: Forefront Corporation (the developer of the product that would later be named "Framework") and Queue Associates. That Spring, Ashton Tate released Friday!. By the time of the November 1983 IPO, the company had grown to 228 employees. The IPO raised $14 million. When the fiscal year ended in January 1984, revenues had more than doubled to $43 million and net income had jumped from $1.1 million (fiscal 1983) to $5.3 million. By early 1984 InfoWorld estimated that Ashton-Tate was the world's sixth-largest microcomputer-software company. dBASE II reportedly had 70% of the microcomputer-database market, with more than 150,000 copies sold. Ashton-Tate published a catalog listing more than 700 applications written in the language, and more than 30 book, audio, video, and computer tutorials taught dBASE. Other companies produced hundreds of utilities that worked with the database, which Ratliff believed contributed to Ashton-Tate's success; "You might say it's because the software is incomplete. 
There are 'problems' with dBASE—omissions for other software developers to fill". He noted that "If they weren't with us, they'd be against us", and Cole promised to always notify third parties before announcing a new product or changing dBase's marketing. In May the company announced, and in July shipped, dBASE III as the successor to dBASE II. July also saw the release of Framework, an integrated office suite developed by Forefront Corporation and funded by Ashton-Tate. These were the company's first products released with copy protection schemes in an attempt to stop software piracy. dBASE III was the first release written in the C programming language to make it easier to support and port to other platforms. To facilitate the rewrite, an automatic conversion program was used to convert the original Vulcan code from CP/M Z-80 and DOS 8088 assembly language code into C, which resulted in the beginnings of a difficult to maintain legacy code base that would haunt the company for many years to come. This also had the side effect of making the program run somewhat slower, which was of some concern when it first shipped. As newer machines came out the problem was erased through increased performance of the hardware, and the "problem" simply went away. In fall 1984 the company had over 500 employees and was taking in $40 million a year in sales (with approximately $15 million in Europe), the vast majority of it from dBASE or related utilities. Ed Esber Ashton Tate held a large company wide convention aboard the Queen Mary in Long Beach, California, in early August, 1984 and presented the new products like Friday! to hundreds of clients, and staff. Immediately after the Queen Mary convention, George Tate suddenly died of a heart attack at the age of 40 on August 10, 1984. David Cole on October 29 announced his resignation and left for Ziff-Davis, leaving Ed Esber to become CEO. Cole hired Esber because he was the marketing expert who launched VisiCalc and who built the first distribution channels for personal computer software. (VisiCalc was the first spreadsheet and is credited for sparking the personal computer revolution and was the first commercially successful personal computer software package.) During Esber's seven-year tenure, Ashton Tate had its most prosperous years and a few of its most controversial. It is also when Ashton-Tate became one of the "Big Three" personal computer software companies who had weathered the early 1980s "shakeout", and was considered an equal of Microsoft and Lotus Development. Under his leadership Ashton-Tate sales grew over 600% from $40M to over $318M. In November, shortly after Esber took over, dBASE III version 1.1 was released to correct some of the numerous bugs found in the 1.0 release. As soon as the 1.1 release shipped, development focus turned to the next version, internally referred to as dBASE III version 2.0. Among other things, the 2.0 release would have a new kernel for increased speed, and new functions to improve application development. Esber's relationship with Wayne Ratliff, however, was tumultuous, and Ratliff quit several months later. Eventually a group of sales and marketing employees left to join Ratliff at Migent Corporation to compete with Ashton Tate. Later (January 1987), Ashton-Tate would sue Migent for alleged misappropriation of trade secrets. Ratliff would eventually approach Esber about rejoining Ashton-Tate and insisting on reporting directly to him. Jeb Long took over as dBASE's main architect in Ratliff's absence. 
In October 1985 the company released dBASE III Developer's Edition. Internally this release was known as version 1.2. It had some of the new features expected to be in the upcoming 2.0 release, including the new kernel and features primarily useful to application developers. Version 1.2 was one of the most stable dBASE versions that Ashton-Tate ever released, if not the most stable. It was also one of the least known and most often forgotten. Mostly, it was a release to appease developers waiting for 2.0 (dBASE III+). In late 1985 the company moved its headquarters to the final location at 20101 Hamilton Avenue in Torrance. Development was spread throughout California, although dBASE development was centered at the Glendale offices. dBASE III+ and third party clones (1986–1987) dBASE III+, a version including character-based menus for improved ease-of-use, had troubles maturing and had to be recalled just prior to its release in early 1986 due to an incorrect setting in the copy-protection scheme. However the company handled this with some aplomb, and although some customers were affected, Ashton-Tate's handling of the problems did much to improve customer relations rather than sour them. dBASE III+ would go on to be just as successful as dBASE II had been, powering the company to $318 million in sales in 1987. dBASE had grown unwieldy over the years, so Esber started a project under Mike Benson to re-architect dBASE for the new world of client–server software. It was to be a complete rewrite, designed as the next generation dBASE. dBASE was a complex product, and a thriving third-party industry sprung up to support it. A number of products were introduced to improve certain aspects of dBASE, both programming and day-to-day operations. As Ashton-Tate announced newer versions of dBASE, they would often decide to include some of the functionality provided by the third parties as features of the base system. Predictably, sales of the third-party version would instantly stop, whether or not the new version of dBASE actually included that feature. After a number of such vaporware announcements, third-party developers started becoming upset. One particularly important addition to the lineup of third-party add-ons was the eventual release of dBASE compilers, which would take a dBASE project and compile it and link it into a stand-alone runnable program. This not only made the resulting project easy to distribute to end users, but it did not require dBASE to be installed on that machine. These compilers essentially replaced Ashton-Tate's own solution to this problem, a $395 per-machine "runtime" copy of dBASE, and thereby removed one source of their income. The most successful such compiler was Clipper, from Nantucket Software. Eventually a number of these were developed into full-blown dBASE clones. Esber was upset with the companies that cloned dBASE products but was always supportive of third-party developers, whom he viewed as an important part of the dBASE ecosystem. He neither believed nor supported companies that cloned dBASE, in the process leveraging the millions of dollars his shareholders had paid to market dBASE. Starting with minor actions, he eventually went to great lengths to stop cloners with cease-and-desist letters and threats of legal action. At one industry conference he even stood up and threatened to sue anyone who made a dBASE clone, shouting "Make my day!". This sparked great debates about the ownership of computer languages and chants of "innovation not litigation". 
As a result of this continued conflict, the third-party community slowly moved some of their small business customers away from dBASE. Fortunately for Ashton-Tate, large corporations were standardizing on dBASE. dBASE IV: Decline and fall (1988–1990) Ashton-Tate had been promising a new version of the core dBASE product line starting around 1986. The new version was going to be more powerful, faster, and easier to create databases with. It would have improved indexes and networking, support SQL internally as well as interacting with SQL Server, and include a compiler. Ashton-Tate announced dBASE IV in February 1988 with an anticipated release set for July of that year. dBASE IV was eventually released in October 1988 as two products: Standard and Developer's editions. Unfortunately, dBASE IV was both slow and very buggy. Bugs are not at all that surprising in a major product update, something that would normally be fixed with a "dot-one" release before too much damage was done. This situation had occurred with dBASE III for instance, and Ashton-Tate had quickly fixed the problems. However a number of issues conspired to make the dBASE IV 1.0 release a disaster. For one, while dBASE IV did include a compiler, it was not what the developer community was expecting. That community was looking for a product that would generate stand-alone, executable code, similar to Clipper. The dBASE IV compiler did produce object code, but still required the full dBASE IV product to run the result. Many believed that Ashton-Tate intended dBASE IV to compete with and eliminate the third-party developers. The announcement alone did much to upset the livelihood of the various compiler authors. More problematic however was the instability of the program. The full scale of the problem only became obvious as more people attempted to use the product, especially those who upgraded to the new version. The bugs were so numerous that most users gave up, resigned to wait for a dot-one release. As word got out, sales slumped as existing users chose to hold off on their upgrades, and new users chose to ignore the product. Neither of these issues would, by themselves, kill the product. dBASE had an extremely large following and excellent name recognition. All that was needed was an update that addressed the problems. At the time of its release, there was a general consensus within Ashton-Tate that a bug-fix version would be released within six months of the 1.0 release. If that had happened, the loyal users might have been more accepting of the product. Rather than do that, Ashton-Tate management instead turned their attention to the next generation of applications, code named Diamond. Diamond was to be a new, integrated product line capable of sharing large sets of data across applications. This effort had been underway for years and was already consuming many of the resources in the company's Glendale, Torrance, Walnut Creek and Los Gatos (Northern California Product Center) offices. However, once it became apparent that Diamond was years away from becoming a product, and with poor reviews and slipping sales of dBASE IV 1.0, Ashton-Tate returned its focus to fixing dBASE IV. It was almost two years before dBASE IV 1.1 finally shipped (in July 1990). During this time many customers took the opportunity to try out the legions of dBASE clones that had appeared recently, notably FoxBase and Clipper. Sales of dBASE had plummeted. The company had about 63% of the overall database market in 1988, and only 43% in 1989. 
In the final four quarters as a company, Ashton-Tate lost close to $40 million. In August 1989, the company laid off over 400 of its 1,800 employees. The Microsoft partnership for a version called the Ashton-Tate/Microsoft SQL Server also came to nothing, as Ashton-Tate's sales channels were not prepared to sell what was then a high-end database. The first version of SQL Server also only ran on IBM OS/2, which also limited its success. A version of dBASE that communicated directly with SQL Server, called dBASE IV Server Edition, was released in 1990, and was reviewed as the best available client for SQL Server (in both Databased Advisor and DBMS magazines), but the product never gained traction and was one of the casualties of the Borland acquisition. Microsoft eventually released Access in this role instead. Sale to Borland (1991) Esber had been trying to grow the company for years via acquisitions or combining forces with other software companies, including merger discussions with Lotus in 1985 and again in 1989. Ashton-Tate's strategically inept board passed up numerous opportunities for industry-changing mergers. Other merger discussions that Ashton-Tate's board rejected or reached an impasse on included Cullinet, Computer Associates, Informix, Symantec and Microsoft. (Microsoft would later acquire Fox Software after Borland acquired Ashton-Tate and the United States Department of Justice forced Borland to not assert ownership of the dBASE language.) In 1990 Esber proposed a merger with Borland. During the first discussions, the board backed out and dismissed Esber thinking him crazy to entertain a merger of equals (combining the companies at existing market valuations) with the smaller competitor Borland, and on February 11, 1991, replaced him as CEO with William P. "Bill" Lyons. Lyons had been hired to run the non-dBASE business and heretofore had been unsuccessful. Lyons would ship dBASE IV 1.1, a product Esber managed and was already in beta when let go. After giving the board a merger compensation package (including individual bonuses of $250 thousand) and giving the management team repriced options and golden parachutes, the board and Lyons reinitiated discussions with Borland, but this time structured as a take-over of Ashton-Tate with a significant premium over Ashton-Tate's current market valuation but substantially below the price Esber had negotiated. Wall Street liked the deal and Borland stock would reach new highs shortly before and after the merger. Some considered the $439 million in stock they paid to be too much. Philippe Kahn, CEO of Borland, apparently did not consult with his management team prior to committing to acquire Ashton Tate over a weekend visit to Los Angeles. The Borland merger was not a smooth one. Borland had been marketing the Paradox database specifically to compete with dBASE, and its programmers considered their system to be far superior to dBASE. The Paradox group was extremely upset whenever Kahn so much as mentioned dBASE, and an intense turf war broke out within the company. Borland was also developing a competitor product called The Borland dBase Compiler for Windows. This product was designed by Gregor Freund who led a small team developing this fast, object-oriented version of dBASE. It was when Borland showed the product to the Ashton-Tate team that they finally conceded that they had lost the battle for dBASE. 
Nevertheless, Kahn was observant of the trends in the computer market, and decided that both products should be moved forward to become truly Microsoft Windows-based. The OO-dBASE compiler was no more able to run under Windows than was dBASE IV, causing Borland to abandon both code bases in 1993 and spin up a new team to create a new product, eventually delivered as dBASE for Windows in 1994. Meanwhile, Paradox was deliberately downplayed in the developer market since dBASE was now the largest Borland product. Microsoft introduced Access in late 1992, and eventually took over almost all of the Windows database market. Further, in the summer of 1992 Microsoft acquired Ohio-based Fox Software, makers of the dBASE-like products FoxBASE+ and FoxPro. With Microsoft behind FoxPro, many dBASE and Clipper software developers would start working in FoxPro instead. By the time dBASE for Windows was released, the market hardly noticed. Microsoft appears to have neglected FoxPro subsequent to the acquisition, perhaps because they also owned and promoted Microsoft Access, a direct competitor to dBASE. Certainly, the PC database market became a great deal less competitive as a result of their deal to buy FoxPro. When Borland eventually sold its Quattro Pro and Paradox products to Novell, where they would be joined with WordPerfect in an attempt to match Microsoft Office, Borland was left with InterBase, which Esber had purchased in the late 1980s and had its origins as a derivative of the RDB database work at DEC. Borland's ongoing strategy was to refocus its development tools on the corporate market with client–server applications, so Interbase fitted in as a low-end tool and a good generic SQL database for prototyping. This proved to be the longest lasting and most positive part of the Ashton-Tate acquisition, ironic since it was almost an oversight and little known to Borland until they acquired Ashton-Tate. Overall, the Ashton-Tate purchase proved to be unsuccessful. Several years later, Philippe Kahn would leave Borland amidst declining financial performance, including many years of losses. Downfall While Ashton-Tate's downfall can be attributed to several factors, chief among them were: the over-reliance on a single-product line (dBASE) the appalling quality of dBASE IV on release, compounded by the complete failure to take any action to rectify this when it was needed a focus on future products without addressing the needs of the current customers the departure of Wayne Ratliff Any one of these would have been a surmountable problem, but combined they brought about the swift decline of the company. Ashton-Tate's dependence on dBASE is understandable. It was one of the earliest killer applications in the CP/M world, along with WordStar and (on other platforms) VisiCalc, and was able to make the transition to the IBM PC to maintain its dominance. Its success alone is what created and sustained the company through the first nine years. However, the overreliance on dBASE for revenue had a catastrophic effect on the company when dBASE IV sales tanked. In the end, the poor quality and extremely late release of dBASE IV drove existing customers away and kept new ones from accepting it. This loss of revenue for the cash cow was too much for the company to bear, and combined with management missteps, eventually led to the sale to the upstart Borland International. 
Non-dBASE products Through the mid-1980s Esber increasingly looked to diversify the company's holdings, and purchased a number of products to roll into the Ashton-Tate lineup. By and large most of these acquisitions failed and did not result in the revenue expected. This experience is another illustration of the difficulty of integrating acquired companies and products in a rapidly changing technological market. Friday! Friday was a product conceived during the David Cole era at Ashton-Tate. Named after Robinson Crusoe's man Friday, because, by using the program, one could supposedly "get everything done by Friday!", this was a simple personal information manager (PIM) program written around 1983, years before that acronym became popular. It used a customized version of dBASE II, predating the dBASE III product. Several design flaws surfaced in beta-testing that required a major design and rewriting of code. These changes were made in-house by Jim Lewis who soon after joined Ashton-Tate as a lead developer and product manager. After a significant advertising campaign and modest interest, Friday! was eventually withdrawn from the market. (See also, Microsoft Bob.) Javelin On April 10, 1986, Ashton-Tate signed a marketing deal with Javelin Software to distribute their financial modeling software named Javelin outside of the US and Canada. Framework Their most successful attempt at a breakout was with Framework. Framework, like dBASE before it, was the brainchild of a single author, Robert Carr, who felt that integrated applications offered huge benefits over a selection of separate apps doing the same thing. In 1983 he had a runnable demo of his product, and showed it to Ashton-Tate who immediately signed a deal to support development in exchange for marketing rights. Framework was an integrated DOS-based office suite that combined a word processor, spreadsheet, mini-database application, outliner, charting tool, and a terminal program. Later versions also added e-mail support. Framework also had the distinction of being available in over 14 languages, and it was more successful in Europe than in North America. Although DOS based, Framework supported a fully functional GUI based on character graphics (similar to Borland's OWL). Framework eventually got locked into an industry battle, primarily with Lotus Symphony, and later with Microsoft Works. The market was never large to begin with, as most customers chose to purchase the large, monolithic versions of applications even if they never used the extra functionality. Borland later sold Framework to Selections & Functions, who continue to sell it today. MultiMate MultiMate was a word processor package created to copy the basic operation of a Wang dedicated word processor workstation on the PC. In the early 1980s many companies used MultiMate to replace these expensive single-purpose systems with PCs, MultiMate offering them an easy migration path. Although it wasn't clear at the time, this migration was largely complete by the time Ashton-Tate bought the company in December 1985. Sales had plateaued, although they were still fairly impressive at the time. What was originally a deliberate attempt to copy the Wang system now made the product seem hopelessly outdated, and it would require a major upgrade to remain useful. WordPerfect took advantage of these issues and took market share to a degree essentially lethal for MultiMate. The Master series of products Ashton-Tate purchased Decision Resources of Westport, Connecticut, in 1986. 
Decision Resources had created the Chart Master, Sign Master, Map Master and Diagram Master programs. These were simple, effective business charting/drawing programs that counted on various spreadsheet programs being so poor at charting that people would gladly pay for another program to improve on them. By the time Ashton-Tate purchased the company it was clear that newer generations of spreadsheet programs would improve their charting abilities to the point where the Decision Resource products wouldn't be needed, but the company was also working on a new drawing package that was more interesting in the long run. After the purchase was completed it became clear that the drawing product was inadequate. Although it was released as Draw Applause it never sold well. Byline Byline was an early desktop publishing program developed by the company SkiSoft and distributed and marketed by Ashton-Tate. When it was introduced sometime around 1987, it was both fairly inexpensive and easy to use, and gained a small but devoted following. Users designed a page by filling out an onscreen form that described page characteristics: margins, columns, font and size, and so on. The program created the page and an onscreen preview. This method of working was in contrast to the more directly interactive WYSIWYG approach taken by Aldus PageMaker and Ventura Publisher, which became more popular as windowing systems and GUIs became more common. Also, as time went on more and more so-called desktop publishing features were added to popular word processing software, probably reducing the market for such a low-end desktop publishing program. Oddly, Byline was written in the Forth programming language. RapidFile A flat-file database program launched in October 1986 that was commonly used to create mailing labels and form letters on PCs running the DOS operating system. RapidFile was also adept at organizing and manipulating data imported from other software programs. It was designed to be a fast, easy-to-use and less-expensive database for those who did not require the sophisticated capabilities of dBASE. It achieved moderate success for Ashton-Tate, but a version for Microsoft Windows was never developed. RapidFile is unusual in that it was developed in the programming language Forth. Rapidfile version 1.2 was released in 1986, with versions available in several languages including English, French and Dutch. Although Rapidfile was created for the DOS operating system, information is available to show that it can be persuaded to work reasonably well in the DOS box of Microsoft Windows 95, 98, 2000 and XP, and also under Linux using the DOSemu emulation software. Apple Macintosh products When Apple Computer was introducing the Macintosh ("Mac") in the early 1980s, Ashton-Tate was one of the "big three" software companies that Apple desperately wanted to support their new platform. When approached, Ashton-Tate indicated an interest in becoming a major player in the new market. As early as the winter of 1984, only a few months after the Macintosh introduction, the company purchased a small Macintosh database developer and moved them to their Glendale development center to work on what would later be known as dBASE Mac. Soon after this, in early 1985, they agreed to fund development of a spreadsheet program being developed by Randy Wigginton, former project lead of MacWrite. 
Years later they added a "high-end" word processor from Ann Arbor Softworks, who were in the midst of a rather public debacle while trying to release FullWrite Professional, which was now almost a year late. Ed Esber and Apple Computer chairman John Sculley jointly announced Ashton Tate's family of Mac products in Palo Alto, California. dBASE Mac finally shipped in September 1987, but it was dBASE in name only. Users were dismayed to learn that in order to interact with their major investment in dBASE on the PC, their applications would have to be re-written from scratch. Adding to their frustration was the fact that it crashed frequently and was extremely slow. Given that the program was really a completely new Mac-only system, it had to compete with other Mac-only database systems like 4th Dimension, Helix and FileMaker. FullWrite and Full Impact were released in 1988. Both were liked by reviewers and had leading edge features. FullWrite was an outstanding product, while Full Impact had the bad luck of being timed just after a major new release of Microsoft Excel and the release of Informix Wingz. All three products were excellent at their core, but were not viewed as a family and needed to link together more cleanly. They all also needed a solid follow-up release to address some of the bugs and performance issues. However, no major upgrades were ever shipped for either FullWrite or dBASE Mac, and the only major upgrade to FullImpact shipped a full two years after release. Releases of Microsoft Word and Excel soon closed some of the feature gaps, and as the Mac OS changed the products became increasingly difficult to run. Microsoft embarked on a campaign in earnest to discredit and kill Ashton-Tate's products, at one time exaggerating the system requirements for FullWrite, and going so far as to delete Ashton Tate software from Mac dealers' demonstration computers. FullWrite was later sold off by Borland in 1994 to Akimbo Systems, but by that time Microsoft Word had achieved market domination and they, too, eventually gave up on it. dBASE Mac was sold off in 1990 and re-released as nuBASE, but it was no more successful and was gone within a year. Full Impact simply disappeared. SQL Server One problem with dBASE and similar products is that it was not based on the client–server model. That means that when a database is used by a number of users on a network, the system normally relies on the underlying network software to deliver entire files to the user's desktop machine where the actual query work is carried out. This creates heavy load on the network, as each user "pulls down" the database files, often to do the same query over and over. In contrast, a client–server system receives only small commands from the user's machine, processes the command locally on the server, and then returns only those results the user was looking for. Overall network use is dramatically lowered. A client–server database is a fundamentally different sort of system than a traditional single-user system like dBASE, and although they share many features in common, it is typically not a simple task to take an existing single-user product and turn it into a true client–server system. As the business world became increasingly networked, Ashton-Tate's system would become irrelevant without updating to the client server era. Ed Esber and Bill Gates introduced SQL Server to the world in a joint New York press conference. 
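The contrast between the two access models just described can be sketched with a short, hypothetical Python example. The record set, both classes and the query threshold are invented for illustration only; they merely stand in for a dBASE-style shared-file database and a SQL-style server respectively.

# Hypothetical contrast between a shared-file database (dBASE-style) and a
# client-server database. The data and classes are invented for illustration.
RECORDS = [{"name": "cust%d" % i, "balance": i * 10} for i in range(1000)]

class SharedFileDatabase:
    """File-sharing model: the whole table travels over the network."""
    def fetch_entire_table(self):
        # Every client pulls every record and then filters locally.
        return list(RECORDS)  # ~1,000 records on the wire

class DatabaseServer:
    """Client-server model: the query runs where the data lives."""
    def run_query(self, min_balance):
        # Only the command arrives, and only the matching rows go back.
        return [r for r in RECORDS if r["balance"] >= min_balance]

# File-sharing client: heavy network traffic, then local filtering.
local_copy = SharedFileDatabase().fetch_entire_table()
big_balances = [r for r in local_copy if r["balance"] >= 9900]

# Client-server client: a small command goes out, a small result comes back.
server_result = DatabaseServer().run_query(min_balance=9900)

print(len(local_copy), "records transferred versus", len(server_result))

The point of the second model is that only the query and its result set cross the network, which is what pairing dBASE as a front end with SQL Server as a back end was intended to achieve.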
The basic idea was to use SQL Server as a back-end and dBASE as the front-end, allowing the existing dBASE market to use their forms and programming knowledge on top of a SQL system. SQL Server was actually a product developed by Sybase corporation, which Microsoft had licensed. From a business perspective this had little direct effect on the company, at least in the short term. dBASE continued to sell well, and the company eventually peaked at $318M in yearly sales. During this period, Esber hired some of the most brilliant database engineers in the industry, including Dr. Moshe Zloof from IBM, Harry Wong, and Mike Benson (who would later head Esber's efforts to rebuild a new dBASE). Tate Publishing The Tate Publishing division of Ashton-Tate initially published books about Ashton-Tate's software; in October 1988 it branched out to third-party software. Lawsuits Esber had earlier threatened a group of dBASE users who were attempting to define a standard dBASE file format. With this standard, anyone could create a dBASE compatible system, something Esber simply wouldn't allow. But as soon as they were issued the cease-and-desist, they simply changed their effort to create a "new" standard known as "xBase". Esber had previously decided to sue one of the clone companies involved, then known as Fox Software. By the time the case worked its way to court in 1990, Fox Software had released FoxPro and was busy increasing market share. If the court case was successful, Ashton-Tate could stop FoxPro and use the precedent to stop the other clones as well, allowing dBASE to regain a footing and recover from the dBASE IV incident. These hopes came to an end when the case was thrown out of court. During the initial proceedings it was learned that dBASE's file format and language had been based on a mainframe product used at JPL, where Ratliff had been working when he first created Vulcan. The credibility of Ratliff was jeopardized by his alternate claims of ownership while at Ashton Tate and then supporting the roots at JPL after he left. All the facts were never sorted out and Ashton-Tate's competitors had a self-interest motivated field day in writing amicus briefs. When the federal judge reviewed the work of his clerks he overturned his earlier ruling, and decided to hear the case on whether or not Ashton-Tate owned the language. In April 1991, the judge vindicated Esber's decision to protect Ashton-Tate's investment of several hundred million in the development and marketing of dBase, by ruling that Ashton-Tate did own the language. Unfortunately, his earlier ruling had already done considerable damage. Eventually, as part of the merger with Borland, the US Justice Department required that Borland not assert copyright claims to menu commands and the command language of dBASE. This paved the way for Microsoft to buy Fox Software. Products dBASE Framework – integrated word processor, outliner and spreadsheet application InterBase – purchased from Groton Database Systems MultiMate – DOS-based word processor RapidFile – database application written in MMSForth References Further reading Ashton-Tate – from Ed Esber's official website contains host of articles and financial performance Interview with Wayne Ratliff – contains many notes on the early history of dBASE Ashton-Tate copyright shield for dBASE line stripped by court order – details the court case in which dBASE's history lost them the ability to claim copyright. 
Software companies established in 1980 Defunct software companies of the United States Borland Defunct companies based in Greater Los Angeles Defunct companies based in California Companies based in Torrance, California 1991 mergers and acquisitions Software companies disestablished in 1991 1980 establishments in California 1991 disestablishments in California
411893
https://en.wikipedia.org/wiki/United%20States%20Naval%20Research%20Laboratory
United States Naval Research Laboratory
The United States Naval Research Laboratory (NRL) is the corporate research laboratory for the United States Navy and the United States Marine Corps. It conducts basic scientific research, applied research, technological development and prototyping. The laboratory's specialties include plasma physics, space physics, materials science, and tactical electronic warfare. NRL is one of the first US government scientific R&D laboratories, having opened in 1923 at the instigation of Thomas Edison, and is currently under the Office of Naval Research. NRL is a Navy Working Capital Fund activity, which means it is not a line-item in the US Federal Budget. Instead of direct funding from Congress, all costs, including overhead, are recovered through sponsor-funded research projects. NRL's research expenditures are approximately $1 billion per year. Research The Naval Research Laboratory conducts a wide variety of basic research and applied research relevant to the US Navy. NRL scientists and engineers author over 1200 openly published research papers in a wide range of conferences, symposia, and journals each year. It has a history of scientific breakthroughs and technological achievements dating back to its foundation in 1923. In some instances the laboratory's contributions to military technology have been declassified decades after those technologies have become widely adopted. In 2011, NRL researchers published 1,398 unclassified scientific and technical articles, book chapters and conference proceedings. In 2008, the NRL was ranked No. 3 among all U.S. institutions holding nanotechnology-related patents, behind IBM and the University of California. Current areas of research at NRL include, for example: advanced radio, optical and infrared sensors; autonomous systems; computer science, cognitive science, and artificial intelligence; communications technology (e.g., radio, networking, optical transmission); directed energy technology; electronic and electro-optical device technology; electronic warfare; enhanced maintainability, reliability and survivability technology; environmental effects on naval systems; human-robot interaction; imaging research and systems; information security; marine geosciences; materials; meteorology; ocean acoustics; oceanography; plasma physics; space systems and technology; surveillance and sensor technology; and undersea technology. In 2014, the NRL was researching: armor for munitions in transport, high-powered lasers, remote explosives detection, spintronics, the dynamics of explosive gas mixtures, electromagnetic railgun technology, detection of hidden nuclear materials, graphene devices, high-power extremely high frequency (35–220 GHz) amplifiers, acoustic lensing, information-rich orbital coastline mapping, arctic weather forecasting, global aerosol analysis & prediction, high-density plasmas, millisecond pulsars, broadband laser data links, virtual mission operation centers, battery technology, photonic crystals, carbon nanotube electronics, electronic sensors, mechanical nano-resonators, solid-state chemical sensors, organic opto-electronics, neural-electronic interfaces and self-assembling nanostructures. The laboratory includes a range of R&D facilities. 2014 additions included the NRL Nanoscience Institute's Class 100 nanofabrication cleanroom; quiet and ultra-quiet measurement labs; and the Laboratory for Autonomous Systems Research (LASR). Notable accomplishments Space sciences The Naval Research Laboratory has a long history of spacecraft development.
This includes the second, fifth and seventh American satellites in Earth orbit, the first solar-powered satellite, the first surveillance satellite, the first meteorological satellite and the first GPS satellite. Project Vanguard, the first American satellite program, tasked NRL with the design, construction and launch of an artificial satellite, which was accomplished in 1958. Vanguard I and its upper launch stage are still in orbit, making them the longest-lived man-made satellites. Vanguard II was the first satellite to observe the Earth's cloud cover and therefore the first meteorological satellite. NRL's Galactic Radiation and Background I (GRAB I) was the first U.S. intelligence satellite, mapping out Soviet radar networks from space. The Global Positioning System (GPS) was invented at NRL and tested by NRL's Timation series of satellites. The first operational GPS satellite, Timation IV (NTS-II), was designed and constructed at NRL. NRL pioneered the study of the Sun's ultraviolet and X-ray spectrum and continues to contribute to the field with satellites like Coriolis, launched in 2003. NRL is also responsible for the Tactical Satellite Program, with spacecraft launched in 2006, 2009 and 2011. The NRL designed the first satellite tracking system, Minitrack, which became the prototype for future satellite tracking networks. Prior to the success of surveillance satellites, the iconic parabolic antenna atop NRL's main headquarters in Washington, D.C. was part of Communication Moon Relay, a project that utilized signals bounced off the Moon both for long-distance communications research and for surveillance of internal Soviet transmissions during the Cold War. NRL's spacecraft development program continues today with the TacSat-4 experimental tactical reconnaissance and communication satellite. In addition to spacecraft design, NRL designs and operates spaceborne research instruments and experiments, such as the Strontium Iodide Radiation Instrumentation (SIRI) and the RAM Angle and Magnetic field sensor (RAMS) aboard STPSat-5, the Wide-field Imager for Solar PRobe (WISPR) aboard the Parker Solar Probe, and the Large Angle and Spectrometric Coronagraph Experiment (LASCO) aboard the Solar and Heliospheric Observatory (SOHO). NASA's Fermi Gamma-ray Space Telescope (FGST), formerly called the Gamma-ray Large Area Space Telescope (GLAST), was tested at NRL spacecraft testing facilities. NRL scientists have most recently contributed leading research to the study of novae and gamma-ray bursts. Meteorology The Marine Meteorology Division (Naval Research Lab–Monterey, NRL–MRY), located in Monterey, California, contributes to weather forecasting in the United States and around the world by publishing imagery from 18 weather satellites. Satellite images of severe weather (e.g. hurricanes and cyclones) that are used for advance warning often originate from NRL–MRY, as seen in 2017 during Hurricane Harvey. NRL is also involved in weather forecasting models such as the Hurricane Weather Research and Forecasting model released in 2007. Materials science NRL has a long history of contributions to materials science, dating back to the use of industrial radiography with gamma rays for the nondestructive inspection of metal casings and welds on Navy vessels beginning in the 1920s. Modern mechanical fracture mechanics were pioneered at NRL and were subsequently applied to solve fracture problems in Navy vessels, commercial aircraft and Polaris missiles.
That knowledge is in widespread use today in applications ranging from design of nuclear reactors to aircraft, submarines and toxic material storage tanks. NRL developed the synthesis of high-purity GaAs crystals used in a myriad of modern high frequency transceivers including cellular phones, satellite communication systems, commercial and military radar systems including those aboard all US combat aircraft and ARM, Phoenix, AIM-9L and AMRAAM missiles. NRL's GaAs inventions were licensed by Rockwell, Westinghouse, Texas Instruments and Hughes Research. High-purity GaAs is also used for high-efficiency solar cells like those aboard NASA's Spirit and Opportunity rovers currently on Mars. Fundamental aspects of stealth technology were developed at NRL, including the radar absorption mechanisms in ferrite-containing materials. Metal bearing surface treatments using Cr ion implantation researched at NRL nearly tripled the service life of Navy turbine engine parts and was adopted for Army helicopter parts as well. Fluorinated polyurethane coatings developed at NRL are used to line fuel storage tanks throughout the US Navy, reducing leakage and environmental and fuel contamination. The same polymer films are used in Los Angeles-class submarine radomes to repel water and enable radar operation soon after surfacing. Scientists at NRL frequently contribute theoretical and experimental research on novel materials, particularly magnetic materials and nanomaterials and thermoplastic. Radar The first modern U.S. radar was invented and developed at NRL in Washington, DC in 1922. By 1939, NRL installed the first operational radar aboard the USS New York, in time for radar to contribute to naval victories of the Coral Sea, Midway and Guadalcanal. NRL then further developed over-the-horizon radar as well as radar data displays. NRL's Radar Division continues important research & development contributing to US Navy and US Department of Defense capabilities. Tactical electronic warfare NRL's Tactical Electronic Warfare (TEW) Division is responsible for research and development in support of the Navy's tactical electronic warfare requirements and missions. These include electronic warfare support measures, electronic countermeasures, and supporting counter-countermeasures, as well as studies, analyses, and simulations for determining and improving the performance of Electronic Warfare systems. NRL TEW includes aerial, surface, and ground EW within its scope. NRL is responsible for the identification, friend or foe (IFF) system and a number of other advances. Information security The Information Technology Division features an information security R&D group, which is where the IETF's IP Security (IPsec) protocols were originally developed. The Encapsulating Security Payload (ESP) protocol developed at NRL is widely used for virtual private network (VPN) connections worldwide. The projects developed by the laboratory often become mainstream applications without public awareness of the developer; an example in computer science is onion routing, the core principle of the anonymizing Tor software. Nuclear research Nuclear power research was initiated at NRL as early as 1939, six years before the first atomic bomb, for the purpose of powering submarines. Uranium enrichment methods sponsored by NRL over the course of World War II were adopted by the Manhattan Project and guided the design of Oak Ridge National Laboratory's Uranium enrichment plant. 
NRL is currently developing laser focusing techniques aimed at inertial confinement fusion technology. Physical sciences The static discharger seen on trailing edges of virtually all modern aircraft was originally developed by NRL scientists during World War II. After the war, the laboratory developed modern synthetic lubricants, initially for use in the Navy's jet aircraft but subsequently adopted by the commercial jet industry. In the late 1960s, NRL researched low-temperature physics, achieving for the first time a temperature within one millionth of a degree of absolute zero in 1967. In 1985, two scientists at the laboratory, Herbert A. Hauptman and Jerome Karle, won the Nobel Prize in Chemistry for devising direct methods employing X-ray diffraction analysis in the determination of crystal structures. Their methods form the basis for the computer packages used in pharmaceutical labs and research institutions worldwide for the analysis of more than 10,000 new substances each year. NRL has most recently published research on quantum computing, quantum dots, plasma shockwaves, thermodynamics of liquids, modeling of oil spills and other topics. NRL operates a small squadron of research aircraft termed Scientific Development Squadron (VXS) 1. Its missions include, for example, Rampant Lion, which used sophisticated airborne instrumentation (gravimeters, magnetometers and hyperspectral cameras) to collect precise 3D topography of two-thirds of Afghanistan and locate natural resources (underground gas and mineral deposits, vegetation types, etc.) there and in Iraq and Colombia. Plasma science The Division of Plasma Physics conducts research and development into ionized matter. NRL currently holds the world record for the most energetic railgun projectile and the fastest man-made projectile. Artificial intelligence NRL established the Navy Center for Applied Research in Artificial Intelligence in 1981, which conducts basic and applied research in artificial intelligence, cognitive science, autonomy, and human-centered computing. Among its achievements are advances in cognitive architectures, human-robot interaction, and machine learning. Organization The laboratory is divided into four research directorates, one financial directorate, and one executive directorate. All the directorates are headquartered in Washington, D.C. Many directorates have other facilities elsewhere, primarily at either the Stennis Space Center in Bay St. Louis, Mississippi, or in Monterey, California. Staff Most NRL staff are civilians in the civil service, with a relatively small number of Navy enlisted personnel or officers. Virtually all NRL staff are US citizens and are not dual-nationals. In addition, there are some support contractors that work on-site at NRL. As of 31 December 2015, across all NRL locations, NRL had 2540 civilian employees (i.e., not including civilian contractors). On the same date, there were 35 military officers and 58 enlisted personnel on board at NRL, most of whom are with NRL's VXS-1 Scientific Flight Detachment, which is located at the Patuxent River ('Pax River') Naval Air Station (NAS) in southern Maryland. NRL has special authority to use a Pay-Band pay system instead of the traditional General Schedule (GS) pay system for its civilian employees. This gives NRL more ability to pay employees based on performance and merit, rather than time-in-grade or some other seniority metric. There are several different pay-band groups at NRL, each being for a different category of civilian employees.
As of 31 December 2015, NRL had 1615 civilian scientists/engineers in the NP pay system, 103 civilian technicians in the NR pay system, 383 civilian administrative specialists/professionals in the NO pay system, and 238 civilian administrative support staff in the NC pay system. NRL scientists & engineers typically are in the (NP) pay group in NRL's Pay Band system. The NP-II pay band is equivalent to GS-5 Step 1 through GS-10 Step 10. The NP-III pay band is equivalent to GS-11 Step 1 through GS-13 Step 10. NRL's Pay Band IV corresponds to the GS-14 Step 1 to GS-15 Step 10 pay grades, inclusive, while NRL's Pay Band V can pay above GS-15 Step 10 and corresponds to the Senior Technologist (ST) pay grade elsewhere in the civil service. For new graduates, someone with a Bachelor of Science degree typically is hired at a salary in the GS-7 range; someone with a Master of Science degree typically is hired at a salary in the GS-11 range; someone with a PhD typically is hired at a salary in the GS-12 range. NRL has the flexibility to offer partial student loan repayments for new hires. According to the NRL Fact Book (2016), of NRL civilian full-time permanent employees, 870 had a doctorate, 417 had a master's, and 576 had a bachelor's as their highest degree. The laboratory also hosts post-doctoral researchers and was voted #15 in the Best Places to Work PostDocs 2013 survey. Research directorates The four research directorates within NRL are: The Systems Directorate (Code 5000) is responsible for performing a range of activities from basic research through engineering development to expand the operational capabilities of the US Navy. There are four research divisions: Radar, Information Technology, Optical Sciences, and Tactical Electronic Warfare. The Materials Science and Component Technology Directorate (Code 6000) carries out a range of materials research with the aim of better understanding of the materials in order to develop improved and advanced materials for use by the US Navy. There are seven research divisions: Laboratory for the Structure of Matter, Chemistry, Material Science & Technology, Laboratory for Computational Physics and Fluid Dynamics, Plasma Physics, Electronics Science & Technology, and the Center for Biomolecular Science & Engineering. The Ocean and Atmospheric Science and Technology Directorate (Code 7000) performs research in the fields of acoustics, remote sensing, oceanography, marine geosciences, marine meteorology, and space science. There are six research divisions: Acoustics, Remote Sensing, Oceanography, Marine Geosciences, Marine Meteorology, and Space Science. The Naval Center for Space Technology (Code 8000) is a focal point and integrator for NRL technologies used in space systems. It provides system engineering and technical assistance for development and acquisition of space systems. There are two research departments: Space Systems Development and Spacecraft Engineering. Support directorates The two support directorates are: The Executive Directorate operations are directed by the Commander of the NRL, who typically is a US Navy Captain. Scientific Development Squadron ONE (VXS-1), located at Naval Air Station Patuxent River, Maryland, which provides airborne research facilities to NRL and other agencies of the US Government, is run out of the Executive Directorate. The Business Operations Directorate provides program management for the business programs which support the scientific directorates of NRL. 
It provides contracting, financial management and supply expertise to the scientific projects. Nanoscience Institute In April 2001, in a departure from traditional working relationships between NRL scientists, the Institute for Nanoscience was established to conduct multidisciplinary research in the fields of materials, electronics and biology. Scientists may be part of the Nanoscience Institute while still performing research for their respective divisions. Laboratory for Autonomous Systems Research Opened in March 2012, the Laboratory for Autonomous Systems Research (LASR) is a 50,000-square-foot facility that supports a wide range of interdisciplinary basic and applied research in autonomous systems, including intelligent autonomy, human-autonomous system interaction and collaboration, sensor systems, power and energy systems, networking and communications, and platforms. LASR provides unique facilities and simulated environmental high bays (littoral, desert, tropical, and forest) and instrumented reconfigurable high bay spaces to support integration of science and technology components into research prototype systems. Locations The main campus of NRL is in Washington, D.C., near the southernmost part of the District. It is on the Potomac River and is immediately south of (but is not part of) Joint Base Anacostia-Bolling. This campus is immediately north of the Blue Plains site of the DC Water Authority. Exit 1 of northbound I-295 leads directly to Overlook Avenue and the NRL Main Gate. The U.S. Postal Service operates a post office on the NRL main campus. In addition, NRL operates several field sites and satellite facilities: NRL-South is located at NASA's Stennis Space Center in Bay St. Louis, Mississippi, and specializes in oceanography, marine geology, geophysics, geoacoustics, and geotechnology. NRL-Monterey is located east of the Naval Postgraduate School in Monterey, California, sharing a campus with the Fleet Numerical Meteorology and Oceanography Center and the San Francisco Bay Area/Monterey local forecast office of the National Weather Service. NRL-Monterey is dedicated to meteorology and atmospheric research. The Scientific Development Squadron (VXS) 1 is located at the Patuxent River Naval Air Station in Lexington Park, Maryland, and operates a wide range of research aircraft. The Chesapeake Bay Detachment (CBD) in Chesapeake Beach, Maryland, is a 168-acre site used for radar, electronic warfare, optical devices, materials, communications, and fire research. This facility is often used in combination with the Multiple Research Site on Tilghman Island, Maryland, just across the Chesapeake Bay. The Midway Research Center in Quantico, Virginia, the Free Space Antenna Range in Pomonkey, Maryland, and the Blossom Point Satellite Tracking and Command Station in Blossom Point, Maryland are used by NRL's Naval Center for Space Technology. The Marine Corrosion Facility, located on Fleming Key at Naval Air Station Key West in Florida, is used by the Center for Corrosion Science & Engineering. NRL operates several synchrotron radiation beamlines and the Extreme-Ultraviolet and X-Ray Calibration Facility at the National Synchrotron Light Source at Brookhaven National Laboratory. History Early history Artifacts found on the NRL campus, such as stone tools and ceramic shards, show that the site had been inhabited since the Late Archaic Period.
Cecil Calvert, 2nd Baron Baltimore, granted the tract of land which includes the present NRL campus to William Middleton in 1663. It became part of the District of Columbia in 1791, and was purchased by Thomas Grafton Addison in 1795, who named the area Bellevue and built a mansion on the highlands to the east. Zachariah Berry, who purchased the land in 1827, rented it out for various purposes including a fishery at Blue Plains. The mansion was demolished during the Civil War to build Fort Greble. In 1873 the land was purchased by the federal government as the Bellevue Annex to the Naval Gun Factory, and several buildings were constructed, including the Commandant's house, "Quarters A", which is still in use today. Foundation The Naval Research Laboratory came into existence from an idea that originated with Thomas Edison. In a May 1915 editorial piece in the New York Times Magazine, Edison wrote: "The Government should maintain a great research laboratory... In this could be developed...all the technique of military and naval progression without any vast expense." This statement addressed concerns about World War I in the United States. Edison then agreed to serve as the head of the Naval Consulting Board, which consisted of civilians with recognized expertise. The Naval Consulting Board's role was to advise the U.S. Navy on matters of science and technology. The board brought forward a plan to create a modern facility for the Navy, and in 1916 Congress allocated $1.5 million for its implementation. However, construction was delayed until 1920 because of the war and internal disagreements within the board. The U.S. Naval Research Laboratory, the first modern research institution created within the United States Navy, began operations at 1100 on 2 July 1923. The Laboratory's two original divisions – Radio and Sound – performed research in the fields of high-frequency radio and underwater sound propagation. They produced communications equipment, direction-finding devices, sonar sets, and the first practical radar equipment built in the United States. They also performed basic research, participating in the discovery and early exploration of the ionosphere. The NRL gradually worked towards its goal of becoming a broadly based research facility. By the beginning of World War II, five new divisions had been added: Physical Optics, Chemistry, Metallurgy, Mechanics and Electricity, and Internal Communications. World War II years and growth Total employment at the NRL jumped from 396 in 1941 to 4400 in 1946, expenditures from $1.7 million to $13.7 million, the number of buildings from 23 to 67, and the number of projects from 200 to about 900. During World War II, scientific activities necessarily were concentrated almost entirely on applied research. Advances were made in radio, radar, and sonar. Countermeasures were devised. New lubricants were produced, as were antifouling paints, luminous identification tapes, and a sea marking dye to help save survivors of disasters at sea. A thermal diffusion process was conceived and used to supply some of the U-235 isotope needed for one of the first atomic bombs. Also, many new devices that emerged from the booming wartime industry were type-tested and then certified as reliable for the Fleet.
After WWII As a result of the scientific accomplishments of World War II, the United States emerged into the postwar era determined to consolidate its wartime gains in science and technology and to preserve the working relationship between its armed forces and the scientific community. When the Navy established the Office of Naval Research (ONR) in 1946 as a liaison with and supporter of basic and applied scientific research, it encouraged NRL to broaden its scope, since NRL was the Navy Department's corporate research laboratory. NRL was placed under the administrative oversight of ONR after ONR was created. NRL's Commanding Officer reports to the Navy's Chief of Naval Research (CNR). The Chief of Naval Research leads the Office of Naval Research, which is located primarily in the Ballston area of Arlington, Virginia. The reorganization also caused a parallel shift of the Laboratory's emphasis to one of long-range basic and applied research in the full range of the physical sciences. However, rapid expansion during the war had left NRL improperly structured to address long-term Navy requirements. One major task – neither easily nor rapidly accomplished – was that of reshaping and coordinating research. This was achieved by transforming a group of largely autonomous scientific divisions into a unified institution with a clear mission and a fully coordinated research program. The first attempt at reorganization vested power in an executive committee composed of all the division superintendents. This committee was impracticably large, so in 1949, a civilian director of research was named and given full authority over the program. Positions for associate directors were added in 1954. Modern era In 1992, the previously separate Naval Oceanographic and Atmospheric Research Laboratory (NOARL), with centers in Bay St. Louis, Mississippi, and Monterey, California, was merged into NRL. Since then, NRL has also been the lead Navy center for research in oceanographic and atmospheric sciences, with special strengths in physical oceanography, marine geosciences, ocean acoustics, marine meteorology, and remote oceanic and atmospheric sensing. See also United States Marine Corps Warfighting Laboratory (MCWL) Office of Naval Research (ONR) Air Force Research Laboratory (AFRL) United States Army Research, Development and Engineering Command (RDECOM) Defense Advanced Research Projects Agency (DARPA) Naval Research Laboratory Flyrt — Flying Radar Target History of radar Robert Morris Page — One of the main American radar scientists Interactive Scenario Builder — 3-D modeling and simulation application for studying the radio frequency (RF) environment NRLMSISE-00 — Model of the Earth's atmosphere from ground to space SIMDIS — 3-D Analysis and Display Toolset Clementine spacecraft National Research Libraries Alliance Fleet Electronic Warfare Center (FEWC) National Oceanic and Atmospheric Administration (NOAA) University-National Oceanographic Laboratory System (UNOLS) List of auxiliaries of the United States Navy TransApps — rapid development and fielding of secure mobile apps in the battlefield References Sterling, Christopher H. (2008). Military Communications: From Ancient Times to the 21st Century. ABC-CLIO. p. 326. External links Military facilities in Washington, D.C. United States Navy Research installations of the U.S. Department of Defense Collier Trophy recipients
2254896
https://en.wikipedia.org/wiki/Helen%20of%20Troy%20%28film%29
Helen of Troy (film)
Helen of Troy is a 1956 Warner Bros. WarnerColor epic film in CinemaScope, based on Homer's Iliad and Odyssey. It was directed by Robert Wise, from a screenplay by Hugh Gray and John Twist, adapted by Hugh Gray and N. Richard Nash. The music score was by Max Steiner and the cinematography by Harry Stradling Sr. The film stars Rossana Podestà, Stanley Baker, Sir Cedric Hardwicke and Jacques Sernas, with Niall MacGinnis, Maxwell Reed, Nora Swinburne, Robert Douglas, Torin Thatcher, Harry Andrews, Janette Scott, Ronald Lewis, Eduardo Ciannelli, Esmond Knight and a young Brigitte Bardot as Andraste, Helen's handmaiden, in her first film shot outside France. Plot The film retells the story of the Trojan War in 1100 B.C., albeit with some major changes from the Iliad's storyline. Paris of Troy (Jacques Sernas) sails to Sparta to secure a peace treaty between the two powerful city-states. His ship is forced to return to Troy by a storm after he is swept overboard off the shore of Sparta. Paris is found by Helen, queen of Sparta (Rossana Podestà), with whom he falls in love. He goes to the palace, where he finds Helen's husband, King Menelaus (Niall MacGinnis), Agamemnon (Robert Douglas), Odysseus (Torin Thatcher), Achilles (Stanley Baker) and many other Greek kings debating whether to go to war with Troy. Menelaus, who is spurned by Helen, sees that his wife and Paris are in love and, pretending friendship, plots Paris' death. Warned by Helen, Paris flees and, after they are both nearly caught by the Spartans, takes Helen with him to Troy. Under the pretense of helping Menelaus regain his honor, the Greeks unite, and the siege of Troy begins. Much blood is shed in the long ordeal, with the Trojans blaming their plight on Paris and Helen until it turns out that the Greeks are solely after Troy's riches, not Helen. The siege culminates in Greek victory through the ruse of the legendary Trojan Horse. While trying to flee, Helen and Paris are cornered by Menelaus. Paris faces the Spartan king in single combat, but just as he wins the upper hand he is stabbed from behind, denying him a fair trial by arms. Helen is forced to return with Menelaus, but she is serene in the knowledge that in death she will be reunited with Paris in Elysium. Cast Rossana Podestà as Helen of Troy Jacques Sernas as Paris Sir Cedric Hardwicke as Priam Stanley Baker as Achilles Niall MacGinnis as Menelaus Robert Douglas as Agamemnon Nora Swinburne as Hecuba Torin Thatcher as Odysseus Harry Andrews as Hector Ronald Lewis as Aeneas Brigitte Bardot as Andraste Marc Lawrence as Diomedes, ruler of Aetolia Maxwell Reed as Ajax, Prince of Salamis Robert Brown as Polydorus, the youngest son of Priam Barbara Cavan as Cora Patricia Marmont as Andromache Guido Notari as Nestor Tonio Selwart as Alephous George Zoritch as Singer Esmond Knight as High Priest Terence Longdon as Patroclus Janette Scott as Cassandra Eduardo Ciannelli as Andros Production The film was made at Rome's Cinecittà Studios and in Punta Ala, Grosseto. The scene of the Greeks' initial assault on the walls of Troy features a series of shots that are directly copied from a sequence in the Persian attack on Babylon in D. W. Griffith's silent film classic Intolerance. Some shots from this sequence would in turn be reused in the introductory scenes of the 1963 film Jason and the Argonauts.
This project makes several departures from the original story, including showing Paris as a hero and great leader, and most of the Greek lords as treacherous and opportunistic pirates who are using Helen's flight as an excuse to win the treasures of Troy. The 2003 miniseries sharing its name with this film would later partially re-employ this plot device. Reception Bosley Crowther of The New York Times wrote that some parts were "well done", including the Greek descent from the Trojan horse which "has the air of great adventure that one expects from this tale", but "the human drama in the legend ... is completely lost or never realized in the utter banalities of the script, in the clumsiness of the English dialogue and in the inexcusable acting cliches." Variety wrote, "The word 'spectacular' achieves its true meaning when applied to Warner Bros.' 'Helen of Troy.' The retelling of the Homeric legend, filmed in its entirety in Italy, makes lavish use of the Cinemascope screen ... Action sequences alone should stir word-of-mouth." Edwin Schallert of the Los Angeles Times wrote, "'Helen of Troy' qualifies as a mighty film impression of history and legend ... The Warner film satisfies the demands for beauty, and also attains triumphant effects, which give real life to ancient battles with spear, bow and arrow, fire, sword and javelin. In their magnitude attacks on the walled city of Troy are awe-inspiring." Richard L. Coe of The Washington Post reported, "The popcorn set and I had a glorious time at this epic ... I don't suppose the genteel set will go much for this one, but, boy, those crowd scenes, warriors falling to destruction, flames flaming, javelins nipping into a chest here and there." Harrison's Reports declared, "The massiveness and opulence of the settings, the size of the huge cast, and the magnitude of the battle between the Greeks and the Trojans are indeed eye-filling ... Unfortnately, the breathtaking quality of the production values is not matched by the stilted story, which takes considerable dramatic license with the Homer version of the events leading up to the Trojan war, and which are at best only moderately interesting." The Monthly Film Bulletin found the film "uninterestingly dialogued and characterised ... The battle scenes, however, are vigorously and sometimes excitingly staged." John McCarten of The New Yorker wrote that the film "hasn't enough life to hold your interest consistently. That's too bad, for toward the end there are those battle scenes and a fine impersonation of that wooden horse." Comic book adaption Dell Four Color #684 (March 1956). Full-color photo-cover • 34 pages, 33 in full-color • Drawn by John Buscema • Copyright 1956 by Warner Bros., Inc. ( Remarkably faithful to the look of the film. However – unlike both the film and the legend – it has a happy ending for Paris and Helen. He survives, and they remain together. ) See also List of American films of 1956 List of historical drama films Greek mythology in popular culture Troy (film) References External links Helen of Troy (1956) at DBCult Film Institute 1956 drama films 1956 films American epic films American films Films scored by Max Steiner Films based on multiple works Films based on the Iliad Films directed by Robert Wise Films set in ancient Greece French films Italian films Films with screenplays by N. Richard Nash Trojan War films Warner Bros. films Siege films Films adapted into comics Cultural depictions of Helen of Troy Agamemnon CinemaScope films
2732574
https://en.wikipedia.org/wiki/Initiative%20for%20Open%20Authentication
Initiative for Open Authentication
Initiative for Open Authentication (OATH) is an industry-wide collaboration to develop an open reference architecture using open standards to promote the adoption of strong authentication. It has close to thirty coordinating and contributing members and is proposing standards for a variety of authentication technologies, with the aim of lowering costs and simplifying their functions. Terminology The name OATH is an acronym from the phrase "open authentication", and is pronounced as the English word "oath". OATH is not related to OAuth, an open standard for authorization. See also HOTP: An HMAC-Based One-Time Password Algorithm (RFC 4226) TOTP: Time-Based One-Time Password Algorithm (RFC 6238) OCRA: OATH Challenge-Response Algorithm (RFC 6287) Portable Symmetric Key Container (PSKC) (RFC 6030) Dynamic Symmetric Key Provisioning Protocol (DSKPP) (RFC 6063) FIDO Alliance References External links List of OATH members OATH Specifications Computer security organizations Computer access control
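As an illustration of the one-time-password standards listed above, the following minimal Python sketch computes an HOTP value (RFC 4226) and its time-based variant TOTP (RFC 6238). It is an illustrative sketch rather than a vetted implementation; the shared secret, the six-digit output and the 30-second time step are placeholder parameters.

    import hmac, hashlib, struct, time

    def hotp(key: bytes, counter: int, digits: int = 6) -> str:
        # HMAC-SHA-1 over the 8-byte big-endian counter, per RFC 4226
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                               # dynamic truncation
        code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(key: bytes, period: int = 30, digits: int = 6) -> str:
        # TOTP applies HOTP to the number of elapsed time steps, per RFC 6238
        return hotp(key, int(time.time()) // period, digits)

    print(totp(b"example-shared-secret"))                     # hypothetical shared secret

A token and a validation server that hold the same secret and parameters produce the same value, which is what allows OATH-based products from different vendors to interoperate.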
2732718
https://en.wikipedia.org/wiki/Technology%20integration
Technology integration
Technology integration is the use of technology tools in general content areas in education in order to allow students to apply computer and technology skills to learning and problem-solving. Generally speaking, the curriculum drives the use of technology and not vice versa. Technology integration is defined as the use of technology to enhance and support the educational environment. Technology integration in the classroom can also support classroom instruction by creating opportunities for students to complete assignments on the computer rather than with normal pencil and paper. In a larger sense, technology integration can also refer to the use of an integration platform and APIs in the management of a school, to integrate disparate SaaS (software as a service) applications, databases, and programs used by an educational institution so that their data can be shared in real time across all systems on campus, thus supporting students' education by improving data quality and access for faculty and staff. "Curriculum integration with the use of technology involves the infusion of technology as a tool to enhance the learning in a content area or multidisciplinary setting... Effective integration of technology is achieved when students are able to select technology tools to help them obtain information in a timely manner, analyze and synthesize the information, and present it professionally to an authentic audience. The technology should become an integral part of how the classroom functions—as accessible as all other classroom tools. The focus in each lesson or unit is the curriculum outcome, not the technology." Integrating technology with the standard curriculum can not only give students a sense of power but also allow for more advanced learning across a broad range of topics. However, these technologies require infrastructure, continual maintenance and repair – one determining element, among many, in how these technologies can be used for curricular purposes and whether or not they will be successful. Examples of the infrastructure required to operate and support technology integration in schools include, at the most basic level, electricity, Internet service providers, routers, modems, and personnel to maintain the network, beyond the initial cost of the hardware and software. Integration of information and communication technology is often closely monitored and evaluated due to the current climate of accountability, outcome-based education, and standardization in assessment. Technology integration can in some instances be problematic. A high ratio of students to technological devices has been shown to impede or slow learning and task completion. In some instances, dyadic peer interaction centered on integrated technology has proven to develop a more cooperative sense of social relations. Success or failure of technology integration is largely dependent on factors beyond the technology. The availability of appropriate software for the technology being integrated is also problematic in terms of software accessibility to students and educators. Another issue identified with technology integration is the lack of long-range planning for these tools within the school districts where they are used.
Technology contributes to global development and diversity in classrooms while helping to lay the fundamental building blocks students need to grasp more complex ideas. In order for technology to make an impact within the educational system, teachers and students must have access to technology in a context that is culturally relevant, responsive and meaningful to their educational practice and that promotes quality teaching and active student learning. History The term 'educational technology' was used during the post-World War II era in the United States for the integration of implements such as film strips, slide projectors, language laboratories, audio tapes, and television. Presently, the computers, tablets, and mobile devices integrated into classroom settings for educational purposes are most often referred to as 'current' educational technologies. It is important to note that educational technologies continually change; the term once referred to the slate chalkboards used by students in early schoolhouses in the late nineteenth and early twentieth centuries. The phrase 'educational technology', a composite of technology and education, is used to refer to the most advanced technologies that are available for both teaching and learning in a particular era. In 1994, federal legislation for both the Educate America Act and the Improving America's Schools Act (IASA) authorized funds for state and federal educational technology planning. One of the principal goals listed in the Educate America Act is to promote the research, consensus building, and systemic changes needed to ensure equitable educational opportunities and high levels of educational achievement for all students (Public Law 103-227). In 1996, the Telecommunications Act provided the systemic change needed to ensure equitable educational opportunities by bringing new technology into the education sector. The Telecommunications Act requires affordable access to advanced telecommunications services for public schools and libraries. Many of the computers, tablets, and mobile devices currently used in classrooms operate through Internet connectivity, particularly those that are application-based, such as tablets. Schools in high-cost areas and disadvantaged schools were to receive higher discounts on telecom services such as Internet, cable, satellite television, and the management component. A chart in the "Technology Penetration in U.S. Public Schools" report states that 98% of schools reported having computers in the 1995–1996 school year, with 64% having Internet access and 38% working via networked systems. The ratio of students to computers in the United States stood at 15 students per computer in 1984; it now stands at an all-time low of 10 students per computer. From the 1980s on into the 2000s, the most substantial issue to examine in educational technology was school access to technologies, according to the 1997 Policy Information Report for Computers and Classrooms: The Status of Technology in U.S. Schools. These technologies included computers, multimedia computers, the Internet, networks, cable TV, and satellite technology, amongst other technology-based resources. More recently, ubiquitous computing devices, such as computers and tablets, are being used as networked collaborative technologies in the classroom. Computers, tablets and mobile devices may be used in educational settings within groups, between people and for collaborative tasks.
These devices provide teachers and students access to the World Wide Web in addition to a variety of software applications. Technology education standards The National Educational Technology Standards (NETS) have served since 1998 as a roadmap for improved teaching and learning by educators. As stated above, these standards are used by teachers, students, and administrators to measure competency and set higher goals for skill development. The Partnership for 21st Century Skills is a national organization that advocates for 21st century readiness for every student. Its most recent technology plan, "Transforming American Education: Learning Powered by Technology", was released in 2010. This plan outlines a vision "to leverage the learning sciences and modern technology to create engaging, relevant, and personalized learning experiences for all learners that mirror students' daily lives and the reality of their futures. In contrast to traditional classroom instruction, this requires that students be put at the center and encouraged to take control of their own learning by providing flexibility on several dimensions." Although tools have changed dramatically since the beginnings of educational technology, this vision of using technology for empowered, self-directed learning has remained consistent. Pedagogy The integration of electronic devices into classrooms has been cited as a possible way to bridge access and close achievement gaps for students subject to the digital divide, that is, those who, because of social class, economic inequality, or gender, lack the cultural capital required to gain access to information and communication technologies. Several motivations or arguments have been cited for integrating high-tech hardware and software into schools, such as (1) making schools more efficient and productive than they currently are, (2) transforming teaching and learning into an engaging and active process connected to real life, and (3) preparing the current generation of young people for the future workplace. The computer offers graphics and other functions that students can use to express their creativity. Technology integration does not always have to do with the computer; it can also involve the use of the overhead projector, student response clickers, and other tools. Enhancing how the student learns is very important in technology integration, and technology can help students to learn and explore more. Paradigms Most research in technology integration has been criticized for being atheoretical and ad hoc, driven more by the affordances of the technology than by the demands of pedagogy and subject matter. Armstrong (2012) argued that multimedia transmission tends to limit learning to simple content, because it is difficult to deliver complicated content through multimedia. One approach that attempts to address this concern is a framework aimed at describing the nature of teacher knowledge for successful technology integration. The technological pedagogical content knowledge, or TPACK, framework has recently received some positive attention. Another model that has been used to analyze tech integration is the SAMR framework, developed by Ruben Puentedura. This model attempts to measure the level of tech integration with four levels that go from enhancement to transformation: Substitution, Augmentation, Modification, and Redefinition. Constructivism Constructivism is a crucial component of technology integration.
It is a learning theory that describes the process of students constructing their own knowledge through collaboration and inquiry-based learning. According to this theory, students learn more deeply and retain information longer when they have a say in what and how they will learn. Inquiry-based learning is thus researching a question that is personally relevant and purposeful because of its direct connection to the person investigating it. As stated by Jean Piaget, constructivist learning is based on four stages of cognitive development. In these stages, children must take an active role in their own learning and produce meaningful works in order to develop a clear understanding. These works are a reflection of the knowledge that has been achieved through active self-guided learning. Students are active leaders in their learning and the learning is student-led rather than teacher-directed. Many teachers use a constructivist approach in their classrooms, assuming one or more of the following roles: facilitator, collaborator, curriculum developer, team member, community builder, educational leader, or information producer. Counter argument to computers in the classroom Is technology in the classroom needed, or does it hinder students' social development? We've all seen a table of teenagers on their phones, all texting, not really socializing or talking to each other. How do they develop social and communication skills? Neil Postman (1993) concludes: The role of the school is to help students learn how to ignore and discard information so that they can achieve a sense of coherence in their lives; to help students cultivate a sense of social responsibility; to help students think critically, historically, and humanely; to help students understand the ways in which technology shapes their consciousness; to help students learn that their own needs sometimes are subordinate to the needs of the group. I could go on for another three pages in this vein without any reference to how machinery can give students access to information. Instead, let me summarize in two ways what I mean. First, I'll cite a remark made repeatedly by my friend Alan Kay, who is sometimes called "the father of the personal computer." Alan likes to remind us that any problems the schools cannot solve without machines, they cannot solve with them. Second, and with this I shall come to a close: If a nuclear holocaust should occur some place in the world, it will not happen because of insufficient information; if children are starving in Somalia, it's not because of insufficient information; if crime terrorizes our cities, marriages are breaking up, mental disorders are increasing, and children are being abused, none of this happens because of a lack of information. These things happen because we lack something else. It is the "something else" that is now the business of schools. Tools Interactive whiteboards Interactive whiteboards are used in many schools as replacements for standard whiteboards and provide a way to allow students to interact with material on the computer. In addition, some interactive whiteboard software allows teachers to record their instruction. 3D virtual environments are also used with interactive whiteboards as a way for students to interact with 3D virtual learning objects, employing kinetics and haptic touch in the classroom. An example of the use of this technique is the open-source project Edusim.
Research to track the worldwide interactive whiteboard market has been carried out by Decision Tree Consulting (DTC), a worldwide research company. According to the results, interactive whiteboards continue to be the biggest technology revolution in classrooms: over 1.2 million boards are installed across the world, over 5 million classrooms were forecast to have interactive whiteboards installed by 2011, the Americas are the biggest region, closely followed by EMEA, and Mexico's Enciclomedia project to equip 145,000 classrooms is worth $1.8 billion and is the largest education technology project in the world. Interactive whiteboards can accommodate different learning styles, such as visual, tactile, and audio. Interactive whiteboards are another way that technology is expanding in schools, assisting teachers in helping students learn more kinesthetically and in finding different ways to process their information throughout the entire classroom. Student response systems Student response systems consist of handheld remote control units, or response pads, which are operated by individual students. An infrared or radio frequency receiver attached to the teacher's computer collects the data submitted by students. The CPS (Classroom Performance System), once set up, allows the teacher to pose a question to students in several formats. Students then use the response pad to send their answer to the infrared sensor. Data collected from these systems is available to the teacher in real time and can be presented to the students in graph form on an LCD projector. The teacher can also access a variety of reports to collect and analyze student data. These systems have been used in higher education science courses since the 1970s and have become popular in K-12 classrooms beginning in the early 21st century. Audience response systems (ARS) can help teachers analyze and act upon student feedback more efficiently. For example, with polleverywhere.com, students text in answers via mobile devices to warm-up or quiz questions. The class can quickly view collective responses to the multiple-choice questions electronically, allowing the teacher to differentiate instruction and learn where students need help most. Combining ARS with peer learning via collaborative discussions has also been proven to be particularly effective. When students answer an in-class conceptual question individually, then discuss it with their neighbors, and then vote again on the same or a conceptually similar question, the percentage of correct student responses usually increases, even in groups where no student had given the correct answer previously. Among other tools that have been noted as effective means of technology integration are podcasts, digital cameras, smart phones, tablets, digital media, and blogs. Other examples of technology integration include translation memories and smart computerized translation programs, which are among the newest integrations changing the field of linguistics. Mobile learning Mobile learning is defined as "learning across multiple contexts, through social and content interactions, using personal electronic devices". A mobile device is essentially any device that is portable and has internet access, and includes tablets, smart phones, cell phones, e-book readers, and MP3 players. As mobile devices become increasingly common personal devices of K-12 students, some educators seek to utilize downloadable applications and interactive games to help facilitate learning.
This practice can be controversial because many parents and educators are concerned that students would be off-task because teachers cannot monitor their activity. This concern is currently being addressed by forms of mobile learning that require a log-in, which acts as a way to track student engagement. Benefits According to findings from four meta-analyses, blending technology with face-to-face teacher time generally produces better outcomes than face-to-face or online learning alone. Research is currently limited on the specific features of technology integration that improve learning. Meanwhile, the marketplace of learning technologies continues to grow and vary widely in content, quality, implementation, and context of use. Research shows that adding technology to K-12 environments, alone, does not necessarily improve learning. What matters most to implementing mobile learning is how students and teachers use technology to develop knowledge and skills, and that requires training. Successful technology integration for learning goes hand in hand with changes in teacher training, curricula, and assessment practices. An example of teacher professional development is profiled in Edutopia's Schools That Work series on eMints, a program that offers teachers 200 hours of coaching and training in technology integration over a two-year span. In these workshops teachers are trained in practices such as using interactive whiteboards and the latest web tools to facilitate active learning. In a 2010 publication of Learning Point Associates, statistics showed that students of teachers who had participated in eMints had significantly higher standardized test scores than those attained by their peers. Technology can also keep students focused for longer periods of time. The use of computers to look up information and data is a tremendous time saver, especially when used to access a comprehensive resource like the Internet to conduct research. This time-saving aspect can keep students focused on a project much longer than they would be with books and paper resources, and it helps them develop better learning through exploration and research. Project-based activities Definition: Project-based learning is a teaching method in which students gain knowledge and skills by working for an extended period of time to investigate and respond to an authentic, engaging and complex question, problem, or challenge. In project-based activities, students work in groups to solve problems that are challenging, real, curriculum-based and frequently related to more than one branch of knowledge. Therefore, a well designed project-based learning activity is one which addresses different student learning styles and which does not assume that all students can demonstrate their knowledge in a single standard way. Elements Project-based learning activities involve four basic elements: an extended time frame; collaboration; inquiry, investigation and research; and the construction of an artifact or performance of a consequential task. Examples of activities CyberHunt The term "hunt" refers to finding or searching for something. 
"CyberHunt" means an online activity in which learners use the internet as a tool to find answers to questions based upon topics assigned by someone else. Learners can also design a CyberHunt on specific topics. A CyberHunt, or internet scavenger hunt, is a project-based activity which helps students gain experience in exploring and browsing the internet. A CyberHunt may ask students to interact with a site (e.g., play a game or watch a video), record short answers to teacher questions, as well as read and write about a topic in depth. There are basically two types of CyberHunt: a simple task, in which the teacher develops a series of questions and gives the students a hypertext link to the URL that will give them the answer, and a more complex task, intended to increase and improve student internet search skills, in which teachers ask questions for students to answer using a search engine. WebQuests A WebQuest is an inquiry-oriented activity in which most or all of the information used by the learners is drawn from the web. It is designed to use learners' time well, to focus on using information rather than on looking for it, and to support learners in thinking at the levels of analysis, synthesis, and evaluation. It is a wonderful way of capturing students' imagination and allowing them to explore in a guided, meaningful manner, and it allows students to explore issues and find their own answers. There are six building blocks of WebQuests: the introduction, capturing the student's interest; the task, describing the activity's end product; the process, outlining the steps students follow; the resources, the web sites students will use to complete the task; the evaluation, measuring the result of the activity; and the conclusion, summing up the activity. WebQuests are student-centered, web-based curricular units that are interactive and use Internet resources. The purpose of a WebQuest is to use information on the web to support the instruction taught in the classroom. A WebQuest consists of an introduction, a task (or final project that students complete at the end of the WebQuest), processes (or instructional activities), web-based resources, evaluation of learning, reflection about learning, and a conclusion. WISE The Web-based Inquiry Science Environment (WISE) provides a platform for creating inquiry science projects for middle school and high school students using evidence and resources from the Web. Funded by the U.S. National Science Foundation, WISE has been developed at the University of California, Berkeley, from 1996 until the present. WISE inquiry projects include diverse elements such as online discussions, data collection, drawing, argument creation, resource sharing, concept mapping and other built-in tools, as well as links to relevant web resources. It is a research-focused, open-source, inquiry-based learning management system that includes a student learning environment, a project authoring environment, a grading tool, and user, course and content management tools. Virtual field trip A virtual field trip is a website that allows students to experience places, ideas, or objects beyond the constraints of the classroom. A virtual field trip is a great way to allow students to explore and experience new information. This format is especially helpful in allowing schools to keep costs down. Virtual field trips may also be more practical for children in the younger grades, because there is no need for chaperones and supervision.
However, a virtual field trip does not allow children to have the hands-on experiences and the social interactions that can and do take place on an actual field trip. An educator should incorporate the use of hands-on materials to further students' understanding of the material that is presented and experienced in a virtual field trip. A virtual field trip is a guided exploration through the web that organizes a collection of pre-screened, thematically based web pages into a structured online learning experience. ePortfolio An ePortfolio is a collection of student work, developed across varied contexts over time, that exhibits the student's achievements in one or more areas. A typical student ePortfolio might contain creative writings, paintings, photography, math explorations, music, and videos. The portfolio can advance learning by providing students and/or faculty with a way to organize, archive and display pieces of work. References Educational technology
16429579
https://en.wikipedia.org/wiki/4501%20Eurypylos
4501 Eurypylos
4501 Eurypylos is a Jupiter trojan from the Greek camp, approximately 45 kilometers in diameter. It was discovered on 4 February 1989 by Belgian astronomer Eric Elst at ESO's La Silla Observatory in Chile. The dark Jovian asteroid has a short rotation period of 6.1 hours. It was named after the Thessalian king Eurypylus from Greek mythology. Orbit and classification Eurypylos is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the leading Greek camp at the gas giant's L4 Lagrangian point, 60° ahead of its orbit. It is also a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.9–5.5 AU once every 11 years and 11 months (4,342 days; semi-major axis of 5.21 AU). Its orbit has an eccentricity of 0.05 and an inclination of 8° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar Observatory in December 1951, more than 36 years prior to its official discovery observation at La Silla. Physical characteristics Eurypylos is an assumed C-type asteroid, while most larger Jupiter trojans are D-type asteroids. Rotation period In March 2013, a rotational lightcurve of Eurypylos was obtained from photometric observations by Robert Stephens at the Center for Solar System Studies in Landers, California. Lightcurve analysis gave a rotation period of 6.1 hours with a brightness amplitude of 0.24 magnitude. Diameter and albedo According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Eurypylos measures 45.52 kilometers in diameter and its surface has an albedo of 0.065, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 46.30 kilometers based on an absolute magnitude of 10.4. Naming This minor planet was named from Greek mythology after the legendary king Eurypylus, the leader of the Thessalian contingent, who brought 40 ships to the siege of Troy. During the Trojan War, he was wounded by an arrow from Paris but was rescued by Patroclus. The official naming citation was published by the Minor Planet Center on 4 October 1990. Notes References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Asteroid 4501 Eurypylos at the Small Bodies Data Ferret 004501 Discoveries by Eric Walter Elst Minor planets named from Greek mythology Named minor planets 19890204
30154663
https://en.wikipedia.org/wiki/Comparison%20of%20Direct%20Connect%20software
Comparison of Direct Connect software
This article compares features and other data about client and server software for Direct Connect, a peer-to-peer file sharing protocol. Hub software Direct Connect hubs are central servers to which clients connect, so the networks are not as decentralized as Gnutella or FastTrack. Hubs provide information about the clients, as well as file-searching and chat capabilities. File transfers are done directly between clients, in true peer-to-peer fashion. Hubs often have special areas of interest. Many have requirements on the total size of the files that their members share (share size), and restrictions on the content and quality of shares. A hub can have any arbitrary rule. Hubs can allow users to register and provide user authentication; the authentication is also in clear text. The hub may choose certain individuals as operators (similar to IRC operators) to enforce said rules if the hub itself cannot. While not directly supported by the protocol, hub linking software exists. Such software allows multiple hubs to be connected, allowing users to share files and/or chat with people on the other linked hubs. Direct Connect hubs have difficulty scaling, due to the broadcast-centricity of the protocol. General Operating system support Client software While not mandated by the protocol, most clients send a "tag". This is part of the client's description and displays information ranging from the client name and version to the number of available slots and whether the user is using a proxy server. It was originally added to DC++, due to its ability to be in multiple hubs with the same instance. The information is arbitrary. The original client's file list (a comprehensive list of the files a user shares) was compressed using Huffman coding. Newer clients (among them DC++) serve an XML-based list, compressed with bzip2. General Other software Hub linking software links hubs' main chat, so that users can see and respond to chat in a hub they are not directly connected to. It is often used to draw users to hubs, or to make private or small hubs more widely known. Whereas advertising a hub is "frowned upon" and is usually met with repercussions such as floods or denial-of-service attacks, forming a more or less formal network by linking hub chat is a legitimate means of getting free publicity. Some hub programs are able to support a more advanced form of linking that includes all the normal functions: chat, private messages, search and file transfers between users on different hubs can be supported through hub-specific solutions or hub-software-neutral extensions using scripts/plug-ins. General Operating system support Interface and programming References Direct Connect network File sharing software Direct Connect
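To illustrate the file-list handling described in the client software section above, here is a minimal sketch in Python (standard library only) of serializing a toy share list to XML and compressing it with bzip2, as newer clients such as DC++ do. The element and attribute names are made up for illustration and are not the actual DC file list schema.

```python
import bz2
import xml.etree.ElementTree as ET

def build_file_list(shared_files):
    """Build a toy XML file listing; tag and attribute names are illustrative only."""
    root = ET.Element("FileListing")
    for name, size in shared_files:
        ET.SubElement(root, "File", Name=name, Size=str(size))
    return ET.tostring(root, encoding="utf-8")

def compress_file_list(xml_bytes):
    """Compress the XML listing with bzip2, as newer clients do for their file lists."""
    return bz2.compress(xml_bytes)

if __name__ == "__main__":
    listing = build_file_list([("song.mp3", 4200000), ("notes.txt", 1024)])
    packed = compress_file_list(listing)
    print(len(listing), "bytes of XML ->", len(packed), "bytes compressed")
    assert bz2.decompress(packed) == listing  # round-trip check
```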
37196658
https://en.wikipedia.org/wiki/Database%20encryption
Database encryption
Database encryption can generally be defined as a process that uses an algorithm to transform data stored in a database into "cipher text" that is incomprehensible without first being decrypted. It can therefore be said that the purpose of database encryption is to protect the data stored in a database from being accessed by individuals with potentially "malicious" intentions. The act of encrypting a database also reduces the incentive for individuals to hack the aforementioned database as "meaningless" encrypted data is of little to no use for hackers. There are multiple techniques and technologies available for database encryption, the most important of which will be detailed in this article. Transparent/External database encryption Transparent data encryption (often abbreviated as TDE) is used to encrypt an entire database, which therefore involves encrypting "data at rest". Data at rest can generally be defined as "inactive" data that is not currently being edited or pushed across a network. As an example, a text file stored on a computer is "at rest" until it is opened and edited. Data at rest are stored on physical storage media solutions such as tapes or hard disk drives. The act of storing large amounts of sensitive data on physical storage media naturally raises concerns of security and theft. TDE ensures that the data on physical storage media cannot be read by malicious individuals that may have the intention to steal them. Data that cannot be read is worthless, thus reducing the incentive for theft. Perhaps the most important strength that is attributed to TDE is its transparency. Given that TDE encrypts all data it can be said that no applications need to be altered in order for TDE to run correctly. It is important to note that TDE encrypts the entirety of the database as well as backups of the database. The transparent element of TDE has to do with the fact that TDE encrypts on "the page level", which essentially means that data is encrypted when stored and decrypted when it is called into the system's memory. The contents of the database are encrypted using a symmetric key that is often referred to as a "database encryption key". Column-level encryption In order to explain column-level encryption it is important to outline basic database structure. A typical relational database is divided into tables that are divided into columns that each have rows of data. Whilst TDE usually encrypts an entire database, column-level encryption allows for individual columns within a database to be encrypted. It is important to establish that the granularity of column-level encryption causes specific strengths and weaknesses to arise when compared to encrypting an entire database. Firstly, the ability to encrypt individual columns allows for column-level encryption to be significantly more flexible when compared to encryption systems that encrypt an entire database such as TDE. Secondly, it is possible to use an entirely unique and separate encryption key for each column within a database. This effectively increases the difficulty of generating rainbow tables which thus implies that the data stored within each column is less likely to be lost or leaked. The main disadvantage associated with column-level database encryption is speed, or a loss thereof. Encrypting separate columns with different unique keys in the same database can cause database performance to decrease, and additionally also decreases the speed at which the contents of the database can be indexed or searched. 
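A minimal sketch of the column-level approach described above, assuming Python with the third-party cryptography package (Fernet symmetric encryption): each sensitive column gets its own independent key, and values are encrypted before being written to that column. The column names and in-memory key store are hypothetical; a real deployment would keep the keys in a key management system.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# One independent key per sensitive column (column names are hypothetical).
column_keys = {
    "ssn": Fernet(Fernet.generate_key()),
    "salary": Fernet(Fernet.generate_key()),
}

def encrypt_column_value(column: str, value: str) -> bytes:
    """Encrypt a single value with the key belonging to its column."""
    return column_keys[column].encrypt(value.encode("utf-8"))

def decrypt_column_value(column: str, token: bytes) -> str:
    """Decrypt a value previously written to this column."""
    return column_keys[column].decrypt(token).decode("utf-8")

# A row as it might be written to the database: only sensitive columns are encrypted.
row = {
    "name": "Alice",  # left in plaintext
    "ssn": encrypt_column_value("ssn", "123-45-6789"),
    "salary": encrypt_column_value("salary", "70000"),
}
assert decrypt_column_value("ssn", row["ssn"]) == "123-45-6789"
```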
Field-level encryption Experimental work is being done on providing database operations (like searching or arithmetical operations) on encrypted fields without the need to decrypt them. Strong encryption is required to be randomized - a different result must be generated each time. This is known as probabilistic encryption. Field-level encryption is weaker than randomized encryption, but it allows users to test for equality without decrypting the data. Filesystem-level encryption Encrypting File System (EFS) It is important to note that traditional database encryption techniques normally encrypt and decrypt the contents of a database. Databases are managed by "Database Management Systems" (DBMS) that run on top of an existing operating system (OS). This raises a potential security concern, as an encrypted database may be running on an accessible and potentially vulnerable operating system. EFS can encrypt data that is not part of a database system, which implies that the scope of encryption for EFS is much wider when compared to a system such as TDE that is only capable of encrypting database files. Whilst EFS does widen the scope of encryption, it also decreases database performance and can cause administration issues as system administrators require operating system access to use EFS. Due to the issues concerning performance, EFS is not typically used in databasing applications that require frequent database input and output. In order to offset the performance issues it is often recommended that EFS systems be used in environments with few users. Full disk encryption BitLocker does not have the same performance concerns associated with EFS. Symmetric and asymmetric database encryption Symmetric database encryption Symmetric encryption in the context of database encryption involves a private key being applied to data that is stored and called from a database. This private key alters the data in a way that causes it to be unreadable without first being decrypted. Data is encrypted when saved, and decrypted when opened given that the user knows the private key. Thus if the data is to be shared through a database the receiving individual must have a copy of the secret key used by the sender in order to decrypt and view the data. A clear disadvantage related to symmetric encryption is that sensitive data can be leaked if the private key is spread to individuals that should not have access to the data. However, given that only one key is involved in the encryption process it can generally be said that speed is an advantage of symmetric encryption. Asymmetric database encryption Asymmetric encryption expands on symmetric encryption by incorporating two different types of keys into the encryption method: private and public keys. A public key can be accessed by anyone and is unique to one user whereas a private key is a secret key that is unique to and only known by one user. In most scenarios the public key is the encryption key whereas the private key is the decryption key. As an example, if individual A would like to send a message to individual B using asymmetric encryption, he would encrypt the message using Individual B's public key and then send the encrypted version. Individual B would then be able to decrypt the message using his private key. Individual C would not be able to decrypt Individual A's message, as Individual C's private key is not the same as Individual B's private key. 
Asymmetric encryption is often described as being more secure in comparison to symmetric database encryption given that private keys do not need to be shared as two separate keys handle encryption and decryption processes. For performance reasons, asymmetric encryption is used in Key management rather than to encrypt the data which is usually done with symmetric encryption. Key management The "Symmetric & Asymmetric Database Encryption" section introduced the concept of public and private keys with basic examples in which users exchange keys. The act of exchanging keys becomes impractical from a logistical point of view, when many different individuals need to communicate with each-other. In database encryption the system handles the storage and exchange of keys. This process is called key management. If encryption keys are not managed and stored properly, highly sensitive data may be leaked. Additionally, if a key management system deletes or loses a key, the information that was encrypted via said key is essentially rendered "lost" as well. The complexity of key management logistics is also a topic that needs to be taken into consideration. As the number of application that a firm uses increases, the number of keys that need to be stored and managed increases as well. Thus it is necessary to establish a way in which keys from all applications can be managed through a single channel, which is also known as enterprise key management. Enterprise Key Management Solutions are sold by a great number of suppliers in the technology industry. These systems essentially provide a centralised key management solution that allows administrators to manage all keys in a system through one hub. Thus it can be said that the introduction of enterprise key management solutions has the potential to lessen the risks associated with key management in the context of database encryption, as well as to reduce the logistical troubles that arise when many individuals attempt to manually share keys. Hashing Hashing is used in database systems as a method to protect sensitive data such as passwords; however it is also used to improve the efficiency of database referencing. Inputted data is manipulated by a hashing algorithm. The hashing algorithm converts the inputted data into a string of fixed length that can then be stored in a database. Hashing systems have two crucially important characteristics that will now be outlined. Firstly, hashes are "unique and repeatable". As an example, running the word "cat" through the same hashing algorithm multiple times will always yield the same hash, however it is extremely difficult to find a word that will return the same hash that "cat" does. Secondly, hashing algorithms are not reversible. To relate this back to the example provided above, it would be nearly impossible to convert the output of the hashing algorithm back to the original input, which was "cat". In the context of database encryption, hashing is often used in password systems. When a user first creates their password it is run through a hashing algorithm and saved as a hash. When the user logs back into the website, the password that they enter is run through the hashing algorithm and is then compared to the stored hash. Given the fact that hashes are unique, if both hashes match then it is said that the user inputted the correct password. One example of a popular hash function is SHA (Secure Hash Algorithm) 256. 
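A minimal sketch of the store-then-compare password flow just described, using Python's standard hashlib with SHA-256. This shows only the bare mechanism; production systems would also add a salt (covered in the next section) and prefer a deliberately slow password-hashing function.

```python
import hashlib

def hash_password(password: str) -> str:
    """Return the SHA-256 digest of the password as a hex string."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# At account creation: only the hash is stored, never the plaintext password.
stored_hash = hash_password("correct horse battery staple")

def login(password_attempt: str) -> bool:
    """Hash the attempt and compare it with the stored hash."""
    return hash_password(password_attempt) == stored_hash

assert login("correct horse battery staple")
assert not login("wrong password")
```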
Salting One issue that arises when using hashing for password management in the context of database encryption is the fact that a malicious user could potentially use an Input to Hash table rainbow table for the specific hashing algorithm that the system uses. This would effectively allow the individual to decrypt the hash and thus have access to stored passwords. A solution for this issue is to 'salt' the hash. Salting is the process of encrypting more than just the password in a database. The more information that is added to a string that is to be hashed, the more difficult it becomes to collate rainbow tables. As an example, a system may combine a user's email and password into a single hash. This increase in the complexity of a hash means that it is far more difficult and thus less likely for rainbow tables to be generated. This naturally implies that the threat of sensitive data loss is minimised through salting hashes. Pepper Some systems incorporate a "pepper" in addition to salts in their hashing systems. Pepper systems are controversial, however it is still necessary to explain their use. A pepper is a value that is added to a hashed password that has been salted. This pepper is often unique to one website or service, and it is important to note that the same pepper is usually added to all passwords saved in a database. In theory the inclusion of peppers in password hashing systems has the potential to decrease the risk of rainbow (Input : Hash) tables, given the system-level specificity of peppers, however the real world benefits of pepper implementation are highly disputed. Application-level encryption In application-level encryption, the process of encrypting data is completed by the application that has been used to generate or modify the data that is to be encrypted. Essentially this means that data is encrypted before it is written to the database. This unique approach to encryption allows for the encryption process to be tailored to each user based on the information (such as entitlements or roles) that the application knows about its users. According to Eugene Pilyankevich, "Application-level encryption is becoming a good practice for systems with increased security requirements, with a general drift toward perimeter-less and more exposed cloud systems". Advantages of application-level encryption One of the most important advantages of application-level encryption is the fact that application-level encryption has the potential to simplify the encryption process used by a company. If an application encrypts the data that it writes/modifies from a database then a secondary encryption tool will not need to be integrated into the system. The second main advantage relates to the overarching theme of theft. Given that data is encrypted before it is written to the server, a hacker would need to have access to the database contents as well as the applications that were used to encrypt and decrypt the contents of the database in order to decrypt sensitive data. Disadvantages of application-level encryption The first important disadvantage of Application-level encryption is that applications used by a firm will need to be modified to encrypt data themselves. This has the potential to consume a significant amount of time and other resources. Given the nature of opportunity cost firms may not believe that application-level encryption is worth the investment. In addition, application-level encryption may have a limiting effect on database performance. 
If all data on a database is encrypted by a multitude of different applications then it becomes impossible to index or search data on the database. To ground this in reality in the form of a basic example: it would be impossible to construct a glossary in a single language for a book that was written in 30 languages. Lastly the complexity of key management increases, as multiple different applications need to have the authority and access to encrypt data and write it to the database. Risks of database encryption When discussing the topic of database encryption it is imperative to be aware of the risks that are involved in the process. The first set of risks are related to key management. If private keys are not managed in an "isolated system", system administrators with malicious intentions may have the ability to decrypt sensitive data using keys that they have access to. The fundamental principle of keys also gives rise to a potentially devastating risk: if keys are lost then the encrypted data is essentially lost as well, as decryption without keys is almost impossible. References Cryptography Data security
8820973
https://en.wikipedia.org/wiki/GNU%20variants
GNU variants
GNU variants (also called GNU distributions or distros for short) are operating systems based upon the GNU operating system (the Hurd kernel, the GNU C library, system libraries and application software like GNU coreutils, bash, GNOME, the Guix package manager, etc). According to the GNU project and others, these also include most operating systems using the Linux kernel and a few others using BSD-based kernels. GNU users usually obtain their operating system by downloading GNU distributions, which are available for a wide variety of systems ranging from embedded devices (for example, LibreCMC) and personal computers (for example, Debian GNU/Hurd) to powerful supercomputers (for example, Rocks Cluster Distribution). Hurd kernel Hurd is the official kernel developed for the GNU system (before Linux-libre also became an official GNU package). Debian GNU/Hurd was discussed for a release as technology preview with Debian 7.0 Wheezy, however these plans were discarded due to the immature state of the system. However the maintainers of Debian GNU/Hurd decided to publish an unofficial release on the release date of Debian 7.0. Debian GNU/Hurd is not considered yet to provide the performance and stability expected from a production system. Among the open issues are incomplete implementation of Java and X.org graphical user interfaces and limited hardware driver support. About two thirds of the Debian packages have been ported to Hurd. Arch Hurd is a derivative work of Arch Linux, porting it to the GNU Hurd system with packages optimised for the Intel P6 architecture. Their goal is to provide an Arch-like user environment (BSD-style init scripts, pacman package manager, rolling releases, and a simple set up) on the GNU Hurd, which is stable enough for at least occasional use. Currently it provides a LiveCD for evaluation purposes and installation guides for LiveCD and conventional installation. Linux kernel The term GNU/Linux or GNU+Linux is used by the FSF and its supporters to refer to an operating system where the Linux kernel is distributed with a GNU system software. Such distributions are the primary installed base of GNU packages and programs and also of Linux. The most notable official use of this term for a distribution is Debian GNU/Linux. As of 2018, the only GNU variants recommended by the GNU Project for regular use are Linux distributions committed to the Free System Distribution Guidelines; most of which refer to themselves as "GNU/Linux" (like Debian), and actually use a deblobbed version of the Linux kernel (like the Linux-libre kernel) and not the mainline Linux kernel. BSD kernels Debian GNU/kFreeBSD is an operating system for IA-32 and x86-64 computer architectures. It is a distribution of GNU with Debian package management and the kernel of FreeBSD. The k in kFreeBSD is an abbreviation for kernel of, and reflects the fact that only the kernel of the complete FreeBSD operating system is used. The operating system was officially released with Debian Squeeze (6.0) on February 6, 2011. One Debian GNU/kFreeBSD live CD is Ging, which is no longer maintained. was an experimental port of GNU user-land applications to NetBSD kernel. No official release of this operating system was made; although work was conducted on ports for the IA-32 and DEC Alpha architectures, it has not seen active maintenance since 2002 and is no longer available for download. As of September 2020, the GNU Project does not recommend or endorse any BSD operating systems. 
OpenSolaris (Illumos) kernel Nexenta OS is the first distribution that combines the GNU userland (with the exception of libc; OpenSolaris' libc is used) and Debian's packaging and organisation with the OpenSolaris kernel. Nexenta OS is available for IA-32 and x86-64 based systems. Nexenta Systems, Inc initiated the project and sponsors its continued development. Nexenta OS is not considered a GNU variant, due to the use of OpenSolaris libc. Multiple Illumos distributions use GNU userland by default. Darwin kernel Windows NT kernel The Cygwin project is an actively-developed compatibility layer in the form of a C library providing a substantial part of the POSIX API functionality for Windows, as well as a distribution of GNU and other Unix-like programs for such an ecosystem. It was first released in 1995 by Cygnus Solutions (now Red Hat). In 2016 Microsoft and Canonical added an official compatibility layer to Windows 10 that translates Linux kernel calls into Windows NT ones, the reverse of what Wine does. This allows ELF executables to run unmodified on Windows, and is intended to provide web developers with the more familiar GNU userland on top of the Windows kernel. The combination has been dubbed "Linux for Windows", even though Linux (i.e. the operating system family defined by its common use of the Linux kernel) is absent. See also Comparison of Linux distributions GNU/Linux naming controversy References External links Arch Hurd Superunprivileged.org GNU/Hurd-based Live CD Debian GNU/kFreeBSD Debian GNU/NetBSD #debian-kbsd on OFTC Ging live CD Free software operating systems variants
16673942
https://en.wikipedia.org/wiki/Robert%20Istepanian
Robert Istepanian
Robert S. H. Istepanian is a visiting professor at the Faculty of Medicine, Institute of Global Health Innovation, Imperial College, London. Istepanian is widely recognized as the first scientist to coin the phrase m-Health. In 2012, Istepanian coined the new term 4G Health which is defined as "The evolution of m-health towards targeted personalized medical systems with adaptable functionalities and compatibility with the future 4G networks." Life He completed his studies and obtained his PhD from the Electronic and Electrical Engineering Department of Loughborough University, UK in 1994. Since then he held several academic and research academic posts in UK and Canada including a professorship of Data Communications for healthcare and the founding and Director of the Mobile 'Information and Network Technologies Research Centre' (MINT) at Kingston University, London (2003-2013). He was also a visiting Professor in the Division of Cellular and Molecular Medicine at St. George's University of London (2005 - 2008). His other academic tenures included senior lectureships in the University of Portsmouth and Brunel University in UK and was also an associate Professor in the Ryerson University, Toronto and adjunct Professor in the University of Western Ontario in Canada. Professor Istepanian served as the vice chair of ITU’s focus group on standardization of Machine to Machine (M2M) for e-health service layer applications. He also served on numerous experts panels for global national awarding grant bodies including: - Experts forum members - World leading diabetes expert’s forum, International Diabetes Federation- IDF, World Diabetes Congress’, Dubai, 4–8 December 2011. Expert Committee and evaluation panel member the Finnish Strategic Centres of Science, Technology and Innovation (SHOK) on advances on wellbeing programme, Finnish Academy of Science, Sept. 2012. Experts Committee and panel member for the Joint Dutch government (STW) and Philips partnership programme on ‘Healthy Life Style Solutions, 2011. Experts Committee member- Canada Foundation for Innovation's (CFI) of large scale strategic and leading edge projects for health services in Canada, 2009. Expert Member of the Experts Panel and Reviewer of Ireland’s large scale strategic projects - Science Foundation Ireland SFI- Strategic Research Cluster Grants, 2008–2011. He is investigator and co-investigator of many EPSRC and EU research grants on wireless telemedicine and other research /visiting grants from the British Council and Royal Society and the Royal Academy of Engineering. He was also the UK lead investigator of several EU -IST and e-Ten projects in the areas of mobile healthcare (m-health), including OTELO project (IST -2001-32516- 2001-04) and C-MONITOR (eTen- Contract C27256) on Chronic Disease Management (2002–04) and e-Dispute (2004–06). Professor Istepanian is a Fellow of the Institute of Engineering Technology (Formerly IEE) and Senior Member of the IEEE. He currently serves on several IEEE Transactions and international journals’ editorial boards including IEEE Transactions on Information Technology in Biomedicine, IEEE Transactions on NanoBioscience, IEEE Transactions on Mobile Computing and International Journal of Telemedicine and Applications. He has also served as guest editor of three special issues of the IEEE Transactions on Information Technology in Biomedicine (on seamless mobility for healthcare and m-health systems, 2005) and IEEE Transactions of NanoBioScience (on Microarray Image Processing, 2004). 
Professor Istepanian is currently the co-chair of the ITU working group on M2M service layer standardization of e-health applications. He was the co-chairman of the UK/RI chapter of the IEEE Engineering in Medicine and Biology in 2002. He also served on numerous technical committees, expert speaker and invited keynote speaker in several international conferences in UK and USA and Canada including, the Harvard and Partners Telemedicine conference on ‘Optimising Care Through Communication Technologies’ (Boston-2005) and the second International Conference on Smart homes and health Telematics, ICOST (Singapore-2004) and the ‘Building on Broadband Britain’ Conference (London-2005). He also presented papers and chaired sessions/tracks on several national and international IEEE conferences in these areas including the Telemed conferences of the Royal Society of Medicine, London, IEEE- Engineering in Medicine and Biology International Annual Conferences (IEEE-EMBS 97, 98, 99, 06), the 2000 World Medical Congress, Chicago all in the areas of mobile E-health systems. He was on the technical committee of the IEEE HealthComm International Workshops (Nancy, France 2002),(Los Angeles, 2003) and (Seoul, 2005) He was also the Co-Chair of the Technical Committee of the IEEE-EMBS Conference on Information Technology and Applications in Biomedicine (ITAB) in Birmingham, UK, 24–26 April 2003. He was also on the technical committee of the International Congress of Medical and Care Compunetics- ICMCC (La Hague, 2004 and 2005). Most recently, Professor Istepanian has been presenting several keynote lectures worldwide including: Lectures Keynote lecture: The Role of Emerging Wireless and Network Technologies for Personalized Healthcare Systems, MERCK - Diabetes and Obesity Therapeutic Expert Forum, Philadelphia, 30 April – 2 May 2010. Keynote lecture: 4G Health- The long term evolution of m-health: The case of enhanced mobile Diabetes management in the Middle East, Mobile Healthcare Track, 15th Annual Middle East World Teleco World Summit, 30 November – 1 December 2010, Dubai, UAE, 2010. Keynote lecture: Long Term E-health Evolution and Its Impact on Optimising Healthcare. Presented at IET Assisted Living 2009 Conference, London, 24–25 March 2009 Keynote lecture: Intelligent RFID (i-RFID) Technologies for m-health applications, 3rd RFID Spanish Conference, Bilbao, Spain 25–27 November 2009. He has been an invited lecturer and expert panellist for several conferences, symposiums and workshops including: Speaker and invited panellist: ‘ Industry panel discussion: How can accessibility for health/wellness be best built in at source’, Mobile health Industry Summit, 21–22 September 2010, London. Speaker and invited panellist: ‘ Embedded Connectivity for healthcare applications’, Embedded Connectivity Conference, 26–27 January 2010, London. Speaker and invited panellist: ‘Globalising Mobile Healthcare; Wireless Broadband Technologies for mobile healthcare services’, Mobile healthcare Industry Summit, 1–2 December 2009, London. Invited speaker: ‘WiMAX for Mobile Healthcare Applications’, IET WiMAX 2007 Conference, London, 25–26 April 2007. Invited speaker: ‘The Potential of Emerging Wireless Healthcare Systems for NHS Services ’, IET- London, Kingston Section Lecture, Kingston upon Thames, 6 November 2007. Invited speaker: ‘Talking Health: The Role of Bio-Communications in emerging personalised mobile healthcare Systems’, IET- Solent Lecture, University of Southampton, 15th, March 2007. 
Distinguished Speaker in BioEngineering: 'The Role of Genomic Signal Processing on Personalised Healthcare', Institute of Biomaterials and Biomedical Engineering, University of Toronto, 29 April 2009. Invited speaker: 'Wikinomic Health: The role of emerging communication and computing technologies for Personalised Healthcare Systems', Waterloo Institute for Health Informatics Research, University of Waterloo, Canada, 17 September 2008. Invited speaker: 'Opportunities and Challenges in Using Mobile WiMAX for healthcare', 6th Annual Wireless Healthcare Summit, Toronto, Canada, 22 September 2008. He has published more than 200 refereed journal and conference papers and edited three books, including chapters in the areas of biomedical signal processing and mobile communications for healthcare and m-health technologies. Professor Istepanian was awarded the IEE 'Heaviside Premium Award' for the best IEE Control Theory and Applications proceedings paper in 1999. This work arose from his research on finite-precision control theory funded by the Royal Society, London. In 2010, Istepanian won the IEEE Transactions on Information Technology in Biomedicine 2009 Outstanding Paper Award for his paper 'Introduction to the Special Section on M-Health: Beyond Seamless Mobility and Global Wireless Health-Care Connectivity'. This paper, published in the IEEE journal Transactions on Information Technology in Biomedicine in December 2004, established a new research paradigm in this area and is considered one of the most cited papers in the area. During the Mobile Healthcare Industry Summit Middle East, which took place alongside the main Telco World Summit event in Dubai in December 2010, Prof. Istepanian courted controversy with his presentation on "4G health: The Long Term Evolution of mobile health." Although Istepanian welcomed the advent of 4G, due to its all-IP architecture and 100 Mbit/s throughput, which offers many opportunities for m-health in terms of diagnostic potential, he championed WiMAX as the enabler of m-health services. "The telecom community will be fighting for LTE, while the mhealth community is advocating the use of WiMAX," he said. "There are many applications for which LTE will be useful, but I believe WiMAX will be a more viable infrastructure, especially for the developing world. Its performance is close to that of LTE and enough for 4G health applications." Istepanian lives in Hampshire, England, with his mother, wife Helen and his two daughters Carolyn and Sarah Istepanian. See also Telemedicine References External links Kingston University, London – Mobile Information and Network Technologies Research Centre – Professor Robert Istepanian Mobile Information and Network Technologies (MINT) Health informaticians Alumni of Loughborough University Academics of the University of Portsmouth Academics of Brunel University London British people of Armenian descent Living people Senior Members of the IEEE Year of birth missing (living people) Academics of Kingston University
24620634
https://en.wikipedia.org/wiki/Drainage%20equation
Drainage equation
A drainage equation is an equation describing the relation between depth and spacing of parallel subsurface drains, depth of the watertable, and depth and hydraulic conductivity of the soils. It is used in drainage design. A well-known steady-state drainage equation is the Hooghoudt drain spacing equation. Its original publication is in Dutch. The equation was introduced in the USA by van Schilfgaarde. Hooghoudt's equation Hooghoudt's equation can be written as: Q L^2 = 8 Kb d (Dd − Dw) + 4 Ka (Dd − Dw)^2 where: Q = steady state drainage discharge rate (m/day) Ka = hydraulic conductivity of the soil above drain level (m/day) Kb = hydraulic conductivity of the soil below drain level (m/day) Di = depth of the impermeable layer below drain level (m) Dd = depth of the drains (m) Dw = steady state depth of the watertable midway between the drains (m) L = spacing between the drains (m) d = equivalent depth, a function of L, (Di − Dd), and r r = drain radius (m) Steady (equilibrium) state condition In steady state, the level of the water table remains constant and the discharge rate (Q) equals the rate of groundwater recharge (R), i.e. the amount of water entering the groundwater through the watertable per unit of time. By considering a long-term (e.g. seasonal) average depth of the water table (Dw) in combination with the long-term average recharge rate (R), the net storage of water in that period of time is negligibly small and the steady state condition is satisfied: one obtains a dynamic equilibrium. Derivation of the equation For the derivation of the equation Hooghoudt used the law of Darcy, the summation of circular potential functions and, for the determination of the influence of the impermeable layer, the method of mirror images and superposition. Hooghoudt published tables for the determination of the equivalent depth (d), because the function (F) in d = F(L, Di − Dd, r) consists of a long series of terms. Determining: the discharge rate (Q) from the recharge rate (R) in a water balance as detailed in the article: hydrology (agriculture) the permissible long term average depth of the water table (Dw) on the basis of agricultural drainage criteria the soil's hydraulic conductivity (Ka and Kb) by measurements the depth of the bottom of the aquifer (Di) the design drain spacing (L) can then be found from the equation as a function of the drain depth (Dd) and drain radius (r). Drainage criteria The water table should not be too shallow, to avoid crop yield depression, nor too deep, to avoid drought conditions. This is a subject of drainage research. The figure shows that a seasonal average depth of the water table shallower than 70 cm causes a yield depression. The figure was made with the SegReg program for segmented regression. Equivalent depth In 1991 a closed-form expression was developed for the equivalent depth (d) that can replace the Hooghoudt tables: d = πL / ( 8 [ ln(L/(πr)) + F(x) ] ) where: x = 2π (Di − Dd) / L F(x) = Σ 4 e^(−2nx) / [ n (1 − e^(−2nx)) ], with n = 1, 3, 5, . . . Extended use Theoretically, Hooghoudt's equation can also be used for sloping land. The theory on drainage of sloping land is corroborated by the results of sand tank experiments. In addition, the entrance resistance encountered by the water upon entering the drains can be accounted for. 
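A sketch of how the drain spacing L can be found numerically from Hooghoudt's equation combined with the 1991 closed-form expression for the equivalent depth d. Because d itself depends on L, the spacing is obtained by simple fixed-point iteration; the code is plain Python and the input values at the bottom are made-up illustrations, not design recommendations.

```python
import math

def equivalent_depth(L, Di_minus_Dd, r, terms=500):
    """Equivalent depth d from the 1991 closed-form expression (series over odd n)."""
    x = 2.0 * math.pi * Di_minus_Dd / L
    F = sum(4.0 * math.exp(-2.0 * n * x) / (n * (1.0 - math.exp(-2.0 * n * x)))
            for n in range(1, 2 * terms, 2))  # n = 1, 3, 5, ...
    return math.pi * L / (8.0 * (math.log(L / (math.pi * r)) + F))

def hooghoudt_spacing(Q, Ka, Kb, Di, Dd, Dw, r, L_start=50.0, tol=1e-4):
    """Iterate L <- sqrt((8*Kb*d*h + 4*Ka*h^2) / Q) until the spacing converges."""
    h = Dd - Dw            # hydraulic head midway between the drains (m)
    L = L_start
    for _ in range(100):
        d = equivalent_depth(L, Di - Dd, r)
        L_new = math.sqrt((8.0 * Kb * d * h + 4.0 * Ka * h * h) / Q)
        if abs(L_new - L) < tol:
            return L_new
        L = L_new
    return L

# Illustrative values only: recharge 7 mm/day, Ka = Kb = 0.5 m/day, drains at 2 m depth.
print(hooghoudt_spacing(Q=0.007, Ka=0.5, Kb=0.5, Di=6.0, Dd=2.0, Dw=1.0, r=0.1))
```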
Amplification The drainage formula can be amplified to account for (see figure on the right): the additional energy associated with the incoming percolation water (recharge), see groundwater energy balance multiple soil layers anisotropic hydraulic conductivity, the vertical conductivity (Kv) being different from the horizontal (Kh) drains of different dimensions with any width (W) Computer program The amplified drainage equation uses a hydraulic equivalent of Joule's law in electricity. It takes the form of a differential equation that cannot be solved analytically (i.e. in closed form), so the solution requires a numerical method for which a computer program is indispensable. The availability of a computer program also helps in quickly assessing various alternatives and performing a sensitivity analysis. The blue figure shows an example of the results of a computer-aided calculation with the amplified drainage equation using the EnDrain program. It shows that incorporation of the incoming energy associated with the recharge leads to a somewhat deeper water table. References External links Drainage Hooghoudt's equation calculator Drainage Hydrology Hydraulic engineering Soil Soil science Soil physics Agricultural soil science
7662281
https://en.wikipedia.org/wiki/Aveyond
Aveyond
Aveyond is a role-playing video game series by Aveyond Studios (formerly Amaranth Games or Aveyond Kingdom). It is set in a fantasy medieval world in which players attempt to save the world from evil beings, with a number of side quests available. There are eight games thus far in the series: the first two full games, the four "chapter" releases of the third game, the full fourth game, and the free prequel, Ahriman's Prophecy. All the games in the series except Ahriman's Prophecy were made with RPG Maker XP; several of the games were subsequently released for Linux and Mac, along with Windows. Ahriman's Prophecy Ahriman's Prophecy is a freeware role-playing video game released in 2004, and is the prequel to the series. Borrowing elements from Dragon Warrior and the earlier Final Fantasy games for its gameplay, it offers an experience similar to Japanese role-playing games. Unlike the other titles in the Aveyond series, Ahriman's Prophecy was not developed with RPG Maker XP and instead was created with RPG Maker 2003. Ahriman's Prophecy starts as a young girl, Talia Maurva, is sent to be "named", a ritual set up by the people of her village to determine the profession of her adult life. Destiny changes for Talia, who, when looking into the seer's pool, sees a dark ceremony by the younger prince of a nearby kingdom, Candar. He and his dark priest were attempting to resurrect the dead warlock emperor, Ahriman, whose history in battle and warfare is legendary. The seer, sensing that Talia is different, sends her and her escort to a school of magic in the mainland city of Thais. Devin Perry, a friend of Talia, agrees to escort her to the mainland. Just after she completes her training three years later, Talia is summoned into a dream by her headmaster and a high priestess of the dreamland. They want her to slip across the continent to warn another order of priestesses that Ahriman is being resurrected, and that the prophecy surrounding his resurrection must be stopped before the thirteenth moon. Confused and disoriented, Talia nevertheless sets out on a journey that seems to pave the way to some peace in the world. Reception Ahriman's Prophecy was well received by the gaming community. Edward Zuk of Game Tunnel opined that "while Ahriman's Prophecy adds little that is new to the RPG genre, it's a pleasing mix of familiar elements." Download.com's editor's review stated "we found the game's twee period music a guilty pleasure that reminded us of our last visit to Ye Olde Renaissance Faire. And considering the game's free, you get a heck of a lot of adventuring for your money," awarding it a score of 4/5. Aveyond: Rhen's Quest The sequel to Ahriman's Prophecy, this game features a different map from its prequel except for two islands. Expanding on the success of its predecessor, it offers a wealth of quests, characters and endings, and has been hailed as "funny, innovative and wildly imaginative". Plot Devin Perry and Alicia Pendragon from Ahriman's Prophecy eventually married, as did Talia and an unnamed Sun Priest. Sixteen years prior to Aveyond 1: Rhen's Quest, the forces of the demon Ahriman destroyed and sank most of the surrounding areas and isles around Thais. This was because Alicia Pendragon, queen of Thais, was foretold to give birth to a child who would defeat a great demon and save the city if she reached adulthood. It was because of the foretelling of this birth that Ahriman wanted Thais destroyed. 
Tailor Darzon, a young but trusted general of Thais, offered to take the child to a safe place and raise her where the demon would not find her. As Thais and the queen fell, Tailor fled the kingdom with the child and escaped across the ocean to the Western Isle. He almost did not make it, but Talia Maurva, the Druid of Dreams, saved their lives. Tailor settled in the small mountain village of Clearwater. When the game starts, the protagonist, Rhen, gets teleported to a part of the Dreamland. A priestess, barely alive after the daeva Agas attacked her, asks Rhen to bring her back through the portal to Clearwater. Rhen's many questions were only partially answered by the stranger. The priestess, Talia, gives Rhen her ring and tells her to keep it close, and that it will protect her. Unfortunately, before she knew more, a case of mistaken identity causes Rhen, instead of the priestess, to be kidnapped by a slave trader and she was sold to a family residing on the Eastern Isle, an ocean away from Clearwater. This slave trader was employed by Ahriman as part of a scheme with the sun priest Dameon Maurva, Talia's son. A long and bitter family history prompts Dameon to forsake his duty as the Druid of Light to serve Ahriman, as his father, the previous Druid of Light, had. When they found out about the slave trader's mistake, Ahriman had the Dark Seer, Indra, read Rhen's part in the apocalypse. He learns then that Rhen is destined to destroy him, but he can't kill her or he will also be destroyed. So, he sends his minions to find her and turn her to his side, as Indra proclaimed. Meanwhile, Rhen is found to have a great aptitude for sword magic when she defends a child from the bullying of her master's son. She is released from slavery and sent to the eastern capital city to learn the art of sword singing. After she was raised to an apprentice however, she reunites with the priestess, who was actually Talia, who tells her that she must reunite all eight druids of the world so that an artifact of great importance could be revealed, and that it was her destiny to finally vanquish Ahriman once and for all. Along her journey, Rhen will discover secrets about her past and will have to make decisions that will determine the direction of her future as well as the fate of the world. Soundtrack The game's soundtrack was written and recorded by Aaron Walz. The score features many recorded symphonic instruments, a rare feature for an independent game. The soundtrack won Game Tunnel's Game of the Year: Sound award for 2006. Reception Independent gaming website Game Tunnel awarded Aveyond their Game of the Month and Gold Award in March 2006. On the other side, Game Chronicles reviewer Jason Porter highlighted awkward key mapping (which cannot be remapped) and criticized the main character's personalities, dialogues and evolution throughout the game. Platforms Initially Windows-only, the game was released for Linux in June 2016 and subsequently for Mac. Aveyond 2: Ean's Quest Ean's Quest is a sequel to Ahriman's Prophecy and Aveyond. It includes a few returning characters from the previous games. Aaron Walz returned to produce the soundtrack to Aveyond 2. Plot Ean (a male changeling) and Iya (a female song mage) are two young elves who live in a far away place called the Vale. One day, Ean wakes up to find that Iya, his best friend, has gone missing. Furthermore, none of the people of Vale remember who she is. Thus, Ean sets out on a quest to find his missing friend. 
On his quest, Ean will find that dear Iya has been swept away by the Snow Queen (who last appeared in Aveyond I: Rhen's Quest). Ean must save his friend, and Iya must learn to control the wild powers that the Snow Queen desires for herself. They must fight to stop the Snow Queen's plot to cover the world in ice. Reception Aveyond 2 was well received by the gaming community. Erin Bell of Gamezebo said "it's a great example of a 'casual' role-playing game that delivers a delightful and accessible fantasy adventure." Neal Chandran from RPGFan said "Aveyond 2 represents another wonderful independently developed RPG and is another feather in the cap of Amaranth Games" and that though it does not revolutionize the Aveyond series in any way, it "adds another immersive entry to this solid series." Aveyond 2 was second runner-up in the 2007 Game Tunnel Game of the Year: Player's Choice Award and RPG of the Year Award. Platforms The game is available for Windows, Linux and Mac. Aveyond 3: Orbs of Magic Unlike the other games, this game is divided into four chapters, each downloaded individually, with a saved game transferred from one chapter to the next. The chapters can also be played as stand-alone games, but this is not recommended and misses several key features. Orbs of Magic centers on Mel, a thief who steals a powerful heirloom. Unbeknownst to her, Mel is a descendant of Mordred Darkthrop (an evil sorcerer who plotted to rule the world), and only a Darkthrop can remove the Orbs of Magic from their resting place. Accidentally handing the Orb of Darkness over to a megalomaniac vampire lord, she now has to run for her life and find a way to stop him from using the orb and destroying the light from the surface world. Chapter 1: The Lord of Twilight Plot Chapter 1 was released on June 5, 2009. Mel, a thief who dwells in the town of Harburg, is introduced as the protagonist in this chapter. She is hired by an unknown man to steal an heirloom of great importance. Later, the man is revealed to be Gyendal, the chief antagonist, a vampire lord wanting to plunge the world into darkness. Rescued by a vampress and sent to study in an academy in the city of Thais, Mel trains as a spy. When it becomes evident that she can hide no longer, she sets out on a journey to the land of Naylith, where the answers to the puzzles lie. On her journey, Mel meets a prince, Edward; Stella, a gentle girl of mysterious origin; and two classic characters from the first Aveyond, the vampress Te'ijal and her husband Galahad. The game is available for Windows, Linux and Mac. Chapter 2: Gates of Night Plot Chapter 2 is a direct continuation of Chapter 1, and all items and spells are carried over. It continues the adventure to find the way to Naylith and have the final (or at least a final) confrontation with the Vampire Lord. Two more people join the party: Lydia, seen in Lord of Twilight, a powerful fighter with magical spells, and Ulf, an orcish scholar whom the party rescues from the orcish prison. The game is available for Windows, Linux and Mac. Chapter 3: The Lost Orb Plot This game is not a direct continuation of the previous one. All items are removed, as are several of the characters, who considered the quest to have ended in The Gates of Night. This chapter was released on February 15, 2010. There are three new party members: June, a spell trickster, Yvette, a familiar, and Spook, a thief with a dark secret. Things ended fairly well in Chapter 2, but now it seems that Lydia is up to no good. 
What should have been the most romantic day in Mel's life (if Edward proposed to her) turns into a nightmare. Having nothing left for her in Thais, Mel sets out to find the fourth and final lost orb, to prevent it from being used and to stop The Darkthrop Prophecy from coming true. The game is available for Windows, Linux and Mac. Chapter 4: The Darkthrop Prophecy Plot This final game in the series was released on December 21, 2010. Mel has been living in Harakauna for the last year, after discovering that she has magic, when she is finally found by the darklings, who now know that she is the prophesied one. Before they can take her, she is rescued by Edward and two scholars from a land far away (a land that has not been seen in the Aveyond series since Rhen's Quest). They offer her magical training at their academy, but she only agrees to go if Edward will train with her. After arriving in Veldarah, she eventually accepts her magic and starts to love it. It all goes well until the attack: the darklings come for her, but almost no one believes her. One unnamed professor wants to meet with her in a lone cabin far out in the woods, and Mel decides to check it out. When it turns out, unsurprisingly, to be a trap, she is captured by her former nemesis, the former vampire lord. This game differs from the others in that there are two different parties, Mel's and Stella's. Players swap between them, but they never meet and merge. Players therefore have two completely different inventories. The game is available for Windows, Linux and Mac. Aveyond 4: Shadow of the Mist Released on December 10, 2015, after a five-year hiatus. The main character of this game is Boyle, a retired villain who once set out to rule the world but was defeated and forced to live in a small town. After a series of mishaps, he now has to take up the path of a hero and try to save the world. Plot The main character, Boyle, resides in a town for retired villains, along with others like him. Ingrid, a witch in the town, has cursed him to marry her. In a series of events that leads to him losing his dog, he must now set out to save his beloved animal by carrying out the heroic task of saving the son of the Mist Queen. On the way, he comes across numerous characters with vividly different personalities who join him on his epic quest. In the end, Boyle successfully saves everyone and is, almost, a hero. Reception The game received a mixed response, with the artwork being praised by almost everyone. It currently has an 8/10 rating on Steam. Platforms The game is available for Windows, Linux and Mac. References External links Official website 2004 video games Fantasy video games Role-playing video games RPG Maker games Linux games MacOS games Video games developed in the United States Video games featuring female protagonists Video game franchises introduced in 2004 Windows games
41706817
https://en.wikipedia.org/wiki/FanDraft
FanDraft
FanDraft is a fantasy sports software application created by FanSoft Media. The application acts as a digital alternative to the traditional "paper draft boards" utilized during many live fantasy football drafts. The software has over 27,000 downloads on Download.com, has been featured on major media sites such as the Wall Street Journal, USA Today Online, and Telegram.com, and has integration partnerships with RotoWire and MyFantasyLeague. FanDraft provides software specifically designed to make a draft easier and more aesthetically pleasing. FanDraft is installed on a laptop, whose display can be hooked up to a big-screen TV or projector so that the entire league has a display to view. When a pick is made, the commissioner clicks on the player's name and the player is added to the squad; the player then joins the scrolling "bottom line" that updates picks while the clock starts on the next owner. The software also allows the commissioner to program personal music for each team, which will play when it is that team's turn to pick. Background FanDraft Football version 1 was first launched in May 2002. An overhauled version 2 was released a couple of months later. Starting in 2020, FanDraft converted from desktop software into an online application. FanDraft currently carries titles for four fantasy sports: football, baseball, basketball, and hockey. FanDraft released its first baseball application (FanDraft Baseball) in 2008. The basketball version was first released in 2007 and received updates until 2009. FanDraft Basketball was discontinued from 2010 to 2012, but relaunched in 2013. FanDraft Hockey was first released in 2009 and has continued to receive annual updates. In 2009, FanDraft also released a non-fantasy title called "FanDraft Youth Sports", which serves as draft board software for youth sports leagues. References External links Official site FanDraft Baseball Fantasy sports
9899788
https://en.wikipedia.org/wiki/Lodrick%20Stewart
Lodrick Stewart
Lodrick Stewart (born April 30, 1985) is a former American college basketball player. He played at the University of Southern California. Lodrick and twin brother Rodrick, attended Joyner Elementary School in Tupelo, Mississippi, before moving to Seattle, Washington. Personal life Stewart has a twin brother named Rodrick Stewart, and they have younger twin brothers, Hikeem and Kadeem Stewart, who played for University of Washington and Shoreline Community College, and their youngest brother Scotty Ewing played for South Puget Sound Community College and is now playing professionally in China. High school career Stewart attended basketball powerhouse Rainier Beach High School in Seattle. He played alongside his brother Rodrick, Nate Robinson, Terrence Williams, and C. J. Giles. As a senior, he led his basketball team to a 28–1 record and won the AAA state championship. College career Stewart played for the USC Trojans between 2003 and 2007. He was honorable mention Pac-10 his senior year and is the all-time 3 point leader for the Trojans. Stewart graduated from USC in 2007. Pro career Stewart played for the NBA Development League Anaheim Arsenal in 2007-2008. In 2008-2009, Stewart played for the Giants Nördlingen in Germany. Later he went to Lithuania, Marijampolės "Sūduva". On March 8, 2010, it was reported that Stewart and another American player, Rashaun Broadus, left the team without saying a word after a game on February 23, 2010 against Perlas. References External links NBA D-league bio Rashaun Broadus and Lodrick Stewart leave Suduva 1985 births Living people American expatriate basketball people in Germany American expatriate basketball people in Lithuania American men's basketball players Anaheim Arsenal players Basketball players from Seattle Identical twins Shooting guards Shoreline Dolphins men's basketball players Twin people from the United States Twin sportspeople USC Trojans men's basketball players
1109117
https://en.wikipedia.org/wiki/CFEngine
CFEngine
CFEngine is an open-source configuration management system, written by Mark Burgess. Its primary function is to provide automated configuration and maintenance of large-scale computer systems, including the unified management of servers, desktops, consumer and industrial devices, embedded networked devices, mobile smartphones, and tablet computers. History CFEngine 1 The CFEngine project began in 1993 as a way for author Mark Burgess (then a post-doctoral fellow of the Royal Society at Oslo University, Norway) to get his work done by automating the management of a small group of workstations in the Department of Theoretical Physics. Like many post-docs and PhD students, Burgess ended up with the task of managing Unix workstations, scripting and fixing problems for users manually. Scripting took too much time, the flavours of Unix were significantly different, and scripts had to be maintained for multiple platforms, drowning in exception logic. After discussing the problems with a colleague, Burgess wrote the first version of CFEngine (the configuration engine) which was published as an internal report and presented at the CERN computing conference. It gained significant attention from a wider community because it was able to hide platform differences using a domain-specific language. A year later, Burgess finished his post-doc but decided to stay in Oslo and took a job lecturing at Oslo University College. Here he realized that there was little or no research being done into configuration management, and he set about applying the principles of scientific modelling to understanding computer systems. In a short space of time, he developed the notion of convergent operators, which remains a core of CFEngine. CFEngine 2 In 1998 Burgess wrote "Computer Immunology", a paper at the USENIX/LISA98 conference. It laid out a manifesto for creating self-healing systems, reiterated a few years later by IBM in their form of Autonomic Computing. This started a research effort which led to a major re-write, CFEngine 2, which added features for machine learning, anomaly detection and secure communications. CFEngine 3 Between 1998 and 2004, CFEngine grew in adoption along with the popularity of Linux as a computing platform. During this time, Mark Burgess developed promise theory, a model of distributed cooperation for self-healing automation. In 2008, after more than five years of research, CFEngine 3 was introduced, which incorporated promise theory as "a way to make CFEngine both simpler and more powerful at the same time", according to Burgess. The most significant re-write of the project to date, CFEngine 3 also integrated knowledge management and discovery mechanisms—allowing configuration management to scale to automate enterprise-class infrastructure. Commercialization In June 2008 the company CFEngine AS was formed as a collaboration between author Mark Burgess, Oslo University College and the Oslo Innovation Centre in order to support users of CFEngine. In April 2009, the company launched the first commercial version of CFEngine - CFEngine Enterprise. The Enterprise version can be downloaded for free for up to 25 agents (clients). February 2011, the company received its first round of funding, from FERD Capital. The company has offices in Oslo, Norway and Mountain View, California, USA. In 2017, the company changed its name to Northern.tech, to reflect that it is working on multiple software products, not only CFEngine. 
Characteristics Portability CFEngine provides an operating system-independent interface to Unix-like host configuration. It requires some expert knowledge to deal with the peculiarities of different operating systems, but has the power to perform maintenance actions across multiple hosts. CFEngine can be used on Windows hosts as well, and is widely used for managing large numbers of Unix hosts that run heterogeneous operating systems, e.g. Solaris, Linux, AIX, Tru64 and HP-UX. Research-based Shortly after its inception, CFEngine inspired a field of research into automated configuration management. The CFEngine project aims to place the problem of configuration management in a scientific framework. Its author Mark Burgess has developed a range of theoretical tools and results to describe the problem, and has written several textbooks and monographs explaining them. Convergence One of the main ideas in CFEngine is that changes in computer configuration should be carried out in a convergent manner. This means that each change operation made by the agent should have the character of a fixed point. Rather than describing the steps needed to make a change, the CFEngine language describes the final state in which one wants to end up. The agent then ensures that the necessary steps are taken to end up in this "policy compliant state". Thus, CFEngine can be run again and again, whatever the initial state of a system, and it will end up with a predictable result. CFEngine supports the notion of statistical compliance with policy, meaning that a system can never be guaranteed to be exactly in an ideal or desired state; rather, it approaches (converges) towards the desired state on a best-effort basis, at a rate determined by the ratio of the frequency of environmental change to the rate of CFEngine execution. User base CFEngine is used in both large and small companies, as well as in many universities and governmental institutions. The largest reported datacenter under CFEngine management is above a million servers; sites as large as 40,000 machines are publicly reported (LinkedIn), and sites of several thousand hosts running under CFEngine are common. According to statistics from CFEngine AS, probably several million computers run CFEngine around the world, and users from more than 100 countries have been registered. Competitors Ansible Chef Otter Puppet Salt See also Comparison of open-source configuration management software Anomaly-based intrusion detection system Host-based intrusion detection system Rudder (software) References External links Orchestration software Free network management software Multi-agent systems Software using the GPL license System administration Unix package management-related software
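The convergence principle can be illustrated with a minimal sketch in Python (this is not CFEngine's policy language; the file path and setting are illustrative assumptions). A convergent operation describes the desired end state, checks the current state, and repairs only the deviation, so it can be run any number of times and always moves the host toward the same compliant result.

    import os

    def ensure_line_in_file(path, line):
        """Converge a configuration file toward containing the given line."""
        existing = []
        if os.path.exists(path):
            with open(path) as f:
                existing = f.read().splitlines()
        if line in existing:
            return "kept"            # already compliant: nothing to do
        with open(path, "a") as f:   # deviation detected: repair it
            f.write(line + "\n")
        return "repaired"

    # Running the same promise repeatedly converges on one predictable state.
    for _ in range(3):
        print(ensure_line_in_file("/tmp/example.conf", "PermitRootLogin no"))

Run once on a non-compliant host this reports "repaired"; every subsequent run reports "kept", which is the fixed-point behaviour described above.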
5351096
https://en.wikipedia.org/wiki/Seamoby
Seamoby
The Seamoby Candidate Access Router Discovery, or CARD, is an experimental protocol outlined by RFC 4065 and RFC 4066. The protocol is designed to speed up the handover of IP devices between wireless access routers. It defines a mechanism that an access router can use to automatically discover its neighbors with the help of mobile devices. Based on some trigger, a mobile device scans for neighboring access points and reports a list of newly found access point identifiers to its connected access router. The connected access router performs a reverse lookup using the AP identifiers to identify the candidate access routers that serve the newly found access points. It then updates its neighbor list with the IP addresses and capabilities of the newly found access routers. The neighbor list can be used for inter-AR handover decision making. A similar idea is currently used by the 3GPP SON protocol (also known as ANR) for discovering candidate access points. However, the ANR protocol extends the RRC and X2 protocols to support CARD-like functionality at the L2 network level. Internet layer protocols
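As a rough illustration of the bookkeeping described above, the Python sketch below shows an access router mapping AP identifiers reported by a mobile device to candidate access routers and updating its neighbor list. The lookup table, addresses and field names are hypothetical and are not taken from RFC 4065 or RFC 4066.

    # Reverse-lookup table: which candidate access router serves which access point.
    AP_TO_AR = {
        "ap-101": {"ar": "10.0.1.1", "capabilities": {"qos": True}},
        "ap-102": {"ar": "10.0.2.1", "capabilities": {"qos": False}},
    }

    def update_neighbor_list(neighbor_list, reported_ap_ids):
        """Called when a mobile reports newly heard AP identifiers to its current AR."""
        for ap_id in reported_ap_ids:
            entry = AP_TO_AR.get(ap_id)           # reverse lookup: AP id -> candidate AR
            if entry and entry["ar"] not in neighbor_list:
                neighbor_list[entry["ar"]] = entry["capabilities"]
        return neighbor_list

    neighbors = {}
    neighbors = update_neighbor_list(neighbors, ["ap-101", "ap-102"])
    print(neighbors)   # candidate access routers available for the handover decision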
1228624
https://en.wikipedia.org/wiki/StrataCom
StrataCom
StrataCom, Inc. was a supplier of Asynchronous Transfer Mode (ATM) and Frame Relay high-speed wide area network (WAN) switching equipment. StrataCom was founded in Cupertino, California, United States, in January 1986, by 26 former employees of the failing Packet Technologies, Inc. StrataCom produced the first commercial cell switch, also known as a fast-packet switch. ATM was one of the technologies underlying the world's communications systems in the 1990s. Origins of the IPX at Packet Technologies Internet pioneer Paul Baran was an employee of Packet Technologies and provided a spark of invention at the initiation of the Integrated Packet Exchange (IPX) project (StrataCom's IPX communication system is unrelated to Novell's IPX Internetwork Packet Exchange protocol). The IPX was initially known as the PacketDAX, a play on Digital access and cross-connect system (or DACS). A rich collection of inventions was contained in the IPX, and many were provided by the other members of the development team. The names on the original three IPX patents are Paul Baran, Charles Corbalis, Brian Holden, Jim Marggraff, Jon Masatsugu, David Owen and Pete Stonebridge. StrataCom's implementation of ATM was pre-standard and used 24 byte cells instead of standards-based ATM's 53 byte cells. However, many of the concepts and details found in the ATM set of standards were derived directly from StrataCom's technology, including the use of CRC-based framing on its links. The IPX development The IPX's first use was as a 4-1 voice compression system. It implemented Voice-Activity-Detection (VAD) and ADPCM, which together gave 4-1 compression, allowing 96 telephone calls to fit into the space of 24. The IPX was also used as an enterprise voice-data networking system as well as a global enterprise networking system. McGraw-Hill's Data Communications Magazine included the IPX in its list of "20 Most Significant Communications Products of the Last 20 Years" in a 1992 edition. The beta test of the IPX was in Michigan Bell between Livonia, Plymouth, and Northville, three suburbs of Detroit. The first customer shipment was to the May Company, connecting department stores in San Diego and Los Angeles. The most significant early use of the IPX was as the backbone of the Covia/United Airlines flight reservation system. It was also used in multiple corporate networks including those of CompuServe, Intel and Hewlett-Packard. The IPX's most successful use was as the first frame relay networking product. It formed the core of the AT&T and CompuServe frame relay networks. The BPX, which was produced in 1993, increased the speed and sophistication of the frame relay offering. It also supported the 53 byte cells of the ATM standard instead of the IPX's 24 byte cells. The original IPX product was also enhanced and re-introduced as the IGX.
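The general fast-packet idea can be sketched in Python: a variable-length frame is sliced into fixed-size cells before transmission. The 53-byte cell with a 5-byte header is the ATM standard; 24 bytes was StrataCom's pre-standard cell size, and the 3-byte header and sequence-number field used for it here are illustrative assumptions rather than StrataCom's actual cell format.

    def segment(frame: bytes, cell_size: int, header_size: int):
        """Slice a frame into fixed-size cells, padding the last payload."""
        payload_size = cell_size - header_size
        cells = []
        for seq, offset in enumerate(range(0, len(frame), payload_size)):
            chunk = frame[offset:offset + payload_size].ljust(payload_size, b"\x00")
            header = seq.to_bytes(header_size, "big")   # placeholder header: sequence number
            cells.append(header + chunk)
        return cells

    frame = b"example variable-length frame to be carried across the WAN"
    print(len(segment(frame, cell_size=53, header_size=5)))   # standard ATM-style cells
    print(len(segment(frame, cell_size=24, header_size=3)))   # pre-standard-size cells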
The IPX product The cards in the original IPX system were: The PCC — The Processor Control Card — a Motorola 68000 based shelf control card The VDP — The Voice Data Processor — Implemented a VAD algorithm and packetized the voice The VCD — The Voice Compressor-Decompressor — Implemented ADPCM The TXR — The Transmit Receive Card — Implemented a T1 interface with packet queues The PIC — The Protection Interface Card — Allowed the T1 interfaces to be swapped The cards in the second wave were The SDP — The Synchronous Data Processor — added V.35, RS-422, and RS-232 data to the IPX (this card was a very early use of Xilinx FPGAs & StrataCom was their largest customer for a time) The CDP — The Circuit Data Processor — Added E1 and integrated echo cancellation The card in the third wave was The FRP — The Frame Relay Processor — Added Frame Relay to the IPX (implemented the SAR function in Motorola 56000 DSP's) StrataCom management and locations StrataCom's first CEO was Steve Campbell who later went to Packeteer. Dick Moley, who came from ROLM Corporation, served as its CEO for most of its existence. Dave Sant originally led sales, hiring Scott Kriens who later became CEO of Juniper Networks. Bill Stensrud was the founding Vice President of Marketing (and later became a managing partner of venture capital firm Enterprise Partners). The company was located in Cupertino on Bubb Rd., Campbell on Winchester near Hacienda, and San Jose at Meridian and Parkmoor. The company also had a manufacturing building in south San Jose built for it in 1996; this building was selected in 2006 by Nanosolar as the site of a large solar cell factory. StrataCom went public in the fall of 1992 under the ticker symbol STRM. Three of its executives later formed SToRM Ventures. Acquisition by Cisco Systems Cisco Systems acquired StrataCom in 1996 for US$4 billion. The acquired employees formed the core of Cisco's Multi-Service Switching Business Unit and helped to move Cisco more into the carrier equipment space. External links Transcript of a 2000 keynote address by Paul Baran describing the development of the IPX/PacketDAX Cisco acquisition Mention of StrataCom in article about Nanosolar's manufacturing plant 1996 mergers and acquisitions Defunct networking companies Cisco Systems acquisitions Computer companies established in 1986 Computer companies disestablished in 1996 Computer companies of the United States Companies based in Santa Clara County, California 1986 establishments in California 1996 disestablishments in California
1937926
https://en.wikipedia.org/wiki/Ethernet%20hub
Ethernet hub
An Ethernet hub, active hub, network hub, repeater hub, multiport repeater, or simply hub is a network hardware device for connecting multiple Ethernet devices together and making them act as a single network segment. It has multiple input/output (I/O) ports, in which a signal introduced at the input of any port appears at the output of every port except the original incoming. A hub works at the physical layer (layer 1) of the OSI model. A repeater hub also participates in collision detection, forwarding a jam signal to all ports if it detects a collision. In addition to standard 8P8C ("RJ45") ports, some hubs may also come with a BNC or an Attachment Unit Interface (AUI) connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. Hubs are now largely obsolete, having been replaced by network switches except in very old installations or specialized applications. As of 2011, connecting network segments by repeaters or hubs is deprecated by IEEE 802.3. Physical layer function A layer 1 network device such as a hub transfers data but does not manage any of the traffic coming through it. Any packet entering a port is repeated to the output of every other port except for the port of entry. Specifically, each bit or symbol is repeated as it flows in. A repeater hub can therefore only receive and forward at a single speed. Dual-speed hubs internally consist of two hubs with a bridge between them. Since every packet is repeated on every other port, packet collisions affect the entire network, limiting its overall capacity. A network hub is an unsophisticated device in comparison with a switch. As a multiport repeater it works by repeating transmissions received from one of its ports to all other ports. It is aware of physical layer packets, that is it can detect their start (preamble), an idle line (interpacket gap) and sense a collision which it also propagates by sending a jam signal. A hub cannot further examine or manage any of the traffic that comes through it. A hub has no memory to store data and can handle only one transmission at a time. Therefore, hubs can only run in half duplex mode. Due to a larger collision domain, packet collisions are more likely in networks connected using hubs than in networks connected using more sophisticated devices. Connecting multiple hubs The need for hosts to be able to detect collisions limits the number of hubs and the total size of a network built using hubs (a network built using switches does not have these limitations). For 10 Mbit/s networks built using repeater hubs, the 5-4-3 rule must be followed: up to five segments (four hubs) are allowed between any two end stations. For 10BASE-T networks, up to five segments and four repeaters are allowed between any two hosts. For 100 Mbit/s networks, the limit is reduced to 3 segments (2 Class II hubs) between any two end stations, and even that is only allowed if the hubs are of Class II. Some hubs have manufacturer-specific stack ports allowing them to be combined in a way that allows more hubs than simple chaining through Ethernet cables, but even so, a large Fast Ethernet network is likely to require switches to avoid the chaining limits of hubs. Additional functions Most hubs detect typical problems, such as excessive collisions and jabbering on individual ports, and partition the port, disconnecting it from the shared medium. Thus, hub-based twisted-pair Ethernet is generally more robust than coaxial cable-based Ethernet (e.g. 
10BASE2), where a misbehaving device can adversely affect the entire collision domain. Even if not partitioned automatically, a hub simplifies troubleshooting because hubs remove the need to troubleshoot faults on a long cable with multiple taps; status lights on the hub can indicate the possible problem source or, as a last resort, devices can be disconnected from a hub one at a time much more easily than from a coaxial cable. To pass data through the repeater in a usable fashion from one segment to the next, the framing and data rate must be the same on each segment. This means that a repeater cannot connect an 802.3 segment (Ethernet) and an 802.5 segment (Token Ring) or a 10 Mbit/s segment to 100 Mbit/s Ethernet. Dual-speed hub In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices. Hubs suffered from the problem that if there were any 10BASE-T devices connected then the whole network needed to run at 10 Mbit/s. Therefore, a compromise between a hub and a switch was developed, known as a dual-speed hub. These devices make use of an internal two-port switch, bridging the 10 Mbit/s and 100 Mbit/s segments. When a network device becomes active on any of the physical ports, the device attaches it to either the 10 Mbit/s segment or the 100 Mbit/s segment, as appropriate. This obviated the need for an all-or-nothing migration to Fast Ethernet networks. These devices are considered hubs because the traffic between devices connected at the same speed is not switched. Fast Ethernet 100 Mbit/s hubs and repeaters come in two different classes: Class I delay the signal for a maximum of 140 bit times. This delay allows for translation/recoding between 100BASE-TX, 100BASE-FX and 100BASE-T4. Class II hubs delay the signal for a maximum of 92 bit times. This shorter delay allows the installation of two hubs in a single collision domain. Gigabit Ethernet Repeater hubs are defined in the standards for Gigabit Ethernet but commercial products have failed to appear due to the industry's transition to switching. Uses Historically, the main reason for purchasing hubs rather than switches was their price. By the early 2000s, there was little price difference between a hub and a low-end switch. Hubs can still be useful in special circumstances: For inserting a protocol analyzer into a network connection, a hub is an alternative to a network tap or port mirroring. A hub with both 10BASE-T ports and a 10BASE2 port can be used to connect a 10BASE2 segment to a modern Ethernet-over-twisted-pair network. A hub with both 10BASE-T ports and an AUI port can be used to connect a 10BASE5 segment to a modern network. As hubs have lower latency and jitter compared to switches – as long as there are no collisions –, they may be better suited for real-time networks, e.g. Ethernet Powerlink. See also Router (computing) USB hub References External links Hub Networking hardware
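The repeat-to-all-ports behaviour described above can be summarised in a short Python sketch; it is purely illustrative and models neither the electrical signalling nor collision handling.

    class Hub:
        """Layer-1 repeater: no address table, every frame goes to every other port."""
        def __init__(self, port_count):
            self.ports = {i: [] for i in range(port_count)}   # per-port output queues

        def receive(self, in_port, frame):
            for port, queue in self.ports.items():
                if port != in_port:        # repeat to all ports except the port of entry
                    queue.append(frame)

    hub = Hub(port_count=4)
    hub.receive(in_port=0, frame=b"\xff\xff\xff\xff\xff\xff" + b"payload")
    print({port: len(queue) for port, queue in hub.ports.items()})   # copied to ports 1-3

A switch, by contrast, learns source addresses and forwards a unicast frame only to the port where the destination was last seen, which is why it avoids the shared collision domain described above.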
26398761
https://en.wikipedia.org/wiki/Whitebox%20Geospatial%20Analysis%20Tools
Whitebox Geospatial Analysis Tools
Whitebox Geospatial Analysis Tools (GAT) is an open-source and cross-platform Geographic information system (GIS) and remote sensing software package that is distributed under the GNU General Public License. It has been developed by members of the University of Guelph Centre for Hydrogeomatics and is intended for advanced geospatial analysis and data visualization in research and education settings. The package features a friendly graphical user interface (GUI) with help and documentation built into the dialog boxes for each of the more than 410 analysis tools. Users are also able to access extensive off-line and online help resources. The Whitebox GAT project started as a replacement for the Terrain Analysis System (TAS), a geospatial analysis software package written by John Lindsay. The current release supports raster and vector (shapefile) data structures. There is also extensive functionality for processing laser scanner (LiDAR) data stored in LAS files. Whitebox GAT is extendible. Users are able to create and add custom tools or plugins using any JVM language. The software also allows scripting using the programming languages Groovy, JavaScript, and Python. Analysis tools Whitebox GAT contains more than 385 tools to perform spatial analysis on raster data sets. The following is an incomplete list of some of the more commonly used tools: GIS tools: Cost-distance analysis, buffer, distance operations, weighted overlays, multi-criteria evaluation, reclass, area analysis, clumping Image processing tools: k-means classification, numerous spatial filters, image mosaicing, NDVI, resampling, contrast enhancement Hydrology tools: DEM preprocessing tools, flow direction and accumulation (D8, Rho8, Dinf, and FD8 algorithms), mass flux analysis, watershed extraction Terrain analysis tools: surface derivatives (slope, aspect, and curvatures), hillshading, wetness index, relative stream power index, relative landscape position indices LiDAR tools: IDW interpolation, nearest neighbour interpolation, point density, removal of off-terrain objects (non-ground points) Software transparency The Whitebox GAT project has adopted a novel approach for linking the software's development and user communities, known as software transparency, or open-access software (considered an extension of open-source software). The philosophy of transparency in software states that the user 1) has the right to view the underlying workings of a tool or operation, and 2) should be able to access this information in a way that reduces, or ideally eliminates, any barriers to viewing and interpreting it. This concept was developed as a response to the fact that the code base of many open-source projects can be so massive and its organization so complex that individual users often find the task of interpreting the underlying code too daunting when they are interested in a small portion of the overall code base, e.g. if the user would like to know how a particular tool or algorithm operates. Furthermore, when the software's source code is written in an unfamiliar programming language, the task of interpreting the code is made even more difficult. For some open-source projects, these characteristics can create a divide between the development and user communities, often restricting future development to a few individuals who have been involved in the project during the earliest periods of development.
The View Code button that is present on all Whitebox GAT tools is the embodiment of this software-transparency philosophy by pointing the user to the specific region of the source-code that is relevant to a particular tool, also allowing for code conversion to other programming languages. The Whitebox GAT logo is also representative of the open and transparent characteristic of the software, being a transparent glass cube, open on one face. References External links blog GIS software Free GIS software Remote sensing software
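As a generic illustration of one of the hydrology tools listed above, the following Python sketch computes a D8 flow direction for a single DEM cell: the cell drains toward whichever of its eight neighbours gives the steepest descent, with diagonal drops divided by the square root of two to account for the longer distance. This is a self-contained sketch of the common algorithm, not Whitebox GAT's code or plugin API.

    import math

    NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

    def d8_direction(dem, row, col):
        """Return the (row, col) offset of the steepest-descent neighbour, or None."""
        best, best_slope = None, 0.0
        for dr, dc in NEIGHBOURS:
            r, c = row + dr, col + dc
            if 0 <= r < len(dem) and 0 <= c < len(dem[0]):
                distance = math.sqrt(2) if dr and dc else 1.0
                slope = (dem[row][col] - dem[r][c]) / distance
                if slope > best_slope:
                    best, best_slope = (dr, dc), slope
        return best   # None indicates a pit or flat cell, handled by DEM preprocessing

    dem = [[9.0, 8.0, 7.0],
           [8.0, 6.0, 4.0],
           [7.0, 5.0, 2.0]]
    print(d8_direction(dem, 1, 1))   # (1, 1): the centre cell drains toward the lowest corner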
4658176
https://en.wikipedia.org/wiki/Media%20Delivery%20Index
Media Delivery Index
The Media Delivery Index (MDI) is a set of measures that can be used to monitor both the quality of a delivered video stream and the system margin of IPTV systems, by providing an accurate measurement of jitter and delay at the network (Internet Protocol, IP) level, which are the main causes of quality loss. Identifying and quantifying such problems in these networks is key to maintaining high-quality video delivery and to providing indications that warn system operators with enough advance notice to allow corrective action. The Media Delivery Index is typically displayed as two numbers separated by a colon: the Delay Factor (DF) and the Media Loss Rate (MLR). Context The Media Delivery Index (MDI) may be able to identify problems caused by: Time distortion If packets are delayed by the network, some packets arrive in bursts with interpacket delays shorter than when they were transmitted, while others are delayed such that they arrive with greater delay between packets than when they were transmitted from the source. This time difference between when a packet actually arrives and the expected arrival time is defined as packet jitter or time distortion. A receiver displaying the video at its nominal rate must accommodate the varying arrival times of the input stream by buffering the data that arrives early and ensuring that enough data is already stored to cover possible delays in the received data (for this reason the buffer is filled before display begins). Similarly, the network infrastructure (switches, routers, …) uses buffers at each node to avoid packet loss. These buffers must be sized appropriately to handle network congestion. Packet delays can be caused by multiple factors, among them the way traffic is routed through the infrastructure and possible differences between link speeds in the infrastructure. Moreover, some methods for delivering Quality of Service (QoS) using packet metering algorithms may intentionally hold back packets to meet the quality specifications of the transmission. Packet loss Packets may be lost due to buffer overflows or environmental electrical noise that creates corrupted packets. Even small packet loss rates result in a poor video display. Description Packet delay variation and packet loss have been shown to be the key characteristics in determining whether a network can transport good quality video. These features are represented as the Delay Factor (DF) and the Media Loss Rate (MLR), and they are combined to produce the Media Delivery Index (MDI), which is displayed as DF:MLR. Components The different components of the Media Delivery Index (MDI) are explained in this section. Delay Factor (DF) The Delay Factor is a temporal value given in milliseconds that indicates how much time is required to drain the virtual buffer at a given network node at a specific time. In other words, it is a time value indicating how many milliseconds' worth of data the buffers must be able to contain in order to eliminate time distortions (jitter). It is computed as packets arrive at the node and is displayed/recorded at regular intervals (typically one second). It is calculated as follows: 1. At every packet arrival, the difference between the bytes received and the bytes drained is calculated; this determines the MDI virtual buffer depth: Δ = bytes received − bytes drained. 2. 
Over a time interval, the difference between the minimum and maximum values of Δ is taken and then divided by the media rate: DF = (max Δ − min Δ) / media rate. Maximum acceptable DF: 9–50 ms Media Loss Rate (MLR) The Media Loss Rate is the number of media packets lost over a certain time interval (typically one second). It is computed by subtracting the number of media packets received during an interval from the number of media packets expected during that interval and scaling the value to the chosen time period (typically one second): MLR = (media packets expected − media packets received) / time interval. Maximum acceptable channel zapping MLR: 0 Maximum acceptable average MLR: SDTV: 0.004 VOD: 0.004 HDTV: 0.0005 It must be said that the maximum acceptable MLR depends on the implementation. For channel zapping, a channel is generally viewed for a brief period, so the viewer would be bothered if any packet loss occurred. In this case the maximum acceptable MLR is 0, as stated before, because any greater value would mean the loss of one or more packets in a small viewing timeframe (after the zap time). Use Generally, the Media Delivery Index (MDI) can be used to install, modify or evaluate a video network following these steps: Identify, locate, and address any packet loss issues using the Media Loss Rate. Identify and measure jitter margins using the Delay Factor. Establish an infrastructure monitor for both MDI components to analyze any possible scenarios of interest. Given these results, measures must be taken to solve the problems found in the network. Some of them are: redefining system specifications, modifying the network components in order to meet the expected quality requirements (or number of users), etc. Other parameters Other parameters may also be desired in order to troubleshoot concerns identified with the MDI and to aid in system configuration and monitoring. Some of them are: Network Utilization. Tracking the instantaneous, minimum, and maximum overall network utilization is needed to verify that sufficient raw bandwidth is available for a stream on a network. A high utilization level is also an indicator that localized congestion is likely due to queue behavior in network components. The DF provides a measure of the results of congestion on a given stream. Video stream statistics such as: Instantaneous Flow Rate (IFR) and Instantaneous Flow Rate Deviation (IFRD). The measured IFR and IFRD confirm a stream's nominal rate and, if not constant over time, give insight into how a stream is being corrupted. Average Rate in Mbit/s. This measure indicates whether the rate of the stream being analyzed conforms to its specified rate over a measurement time. This is the longer-term measurement of IFR. Stream Utilization in percent of network bandwidth. This measure indicates how much of the available network bandwidth is being consumed by the stream being analyzed. References External links ITU IPTV Focus Group IPTV industry resources Internet protocols Digital television Streaming television
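The two calculations defined above can be written out in a short Python sketch; the arrival trace, packet size and media rate below are made-up illustrative numbers.

    def delay_factor(arrivals, media_rate_bytes_per_s):
        """arrivals: list of (timestamp_s, packet_size_bytes) over one measurement interval."""
        t0 = arrivals[0][0]
        received = 0
        deltas = []
        for t, size in arrivals:
            received += size
            drained = media_rate_bytes_per_s * (t - t0)   # virtual buffer drains at the media rate
            deltas.append(received - drained)             # virtual buffer depth
        return (max(deltas) - min(deltas)) / media_rate_bytes_per_s * 1000.0   # milliseconds

    def media_loss_rate(expected_packets, received_packets, interval_s=1.0):
        return (expected_packets - received_packets) / interval_s   # lost media packets per second

    arrivals = [(0.000, 1316), (0.010, 1316), (0.035, 1316), (0.040, 1316)]
    print(round(delay_factor(arrivals, media_rate_bytes_per_s=131600), 1), "ms DF")
    print(media_loss_rate(expected_packets=400, received_packets=399), "packets/s MLR")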
190901
https://en.wikipedia.org/wiki/Sibelius%20%28scorewriter%29
Sibelius (scorewriter)
Sibelius is a scorewriter program developed and released by Sibelius Software Limited (now part of Avid Technology). It is the world's largest selling music notation program. Beyond creating, editing and printing music scores, Sibelius can also play the music back using sampled or synthesised sounds. It produces printed scores, and can also publish them via the Internet for others to access. Less advanced versions of Sibelius at lower prices have been released, as have various add-ons for the software. Named after the Finnish composer Jean Sibelius, the company was founded in April 1993 by twin brothers Ben and Jonathan Finn to market the eponymous music notation program they had created. It went on to develop and distribute various other music software products, particularly for education. In addition to its head office in Cambridge and subsequently London, Sibelius Software opened offices in the US, Australia and Japan, with distributors and dealers in many other countries worldwide. The company won numerous awards, including the Queen's Award for Innovation in 2005. In August 2006 the company was acquired by Avid, to become part of its Digidesign division, which also manufactures the digital audio workstation Pro Tools. In July 2012, Avid announced plans to divest its consumer businesses, closed the Sibelius London office, and removed the original development team, despite extensive protests on Facebook and elsewhere. Avid subsequently recruited some new programmers to continue development of Sibelius, and Steinberg hired most of the former Sibelius team to create a competing software, Dorico. History Origins Sibelius was originally developed by British twins Jonathan and Ben Finn for the Acorn Archimedes computer under the name 'Sibelius 7', not as a version number, but reminiscent of Sibelius' Symphony No 7. The Finns said they could not remember why they used Jean Sibelius' name, but it was probably because he was also ‘a Finn' (i.e. Finnish), as well as being one of their favourite composers. Development in assembly language on the RISC OS started in 1986 after they left school, and continued while they were at Oxford and Cambridge universities, respectively. Both were music students, and said they wrote the program because they did not like the laborious process of writing music by hand. The program was released to the public in April 1993 on 3.5-inch floppy disk. It required considerably less than 1 MB of memory (as its files only occupied a few KB per page of music), and the combination of assembly language and the Archimedes' ARM processor meant that it ran very quickly. No matter how long the score, changes were displayed almost instantly. A unique feature of the Sibelius GUI at that time was the ability it gave the user to drag the entire score around with the mouse, offering a bird's eye of the score, as distinct from having to use the QWERTY input keyboard arrow keys, or equivalent, to scroll the page. The first ever user of Sibelius was the composer and engraver Richard Emsley, who provided advice on music engraving prior to the start of development, and beta tested the software before its release. The first concert performance from a Sibelius score was of an orchestral work by David Robert Coleman, copied by Emsley. The first score published using Sibelius was Antara by George Benjamin, also copied by Emsley, and published by Faber Music. Other early adopters included composer John Rutter, conductor Michael Tilson Thomas, and publisher Music Sales. 
As a killer application for the niche Acorn platform, Sibelius rapidly dominated the UK market. It also sold in smaller numbers in a few other countries, restricted by the availability of Acorn computers. 'Lite' versions were subsequently released, and these were successful in UK schools, where Acorns were widely used. Expansion In September 1998, the first version for Windows was released as 'Sibelius', with the version number reset to 1.0. A Mac version 1.2 was released a few months later, and the company thereafter used conventional version numbers for both platforms across subsequent upgrades. Scores created on one platform could be opened on the other, and were backward compatible. To produce these versions, the software was completely rewritten in C++, while retaining most of the original's functionality and user interface with numerous enhancements. The original Acorn names 'Sibelius 6' and 'Sibelius 7' were later re-used to denote versions 6 and 7 of Sibelius for Windows/Mac. Releasing Sibelius for more widely available computers brought it to a worldwide market, particularly the US, where Sibelius Software had opened an office in late 1996. Following the break-up of Acorn Computers shortly after Sibelius' Windows release, no further Acorn versions were developed. Sibelius Software later opened an office in Australia, also serving New Zealand, where Sibelius was widely used. In August 2006, Sibelius Software Ltd was acquired by Avid Technology, an American manufacturer of software and hardware for audio and video production. Avid continued publishing Sibelius as a stand-alone notation product, as well as integrating it with some of its existing software products. In July 2012, Avid announced plans to divest itself of its other consumer businesses, closed the Sibelius London office, and laid off the original development team, amid an outpouring of user protest, then recruited a new team of programmers to continue Sibelius development in Montreal, Canada and Kyiv, Ukraine. Timeline 1986: Founders Jonathan and Ben Finn start designing Sibelius 7 for Acorn computers. 1993: Sibelius Software founded to sell Sibelius 7 and related computer hardware/software in the UK. Early customers include Europe's largest publisher Music Sales, choral composer John Rutter, and the Royal Academy of Music. Sibelius 6 (educational version) also launched. 1994: Distribution in Europe, Australia and New Zealand commences. Sibelius 7 Student (educational version) launched. 1995: German versions of Sibelius launched. 1996: US office opened in California. Junior Sibelius (primary school program) launched. 1998: Sibelius for Windows launched worldwide. Company ceases selling hardware to concentrate on core software business. 1999: Sibelius for Mac, PhotoScore and Scorch launched. Sibelius forms US subsidiary, creating the Sibelius Group, which now has 25 employees. Quester VCT invests. 2000: Sibelius Internet Edition launched, and adopted for Internet publishing by leading European publishers Music Sales and Boosey & Hawkes. SibeliusMusic.com and Sibelius Notes (initially called Teaching Tools) launched. 2001: World's largest sheet music publisher Hal Leonard also adopts Sibelius Internet Edition. Sibelius Group reaches 50 employees. 2002: Sibelius is first major music program for Mac OS X. Company acquires music software company MIDIworks. 2003: Revenues beat competitor MakeMusic Inc. by 20%, confirming Sibelius as world market leader. Starclass, Instruments, G7 and G7music.net launched. 
Sibelius Group commences distributing Musition and Auralia. Sibelius in Japanese launched, distributed by Yamaha. 2004: Compass, Kontakt Gold, Sibelius Student Edition, Sibelius in French and Spanish launched. Company acquires SequenceXtra. Sibelius software used in more than 50% of UK secondary schools. 2005: Australian subsidiary formed after acquiring Australian distributor. Company reaches 75 employees. Wins prestigious Queen's Award for Innovation. Releases Rock & Pop Collection of sounds. Commences distributing O-Generator. 2006: Groovy Music and Coloured Keyboard launched. Sibelius Software bought by Avid Technology. 2007: Japanese office opened. 2012: Avid closes Sibelius' London office and lays off original development team, sparking the 'Save Sibelius' campaign. 2014: First release of a Sibelius version (7.5) by the new development team. 2018: Sibelius First (free, entry-level product), Sibelius (formerly Sibelius First) and Sibelius Ultimate (formerly Sibelius) launched together with a new year-based versioning system. 2021: Sibelius for iPad and iPhone is released Features Core functionality Sibelius' main function is to help create, edit and print musical scores. It supports virtually all music notations, enabling even the most complex of modern orchestral, choral, jazz, pop, folk, rock and chamber music scores to be engraved to publication quality. Further, it allows scores to be played back or turned into MIDI or audio files, e.g. to create a CD. A built-in sample player and a large range of sampled sounds are included. Sibelius supports any MIDI device, and allows Virtual Studio Technology (VST) and Audio Units plug-ins to be used as playback instruments, giving users access to third-party sample libraries. Score playback can also be synchronised to video, or to DAW software via the ReWire standard. By default, Sibelius plays a brief passage from a Jean Sibelius symphony as it launches, a feature that can be disabled in the application's Preferences if desired. Each version has used a different excerpt; e.g. Sibelius 7 appropriately uses the main theme from Sibelius' 7th Symphony. In Version 7.0, Avid rebuilt Sibelius as a 64 bit application, replacing the menu navigation system of previous versions with a Ribbon interface in the process. This met with considerable user resistance, however the Ribbon remains integral to the current GUI. Add-ons Add-ons for Sibelius that are currently or have previously been available include: Sound libraries such as Note Performer, Vienna Symphonic Library, Kontakt, Garritan, and Mark of the Unicorn's (MOTU) Symphonic Instrument, which can be added as Manual Sound Sets in the Playback Devices options from the Sibelius Play tab. Extra plug-in features. These are usually free of charge, and often created by Sibelius users, the most prolific of whom has been Bob Zawalich. Myriad's PDF to MusicXML transcribing application PDFtoMusic. Neuratron's Music OCR program PhotoScore (scanning), which can be used to scan and create a Sibelius score from printed music and PDF documents. A lite version is bundled with Sibelius. Neuratron's AudioScore, also bundled in a lite version, which claims to be able to turn singing or an acoustic instrument performance into a score, though many users have complained that this does not work. AudioScore currently holds a two-star rating on cnet.com. QWERTY Keyboards such as Logic Keyboard. Keyboard covers such as KB Covers. 
Mobile device VNC controllers such as iPad Sibelius Wizard and Sibelius Control for iPad, allowing the user to control Sibelius wirelessly via shortcuts set up within the Preferences. Cloud publishing Sibelius users can publish their scores directly from the software via the Internet using desktops, laptops or iPads. Anyone else using software called Sibelius Scorch (free for web browsers, charged for on iPads) can then view these scores, play them back, transpose them, change instruments, or print them from the web browser version. ScoreExchange.com is a website where any Sibelius user can upload scores they have composed, arranged or transcribed with Sibelius, so that anyone can access the music. The site began in 2001 as SibeliusMusic.com, and by June 2011 had amassed nearly 100,000 scores. The iPad version of Scorch also includes a store containing over 250,000 scores from publishers including Music Sales and Hal Leonard. Sibelius Scorch is used in the websites of various music publishers and individual musicians. Publishers can license the Sibelius Internet Edition for commercial online publishing. In October 2017, Scorch was replaced by Sibelius Cloud Publishing, which provides publishers with an API to automate the publishing and selling of digital sheet music. It uses the same technology as Scorch to allow Sibelius users to share music online directly from within the program, and addresses compatibility issues. Education There are various education-specific features for Sibelius' large market of schools and universities. The Sibelius Educational Suite includes extensive built-in music teaching materials, and the ability to run and manage multiple copies of the software on a network at discounted educational pricing. In 2012, Sibelius Student was replaced by a new version of Sibelius First. Lite notation based on Sibelius is included in Avid's Pro Tools audio editing software. Network A network license is available for schools, colleges, universities and other multi-client situations. Awards Numerous awards for the software include: 2005 Queen's Awards for Enterprise - Innovation (Sibelius Software Ltd); Parents' Choice Award - Silver Award Winner (Sibelius Student); Parents' Choice Award (Sibelius 3) 2006 Music Industry Awards - Best Music Software (Sibelius 4) See also List of scorewriters Scorewriter List of music software References External links Scorewriters RISC OS software Assembly language software Petitions Social media campaigns Music publishing
568873
https://en.wikipedia.org/wiki/Desmond%20Dekker
Desmond Dekker
Desmond Dekker (16 July 1941 – 25 May 2006) was a Jamaican ska, rocksteady and reggae singer-songwriter and musician. Together with his backing group The Aces (consisting of Wilson James and Easton Barrington Howard), he had one of the earliest international reggae hits with "Israelites" (1968). Other hits include "007 (Shanty Town)" (1967), "It Mek" (1969) and "You Can Get It If You Really Want" (1970). Early life Desmond Adolphus Dacres was born in Saint Andrew Parish (Greater Kingston), Jamaica, on 16 July 1941. Dekker spent his formative years in Kingston. From a young age he regularly attended the local church with his grandmother and aunt. This early religious upbringing, as well as Dekker's enjoyment of singing hymns, led to a lifelong religious commitment. Following his mother's death, he moved to the parish of St. Mary and later to St. Thomas. While at St. Thomas, Dekker embarked on an apprenticeship as a tailor before returning to Kingston, where he became a welder. His workplace singing had drawn the attention of his co-workers, who encouraged him to pursue a career in music. In 1961 he auditioned for Coxsone Dodd (Studio One) and Duke Reid (Treasure Isle), though neither audition was successful. The unsigned vocalist then auditioned for Leslie Kong's Beverley's record label and was awarded his first recording contract. Career Despite achieving a record deal, it was two years before Dekker saw his first record released. Meanwhile, Dekker spotted the talent of Bob Marley, a fellow welder, and brought the youth to Kong's attention. In 1962 "Judge Not" and "One Cup Of Coffee" became the first recorded efforts of Marley, who retained gratitude, respect and admiration for Dekker for the rest of his life. Eventually in 1963 Kong chose "Honour Your Mother and Father" (written by Dekker and the song that Dekker had sung in his Kong audition two years earlier), which became a Jamaican hit and established Dekker's musical career. This was followed by the release of the tracks "Sinners Come Home" and "Labour for Learning". It was during this period that Desmond Dacres adopted the stage-name of Desmond Dekker. His fourth hit, "King of Ska" (backing vocals by The Cherrypies, also known as The Maytals), made him into one of the island's biggest stars. Dekker then recruited four brothers, Carl, Patrick, Clive and Barry Howard, as his permanent backing vocalists to perform with him under the name Desmond Dekker and The Aces. The new group recorded a number of Jamaican hits, including "Parents", "Get Up Edina", "This Woman" and "Mount Zion". The themes of Dekker's songs during the first four years of his career dealt with the moral, cultural and social issues of mainstream Jamaican culture: respect for one's parents ("Honour Your Mother and Father"), religious morality ("Sinners Come Home") and education ("Labour for Learning"). In 1967 he appeared on Derrick Morgan's "Tougher Than Tough", which helped begin a trend of popular songs commenting on the rude boy subculture which was rooted in Jamaican ghetto life where opportunities for advancement were limited and life was economically difficult. Dekker's own songs did not go to the extremes of many other popular rude boy songs, which reflected the violence and social problems associated with ghetto life, though he did introduce lyrics that resonated with the rude boys, starting with one of his best-known songs, "007 (Shanty Town)". 
The song established Dekker as a rude boy icon in Jamaica and also became a favourite dance track for the young working-class men and women of the United Kingdom's mod scene. "007 (Shanty Town)" was a Top 15 hit in the UK and his UK concerts were attended by a large following of mods wherever he played. Dekker continued to release rude boy songs such as "Rude Boy Train" and "Rudie Got Soul", as well as mainstream cultural songs like "It's a Shame", "Wise Man", "Hey Grandma", "Unity", "If It Pays", "Mother's Young Girl", "Sabotage" and "Pretty Africa". Many of the hits from this era came from his debut album, 007 (Shanty Town). In 1968 Dekker's "Israelites" was released, eventually topping the UK Singles Chart in April 1969 and peaking in the Top Ten of the US Billboard Hot 100 in June 1969. Dekker was the first Jamaican artist to have a hit record in the US with Jamaican-style music. That same year saw the release of "Beautiful and Dangerous", "Writing on the Wall", "Music Like Dirt (Intensified '68)" (which won the 1968 Jamaica Independence Festival Song Contest), "Bongo Girl" and "Shing a Ling". 1969 saw the release of "It Mek", which became a hit both in Jamaica and the UK. Dekker also released "Problems" and "Pickney Gal", both of which were popular in Jamaica, although only "Pickney Gal" managed to chart in the UK Top 50. In 1969 Dekker took permanent residency in the UK. 1970s In 1970 Dekker released "You Can Get It If You Really Want", written by Jimmy Cliff, which reached No. 2 in the UK charts. Dekker was initially reluctant to record the track but was eventually persuaded to do so by Leslie Kong. Dekker's version uses the same backing track as Cliff's original. Kong, whose music production skills had been a crucial part of both Dekker's and Cliff's careers, died in 1971, affecting the careers of both artists for a short period of time. In 1972 the rude boy film The Harder They Come was released and Dekker's "007 (Shanty Town)" was featured on the soundtrack along with Cliff's version of "You Can Get It...", as well as other Jamaican artists' hits, giving reggae more international exposure and preparing the way for Bob Marley. In 1975 "Israelites" was re-released and became a UK Top 10 hit for a second time. Dekker had also begun working on new material with the production duo Bruce Anthony in 1974. In 1975 this collaboration resulted in the release of "Sing a Little Song", which charted in the UK Top Twenty; this was to be his last UK hit. 1980s and later The 1980s found Dekker signed to a new label, Stiff Records, an independent label that specialized in punk and new wave acts as well as releases associated with the 2 Tone label, whose acts instigated a short-lived but influential ska revival. He recorded an album called Black & Dekker (1980), which featured his previous hits backed by The Rumour, Graham Parker's backing band and Akrylykz (featuring Roland Gift, later of Fine Young Cannibals). A re-recorded version of "Israelites" was released in 1980 on the Stiff label, followed by other new recordings: Jimmy Cliff's "Many Rivers to Cross" and "Book of Rules". Dekker's next album, Compass Point (1981), was produced by Robert Palmer. Despite declining sales, Dekker remained a popular live performer and continued to tour with The Rumour. In 1984 he was declared bankrupt. Only a single live album was released in the late '80s. In 1990 "Israelites" was used in a Maxell TV advert that became popular and brought the song and artist back to the attention of the general public. 
He collaborated with The Specials on the 1993 album, King of Kings, which was released under Desmond Dekker and The Specials. King of Kings consists of songs by Dekker's musical heroes including Byron Lee; Theophilus Beckford, Jimmy Cliff, and his friend and fellow Kong label artist, Derrick Morgan. He also collaborated on a remix of "Israelites" with reggae artist Apache Indian. In 2003 a reissue of The Harder They Come soundtrack featured "Israelites" and "007 (Shanty Town)". Dekker died of a heart attack on 25 May 2006, at his home in Thornton Heath in the London Borough of Croydon, England, aged 64 and was buried at Streatham Park Cemetery. He was preparing to headline a world music festival in Prague. Dekker was divorced and was survived by his son and daughter. Tribute band The 2006 - 2015 line-up for Dekker's backing band, The Aces, who are still performing tribute concerts, includes: Delroy Williams – backing vocals/M.C. Gordon Mulrain – bass guitarist and session musician (Mulrain, also known as "Innerheart", is co-founder of the British record label Ambiel Music) Aubrey Mulrain – keyboard player and session musician Steve Roberts – guitarist and session musician (also a member of the British band Dubzone) Leroy Green – drums and session musician Stan Samuel – guitarist and session musician Charles Nelson – keyboard player and session musician This particular line-up also recorded with Dekker on some of his later studio sessions in the 1990s. The 2016 - current line up of musicians for Desmond Dekker's band The Aces featuring Delroy Williams & Guests Delroy Williams – Vocals (also featuring guests Winston 'Mr Fix It' Francis and Glenroy Oakley from Greyhound 'Black & White') Gordon Mulrain – bass guitarist and session musician Aubrey Mulrain – keyboard player and session musician Learoy Green – drums, backing vocals and session musician Bryan Campbell - Keyboard player and session musician Steve Baker - Guitarist, backing vocals, peripatetic guitar teacher and session guitarist. Also founder & MD of popular Reggae and Ska Tribute/backing band Zeb Rootz Paul Abraham - Guitarist and backing vocals Discography Albums Studio albums 007 Shanty Town (1967) – Doctor Bird (Desmond Dekker & The Aces) Action! 
(1968) (Desmond Dekker & The Aces) The Israelites (1969) – Pyramid Intensified (1970) – Lagoon You Can Get It If You Really Want (1970) – Trojan The Israelites (1975), Cactus – completely different album from the 1969 release Black And Dekker (1980) – Stiff Compass Point (1981) – Stiff King of Kings with The Specials (1993) – Trojan Records Halfway to Paradise (1999) – Trojan In Memoriam 1941 - CD Album -(2007) Secret Records King of Ska - Red Vinyl ( 2019 ) Burning Sounds Compilation albums This Is Desmond Dekkar (1969) – Trojan Records (UK #27), reissued on CD in 2006 with 19 bonus tracks Double Dekker (1973) – Trojan Dekker's Sweet 16 Hits (1979) – Trojan The Original Reggae Hitsound (1985) – Trojan 20 Golden Pieces of Desmond Dekker (1987) – Bulldog The Official Live and Rare (1987) – Trojan Greatest Hits (1988) – Streetlife The Best of & The Rest of (1990) – Action Replay Music Like Dirt (1992) – Trojan Rockin' Steady – The Best of Desmond Dekker (1992) – Rhino Crucial Cuts (1993) – Music Club Israelites (1994) – Laserlight Action (1995) – Lagoon Voice of Ska (1995) – Emporio Moving On (1996) – Trojan The Israelites (1996) – Marble Arch First Time for a Long Time (1997) – Trojan Desmond Dekker Archive (1997) – Rialto The Writing on the Wall (1998) – Trojan Israelites (1999) – Castle Pie Israelites: The Best Of Desmond Dekker (1963–1971) – Trojan (1999) The Very Best Of (2000) – Jet Set Israelites – Anthology 1963 To 1999 (2001) – Trojan 007 - The Best of Desmond Dekker (2011) – Trojan Live - Live at Dingwalls (2021) – Secret Singles Early solo singles "Honour Your Mother and Father" (1963) – Island (as Desmond Dekker & Beverley's Allstars) "Parents" (1964) – Island "King of Ska" (1964) – Island (as Desmond Dekkar and his Cherry Pies) "Dracula" (1964) – Black Swan (as Desmond Dekkar) Desmond Dekker and the Four Aces "Generosity" (1965) — Island "Get Up Edina" (1965) – Island "This Woman" (1965) – Island "Mount Zion" (1965) – Island Desmond Dekker and the Aces "007 (Shanty Town)" (1967) – Doctor Bird "Wise Man" (1967) – Pyramid "007 Shanty Town" (1967) – Pyramid "It's a Shame" (1967) – Pyramid "Rudy Got Soul" (1967) – Pyramid "Rude Boy Train" (1967) – Pyramid "Mother's Young Gal" (1967) – Pyramid "Unity" (1967) – Pyramid "Sabotage" (1967) – Pyramid "It Pays" (1967) – Pyramid "Beautiful and Dangerous" (1967) – Pyramid "Bongo Gal" (1967) – Pyramid "To Sir, With Love" (1967) – Pyramid "Mother Pepper" (1967) – Pyramid "Hey Grandma" (1967) – Pyramid "Music Like Dirt (Intensified '68)" (1967) – Pyramid "It Mek" (1968) – Pyramid "Israelites" (1968) – Pyramid (UK #1, US #9) "Christmas Day" (1968) – Pyramid "It Mek" (1969) – Pyramid (UK #7) "Pickney Gal" (1969) – Pyramid (UK #42) Later solo singles "You Can Get It If You Really Want" (1970) – Trojan (UK #2) "The Song We Used to Sing" (1970) – Trojan "Licking Stick" (1971) – Trojan "It Gotta Be So" (1972) – Trojan "Beware" (1972) – Rhino "Sing a Little Song" (1973) – Rhino "Everybody Join Hands" (1973) – Rhino "Busted Lad" (1974) – Rhino "Israelites (re-recording)" (1975) – Cactus (UK #10) "Sing a Little Song" (1975) – Cactus (UK #16) "Roots Rock" (1977) – Feelgood "Israelites (new mix)" (1980) – Stiff "Please Don't Bend" (1980) – Stiff "Many Rivers to Cross" (1980) – Stiff "We Can and Shall" (1981) – Stiff "Book of Rules" (1982) – Stiff "Hot City" (1983) – Stiff "Jamaica Ska" (1993) – Trojan References External links Official website "Reggae legend Desmond Dekker dies", BBC News, 26 May 2006 "Rockin' Steady: The Best Of Desmond Dekker" (Rhino 
1992) "Desmond Dekker Came First" – tribute and Q&A with Delroy Williams, Complicated Fun, 2 June 2006 David Katz, "Desmond Dekker" (obituary), The Guardian, 27 May 2006 1941 births 2006 deaths First-wave ska groups Jamaican emigrants to the United Kingdom Rocksteady musicians Jamaican reggae musicians Jamaican songwriters 20th-century Jamaican male singers Island Records artists Trojan Records artists Uni Records artists Stiff Records artists People from Saint Andrew Parish, Jamaica Burials at Streatham Park Cemetery
1842942
https://en.wikipedia.org/wiki/IT%20Service%20Management%20Forum
IT Service Management Forum
The IT Service Management Forum (itSMF) is an independent, international, not-for-profit organization of IT service management (ITSM) professionals worldwide. Around the operation of IT services, the itSMF collects, develops and publishes best practices, supports education and training, discusses the development of ITSM tools, initiates advisory ideas about ITSM and holds conventions. The itSMF is concerned with promoting ITIL and best practices in IT service management, and has a strong interest in the international ISO/IEC 20000 standard. The itSMF publishes books covering various aspects of service management through a process of endorsing them as part of the itSMF Library. History For a time, itSMF UK handled the international coordination. With a growing number of national chapters, a true international umbrella organization was needed, and itSMF International was created in 2004. International organization and activities Typical activities in the national chapters include partnering in conferences of other organizations (e.g. the Gartner "Business Intelligence & Information Management Summit 2013" in Australia) and conducting studies, either on their own or together with well-known research organizations (e.g. "Drive Service Management Adjustments With Peer Comparisons" from itSMF USA together with Forrester Research, Inc.). German chapters The German chapter has published three books: "ITIL in the Public Sector" ("ITIL in der Öffentlichen Verwaltung"), "Organization Model for IT in the Public Sector" ("Organizationsmodell für die IT in der Öffentlichen Verwaltung") and "Service Level Management in the Public Sector" ("Service Level Management in der Öffentlichen Verwaltung"). Every December the German chapter holds a two-day congress. Topics are presented in different formats, with typical keynotes, four or five parallel user sessions (each presenting three 20-minute talks in a row followed by a joint discussion), and open world café discussions. During the year, typically two one-day meetings, named itSMF Live!, are held on different current topics. A special event for the public sector is FIT-ÖV. Since 2009 the chapter has awarded the ITSM project of the year; the first awarded project was "ITIL 2010" of the Federal Employment Agency (Bundesagentur für Arbeit, Germany). Summary Information technology service management comprises the activities performed by an organization to design, plan, deliver, operate and control information technology services offered to customers. The itSMF's aims are to create and sustain the global itSMF member community, to promote service management practices, and to raise awareness of service management and of how service management professionals benefit the global community. Service management is a customer-focused approach to delivering information technology: it provides value to the customer, focuses on the customer relationship, and provides a framework to structure IT-related tasks and the interactions between IT personnel and clients. Notes External links itSMF International Bracknell Information technology organisations based in the United Kingdom ITIL Organisations based in Berkshire Professional associations based in the United Kingdom Science and technology in Berkshire
5087694
https://en.wikipedia.org/wiki/PHP%20License
PHP License
The PHP License is the software license under which the PHP scripting language is released. The PHP License is designed to encourage widespread adoption of the source code. Redistribution is permitted in source or binary form with or without modifications, with some caveats. Version 3 of PHP used a dual license—PHP 3's source is available under either the PHP License or the GNU General Public License (GPL). This practice was discontinued as of PHP 4, with PHP's developers citing the restrictions on reuse associated with the GPL's copyleft enforcement as being the reason for dropping it. The Zend Engine, the core of the PHP interpreter, is separately licensed under the similar Zend Engine License, which contains similar naming restrictions to the PHP license (applying to the names "Zend" and "Zend Engine"), and a clause requiring advertising materials to mention its use. Criticism The PHP License is an open source license according to the Open Source Initiative, and a non-copyleft free software license according to the Free Software Foundation. The license is GPL-incompatible due to restrictions on the usage of the term PHP. Debian maintainers have had a long-standing discussion (since at least 2005) about the validity of the PHP license. Expressed concerns include that the license "contains statements about the software it covers that are specific to distributing PHP itself", which, for other software than PHP itself therefore would be "false statements". Debian has a specific policy for the license (and requires a statement in debian/copyright file when it is used): "The PHP license must only be used for PHP and PHP add-ons." See also Apache License Software using the PHP license (category) References External links Official PHP License Information Zend Grant Documents Free and open-source software licenses PHP
1467304
https://en.wikipedia.org/wiki/PCMag
PCMag
PC Magazine (shortened as PCMag) is an American computer magazine published by Ziff Davis. A print edition was published from 1982 to January 2009. Publication of online editions started in late 1994 and continues to this day. Overview Editor Bill Machrone wrote in 1985 that "we've distilled the contents of PC Magazine down to the point where it can be expressed as a formula: PC = EP2. EP stands for evaluating products and enhancing productivity. If an article doesn't do one or the other, chances are it doesn't belong in PC Magazine." PC Magazine provides reviews and previews of the latest hardware and software for the information technology professional. Articles are written by leading experts including John C. Dvorak, whose regular column and "Inside Track" feature were among the magazine's most popular attractions. Other regular departments include columns by long-time editor-in-chief Michael J. Miller ("Forward Thinking"), Bill Machrone, and Jim Louderback, as well as: "First Looks" (a collection of reviews of newly released products) "Pipeline" (a collection of short articles and snippets on computer-industry developments) "Solutions" (which includes various how-to articles) "User-to-User" (a section in which the magazine's experts answer user-submitted questions) "After Hours" (a section about various computer entertainment products; the designation "After Hours" is a legacy of the magazine's traditional orientation towards business computing.) "Abort, Retry, Fail?" (a beginning-of-the-magazine humor page which for a few years was known as "Backspace"—and was subsequently the last page). For a number of years in the 1980s PC Magazine gave significant coverage to programming for the IBM PC and compatibles in languages such as Turbo Pascal, BASIC, Assembly and C. Charles Petzold was one of the notable writers on programming topics. History In an early review of the new IBM PC, Byte reported "the announcement of a new magazine called PC: The Independent Guide to the IBM Personal Computer. It is published by David Bunnell, of Software Communications, Inc. ... It should be of great interest to owners of the IBM Personal Computer". The first issue of PC, dated February–March 1982, appeared early that year. (The word Magazine was added to the name with the third issue in June 1982, but not added to the logo until the first major redesign in January 1986). PC Magazine was created by Bunnell, Jim Edlin, and Cheryl Woodard (who also helped David found the subsequent PC World and Macworld magazines). David Bunnell, Edward Currie and Tony Gold were the magazine's co-founders. Bunnell and Currie created the magazine's business plan at Lifeboat Associates in New York, which included, in addition to PC Magazine, explicit plans for publication of PC Tech, PC Week and PC Expositions (PC Expo), all of which were subsequently realized. Tony Gold, a co-founder of Lifeboat Associates, financed the magazine in the early stages. The magazine grew beyond the capital required to publish it, and to solve this problem, Gold sold the magazine to Ziff-Davis, which moved it to New York City. By February 1983 it was published by PC Communications Corp., a subsidiary of Ziff-Davis Publishing Co., and Bunnell and his staff left to form PC World magazine. The first issue of PC featured an interview with Bill Gates, made possible by his friendship with David Bunnell, who was among the first journalists and writers to take an interest in personal computing. 
By its third issue PC was square-bound because it was too thick for saddle-stitch. At first the magazine published new issues every two months, but became monthly as of the August 1982 issue, its fourth. In March 1983 a reader urged the magazine to consider switching to a biweekly schedule because of its thickness, and in June another joked of the dangers of falling asleep while reading PC in bed. Although the magazine replied to the reader's proposal with "Please say you're kidding about the bi-weekly schedule. Please?", after the December 1983 issue reached 800 pages in size, in 1984 PC began publishing new issues every two weeks, with each about 400 pages in size. In January 2008 the magazine dropped back to monthly issues. Print circulation peaked at 1.2 million in the late 1990s. In November 2008 it was announced that the print edition would be discontinued as of the January 2009 issue, but the online version at pcmag.com would continue. By this time print circulation had declined to about 600,000. The magazine had no ISSN until 1983, when it was assigned , which was later changed to . Editor Wendy Sheehan Donnell is the current editor-in-chief of PCMag.com. Prior to this position, Donnell was deputy editor under the previous editor-in-chief, Dan Costa. Costa was editor-in-chief from August 2011 to December 2021. Lance Ulanoff held the position of editor-in-chief from July 2007 to July 2011. Jim Louderback was editor-in-chief before Ulanoff, from 2005, and left when he accepted the position of chief executive officer of Revision3, an online media company. Development and evolution The magazine has evolved significantly over the years. The most drastic change has been the shrinkage of the publication due to contractions in the computer-industry ad market and the easy availability of the Internet, which has tended to make computer magazines less "necessary" than they once were. This is also the primary reason for the November 2008 decision to discontinue the print version. Where once mail-order vendors had huge listing of products in advertisements covering several pages, there is now a single page with a reference to a website. At one time (the 1980s through the mid-1990s), the magazine averaged about 400 pages an issue, with some issues breaking the 500- and even 600-page marks. In the late 1990s, as the computer-magazine field underwent a drastic pruning, the magazine shrank to approximately 300 and then 200 pages. It has adapted to the new realities of the 21st century by reducing its once-standard emphasis on massive comparative reviews of computer systems, hardware peripherals, and software packages to focus more on the broader consumer-electronics market (including cell phones, PDAs, MP3 players, digital cameras, and so on). Since the late 1990s, the magazine has taken to more frequently reviewing Macintosh software and hardware. The magazine practically invented the idea of comparative hardware and software reviews in 1984 with a groundbreaking "Project Printers" issue. For many years thereafter, the blockbuster annual printer issue, featuring more than 100 reviews, was a PC Magazine tradition. PC Magazine was one of the first publications to have a formal test facility called PC Labs. The name was used early in the magazine but it was not until PC Labs was actually built at the magazine's 1 Park Avenue, New York facility that it became a real entity in 1986. William Wong was the first PC Labs Director. 
PC Labs created a series of benchmarks and older versions can still be found on the internet. PC Labs was designed to help writers and editors to evaluate PC hardware and software, especially for large projects like the annual printer edition where almost a hundred printers were compared using PC Labs printer benchmarks. The publication also took on a series of editorial causes over the years, including copy protection (the magazine refused to grant its coveted Editors' Choice award to any product that used copy protection) and the "brain-dead" Intel 80286 (then-editor-in-chief Bill Machrone said the magazine would still review 286s but would not recommend them). PC Magazine was a booster of early versions of the OS/2 operating system in the late 1980s, but then switched to a strong endorsement of the Microsoft Windows operating environment after the release of Windows 3.0 in May 1990. Some OS/2 users accused the magazine of ignoring OS/2 2.x versions and later. (Columnist Charles Petzold was sharply critical of Windows because it was more fragile and less stable and robust than OS/2, but he observed the reality that Windows succeeded in the marketplace where OS/2 failed, so the magazine by necessity had to switch coverage from OS/2 to Windows. In the April 28, 1992 issue PC Magazine observed that the new OS/2 2.0 was "exceptionally stable" compared to Windows 3.x due to "bullet-proof memory protection" that prevented an errant application from crashing the OS, albeit at the cost of higher system requirements.) During the dot-com bubble, the magazine began focusing heavily on many of the new Internet businesses, prompting complaints from some readers that the magazine was abandoning its original emphasis on computer technology. After the collapse of the technology bubble in the early 2000s, the magazine returned to a more traditional approach. See also Macworld PC World DOS Power Tools References External links Archived PC magazines on the Internet Archive Digitized PC magazines on Google Books Monthly magazines published in the United States Video game magazines published in the United States Arabic-language magazines Magazines published in Belgium Biweekly magazines published in the United States Magazines published in Brazil Magazines published in Bulgaria Chinese-language magazines Video game magazines published in China Online computer magazines Defunct computer magazines published in the United States Computer magazines published in the Netherlands Computer game magazines published in the Netherlands English-language magazines Magazines published in Greece Greek-language magazines Home computer magazines Magazines published in Israel Magazines established in 1982 Magazines disestablished in 2009 Magazines published in New York (state) Magazines published in Mexico Online magazines with defunct print editions Portuguese-language magazines Magazines published in Romania Monthly magazines published in Russia Computer magazines published in Russia Video game magazines published in Russia Computer magazines published in Serbia Serbian-language magazines Magazines published in Singapore Spanish-language magazines Computer magazines published in Spain Video game magazines published in Spain
2661301
https://en.wikipedia.org/wiki/Social%20affordance
Social affordance
Social affordance is a type of affordance. It refers to the properties of an object or environment that permit social actions. Social affordance is most often used in the context of a social technology such as Wiki, Chat and Facebook applications and refers to sociotechnical affordances. Social affordances emerge from the coupling between the behavioral and cognitive capacities of a given organism and the objective properties of its environment. Social affordances – or more accurately sociotechnical affordances – refer to reciprocal interactions between a technology application, its users, and its social context. These social interactions include users’ responses, social accessibility and society-related changes. Social affordances are not synonymous with mere factual, statistical frequency; on the contrary, the social normality of primitive forms of coordination can become normative, even in primate societies. A good example clarifies social affordance as follows: “A wooden bench is supposed to have a sit affordance. A hiker who has walked for hours and passes the wooden bench on a walk along small country roads might perceive the sit affordance of the wooden bench as a function of the degree of fatigue. A very tired hiker will sit on the wooden bench but will not lie down (unless the wooden bench also has a lie affordance). A still fit hiker, however, might not even pick up on the sit affordance of the bench and pass it. The wooden bench is in that case no more than a piece of wood with no further meaning.” Affordance Affordance is a term introduced by psychologist James J. Gibson. In his 1979 book "The Ecological Approach to Visual Perception", he writes: “The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill. The verb to afford is found in the dictionary, but the noun affordance is not. I have made it up. I mean by it something that refers to both the environment and the animal in a way that no existing term does. It implies the complementarity of the animal and the environment” Possibilities for motor action — or what Gibson termed affordances — depend on the match between environmental conditions and actors’ physical characteristics. An example can clarify this term: when a person goes through a door, either the person is thin enough or the door is wide enough to let the person get in. Affordances are relational properties; they are neither in the environment nor in the perceiver, but are derived from the ecological relationship between the perceiver and the perceived, so that the perceiver and perceived are logically interdependent. The term has since been adopted in many fields: perceptual psychology, cognitive psychology, environmental psychology, industrial design, human–computer interaction, interaction design, instructional design and artificial intelligence. The evolution of social affordances The term affordance was first used in human-computer interaction in the 1980s by Norman, under the term perceived affordance. Relevant publications were: Gaver's seminal articles on technology affordance in 1991, affordances of media spaces in 1992, affordances for interaction, and then Bradner's notion of social affordance, where social affordances are action possibilities for social behavior. Social affordance was first used in computer-supported collaborative learning experiments. Computer-supported collaborative learning applications and their users’ interactions are the major issue in social affordance. 
Social affordances are properties of computer-supported collaborative learning environments that act as social-contextual facilitators relevant to the learner's social interactions. The term was later applied to other human-computer interactions, including human cognitive responses and visual perception, and it also refers to the ecological relationship between humans and computers. Social affordances in human-robot interaction Social affordance in the context of human-robot interaction refers to the action possibilities offered by the presence of a set of social agents. Though not frequently used in the human-robot interaction community, except for Uyanik et al. (2013), Shu et al. (2016), and Shu et al. (2017), social affordance pertains to many human-robot interaction problems and studies. Social affordances in interaction design Affordance originally refers to designs whose visual appearance gives users insight into how they are used. Software design then adopted the idea of affordance and expanded the concept to social affordance in human-computer interactions: designs offer their users possibilities for action, and the ecological relationship between users and designs builds the social affordance in interaction design. For example, hats are for putting on heads; mailboxes are for letters. Social affordances and human-computer interaction Social affordance in human-computer interaction was first proposed for the effects that arise from human-computer interactions in computer-assisted education software. Its usage later expanded to any social interaction between computer-related applications and their users. In computer-assisted education software, social affordance is evaluated with the following characteristics: Accessibility, Contextualisation, Professional learning, Communities, Learning design, Adaptability. Social affordance also exists in interactive websites. Evaluations of social affordances in websites have focused on some of the following features: Tagging, User Profiles, Activity Streams, Comments, Ratings and Votes. These social affordances allow users to be aware of other users’ opinions, thoughts and feedback and, in so doing, help to engage users and build social connections. Factors that influence social affordance Culture: Culture is an important factor that influences social affordance. Vatrapu supports the view that cultural difference plays a prominent role in influencing social affordance. He designed experiments to compare differences between Western and Chinese users in computer-supported collaborative learning applications. If we consider culture as an attentional system that gives individuals the incentive to recognize some social affordances as worthy of being acted upon, it is no wonder that natives "see" the analogical mappings that make sense of their society as a whole without being able to justify them. Habit: Different habitual backgrounds adapt to different social affordances. However, habits rooted in local subcultures can shift toward a global consumption collective through internet connections. Age: In cellphones, large-display designs for the elderly reflect how social affordance differs with age. Social affordance in website or software design must include age-related considerations. Gender: Analyses of perceptual bias between the two genders exist. With perceptual bias, gender plays a dominant role in social affordance. 
Types of social software There is much social software on the web, summarized as follows: Multi-player online gaming environments / virtual worlds Multi-User Dungeons (MUDs); Massively-Multiplayer Online Games (MMOGs) such as Second Life, Active Worlds, World of Warcraft, EverQuest Discourse facilitation systems Synchronous: Instant messaging (IM, e.g. Windows Live Messenger, AOL Instant Messenger, Yahoo Instant Messenger, Google Chat, ICQ, Skype); chat Asynchronous: Email; bulletin boards; discussion boards; moderated commenting systems (e.g. K5, Slashdot, Plastic) Content management systems Blogs; wikis; document management systems (e.g. Plone); web annotation systems Product development systems SourceForge; Savane; LibreSource Peer-to-peer file sharing systems BitTorrent; Gnutella; Napster; Limewire; Kazaa; Morpheus; eMule; iMesh Selling/purchasing management systems eBay Learning management systems Blackboard/WebCT; ANGEL; Moodle; LRN; Sakai; ATutor; Claroline; Dokeos Relationship management systems MySpace; Friendster; Facebook; Orkut; eHarmony; Bebo Syndication systems List-servs; RSS aggregators Distributed classification systems (“folksonomies”) Social bookmarking: del.icio.us; Digg; Furl Social cataloguing (books): LibraryThing; Neighborrow; Shelfari (music): RateYourMusic.com; Discogs (movies / DVDs): Flixster; DVDSpot; DVD Aficionado (scholarly citations): BibSonomy; Bibster; refbase; CiteULike; Connotea Other: Flickr Social affordance trends in social software Wiki- Web 2.0 technologies (e.g., wikis, blogs, social bookmarking) are being used as tools to support collaborative learning among students. The wiki is an effective form of support for collaborative learning practices: it enables students to engage effectively in learning with peers, and the affordance of the wiki as a learning tool frequently attracts educators' interest. Computer-supported collaborative learning- The term social affordance first appeared in computer-supported collaborative learning. Many articles discuss social affordance in computer-supported collaborative learning, and social affordance for effective computer-supported collaborative learning is a crucial issue for both educators and software developers. Facebook- Cross-case analysis shows that Facebook has pedagogical, social, and technical affordances for teaching and learning. Computer-mediated communication technologies like Facebook aid in building and strengthening social networks. Scholars have noted that affordances imply that the term "social" cannot account for the technological features of a social network platform alone; hence, the level of sociability should be determined by the actual performances of its users. Further studies of social affordance on Facebook may appear in abundance in the future. Blogging- "Three qualities—reader input, fixity, and juxtaposition—can’t claim to be a complete or definitive list of blogging’s journalistic affordances. But they offer enough of a scaffold to let us consider what makes blogs blog-like." Google Search- The social affordance in Google Search induces debates on "what content should appear?" and "how should it rank?". The social affordance in Google Search elicits questions not only on "how should Google Search be used?", but also "what if it is not used?". Web filtering- How is rating and recommendation information made available to the user? 
In the article, Procters suggest that nominal rating, frequency, sequential accountability, distributional accountability, sources, topical coherence as measured by inter-document text similarity, and temporal coherence are key factors relevant to social affordance in web filtering. References External links Knebel, Nicole (2004) "The sociability of computer-mediated communication, coordination and collaboration (CM3C) systems". Human–computer interaction
225779
https://en.wikipedia.org/wiki/Program%20optimization
Program optimization
In computer science, program optimization, code optimization, or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, or to make it capable of operating with less memory storage or other resources, or draw less power. General Although the word "optimization" shares the same root as "optimal", it is rare for the process of optimization to produce a truly optimal system. A system can generally be made optimal not in absolute terms, but only with respect to a given quality metric, which may be in contrast with other possible metrics. As a result, the optimized system will typically only be optimal in one application or for one audience. One might reduce the amount of time that a program takes to perform some task at the price of making it consume more memory. In an application where memory space is at a premium, one might deliberately choose a slower algorithm in order to use less memory. Often there is no "one size fits all" design which works well in all cases, so engineers make trade-offs to optimize the attributes of greatest interest. Additionally, the effort required to make a piece of software completely optimal (incapable of any further improvement) is almost always more than is reasonable for the benefits that would be accrued; so the process of optimization may be halted before a completely optimal solution has been reached. Fortunately, it is often the case that the greatest improvements come early in the process. Even for a given quality metric (such as execution speed), most methods of optimization only improve the result; they have no pretense of producing optimal output. Superoptimization is the process of finding truly optimal output. Levels of optimization Optimization can occur at a number of levels. Typically the higher levels have greater impact, and are harder to change later on in a project, requiring significant changes or a complete rewrite if they need to be changed. Thus optimization can typically proceed via refinement from higher to lower, with initial gains being larger and achieved with less work, and later gains being smaller and requiring more work. However, in some cases overall performance depends on performance of very low-level portions of a program, and small changes at a late stage or early consideration of low-level details can have outsized impact. Typically some consideration is given to efficiency throughout a project (though this varies significantly), but major optimization is often considered a refinement to be done late, if ever. On longer-running projects there are typically cycles of optimization, where improving one area reveals limitations in another, and these are typically curtailed when performance is acceptable or gains become too small or costly. Because performance is part of the specification of a program (a program that is unusably slow is not fit for purpose: a video game running at 60 frames per second is acceptable, but 6 frames per second is unacceptably choppy), performance is a consideration from the start, to ensure that the system is able to deliver sufficient performance, and early prototypes need to have roughly acceptable performance for there to be confidence that the final system will (with optimization) achieve acceptable performance. 
This is sometimes omitted in the belief that optimization can always be done later, resulting in prototype systems that are far too slow (often by an order of magnitude or more) and in systems that ultimately are failures because they architecturally cannot achieve their performance goals, such as the Intel 432 (1981); or ones that take years of work to achieve acceptable performance, such as Java (1995), which only achieved acceptable performance with HotSpot (1999). The degree to which performance changes between prototype and production system, and how amenable it is to optimization, can be a significant source of uncertainty and risk. Design level At the highest level, the design may be optimized to make best use of the available resources, given goals, constraints, and expected use/load. The architectural design of a system overwhelmingly affects its performance. For example, a system that is network latency-bound (where network latency is the main constraint on overall performance) would be optimized to minimize network trips, ideally making a single request (or no requests, as in a push protocol) rather than multiple roundtrips. Choice of design depends on the goals: when designing a compiler, if fast compilation is the key priority, a one-pass compiler is faster than a multi-pass compiler (assuming same work), but if speed of output code is the goal, a slower multi-pass compiler fulfills the goal better, even though it takes longer itself. Choice of platform and programming language occur at this level, and changing them frequently requires a complete rewrite, though a modular system may allow rewrite of only some components; for example, a Python program may have its performance-critical sections rewritten in C. In a distributed system, choice of architecture (client-server, peer-to-peer, etc.) occurs at the design level, and may be difficult to change, particularly if all components cannot be replaced in sync (e.g., old clients). Algorithms and data structures Given an overall design, a good choice of efficient algorithms and data structures, and efficient implementation of these algorithms and data structures comes next. After design, the choice of algorithms and data structures affects efficiency more than any other aspect of the program. Generally data structures are more difficult to change than algorithms, as a data structure assumption and its performance assumptions are used throughout the program, though this can be minimized by the use of abstract data types in function definitions, and keeping the concrete data structure definitions restricted to a few places. For algorithms, this primarily consists of ensuring that algorithms are constant O(1), logarithmic O(log n), linear O(n), or in some cases log-linear O(n log n) in the input (both in space and time). Algorithms with quadratic complexity O(n²) fail to scale, and even linear algorithms cause problems if repeatedly called, and are typically replaced with constant or logarithmic ones if possible. Beyond asymptotic order of growth, the constant factors matter: an asymptotically slower algorithm may be faster or smaller (because simpler) than an asymptotically faster algorithm when they are both faced with small input, which may be the case that occurs in reality. Often a hybrid algorithm will provide the best performance, due to this tradeoff changing with size. A general technique to improve performance is to avoid work. A good example is the use of a fast path for common cases, improving performance by avoiding unnecessary work. 
For example, using a simple text layout algorithm for Latin text, only switching to a complex layout algorithm for complex scripts, such as Devanagari. Another important technique is caching, particularly memoization, which avoids redundant computations. Because of the importance of caching, there are often many levels of caching in a system, which can cause problems from memory use, and correctness issues from stale caches. Source code level Beyond general algorithms and their implementation on an abstract machine, concrete source code level choices can make a significant difference. For example, on early C compilers, while(1) was slower than for(;;) for an unconditional loop, because while(1) evaluated 1 and then had a conditional jump which tested if it was true, while for (;;) had an unconditional jump . Some optimizations (such as this one) can nowadays be performed by optimizing compilers. This depends on the source language, the target machine language, and the compiler, and can be both difficult to understand or predict and changes over time; this is a key place where understanding of compilers and machine code can improve performance. Loop-invariant code motion and return value optimization are examples of optimizations that reduce the need for auxiliary variables and can even result in faster performance by avoiding round-about optimizations. Build level Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, optimizing for specific processor models or hardware capabilities, or predicting branching, for instance. Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization. Compile level Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict. Assembly level At the lowest level, writing code using an assembly language, designed for a particular hardware platform can produce the most efficient and compact code if the programmer takes advantage of the full repertoire of machine instructions. Many operating systems used on embedded systems have been traditionally written in assembler code for this reason. Programs (other than very small programs) are seldom written from start to finish in assembly due to the time and cost involved. Most are compiled down from a high level language to assembly and hand optimized from there. When efficiency and size are less important large parts may be written in a high-level language. With more modern optimizing compilers and the greater complexity of recent CPUs, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step. Much code written today is intended to run on as many machines as possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor, expecting a different tuning of the code. 
Typically today rather than writing in assembly language, programmers will use a disassembler to analyze the output of a compiler and change the high-level source code so that it can be compiled more efficiently, or understand why it is inefficient. Run time Just-in-time compilers can produce customized machine code based on run-time data, at the cost of compilation overhead. This technique dates to the earliest regular expression engines, and has become widespread with Java HotSpot and V8 for JavaScript. In some cases adaptive optimization may be able to perform run time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors. Profile-guided optimization is an ahead-of-time (AOT) compilation optimization technique based on run time profiles, and is similar to a static "average case" analog of the dynamic technique of adaptive optimization. Self-modifying code can alter itself in response to run time conditions in order to optimize code; this was more common in assembly language programs. Some CPU designs can perform some optimizations at run time. Some examples include Out-of-order execution, Speculative execution, Instruction pipelines, and Branch predictors. Compilers can help the program take advantage of these CPU features, for example through instruction scheduling. Platform dependent and independent optimizations Code optimization can be also broadly categorized as platform-dependent and platform-independent techniques. While the latter ones are effective on most or all platforms, platform-dependent techniques use specific properties of one platform, or rely on parameters depending on the single platform or even on the single processor. Writing or producing different versions of the same code for different processors might therefore be needed. For instance, in the case of compile-level optimization, platform-independent techniques are generic techniques (such as loop unrolling, reduction in function calls, memory efficient routines, reduction in conditions, etc.), that impact most CPU architectures in a similar way. A great example of platform-independent optimization has been shown with inner for loop, where it was observed that a loop with an inner for loop performs more computations per unit time than a loop without it or one with an inner while loop. Generally, these serve to reduce the total instruction path length required to complete the program and/or reduce total memory usage during the process. On the other hand, platform-dependent techniques involve instruction scheduling, instruction-level parallelism, data-level parallelism, cache optimization techniques (i.e., parameters that differ among various platforms) and the optimal instruction scheduling might be different even on different processors of the same architecture. Strength reduction Computational tasks can be performed in several different ways with varying efficiency. A more efficient version with equivalent functionality is known as a strength reduction. 
For example, consider the following C code snippet whose intention is to obtain the sum of all integers from 1 to N:

int i, sum = 0;
for (i = 1; i <= N; ++i) {
  sum += i;
}
printf("sum: %d\n", sum);

This code can (assuming no arithmetic overflow) be rewritten using a mathematical formula like:

int sum = N * (1 + N) / 2;
printf("sum: %d\n", sum);

The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient, while retaining the same functionality. See algorithmic efficiency for a discussion of some of these techniques. However, a significant improvement in performance can often be achieved by removing extraneous functionality. Optimization is not always an obvious or intuitive process. In the example above, the "optimized" version might actually be slower than the original version if N were sufficiently small and the particular hardware happens to be much faster at performing addition and looping operations than multiplication and division. In some cases, however, optimization relies on using more elaborate algorithms, making use of "special cases" and special "tricks" and performing complex trade-offs. A "fully optimized" program might be more difficult to comprehend and hence may contain more faults than unoptimized versions. Beyond eliminating obvious antipatterns, some code level optimizations decrease maintainability. Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource. This will usually require a trade-off where one factor is optimized at the expense of others. For example, increasing the size of cache improves run time performance, but also increases the memory consumption. Other common trade-offs include code clarity and conciseness. There are instances where the programmer performing the optimization must decide to make the software better for some operations but at the cost of making other operations less efficient. These trade-offs may sometimes be of a non-technical nature such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success but comes perhaps with the burden of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as pessimizations. Bottlenecks Optimization may include finding a bottleneck in a system: a component that is the limiting factor on performance. In terms of code, this will often be a hot spot (a critical part of the code that is the primary consumer of the needed resource), though it can be another factor, such as I/O latency or network bandwidth. In computer science, resource consumption often follows a form of power law distribution, and the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations. In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context). More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data — the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit, and thus a hybrid algorithm or adaptive algorithm may be faster than any single algorithm. 
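A minimal sketch of such a hybrid, in the spirit of the C snippet above (the cutoff of 16 elements and the function names here are assumptions chosen for illustration, not a prescription): a quicksort that hands small sub-ranges to insertion sort, whose lower constant factors win at small sizes.

#include <stddef.h>

/* Insertion sort: simple, low overhead, fast for small arrays. */
static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; ++i) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
}

/* Hybrid quicksort: recurse while the range is large, switch to insertion sort
   once the range drops below a small cutoff (16 is an assumed, typical value). */
static void hybrid_sort(int *a, size_t n)
{
    while (n > 16) {
        int pivot = a[n / 2];
        size_t i = 0, j = n - 1;
        for (;;) {                      /* Hoare-style partition */
            while (a[i] < pivot) ++i;
            while (a[j] > pivot) --j;
            if (i >= j) break;
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            ++i; --j;
        }
        hybrid_sort(a, j + 1);          /* sort the left part recursively ... */
        a += j + 1;                     /* ... and loop on the right part */
        n -= j + 1;
    }
    insertion_sort(a, n);
}

The cutoff is exactly the point the surrounding text makes: below it, the asymptotically slower algorithm is the faster one in practice.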
A performance profiler can be used to narrow down decisions about which functionality fits which conditions. In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor, due to the latency of each disk read. Caching the result is similarly effective, though also requiring larger memory use. When to optimize Optimization can reduce readability and add code that is used only to improve the performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage. Donald Knuth made the following two statements on optimization: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%" (He also attributed the quote to Tony Hoare several years later, although this might have been an error as Hoare disclaims having coined the phrase.) "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering" "Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing. When deciding whether to optimize a specific part of the program, Amdahl's Law should always be considered: the impact on the overall program depends very much on how much time is actually spent in that specific part, which is not always clear from looking at the code without a performance analysis. A better approach is therefore to design first, code from the design and then profile/benchmark the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization. In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization. Modern compilers and operating systems are so efficient that the intended performance increases often fail to materialize. As an example, caching data at the application level that is again cached at the operating system level does not yield improvements in execution. Even so, it is a rare case when the programmer will remove failed optimizations from production code. It is also true that advances in hardware will more often than not obviate any potential improvements, yet the obscuring code will persist into the future long after its purpose has been negated. Macros Optimization during code development using macros takes on different forms in different languages. In some procedural languages, such as C and C++, macros are implemented using token substitution. Nowadays, inline functions can be used as a type safe alternative in many cases. 
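As a small, hedged illustration of that difference (generic C, not tied to any particular codebase; the names are invented for this sketch), the macro below is expanded by textual substitution, while the inline function offers the same inlining opportunity with type checking and a single evaluation of its argument.

#include <stdio.h>

/* Token-substitution macro: the argument text is pasted in wherever x appears,
   so the parameter is not type checked and may be evaluated more than once. */
#define SQUARE(x) ((x) * (x))

/* Type-safe alternative: the compiler can still inline the call, but the
   argument is evaluated exactly once and must be an int. */
static inline int square(int x)
{
    return x * x;
}

int main(void)
{
    int i = 3;
    printf("%d %d\n", SQUARE(i), square(i));   /* both print 9 */
    /* SQUARE(i++) would expand to ((i++) * (i++)): the argument text is pasted
       in twice, which is undefined behavior; square(i++) evaluates i++ once. */
    return 0;
}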
In both cases, the inlined function body can then undergo further compile-time optimizations by the compiler, including constant folding, which may move some computations to compile time. In many functional programming languages macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which it is claimed makes them safer to use. Since in many cases interpretation is used, that is one way to ensure that such computations are only performed at parse-time, and sometimes the only way. Lisp originated this style of macro, and such macros are often called "Lisp-like macros." A similar effect can be achieved by using template metaprogramming in C++. In both cases, work is moved to compile-time. The difference between C macros on one side, and Lisp-like macros and C++ template metaprogramming on the other side, is that the latter tools allow performing arbitrary computations at compile-time/parse-time, while expansion of C macros does not perform any computation, and relies on the optimizer ability to perform it. Additionally, C macros do not directly support recursion or iteration, so are not Turing complete. As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete. Automated and manual optimization See also :Category:Compiler optimizations Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm. Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or system administrators explicitly change code so that the overall system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations. Since many parameters influence the program performance, the program optimization space is large. Meta-heuristics and machine learning are used to address the complexity of program optimization. Use a profiler (or performance analyzer) to find the sections of the program that are taking the most resources the bottleneck. Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong. Optimizing an unimportant piece of code will typically do little to help the overall performance. When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program. More often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine. After the programmer is reasonably sure that the best algorithm is selected, code optimization can start. Loops can be unrolled (for lower loop overhead, although this can often lead to lower speed if it overloads the CPU cache), data types as small as possible can be used, integer arithmetic can be used instead of floating-point, and so on. (See algorithmic efficiency article for these and other techniques.) 
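As a hedged sketch of manual loop unrolling in C (the unroll factor of four and the function names are arbitrary choices for illustration, and a modern optimizing compiler may perform this transformation on its own), the loop-control overhead is paid once per four elements instead of once per element:

#include <stddef.h>

/* Straightforward version: one addition plus one loop test per element. */
double sum_simple(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

/* Manually unrolled by a factor of four: the loop counter and branch are paid
   once per four elements; the leftover 0-3 elements are handled afterwards. */
double sum_unrolled(const double *a, size_t n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    double s = s0 + s1 + s2 + s3;
    for (; i < n; ++i)      /* remainder loop */
        s += a[i];
    return s;
}

The four partial sums also shorten the floating-point dependency chain, at the cost of rounding slightly differently than strict left-to-right summation.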
Performance bottlenecks can be due to language limitations rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler. Rewriting sections "pays off" in these circumstances because of a general "rule of thumb" known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So, putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed if the correct part(s) can be located. Manual optimization sometimes has the side effect of undermining readability. Thus code optimizations should be carefully documented (preferably using in-line comments), and their effect on future development evaluated. The program that performs an automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers can often tailor the generated code to specific processors. Today, automated optimizations are almost exclusively limited to compiler optimization. However, because compiler optimizations are usually limited to a fixed set of rather general optimizations, there is considerable demand for optimizers which can accept descriptions of problem and language-specific optimizations, allowing an engineer to specify custom optimizations. Tools that accept descriptions of optimizations are called program transformation systems and are beginning to be applied to real software systems such as C++. Some high-level languages (Eiffel, Esterel) optimize their programs by using an intermediate language. Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time. Time taken for optimization Sometimes, the time taken to undertake optimization therein itself may be an issue. Optimizing existing code usually does not add new features, and worse, it might add new bugs in previously working code (as any change might). Because manually optimized code might sometimes have less "readability" than unoptimized code, optimization might impact maintainability of it as well. Optimization comes at a price and it is important to be sure that the investment is worthwhile. An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or else speed up its own operation. A compilation performed with optimization "turned on" usually takes longer, although this is usually only a problem when programs are quite large. In particular, for just-in-time compilers the performance of the run time compile component, executing together with its target code, is the key to improving overall execution speed. References Jon Bentley: Writing Efficient Programs, . 
Donald Knuth: The Art of Computer Programming External links How To Write Fast Numerical Code: A Small Introduction "What Every Programmer Should Know About Memory" by Ulrich Drepper explains the structure of modern memory subsystems and suggests how to utilize them efficiently "Linux Multicore Performance Analysis and Optimization in a Nutshell", presentation slides by Philip Mucci Programming Optimization by Paul Hsieh Writing efficient programs ("Bentley's Rules") by Jon Bentley "Performance Anti-Patterns" by Bart Smaalders Programming language topics Articles with example C code
5912888
https://en.wikipedia.org/wiki/TopStyle
TopStyle
TopStyle was a CSS/XHTML/HTML editor for Microsoft Windows developed by Nick Bradbury and now maintained by Stefan van As. The editor was code-centric rather than WYSIWYG, with integrated support for previews using either the Mozilla Gecko, Internet Explorer Trident, or Apple Inc. Webkit layout engines. The software was available as a commercial version with a trial period. History TopStyle was created by Nick Bradbury. He also created Macromedia HomeSite (then just "HomeSite") in 1995. HomeSite was acquired by Allaire in March 1997, by Macromedia in 2001, and finally (as of 2006) by Adobe Systems. Nick Bradbury left the company to found Bradbury Software in 2003 and created the editor TopStyle and a news aggregator titled FeedDemon. Bradbury Software was acquired by NewsGator Technologies in May 2005. Stefan van As acquired TopStyle in December 2008. Features TopStyle supported HTML, XHTML, and CSS editing in various revisions, along with powerful checking for standards and browser compatibility issues. Checking could optionally cover a variety of levels, from strict W3C CSS specifications to compatibility issues on a wide variety of browsers. This style checker provided detailed warning messages, allowing the user to see whether parts of the HTML or CSS code would be incompatible with a particular browser. The software supported converting deprecated HTML styling to CSS code, converting HTML code to valid XHTML, and checking for orphaned code or pages. TopStyle integrated with HTML Tidy, W3C's validation services, and the Windows application CSE HTML Validator. TopStyle 4 could use CSE HTML Validator to check links. TopStyle also integrated with Adobe Dreamweaver. Syntax highlighting was supported for PHP, ASP, CFML, CSS, XHTML, HTML, JavaScript, and VBScript. Version history TopStyle 3.5 was released on October 15, 2007 (version 3.5.0.9). New features include better support for Windows Vista and browsers released since TopStyle 3.12 (Safari 2, IE7, Firefox 2, Opera 9), Box Spy (tool that shows padding/margins of boxes in the preview window), and many more including many bug fixes and improvements. TopStyle 4 was released on May 31, 2009. New features include (UTF-8) unicode support, live FTP editing, support for browsers released since TopStyle 3.5 (IE8, Firefox 3, Safari 3, and Safari 4), Script Insight for ASP + PHP + ColdFusion, support for iPhone (and iPod touch) webapp development, support for HTML 5, IE8 document compatibility, and many more including many bug fixes and improvements. TopStyle 5 was released on November 30, 2012, adding support for CSS3. TopStyle's development ended in 2016. See also List of HTML editors Comparison of HTML editors Validator References External links Official TopStyle web site TopStyle's web-based forums Nick Bradbury's blog HTML editors Windows text-related software Proprietary software 2003 software
40475850
https://en.wikipedia.org/wiki/Robotics%20Toolbox%20for%20MATLAB
Robotics Toolbox for MATLAB
The Robotics Toolbox is MATLAB toolbox software that supports research and teaching into arm-type and mobile robotics. While the Robotics Toolbox is free software, it requires the proprietary MATLAB environment in order to execute. A subset of functions has been ported to GNU Octave and Python. The Toolbox forms the basis of the exercises in several textbooks. Purpose The Toolbox provides functions for manipulating and converting between datatypes such as vectors, homogeneous transformations, roll-pitch-yaw and Euler angles, axis-angle representation, unit-quaternions, and twists, which are necessary to represent 3-dimensional position and orientation. It also plots coordinate frames, supports Plücker coordinates to represent lines, and provides support for Lie group operations such as logarithm, exponentiation, and conversions to and from skew-symmetric matrix form. As the basis of the exercises in several textbooks, the Toolbox is useful for the study and simulation of: classical arm-type robotics, covering kinematics, dynamics, and trajectory generation. The Toolbox uses a very general method of representing the kinematics and dynamics of serial-link manipulators using Denavit-Hartenberg parameters or modified Denavit-Hartenberg parameters. These parameters are encapsulated in MATLAB objects. Robot objects can be created by the user for any serial-link manipulator; a number of examples are provided for well known robots such as the Puma 560 and the Stanford arm amongst others. Operations include forward kinematics, analytic and numerical inverse kinematics, graphical rendering, manipulator Jacobian, inverse dynamics, forward dynamics, and simple path planning. It can operate with symbolic values as well as numeric, and provides a Simulink blockset. Ground robots, for which it includes standard path planning algorithms (bug, distance transform, D*, and PRM), lattice planning, kinodynamic planning (RRT), localization (EKF, particle filter), map building (EKF) and simultaneous localization and mapping (using an EKF or graph-based method), as well as a Simulink model of a non-holonomic vehicle. Flying quadrotor robots, for which it includes a detailed Simulink model. The Toolbox requires MATLAB, commercial software from MathWorks, in order to operate. See also Robotics software projects Robotics simulator References External links Homepage and downloads GitHub home Toolbox description on Open Hub Free software Robotics software Robotics simulation software Robotics suites
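The Toolbox itself is MATLAB code, but the Denavit-Hartenberg convention it relies on for serial-link kinematics is generic. Purely as a hedged illustration of that convention (not the Toolbox's own API; the function names and the two-link example are invented for this sketch), the following C program builds the standard DH link transform and chains two links to obtain a simple forward-kinematics result:

#include <math.h>
#include <stdio.h>

/* Standard Denavit-Hartenberg link transform: rotation theta about z, translation d
   along z, translation a along x, rotation alpha about x. T is 4x4, row-major. */
static void dh_transform(double theta, double d, double a, double alpha, double T[4][4])
{
    double ct = cos(theta), st = sin(theta);
    double ca = cos(alpha), sa = sin(alpha);
    double M[4][4] = {
        { ct, -st * ca,  st * sa, a * ct },
        { st,  ct * ca, -ct * sa, a * st },
        { 0.0,      sa,       ca,      d },
        { 0.0,     0.0,      0.0,    1.0 }
    };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            T[i][j] = M[i][j];
}

/* C = A * B for 4x4 matrices: chaining link transforms gives the forward kinematics. */
static void mat4_mul(const double A[4][4], const double B[4][4], double C[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            C[i][j] = 0.0;
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void)
{
    /* Hypothetical two-link planar arm: unit link lengths, joints at 30 and 45 degrees. */
    const double pi = 3.14159265358979323846;
    double A1[4][4], A2[4][4], T[4][4];
    dh_transform(pi / 6.0, 0.0, 1.0, 0.0, A1);
    dh_transform(pi / 4.0, 0.0, 1.0, 0.0, A2);
    mat4_mul(A1, A2, T);
    printf("end-effector position: x=%.3f y=%.3f\n", T[0][3], T[1][3]);
    return 0;
}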
3130543
https://en.wikipedia.org/wiki/RightNow%20Technologies
RightNow Technologies
Oracle RightNow is a customer relationship management (CRM) software service for enterprise organizations. It was originally developed by RightNow Technologies, Inc., which was acquired by Oracle Corporation in 2011 in a $1.8 billion deal. History The company was founded in 1997 by Greg Gianforte in Bozeman, Montana. Additional offices were opened in California, New York, New Jersey, Massachusetts, Texas, Illinois, Washington, DC, Colorado, Canada, Europe, Australia and Asia. The company employed over 1000 people in 2011. RightNow's business model was software as a service using cloud computing. Gianforte started the company in 1997 without external capital and created a software product that was initially focused on customer service with an integrated knowledge base. As the company grew, it added marketing and sales functionality, voice automation, customer feedback management, analytics and a social platform to its product, creating a full customer experience suite. In 2006, RightNow Technologies acquired SFA company Salesnet. In 2009, RightNow Technologies acquired social networking company HiveLive. In 2011, the company acquired Q-go for $34 million. Q-go was founded in 1999. It specialized in semantic search service, based on natural language processing technology, providing relevant answers to queries on a company’s Internet website or corporate intranet, formulated in natural sentences or keyword input alike. It integrated automatic statistical reporting of user query behavior for businesses that want to monitor what kinds of questions their customers are asking so they can adjust content to provide the appropriate information for customers and to reduce the load on traditional customer service ports of call, such as call centers and answers by email. The technology has been implemented and deployed in a range of industries, including banking, insurance, pension, telecommunications and logistics, as well as several government agencies. In October 2011, Oracle Corporation announced its intention to acquire RightNow for $1.5 billion, a deal which was completed in January 2012. The main product offered by RightNow Technologies was RightNow CX, a customer experience suite. RightNow CX was divided into RightNow Web Experience, RightNow Social Experience, RightNow Contact Center Experience, and RightNow Engage. See Also Oracle Advertising and Customer Experience (CX) List of acquisitions by Oracle References Computer companies of the United States Software companies based in Montana Companies formerly listed on the Nasdaq Bozeman, Montana CRM software companies American companies established in 1997 Cloud applications Cloud computing providers Oracle acquisitions Search engine software Internet search engines Software companies established in 1997 Software companies disestablished in 2012 2012 mergers and acquisitions Defunct software companies of the United States
48838704
https://en.wikipedia.org/wiki/Artec%203D
Artec 3D
Artec 3D is a developer and manufacturer of 3D scanning hardware and software. The company is headquartered in Luxembourg, with offices also in the United States (Santa Clara, California), China (Shanghai), and Russia (Moscow). Artec 3D's products and services are used in various industries, including engineering, healthcare, media and design, entertainment, education, fashion and historic preservation. In 2013, Artec 3D launched an automated full-body 3D scanning system, Shapify.me, that creates 3D portraits called “Shapies.” Technology 3D scanners capture the geometry of an object and produce a three-dimensional digital model. Artec's 3D scanners are structured light scanners. They operate by projecting light in a pattern, usually in the form of multiple parallel beams, onto an object. By projecting a grid pattern on the object, the scanners are able to capture the deformation or distortion of that pattern from multiple angles and then calculate the distance to specific points on the object using triangulation (a minimal numerical sketch of this step appears at the end of this entry). The three-dimensional coordinates obtained are used to digitally reconstruct the real-world object. Structured light scanners can utilize either blue light or white light; Artec's scanners use both, depending on the model. The choice of light does not impact the processes or concepts behind the technology. Hardware Eva Eva is a handheld color scanner, released in 2012, that can capture and process up to two million points per second. The scanner was designed for the capture of medium to large objects. The device has a scan area of 214 x 148 mm at its closest range and 536 x 371 mm at its furthest, a 3D resolution of up to 0.5 mm and a 3D point accuracy of 0.1 mm. Eva can operate at distances between 0.4 m and 1 m from the object, capturing up to 16 frames per second. Data can be exported as OBJ, PLY, WRL, STL, AOP, ASCII, PTX, E57, or XYZRGB file types. Eva does not require a warm-up period and can be used immediately upon powering on. Spider Spider is a handheld 3D color scanner, released in 2013, designed to capture smaller, complex objects with high resolution and accuracy. The device has a 3D resolution of as high as 0.1 mm and a point accuracy of up to 0.05 mm. Spider does not require markers or manual alignment during post-processing. It requires a 30-minute warm-up period to achieve maximum accuracy. Resulting scans can be exported into a number of file formats, including OBJ and STL. Space Spider Space Spider is a handheld 3D color scanner released in 2015. The Space Spider utilizes a blue LED light source and has a 3D resolution of up to 0.1 mm with 0.05 mm accuracy. It operates at distances between 170 mm and 350 mm from an object. The device was initially developed for use on the International Space Station and incorporates an advanced temperature control system to prevent overheating, a common issue for electronics in space. The scanner requires a warm-up period of three minutes to achieve maximum accuracy and is able to guarantee this precision even after several hours of constant use. Ray Ray is a portable 3D laser scanner designed for capturing large objects and areas in detail, from up to 110 meters away. Released in 2018, Ray produces scans with submillimeter accuracy (up to 0.7 mm) and a minimum of noise, which significantly reduces post-processing time. Artec Ray is suitable for reverse engineering and inspection applications, as well as historical preservation, both indoors and outdoors. 
This compact (under 5 kg) LIDAR solution is mobile, with the internal battery giving users up to 4 hours of onsite scanning. Color is provided via two fully integrated 5 megapixel cameras. Scans are made directly in Artec Studio software, which offers a full range of post-processing tools. Scans can also be exported to Geomagic Design X for additional processing options. It's also possible to control Ray from a distance via an iPhone or iPad with the Artec Remote app (Wifi). Remote allows the user to make previews, select one or multiple scan areas, scan, and save the data directly to an SD card, as well as change scan settings and check battery and scanner status. Shapify Booth The Shapify Booth is an automatic 3D full body scanning booth unveiled in 2014 that contains four of Artec's handheld scanners and a stationary platform. The 3D scanners rotate around a person at 360 degrees to capture 700 surfaces in 12 seconds. The data captured is then automatically turned into a watertight, full-color 3D printable model in approximately five minutes. Shapify Booths can be bought or leased by businesses worldwide. Broadway 3D Broadway is the facial recognition biometric system developed by Artec under the Artec ID brand. The device is equipped with a 3D vision system and differentiates nuanced geometry with accuracy of up to fractions of a millimeter. It requires less than one second for facial recognition and has a registration time of two seconds. Broadway 3D provides a working distance range of 0.8 m to 1.6 m and can recognize up to 60 people per minute. The technology was utilized by the International Airport of Sochi to enhance security leading up to the 2014 Winter Olympics. Leo Leo is an ergonomic, handheld 3D color scanner with automatic, onboard processing, released in 2018. The Leo features a touch screen panel so users can watch in real-time as a 3D replica of the scanned object comes to life. By rotating and zooming the model, the user can see if any areas were missed, thus allowing full coverage in one scan. With a working distance of 0.35 – 1.2 m, Leo is a professional, high-speed scanner, designed for capturing everything from small parts all the way up to large areas such as crime scenes and heavy machinery. It has an angular field of view of 38.5 × 23° and a 160,000 cm3 volume capture zone. Data acquisition speed is up to 3 million points/second. No need for target markers, and Leo can operate effectively in broad daylight or complete darkness, and everything in between. Fully mobile and entirely wireless, no cables needed. SSD memory cards make possible unlimited captures. Built on the NVIDIA® Jetson™ platform, with a TX1 Quad-core ARM® Cortex-A57 MPCore CPU, NVIDIA Maxwell™ 1 TFLOPS GPU with 256 NVIDIA® CUDA® Cores; a built-in 9 DoF inertial system, with accelerometer, gyro and compass, so Leo always understands its physical position and surroundings. Micro Micro is an automated desktop 3D scanner designed for creating digital replicas of very small objects. Released in 2019, Micro's twin color cameras are synchronized with its dual-axis rotation system for scanning objects up to 90mm x 60mm x 60mm in size. Utilizing blue light technology, Micro has a 3D accuracy of up to 10 microns, and exports into popular file formats including STL, OBJ, and PTX. To prepare for scanning, objects are simply mounted upon Micro's scanning platform, the user chooses from a range of preselected scanning paths, or chooses their own, and then scanning can begin. 
A popular choice for quality inspection and reverse engineering of very small objects, Micro can also be used for dentistry and jewelry, and more. Software Artec Studio Artec Studio is a software program for 3D scanning and post-processing. Data is captured and split into several “scans,” which are then processed and fused into a 3D model. Artec Studio includes a fully automatic post-processing mode called “Autopilot,” which prompts users through questions related to the characteristics of the object being scanned and provides the option to be guided through the post-processing pipeline. The Autopilot mode will automatically align scans in a global coordinate system, determine which algorithms to use for post-processing, clean captured data and remove base surfaces. Upon completion, scan data can be directly exported to 3D Systems Geomagic Design X and SOLIDWORKS for further CAD processing. Artec ScanApp Artec ScanApp is a Mac OS X (supports El Capitan & Yosemite) application that allows data to be captured from an Artec Eva 3D Scanner to a Macintosh computer. Data collected with ScanApp can be processed within the software, or exported to a Windows PC for further processing with Artec Studio 11. Artec Scanning SDK Artec Scanning SDK is a Software Development Kit (SDK) that allows for individuals or companies to tailor existing or develop new software applications to work with Artec's handheld 3D scanners. Industries & Applications Artec 3D's handheld scanners and software have been utilized across various industries. Examples of notable industry-specific applications include: Engineering & Manufacturing, to create 3D digital models of: Water pipes by UK water and wastewater provider, Thames Water, in order to assess their condition and prioritize maintenance; The flooring of automobiles for the creation of customized floor mats by Nika Holding, a company that makes custom automobile accessories; Ships and parts for the Dutch Royal Navy, for conducting precise repairs and creating reverse-engineered parts. Healthcare, to create: Protective helmets for infants with Positional Plagiocephaly by The London Orthotic Consultancy; Customized prosthetics and orthotics; Pre- and post-surgery facial masks for patients undergoing aesthetic, plastic and reconstructive operations. Science & Education, to assist in global research and digitally preserve: Fossil remains of a 1.8 million-year-old crocodile, elephant and giant tortoise at the Turkana Basin in northern Kenya by the Turkana Basin Institute and Louise Leakey; 55 models of endangered and extinct birds including the eastern imperial eagle, white-tailed eagle and boreal owl with 3D printing marketplace, Threeding; 500 cultural heritage sites (including the Rani ki vav stepwell in India and the Washington Monument) and the British Museum’s Assyrian Relief collection, in collaboration with international non-profit organization, CyArk; Collections of historical and religious artifacts from the Historical Museum of Stara Zagora, Regional Historical Museums of Varna and Pernik, and the National Museum of Military History (Bulgaria) in collaboration with Threeding; Fossil skeletons of the Homo naledi species in the Dinaledi Chamber of the Rising Star cave system near Johannesburg, Africa, by the University of the Witwatersrand; Cow, deer, fox, bobcat and human skulls to create an interactive skull museum at St. 
Cloud University’s Visualization Lab in Minnesota, allowing students and faculty to examine objects otherwise too fragile to hold; Historical artifacts by 11th and 12th grade students at Mid-Pacific Institute as part of its Museum Studies coursework. Art & Design, to digitally capture: United States President Barack Obama for the creation of the first 3D printed presidential bust; A sculpted head of a dinosaur for the movie Jurassic World, which allowed it to be rescaled as needed for the film; Stephen Colbert’s head, which was cloned for the Wonderful Pistachio commercial that aired during the 2016 Super Bowl; Television host Larry King, former president and chairman of Marvel Comics, Stan Lee, and American singer, songwriter and actress, Christina Milian, for the creation of miniature figurines by CoKreeate. References Companies with year of establishment missing Privately held companies Software companies of Luxembourg 3D imaging
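The triangulation step described in the Technology section of this entry can be sketched numerically. The geometry below is invented for illustration (it is not Artec's calibration model): once one point of the projected pattern has been identified in the camera image, the surface point is recovered as the place where the projector ray and the camera ray come closest to intersecting.

```python
# Minimal sketch of triangulation as used in structured-light scanning.
# The projector/camera positions and ray directions are made-up example values.
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Return the midpoint of closest approach between two rays (origin o, direction d)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Solve for the ray parameters t1, t2 that minimise |(o1 + t1*d1) - (o2 + t2*d2)|.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2.0  # reconstructed 3D surface point

# Hypothetical setup: projector at the origin, camera 0.2 m to its right (the baseline).
projector_origin = np.array([0.0, 0.0, 0.0])
camera_origin = np.array([0.2, 0.0, 0.0])
projector_ray = np.array([0.05, 0.0, 1.0])  # direction of one projected stripe point
camera_ray = np.array([-0.12, 0.0, 1.0])    # direction in which the camera observes it
print(triangulate(projector_origin, projector_ray, camera_origin, camera_ray))
```

Repeating this for every identified point of the pattern, across many frames and viewing angles, yields the point cloud from which the digital model is reconstructed.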
1064879
https://en.wikipedia.org/wiki/Long%20John%20Baldry
Long John Baldry
John William "Long John" Baldry (12 January 1941 – 21 July 2005) was an English-Canadian blues singer, musician and voice actor. In the 1960s, he was one of the first British vocalists to sing the blues in clubs and shared the stage with many British musicians including the Rolling Stones and the Beatles. Before achieving stardom, Rod Stewart and Elton John were members of bands led by Baldry. He enjoyed pop success in 1967 when "Let the Heartaches Begin" reached No. 1 in the UK, and in Australia where his duet with Kathi McDonald "You've Lost That Lovin' Feelin'" reached No. 2 in 1980. Baldry lived in Canada from the late 1970s until his death. He continued to make records there, and do voiceover work. Two of his best-known voice roles were as Dr. Ivo Robotnik in Adventures of Sonic the Hedgehog, and as KOMPLEX in Bucky O'Hare and the Toad Wars. Early life John Baldry was born at East Haddon Hall, East Haddon, Northamptonshire, which was serving as a makeshift wartime maternity ward, on 12 January 1941, the son of William James Baldry (1915–1990), a Metropolitan Police constable, and his wife, Margaret Louisa née Parker (1915–1989); their usual address was recorded as 18 Frinton Road, East Ham. His height was noticeable even as a baby, giving him the "Long" nickname during his childhood. His early life was spent in Edgware, Middlesex, where he attended Camrose Primary School until the age of 11, after which he attended Downer Grammar School (now Canons High School). Blues bands of the 1960s Baldry grew exceptionally tall, resulting in the nickname Long John. Baldry appeared quite regularly in the early 1960s in the Gyre & Gimble coffee lounge, around the corner from Charing Cross railway station, and at the Bluesville R. & B. Club, Manor House, London, also Klooks Kleek (Railway Hotel, West Hampstead). He appeared weekly for some years at Eel Pie Island on the Thames at Twickenham and also appeared at the Station Hotel in Richmond, one of the Rolling Stones' earliest venues. In the early 1960s, he sang with Alexis Korner's Blues Incorporated, with whom he recorded the first British blues album, R&B from the Marquee, in 1962. At stages, Mick Jagger, Jack Bruce and Charlie Watts were members of this band while Keith Richards and Brian Jones played on stage, although none played on the R&B from the Marquee album. When The Rolling Stones made their debut at the Marquee Club in July 1962, Baldry put together a group to support them. Later, Baldry was the announcer introducing the Stones on their U.S.-only live album Got Live If You Want It!, in 1966. Baldry became friendly with Paul McCartney after a show at the Cavern Club in Liverpool in the early 1960s, leading to an invitation to sing on one of the Beatles' 1964 TV specials, Around The Beatles. In the special, Baldry performs "Got My Mojo Workin'" and a medley of songs with members of the Vernons Girls trio; in the latter, the Beatles are shown singing along in the audience. In 1963, Baldry joined the Cyril Davies R&B All Stars with Nicky Hopkins playing piano. He took over in 1964 after the death of Cyril Davies, and the group became Long John Baldry and his Hoochie Coochie Men, featuring Rod Stewart on vocals and Geoff Bradford on guitar. Stewart was recruited when Baldry heard him busking a Muddy Waters song at Twickenham Station after Stewart had been to a Baldry gig at Eel Pie Island. Long John Baldry became a regular fixture on Sunday nights at Eel Pie Island from then onwards, fronting a series of bands. 
In 1965, the Hoochie Coochie Men became Steampacket with Baldry and Stewart as male vocalists, Julie Driscoll as the female vocalist and Brian Auger on Hammond organ. After Steampacket broke up in 1966, Baldry formed Bluesology featuring Reg Dwight on keyboards and Elton Dean, later of Soft Machine, as well as Caleb Quaye on guitar. Dwight, when he began to record as a solo artist, adopted the name Elton John, his first name from Elton Dean and his surname from John Baldry. Following the departure of Elton John and Bluesology, Baldry was left without a backup band. Attending a show in the Mecca at Shaftesbury Avenue, he saw a five-piece harmony group called Chimera from Plymouth, Devon, who had recently turned professional. He approached them after their set and said how impressed he was by their vocal harmonies and that they would be ideal to back him on the cabaret circuit he was currently embarked on. This they did. Solo artist In 1967, he recorded a pop song "Let the Heartaches Begin" that went to number one in Britain, followed by a 1968 top 20 hit titled "Mexico", which was the theme of the UK Olympic team that year. "Let the Heartaches Begin" made the lower reaches of the Billboard Hot 100 in the US. Baldry was still touring, doing gigs with Bluesology, but the band refused to back his rendition of "Let the Heartaches Begin", and left the stage while he performed to a backing-tape played on a large Revox tape-recorder. In 1971, John and Stewart each produced one side of It Ain't Easy which became Baldry's most popular album and made the top 100 of the US album chart. The album featured "Don't Try to Lay No Boogie Woogie on the King of Rock and Roll" which became his most successful song in the US. Baldry's first tour of the US was at this time. The band included Micky Waller, Ian Armitt, Pete Sears, and Sammy Mitchell. Stewart and John would again co-produce his 1972 album Everything Stops For Tea which also made the lower reaches of the US album charts. The same year, Baldry worked with ex-Procol Harum guitarist Dave Ball. Baldry had mental health problems and was institutionalised for a brief time in 1975. The 1979 album Baldry's Out was recorded in Canada, which he released at Zolly's Forum; a nightclub in Oshawa, underneath the Oshawa Shopping Centre. In a 1997 interview with a German television programme Baldry claimed to be the last person to see singer Marc Bolan before Bolan's death on 16 September 1977, having conducted an interview with the fellow singer for an American production company, he says, just before Bolan was killed in a car accident. Move to Canada, later career After time in New York City and Los Angeles in 1978, Baldry lived in Dundas, Ontario from 1980 to 1984 before settling in Vancouver, British Columbia, where he became a Canadian citizen. He toured the west coast, as well as the US Northwest. Baldry also toured the Canadian east. In 1976, he teamed with Seattle singer Kathi McDonald who became part of the Long John Baldry Band, touring Canada and the US. In 1979 the pair recorded a version of The Righteous Brothers' "You've Lost That Lovin' Feelin", following which McDonald became part of his touring group for two decades. The song entered the US Billboard charts and was a No. 2 hit in Australia in 1980. He last recorded with the Stony Plain label. His 1997 album Right To Sing The Blues won a Juno Award in the Blues Album of the Year category in the Juno Awards of 1997. 
In 2003 Baldry headlined the British Legends of Rhythm and Blues UK tour, alongside Zoot Money, Ray Dorset and Paul Williams. He played Columbus, Ohio, on 19 July 2004, at Barristers Hall with guitarist Bobby Cameron, in a show produced by Andrew Myers. They played to a small group, some came from Texas. Two years previously the two had a 10-venue sell-out tour of Canada. Baldry's final UK Tour as 'The Long John Baldry Trio' concluded with a performance on Saturday 13 November 2004 at The King's Lynn Arts Centre, King's Lynn, Norfolk, England. The trio consisted of LJB, Butch Coulter on harmonica and Dave Kelly on slide guitar. Personal life Baldry was openly gay during the early 1960s, at least amongst his friends and industry peers. However, he did not make a formal public acknowledgement of this until the 1970s. This was possibly because until 1967 in Britain, male homosexual conduct was still a criminal offence punishable by imprisonment. In 1968, Elton John tried to commit suicide after relationship problems with a woman, Linda Woodrow. His lyricist Bernie Taupin and Baldry found him, and Baldry talked him out of marrying her. The song "Someone Saved My Life Tonight" from Captain Fantastic and the Brown Dirt Cowboy was about the experience. The name "Sugar Bear" in the song is a reference to Baldry. Baldry had a brief relationship with lead-guitarist of The Kinks, Dave Davies. In 1978 his then-upcoming album Baldry's Out announced his formal coming out, and he addressed sexuality problems with a cover of Canadian songwriter Barbra Amesbury's "A Thrill's a Thrill". Death Baldry died 21 July 2005, in Vancouver General Hospital, of a severe chest infection. He was survived by his partner, Felix "Oz" Rexach, a brother, Roger, and a sister, Margaret. Discography Studio albums Live albums Compilations Singles EPs Other recordings Performances on other albums (1960) 6 Out Of 4 ~ The Thames-Side Four - Folklore (F-EP/1) Live recording of the group with LJB on guitar and vocals. (1962) R&B from the Marquee ~ Blues Incorporated - Decca (ACL 1130) Baldry provides lead vocals on three tracks including 'How Long, How Long Blues.''' (1970) The First Supergroup ~ The Steampacket - BYG Records (529.706) Recorded December 1965 the album features tracks with LJB on lead vocals (1971) The First Rhythm & Blues Festival In England ~ Various Artists - BYG Records (529.705) Recorded live in Feb 1964 Baldy sings '2.19' and 'Mojo Working (1971) Every Picture Tells A Story ~ Rod Stewart - Mercury (6338 063) LJB provides backing vocals on the title track and 'Seems Like A Long Time'. (1972) Mar Y Sol: The First International Puerto Rico Pop Festival ~ Various Artists - Atco Records (SD 2-705) Baldry sings a live version of the self-penned 'Bring My Baby Back To Me'. (1975) Dick Deadeye: Original Soundtrack ~ Various Artists - GM Records (GML 1018) Soundtrack to the animated film of the same name with LJB taking lead vocals on three tracks. (1975) Sumar Á Sýrlandi ~ Stuðmenn - Egg (EGG 0000 1/13) Rare Icelandic album. Baldry sings the track 'She Broke My Heart'. (1996) Bone, Bottle, Brass or Steel ~ Doug Cox - Malahat Mountain LJB performs 'Good Morning Blues' accompanied by Doug Cox. (1998) You Got The Bread... We Got The Jam! ~ Schuld & Stamer - Blue Streak Records (BSCD98001) Long John joins with acoustic blues duo Schuld & Stamer on several tracks. (2002) For Fans Only! ~ Genya Ravan - AHA Music Features a rare duet with Ravan and Baldry on 'Something's Got A Hold On Me'. Recorded in 1978. 
(2011) The Definitive Steampacket Recordings ~ The Steampacket - Nasty Productions Features two previously unreleased Steampacket tracks with LJB on lead vocals. (2013) Radio Luxembourg Sessions: 208 Rhythm Club - Vol. 5 ~ Various Artists - Vocalion (CDNJT 5319) October 1961 recording. LJB sings 'Every Day I Have The Blues' (2013) Radio Luxembourg Sessions: 208 Rhythm Club - Vol. 6 ~ Various Artists - Vocalion (CDNJT 5320) October 1961 recording of LJB singing 'The Glory Of Love'.TV specials'(1965) Rod The Mod (1974) The Gospel According To Long John (1985) Long John Baldry: Rockin' The Blues (1987) Long John Baldry At The Maintenance Shop (1993) Long John Baldry In Concert (1993) Leverkusen Blues Festival '93: The Long John Baldry Band (1993) Waterfront Blues Festival: Long John Baldry (1997) Leverkusen Blues Festival '97: Long John Baldry & Tony Ashton (1998) Café Campus Blues with Long John Baldry (2001) Happy Birthday Blues: Long John Baldry & Friends (2007) Long John Baldry: In The Shadow Of The Blues Filmography Film Television Video games Theatre Bibliography Bob Brunning (1986) Blues: The British Connection, London: Helter Skelter, 2002, Bob Brunning, The Fleetwood Mac Story: Rumours and Lies - Omnibus Press, 2004, foreword by B.B.King Dick Heckstall-Smith (2004) The safest place in the world: A personal history of British Rhythm and blues, Clear Books, - First Edition: Blowing The Blues - Fifty Years Playing The British Blues Christopher Hjort Strange brew: Eric Clapton and the British blues boom, 1965-1970, foreword by John Mayall, Jawbone (2007), Paul Myers: Long John Baldry and the Birth of the British Blues, Vancouver 2007 - GreyStone Books Harry Shapiro Alexis Korner: The Biography'', Bloomsbury Publishing PLC, London 1997, Discography by Mark Troster References External links Official website replaced with archived version Long John Baldry Website Archive Musical Tree ~ JohnBaldry.com (Baldry band memberships) Archived version of page Profile at GTA agency recovered archived version of site London (Ontario) Free Press report of death published 23 July 2005 Long John Baldry ~ VH1 profile [ Long John Baldry: Biography ~ AllMusic.com] Billboard report on Baldry's death 22 July 2005 Long John Baldry and The Marquee Club It Ain't Easy: Long John Baldry and the Birth of the British Blues (Paperback) Long John Baldry: In the Shadow of the Blues (documentary) 1941 births 2005 deaths 20th-century English male actors 20th-century English singers 21st-century English male actors 21st-century English singers 20th-century Canadian guitarists 20th-century Canadian male singers 20th-century British guitarists 20th-century British male singers 21st-century Canadian male singers 21st-century Canadian guitarists British rhythm and blues boom musicians English blues singers English buskers English male singers English male voice actors Canadian male voice actors Canadian gay actors Canadian gay musicians Infectious disease deaths in British Columbia Juno Award for Blues Album of the Year winners LGBT singers from Canada English gay musicians LGBT singers from the United Kingdom Naturalized citizens of Canada English blues guitarists English male guitarists Canadian blues guitarists Canadian male guitarists English emigrants to Canada All-Stars (band) members Steampacket members Blues Incorporated members People educated at Canons High School Bluesology members Audiobook narrators Stony Plain Records artists 20th-century LGBT people 21st-century LGBT people
198977
https://en.wikipedia.org/wiki/Qmail
Qmail
qmail is a mail transfer agent (MTA) that runs on Unix. It was written, starting December 1995, by Daniel J. Bernstein as a more secure replacement for the popular Sendmail program. Originally license-free software, qmail's source code was later dedicated to the public domain by the author. Features Security When first published, qmail was the first security-aware mail transport agent; since then, other security-aware MTAs have been published. The most popular predecessor to qmail, Sendmail, was not designed with security as a goal, and as a result has been a perennial target for attackers. In contrast to sendmail, qmail has a modular architecture composed of mutually untrusting components; for instance, the SMTP listener component of qmail runs with different credentials from the queue manager or the SMTP sender. qmail was also implemented with a security-aware replacement for the C standard library, and as a result has not been vulnerable to stack and heap overflows, format string attacks, or temporary file race conditions. Performance When it was released, qmail was significantly faster than Sendmail, particularly for bulk mail tasks such as mailing list servers. qmail was originally designed as a way of managing large mailing lists. Simplicity At the time of qmail's introduction, Sendmail configuration was notoriously complex, while qmail was simple to configure and deploy. Innovations qmail encourages the use of several innovations in mail (some originated by Bernstein, others not): Maildir Bernstein invented the Maildir format for qmail, which splits individual email messages into separate files. Unlike the de facto standard mbox format, which stored all messages in a single file, Maildir avoids many locking and concurrency problems, and can safely be used over NFS (a minimal sketch of this delivery scheme appears at the end of this entry). qmail also delivers to mbox mailboxes. Wildcard mailboxes qmail introduced the concept of user-controlled wildcards. Out of the box, mail addressed to "user-wildcard" on qmail hosts is delivered to separate mailboxes, allowing users to publish multiple mail addresses for mailing lists and spam management. qmail also introduced the Quick Mail Transport Protocol (QMTP) and the Quick Mail Queuing Protocol (QMQP). Modularity qmail is nearly a completely modular system in which each major function is separated from the other major functions. It is easy to replace any part of the qmail system with a different module as long as the new module retains the same interface as the original. Controversy Security reward and Georgi Guninski's vulnerability In 1997, Bernstein offered a US$500 reward for the first person to publish a verifiable security hole in the latest version of the software. In 2005, security researcher Georgi Guninski found an integer overflow in qmail. On 64-bit platforms, in default configurations with sufficient virtual memory, the delivery of huge amounts of data to certain qmail components may allow remote code execution. Bernstein disputes that this is a practical attack, arguing that no real-world deployment of qmail would be susceptible. Configuration of resource limits for qmail components mitigates the vulnerability. On November 1, 2007, Bernstein raised the reward to US$1000. At a slide presentation the following day, Bernstein stated that there were 4 "known bugs" in the ten-year-old qmail-1.03, none of which were "security holes". He characterized the bug found by Guninski as a "potential overflow of an unchecked counter". 
"Fortunately, counter growth was limited by memory and thus by configuration, but this was pure luck." On May 19, 2020, a working exploit for Guninski's vulnerability was published by Qualys, but the exploit's authors state they were denied the reward because the exploit relies on additional environmental restrictions. Frequency of updates The core qmail package has not been updated for many years. New features were initially provided by third-party patches, of which the most important at the time were brought together in a single meta-patch called netqmail. Standards compliance qmail was not designed to replace Sendmail, and does not behave exactly as Sendmail did in all situations. In some cases, these differences in behavior have become grounds for criticism. For instance, qmail's approach to bounce messages (a format called QSBMF) differs from the standard format of delivery status notifications specified by the IETF in RFC 1894 (since advanced to draft standard as RFC 3464) and recommended in the SMTP specification. Furthermore, some qmail features have been criticized for introducing mail forwarding complications; for instance, qmail's "wildcard" delivery mechanism and security design prevent it from rejecting messages from forged or nonexistent senders during SMTP transactions. In the past, these differences may have made qmail behave differently when abused as a spam relay, though modern spam delivery techniques are less influenced by bounce behavior. Copyright status qmail was released to the public domain in November 2007. Until November 2007, qmail was license-free software, with permission granted for distribution in source form or in pre-compiled form (a "var-qmail package") only if certain restrictions (primarily involving compatibility) were met. This unusual licensing arrangement made qmail non-free according to some guidelines (such as the DFSG), and was a cause of controversy. qmail is the only broadly deployed public domain software message transfer agent (MTA). See also qpsmtpd djbdns List of mail servers Comparison of mail servers References External links Official qmail website, maintained by the author. qmail-LDAP-UI – qmail-LDAP-UI is a Web-based User Administration tool Qmailtoaster – Distributes RPM files for appropriate distros to install qmail quickly and easily. Has a wiki and mailing list. pkgsrc qmail and qmail-run, a pair of easy-to-install cross-platform qmail source packages included in pkgsrc The qmail section of FAQTS, an extensive knowledgebase built by qmail users qmailWiki is a relatively new wiki about qmail, hosted by Inter7 J.M.Simpson qmail site Useful Information about qmail, including explanations and patches, by John M. Simpson (Updated regularly) Unofficial qmail Bug and Wishlist qmail queue messages deliver (PHP) qmail-distributions – qmail patches combined into easy to use distributions Roberto's qmail notes – An English/Italian howto on qmail and related software. A big patch is included. Updated regularly. Message transfer agents Free email server software Free software programmed in C Public-domain software with source code Email server software for Linux
2370025
https://en.wikipedia.org/wiki/Mozilla%20Corporation
Mozilla Corporation
The Mozilla Corporation (stylized as moz://a) is a wholly owned subsidiary of the Mozilla Foundation that coordinates and integrates the development of Internet-related applications such as the Firefox web browser, by a global community of open-source developers, some of whom are employed by the corporation itself. The corporation also distributes and promotes these products. Unlike the non-profit Mozilla Foundation, and the Mozilla open source project, founded by the now defunct Netscape Communications Corporation, the Mozilla Corporation is a taxable entity. The Mozilla Corporation reinvests all of its profits back into the Mozilla projects. The Mozilla Corporation's stated aim is to work towards the Mozilla Foundation's public benefit to "promote choice and innovation on the Internet." A MozillaZine article explained:The Mozilla Foundation will ultimately control the activities of the Mozilla Corporation and will retain its 100 percent ownership of the new subsidiary. Any profits made by the Mozilla Corporation will be invested back into the Mozilla project. There will be no shareholders, no stock options will be issued and no dividends will be paid. The Mozilla Corporation will not be floating on the stock market and it will be impossible for any company to take over or buy a stake in the subsidiary. The Mozilla Foundation will continue to own the Mozilla trademarks and other intellectual property and will license them to the Mozilla Corporation. The Foundation will also continue to govern the source code repository and control who is allowed to check in. Establishment The Mozilla Corporation was established on August 3, 2005, to handle the revenue-related operations of the Mozilla Foundation. As a non-profit, the Mozilla Foundation is limited in terms of the types and amounts of revenue. The Mozilla Corporation, as a taxable organization (essentially, a commercial operation), does not have to comply with such strict rules. Upon its creation, the Mozilla Corporation took over several areas from the Mozilla Foundation, including coordination and integration of the development of Firefox and Thunderbird (by the global free software community) and the management of relationships with businesses. With the creation of the Mozilla Corporation, the rest of the Mozilla Foundation narrowed its focus to concentrate on the Mozilla project's governance and policy issues. In November 2005, with the release of Mozilla Firefox 1.5, the Mozilla Corporation's website at mozilla.com was unveiled as the new home of the Firefox and Thunderbird products online. In 2006, the Mozilla Corporation generated $66.8 million in revenue and $19.8 million in expenses, with 85% of that revenue coming from Google for "assigning [Google] as the browser's default search engine, and for click-throughs on ads placed on the ensuing search results pages." Notable events In March 2006, Jason Calacanis reported a rumor on his blog that Mozilla Corporation gained $72M during the previous year, mainly thanks to the Google search box in the Firefox browser. The rumor was later addressed by Christopher Blizzard, then a member of the board, who wrote on his blog that, "it's not correct, though not off by an order of magnitude." Two years later, TechCrunch wrote: "In return for setting Google as the default search engine on Firefox, Google pays Mozilla a substantial sum – in 2006, the total amounted to around $57 million, or 85% of the company's total revenue. 
The deal was originally going to expire in 2006, but was later extended to 2008 and then ran through 2011." The deal was extended again another 3 years, until November 2014. Under the deal, Mozilla was to have received from Google another $900 million ($300 million annually), nearly 3 times the previous amount. In August 2006, Microsoft invited Mozilla employees to collaborate to ensure compatibility of Mozilla software with then upcoming Windows Vista. Microsoft offered to host one-to-one at the new open-source facility at Microsoft headquarters in Redmond, Wash. Mozilla accepted the offer. In March 2014, Mozilla came under some criticism after it appointed Brendan Eich as its new Chief Executive Officer (CEO). In 2008, Eich had made a $1,000 contribution in support of California Proposition 8, a ballot initiative that barred legal recognition of same-sex marriages in California. Three of six Mozilla board members reportedly resigned over the choice of CEO, though Mozilla said the resigning board members had "a variety of reasons" and reasserted its continued commitment to LGBT equality, including same-sex marriage. On April 1, the online dating site OkCupid started displaying visitors using Mozilla Firefox a message urging them to switch to a different web browser, pointing out that 8% of the matches made on OkCupid are between same-sex couples. On April 3, Mozilla announced that Eich had decided to step down as CEO and also leave the board of Mozilla Foundation. This, in turn, prompted criticism from some commentators who criticized the pressure that led Eich to resign. For example, Conor Friedersdorf argued in The Atlantic that "the general practice of punishing people in business for bygone political donations is most likely to entrench powerful interests and weaken the ability of the powerless to challenge the status quo." In April 2014, Chris Beard, the former chief marketing officer of Mozilla, was appointed interim CEO. Beard was named CEO on July 28 of the same year. On February 27, 2017, Mozilla acquired the bookmark manager and suggestion service Pocket. In accordance with Mozilla's history of operating as "open by default" and based on comments by Mozilla chief business officer Denelle Dixon-Thayer that Pocket would "become part of the Mozilla open source project", it was reported that Pocket would become open source. Prior to the acquisition, the startup behind Pocket operated it as a closed source, commercial service, and Mozilla published the source code that added a "Save to Pocket" feature to Firefox as open source. , Pocket remains closed source, while the extension remains open source. In February 2017, Mozilla dissolved its IoT "Connected Devices" initiative, firing around 50 employees, to focus on "Emerging Technologies" like AR, VR and Servo/Rust. On August 29, 2019, Mozilla and Chris Beard jointly announced that 2019 will be Beard's last year as CEO of Mozilla. In December, Mitchell Baker became the interim CEO, before being named CEO in April 2020. In January 2020, Mozilla fired 70 employees after the new revenue streams could not deliver the expected revenue quickly enough. In August 2020, Mozilla announced restructuring that will close down Mozilla operations in Taipei, Taiwan, and reduce Mozilla's workforce in the United States, Canada, Europe, Australia, and New Zealand. All together, about 250 people will be let go with severance packages and ~60 people will be reassigned to different projects or teams. 
Mozilla is "reducing investment in some areas such as developer tools, internal tooling, and platform feature development" and reorganizing "security/privacy products" to prioritize revenue-generating projects. Shortly after the announcement of staff cuts, Mozilla insiders leaked information that the Google search deal will be extended until 2023 instead of expiring in 2020, meaning the corporation financial state is stable. In December 2020, Mozilla closed its headquarters office in Mountain View, citing reduced need for office space due to pandemic. The title of HQ office went to the San Francisco office. Affiliations Google The Mozilla Corporation's relationship with Google has been noted in the popular press, especially with regard to their paid referral agreement. Mozilla's original deal with Google to have Google Search as the default web search engine in the browser expired in 2011, but a new deal was struck, where Google agreed to pay Mozilla just under a billion dollars over three years in exchange for keeping Google as its default search engine. The price was driven up due to aggressive bidding from Microsoft's Bing and Yahoo!'s presence in the auction as well. Despite the deal, Mozilla Firefox maintains relationships with Bing, Yahoo!, Yandex, Baidu, Amazon.com and eBay. The 2007 release of the anti-phishing protection in Firefox 2 in particular raised considerable controversy: Anti-phishing protection, enabled by default, is based on a list updated twice hourly from Google's servers. The browser also sends a cookie with each update request. Internet privacy advocacy groups have expressed concerns surrounding Google's possible uses for this data, especially since Firefox's privacy policy states that Google may share (non-personally identifying) information gathered through safe browsing with third parties, including business partners. Following Google CEO Eric Schmidt's comments in December 2009 regarding privacy during a CNBC show, Asa Dotzler, Mozilla's director of community development suggested that users use the Bing search engine instead of Google search. Google also promoted Firefox through YouTube until the release of Google Chrome. Yahoo In November 2014, Mozilla signed a five-year partnership with Yahoo!, making Yahoo! Search the default search engine for Firefox browsers in the US. With the release of Firefox Quantum on November 17, 2017, Google became the default search engine again. Microsoft Microsoft's head of Australian operations, Steve Vamos, stated in late 2004 that he did not see Firefox as a threat and that there was not significant demand for the feature-set of Firefox among Microsoft's users. Microsoft Chairman Bill Gates has used Firefox, but has commented that "it's just another browser, and IE [Microsoft's Internet Explorer] is better". A Microsoft SEC filing on June 30, 2005 acknowledged that "competitors such as Mozilla offer software that competes with the Internet Explorer Web browsing capabilities of our Windows operating system products." The release of Internet Explorer 7 was fast tracked, and included functionality that was previously available in Firefox and other browsers, such as tabbed browsing and RSS feeds. Despite the cold reception from Microsoft's top management, the Internet Explorer development team maintains a relationship with Mozilla. They meet regularly to discuss web standards such as extended validation certificates. 
In 2005, Mozilla agreed to allow Microsoft to use its Web feed logo in the interest of common graphical representation of the Web feeds feature. In August 2006, Microsoft offered to help Mozilla integrate Firefox with the then-forthcoming Windows Vista, an offer Mozilla accepted. In October 2006, as congratulations for a successful ship of Firefox 2, the Internet Explorer 7 development team sent a cake to Mozilla. As a nod to the browser wars, some jokingly suggested that Mozilla send a cake back along with the recipe, in reference to the open-source software movement. The IE development team sent another cake on June 17, 2008, upon the successful release of Firefox 3, again on March 22, 2011, for Firefox 4, and yet again for the Firefox 5 release. In November 2007, Jeff Jones (a "security strategy director" in Microsoft's Trustworthy Computing Group) criticized Firefox, claiming that Internet Explorer experienced fewer vulnerabilities and fewer higher severity vulnerabilities than Firefox in typical enterprise scenarios. Mozilla developer Mike Shaver discounted the study, citing Microsoft's bundling of security fixes and the study's focus on fixes, rather than vulnerabilities, as crucial flaws. In February 2009, Microsoft released Service Pack 1 for version 3.5 of the .NET Framework. This update also installed Microsoft .NET Framework Assistant add-on (enabling ClickOnce support). The update received media attention after users discovered that the add-on could not be uninstalled through the add-ons interface. Several hours after the website Annoyances.org posted an article regarding this update, Microsoft employee Brad Abrams posted in his blog Microsoft's explanation for why the add-on was installed, and also included detailed instructions on how to remove it. However, the only way to get rid of this extension was to modify manually the Windows Registry, which could cause Windows systems to fail to boot up if not done correctly. On October 16, 2009, Mozilla blocked all versions of Microsoft .NET Framework Assistant from being used with Firefox and from the Mozilla Add-ons service. Two days later, the add-on was removed from the blocklist after confirmation from Microsoft that it is not a vector for vulnerabilities. Version 1.1 (released on June 10, 2009 to the Mozilla Add-ons service) and later of the Microsoft .NET Framework Assistant allows the user to disable and uninstall in the normal fashion. Firefox was one of the twelve browsers offered to European Economic Area users of Microsoft Windows from 2010 – see BrowserChoice.eu. IRS audit The Internal Revenue Service opened an audit of the Mozilla Foundation's 2004-5 revenues in 2008, due to its search royalties, and in 2009, the investigation was expanded to the 2006 and 2007 tax years, though that part of the audit was closed. As Mozilla does not derive at least a third of its revenue from public donations, it does not automatically qualify as a public charity. In November 2012, the audit was closed after finding that the Mozilla Foundation owed a settlement of $1.5 million to the IRS. People Most Mozilla Foundation employees transferred to the new organization at Mozilla Corporation's founding. Board of directors The board of directors is appointed by and responsible to Mozilla Foundation's board. In March 2014, half the board members resigned. 
The remaining board members are: Mitchell Baker, Executive Chairwoman & CEO Julie Hanna Karim Lakhani Management team The senior management team includes: Mitchell Baker, Executive Chairwoman & CEO Lindsey Shepard, CMO Dave Camp, Chief, Core Products Roxi Wen, CFO Amy Keating, Chief Legal Officer Notable current employees Gian-Carlo Pascutto Julian Seward Tantek Çelik Notable past employees Brendan Eich, former CEO of Mozilla Corporation, inventor of JavaScript John Lilly, former CEO of Mozilla Corporation Christopher Blizzard, former Open Source Evangelist (now at Facebook) John Resig, former Technical Evangelist (now at Khan Academy) Mike Schroepfer, former VP of Engineering (now at Facebook) Mike Shaver, former VP of Technical Strategy (now at Facebook) Window Snyder, former Chief Security Officer (now at Square, Inc.) Ellen Siminoff, former board member, also President and CEO of Shmoop University and Chair of Efficient Frontier Li Gong, president of Mozilla Corporation until 2015 Doug Turner, former Engineering Director (now at Google Chrome) Andreas Gal, former CTO (now at Silk Labs) Johnny Stenback, former Engineering Director (now at Google Chrome) John Hammink See also References External links Mozilla Corp. in 12 simple items 2005 establishments in California Companies based in Mountain View, California Free software companies Mozilla Software companies based in the San Francisco Bay Area Software companies established in 2005 Software companies of the United States de:Mozilla#Mozilla Corporation
8136190
https://en.wikipedia.org/wiki/Chicken%20Invaders
Chicken Invaders
Chicken Invaders is a series of shoot 'em up video games created by Greek indie developer Konstantinos Prouskas. Beginning with the release of the first game, Chicken Invaders, in 1999, it is one of the longest-running series of video games developed in Greece. All six main entries in the series have been developed by Prouskas' InterAction Studios, and have been released for Microsoft Windows, OS X, Linux, iOS, Windows Phone, and Android platforms. The main theme of the games is a battle between a lone combat spacecraft and a technologically advanced race of space-faring chickens, who are intent on subjugating (and later destroying) Earth. The games make heavy use of humor, especially in the form of parodies of Galaxian, Star Wars, Space Invaders and Star Trek. Games Main series Chicken Invaders Chicken Invaders is the first game of the Chicken Invaders franchise, released on 24 July 1999. It is a fixed shooter, reminiscent of the original Space Invaders (of which the game is a parody), in which the player controls a lone spacecraft by moving it horizontally across the bottom of the screen and firing at swarms of invading extraterrestrial chickens. The game features both single-player and two-player game modes. Known also as the DX Edition, it is a reimagined, 3D version of an earlier, unfinished DOS version made in 1997. Chicken Invaders features weapon power-ups that resemble gift boxes, used by the player to upgrade their weapons. Enemy chickens drop eggs as projectile weapons, which the player needs to avoid. When chickens are defeated, they drop drumsticks, which the player can collect to earn a missile, a special weapon used to clear the screen of enemies. The game features an infinite number of levels. Each level features 10 waves, and at the end of each level players fight a boss, which must be defeated in order to advance (or warp) to a new system. The gameplay is endless, bringing in wave after wave, until the player has finally lost all of their lives, in which case the game is over. The difficulty increases each time players advance to a new chapter; the enemies move or fall faster, and objects like asteroids will also move faster. Chicken Invaders is the only game in the franchise not to feature "holiday editions", a trend which would become a staple of later entries. It is also the only game in the series to be an exclusive Microsoft Windows title, and as such the only game to not have been ported to other platforms. Chicken Invaders: The Next Wave Chicken Invaders: The Next Wave (known also as Chicken Invaders 2) is the second game in the main Chicken Invaders series, released on 22 December 2002. The player again takes command of the same lone spacecraft of the previous installment and must eliminate the chicken infestation of the Solar System. The game is considered a major improvement over its predecessor, featuring a variety of unique waves of chickens with different types of enemies and bosses, as well as allowing the player to move their ship in both the horizontal and vertical directions. It is also the first game in the franchise to introduce multiple weapons. The game departs from the "endless" format, instead featuring a final boss confrontation and an ending. As in the previous entry in the series, The Next Wave can be played by one or two players. The Next Wave features eleven chapters, each one corresponding to a gravitationally rounded object of the Solar System. 
Players start the campaign moving inwards from Pluto, with the final chapter of the game taking place on the Sun, where players confront the Mother-Hen Ship. Each Chapter consists of 10 waves of attacking hostiles, resulting in a total of 110 different levels. In all the waves, chickens attack the player's ship by dropping eggs, which the player needs to avoid. Players can collect different items to help them in-game, such as new primary weapons which players earn by collecting parcels, and power-ups, which can be used to upgrade the current primary weapon. Weapons can be upgraded up to eleven levels. As in Chicken Invaders, players can collect chicken drumsticks and roasts to receive missiles, powerful weapons that can wipe out an entire wave of enemies. Incentives are given however for not using the missiles, as players may receive special bonuses (e.g. new weapons or extra firepower) if they opt to take on the wave head on. The Next Wave was the first game in the franchise to feature a special version of the game (an edition) based around certain holidays. Chicken Invaders: The Next Wave - Christmas Edition was released on 25 November 2003, and was initially intended to be available during the months leading up to Christmas, however, its popularity caused InterAction studios to release this, and all subsequent Editions as standalone games. The Christmas Edition of The Next Wave replaces game graphics, sounds and music with festive versions and elements related to the Christmas holiday. Both versions of the game received Remastered versions, which were released on 20 December 2011 (the Christmas Edition) and on 26 January 2012 (the regular edition). These versions feature HD graphics, orchestral high-quality music, automatic fire function, progress saving, new languages, etc. The game and its Christmas Edition were initially released for Microsoft Windows, and have since been ported to MacOS, Linux, iOS, Android and Windows Phone. Chicken Invaders: Revenge of the Yolk Chicken Invaders: Revenge of the Yolk (known also as Chicken Invaders 3) is the third game in the Chicken Invaders main series, released on 11 November 2006 as a Christmas Edition, with the regular version releasing on 20 January 2007. The game is similar to The Next Wave in many respects, with the addition of new game mechanics and featuring 4-player cooperative play. It was intended as the final game in a planned Chicken Invaders trilogy, until the release of Ultimate Omelette in 2010. Revenge of the Yolk features a plot and cutscenes, while waves and enemies are more diverse than in the previous entries. The game has 120 waves, in 10 levels across 12 systems in the galaxy. New game mechanics added include unlockable content in the form of additional features which first have to be unlocked by finishing the game, that can either make the game easier or harder, as well as a variety of cosmetic changes. Furthermore, Revenge of the Yolk was the first game to introduce an overheating mechanic, designed to discourage players from holding down the fire button. The game features 30 bonuses, while weapons can be powered up to 12 power levels (the last one being a secret). Revenge of the Yolk is the only game in the franchise to be preceded by an Edition version, with its Christmas Edition being released almost 1,5 month earlier. The game was also the first to spawn an Easter Edition, released on 12 February 2010. 
Chicken Invaders: Ultimate Omelette Chicken Invaders: Ultimate Omelette (known also as Chicken Invaders 4) is the fourth main installment in the Chicken Invaders series, released on 29 November 2010 as a demo, with the full version releasing a week later. It is the first game in the series to be released on Valve's Steam platform, and the first to receive an accolade, as it was included in the Adrenaline Vaults Top Casual PC games of 2010. Ultimate Omelette further added game mechanics to the Revenge of the Yolk formula, most notably the ability to rotate the player's ship to face in any direction, depending on the level (instead of constantly facing 'up', as in the previous installments). The game further features a more 'cinematic' camera zooming in or out depending on the action (e.g. boss fights typically have the camera zoomed out to contain all the action). The game features a more intricate plot than its predecessors, new weapons, dockable ship upgrades (Satellites) to deal additional damage to the various enemies and bosses, and in-game currency (Keys), dropped by specific enemies (indicated by a golden halo) during the course of the game, which allow the player to purchase unlockable content. Ultimate Omelette contains 120 waves across 12 Chapters. As is the case with Revenge of the Yolk, the player's ship primary weapons can be upgraded through 11 power levels, plus a secret 12th. Ultimate Omelette had a Christmas Edition released on 24 November 2011, an Easter Edition released on 11 March 2012, and a Thanksgiving Edition (the only Edition in the series to be themed around the Thanksgiving holiday), which was released on 15 November 2013. Chicken Invaders: Cluck of the Dark Side Chicken Invaders: Cluck of the Dark Side (known also as Chicken Invaders 5) is the fifth game in the main series released for Microsoft Windows, OS X, Linux, iOS, Android, and Windows Phone. The beta release of the game was announced on 26 July 2014 and was completed in October, with the game receiving a final release on 21 November 2014. The game shares many similarities with its predecessors, with notable additions including a Spaceship Customization mode, which allows players to change the Hero's M-404 PI's color and paintjob, and the Artifact Recovery Mission chapters, where players explore various different Planets. Furthermore, new weapons were added, along with a Mission Progress screen which shows player progress. Thematic Editions of the game include the Halloween Edition released on 2 October 2015, and the Christmas Edition released on 19 September 2016. Spin-off games Chicken Invaders Universe Chicken Invaders Universe is a spin-off game in the Chicken Invaders franchise released on 14 December 2018 in Early access. The game is an MMO, in which players, as recruits fresh out of the Heroes Academy, join the ranks of the United Hero Force to fight against the fowl Henpire. On 14 July 2018, InterAction studios released the first teaser for the game, while on 14 August 2018, the official website for the game was created. As of December 2021, the game has been in Early Access for 3 years, currently at version 99.1, most recently updated on 14 Dec 2021. It is currently unknown when the Early Access phase will end. Reception The Chicken Invaders series has been positively received. CNET gave the first Chicken Invaders 4 out of 5 stars, highlighting the graphics and sound but criticizing the game's lack of features, and repetitive gameplay. 
CNET also gave The Next Wave 4 out of 5 stars, again praising the graphics but criticizing the lack of a windowed mode. The Next Wave was rated "mediocre" by GameSpot, which criticized the repetitive gameplay and indistinct enemy bullets. However, the colorful, cartoony graphics were highlighted, as were "flashes of understanding of what makes a good shooter". Shortly after Ultimate Omelette's release, it was included in Adrenaline Vault's "Top Casual PC Games of 2010" list. The reviewer described it as "the most fun arcade space shooter in a very, very long time". The original orchestral soundtrack was also praised as rousing, invigorating, and "simply the absolute best". Gamezebo gave Ultimate Omelette a 4-out-of-5-stars rating, stating that "Chicken Invaders 4 is fantastic for a first-timer to the series", but criticizing that "for veterans of the games, it will feel a little too samey. The action has improved somewhat over the course of the series, but it’s still really the same ideas and premise that ran through the other games. Hence, those who are not new to the games may feel that paying full price for this latest release is a little too much." Reception of the fifth game in the series, Cluck of the Dark Side, has likewise been favorable. The game currently has "Overwhelmingly Positive" reviews on the Steam store, and was voted "Greek Game of the Year" by users of the Greek media and entertainment website GameWorld.gr in its annual Game of the Year Awards 2015. References External links Android (operating system) games Indie video games IOS games Linux games MacOS games Scrolling shooters Shoot 'em ups Video game franchises Windows games Windows Phone games Video games about birds Video games developed in Greece
3065729
https://en.wikipedia.org/wiki/Network%20Admission%20Control
Network Admission Control
Network Admission Control (NAC) refers to Cisco's version of Network Access Control, which restricts access to the network based on identity or security posture. When a network device (switch, router, wireless access point, DHCP server, etc.) is configured for NAC, it can force user or machine authentication prior to granting access to the network. In addition, guest access can be granted to a quarantine area for remediation of any problems that may have caused authentication failure. This is enforced through an inline custom network device, changes to an existing switch or router, or a restricted DHCP class. A typical (non-free) WiFi connection is a form of NAC: the user must present some sort of credentials (or a credit card) before being granted access to the network.

In its initial phase, the Cisco Network Admission Control (NAC) functionality enables Cisco routers to enforce access privileges when an endpoint attempts to connect to a network. This access decision can be made on the basis of information about the endpoint device, such as its current antivirus state. The antivirus state includes information such as the version of the antivirus software, the virus definitions, and the version of the scan engine. Network admission control systems allow noncompliant devices to be denied access, placed in a quarantined area, or given restricted access to computing resources, thus keeping insecure nodes from infecting the network.

The key component of the Cisco Network Admission Control program is the Cisco Trust Agent, which resides on an endpoint system and communicates with Cisco routers on the network. The Cisco Trust Agent collects security state information, such as what antivirus software is being used, and communicates this information to Cisco routers. The information is then relayed to a Cisco Secure Access Control Server (ACS) where access control decisions are made. The ACS directs the Cisco router to perform enforcement against the endpoint. This Cisco product has been marked End of Life since November 30, 2011, which is Cisco's terminology for a product that is no longer developed or supported.

Posture assessment
Besides user authentication, authorization in NAC can be based upon compliance checking. This posture assessment is the evaluation of system security based on the applications and settings that a particular system is using. These might include Windows registry settings or the presence of security agents such as anti-virus or personal firewall software. NAC products differ in their checking mechanisms:
802.1X Extensible Authentication Protocol
Microsoft Windows AD domain authentication (login credentials)
Cisco NAC Appliance L2 switch or L3 authentication
Pre-installed security agent
Web-based security agent
Network packet signatures or anomalies
External network vulnerability scanner
External database of known systems

Agent-less posture assessment
Most NAC vendors require the 802.1X supplicant (client or agent) to be installed. Some, including Hexis' NetBeat NAC, Trustwave, and Enterasys, offer agent-less posture checking. This is designed to handle the "Bring Your Own Device" (BYOD) scenario to:
Detect and fingerprint all network-attached devices, whether wired or wireless
Determine whether these devices have common vulnerabilities and exposures (CVEs)
Quarantine rogue devices as well as those infected with new malware
The agent-less approach works heterogeneously across almost all network environments and with all network device types.
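The admission decision described above — an endpoint reports its posture, a policy server compares it against requirements and returns permit, quarantine, or deny — can be illustrated with a short, hypothetical sketch. The attribute names and thresholds below are invented for the example and do not represent Cisco's actual ACS policy format.

```python
# Hypothetical sketch of a posture-assessment decision. Attribute names and
# thresholds are assumptions for illustration, not Cisco ACS syntax.
from datetime import date, timedelta

def admission_decision(posture: dict) -> str:
    """Return 'permit', 'quarantine', or 'deny' for a reported endpoint posture."""
    # Endpoints that report no security agent at all are denied outright.
    if not posture.get("agent_present", False):
        return "deny"
    # Out-of-date antivirus definitions send the host to a remediation (quarantine) area.
    definitions_age = date.today() - posture.get("av_definitions_date", date.min)
    if definitions_age > timedelta(days=14):
        return "quarantine"
    # Hosts that pass every check are granted normal network access.
    return "permit"

if __name__ == "__main__":
    endpoint = {"agent_present": True, "av_definitions_date": date(2020, 1, 2)}
    print(admission_decision(endpoint))  # prints 'quarantine' for stale definitions
```

In a real deployment the decision is made by the policy server (such as the ACS) and enforced by the network device, not by the endpoint itself.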
See also Access control Network Access Protection Cisco NAC Appliance Network Access Control PacketFence References External links Network Admission Control - Cisco Systems Agent-less Network Admission Control - NetClarity, Inc. FastNAC Computer network security
29145119
https://en.wikipedia.org/wiki/IBM%20Storwize
IBM Storwize
IBM Storwize systems were virtualizing RAID computer data storage systems with raw storage capacities of up to 32 PB. Storwize is based on the same software as the IBM SAN Volume Controller (SVC). Formerly, Storwize was an independent data storage organisation.

Models
Related product lines:
IBM SAN Volume Controller – virtualizes multiple storage arrays
IBM FlashSystem 9100 line – high-end flash memory storage
IBM Flex System V7000 Storage Node – was designed for integration with IBM PureSystems (support withdrawn at SVC v7.3.0)

The Storwize family offers several members:
High-end 7000 line:
IBM Storwize V7000 Gen3 – capacity of up to 760 modules (32 PB) and the capability to use FlashCore modules
IBM Storwize V7000 Gen2 – capacity of up to 4 PB and the capability to virtualize external storage
IBM Storwize V7000 – capacity of up to 1.92 PB and the capability to virtualize external storage
IBM Storwize V7000 Unified – provides file connectivity
Midrange line:
IBM Storwize V5100 – capacity of up to 760 modules and the capability to use FlashCore modules
IBM Storwize V5030E – capacity of up to 760 modules
IBM Storwize V5010E – capacity of up to 392 modules
IBM Storwize V5030 – capacity of up to 760 modules
IBM Storwize V5020 – capacity of up to 392 modules
IBM Storwize V5010 – capacity of up to 392 modules
IBM Storwize V5000 – capacity of up to 960 TB
Entry line:
IBM Storwize V3700 – capacity of up to 480 TB
IBM Storwize V3500 – capacity of up to 48 TB (available in China, Hong Kong and Taiwan only)
Each of the above family members runs software based on a common source codebase, although each has a type-specific downloadable package. In February 2020 the Storwize V5000 and V5100 were replaced by the FlashSystem 5000 and 5100 respectively, and the FlashSystem 900 and Storwize V7000 were replaced by the FlashSystem 7200.

Timeline
According to the official availability dates and the dates the systems were removed from marketing, the availability to purchase can be determined (shown in light green in the timeline graphic). The graphic only contains the IBM storage systems starting with 'V', i.e. V3700, V5000, V5010(E), V5020, V5030(E), V5100 and V7000. These systems vary even beyond their names; therefore the graphic also contains the IBM type and model. All the displayed systems could still get regular service at the end of the timeline (beginning of 2020). For the IBM SAN Volume Controller's timeline, see that article.

Architecture
Storwize V7000 provides a very similar architecture to SVC, using the RAID code from the DS8000 to provide internal managed disks and SSD code from the DS8000 for tiered storage.

Features and software
All Storwize systems offer the following features:
Command line interface
Graphical user interface (GUI) for easier use (CLI commands can be displayed as details), with (nearly) all functions available
Thin provisioning, known as Space Efficient Volumes
Volumes can be resized, i.e. expanded or reduced (only advisable if supported by the operating system)
RAID levels 0, 1, 5, 6, 10
Distributed RAID (DRAID), i.e.
DRAID-5 and DRAID-6
FlashCopy provides snapshots with several combinations of these attributes: multiple, incremental, independent, space-efficient, immediately writable, or cascaded
Data migration: data can be moved between all virtualized storage (both internal and external) with no disruption
Performance management using throttling of bandwidth and transactions, either by host or by volume
Supports interactive and non-interactive ssh login (see the scripting sketch below)
Supports several ways of call home and notifications (e-mail, syslog, SNMP, VPN-to-IBM, GUI-based)
Security functions, e.g. audit log, encrypted access

In addition, other Storwize systems offer the following features (some of them may require licenses):
Automated tiered storage, known as Easy Tier, which supports hot-spot removal within a tier (intra-tier) and between tiers (inter-tier)
Encryption for data at rest (all available original drives are self-encrypting)
Peer-to-Peer Remote Copy
Metro Mirror (synchronous copy)
Global Mirror (asynchronous copy)
Global Mirror with Change Volumes (checkpointed asynchronous copy using FlashCopies, e.g. for low bandwidth)
External storage virtualization
Virtual disk mirroring
Real-time Compression
Data Reduction Pools with deduplication and Real-time Compression
Clustering, i.e. a single user interface for several (up to 8) controllers, while supporting volume moves between them without disruption
Stretched cluster configuration (SVC only), with an optional "enhanced" version with site awareness (hardware redundancy only across sites)
HyperSwap (automated storage failover using the standard multipathing drivers; hardware redundancy on either site is given)

In addition, the Storwize V7000 Unified offered the following features:
File-level storage (NAS)
Policy-based file placement
Active Cloud Engine
Global Namespace
Asynchronous replication on file level

Limitations
Most Storwize systems are intended for certain environments and provide several features that are not licensed by default. There are several types of licenses that depend on the chosen model and the subject of the license:
licenses per net capacity (mainly with the SVC, where tiers can be treated differently)
licenses per enclosure (the controller and each expansion need one)
licenses per controller (just one license for all attached enclosures)
There are some limitations for each model and each licensed internal code. These can be read on IBM's web pages. Here are examples for the V7000, the V5000 series, and the IBM SAN Volume Controller. In addition, there are more contributors to a working environment. IBM provides this in an interactive interoperability matrix called the IBM System Storage Interoperation Center (SSIC).

Supported storage media
As of November 2016, available Storwize media sizes include 2.5" flash SSDs with up to 15.36 TB capacity and 3.5" nearline HDDs with up to 10 TB capacity, available for Storwize 5000, 7000 and SAN Volume Controller native attach. IBM Storwize Easy Tier will automatically manage and continually optimize data placement in mixed pools of nearline disks, standard disks, read-intensive flash and enterprise-grade flash SSDs, including from virtualized devices.

Hardware
The Storwize family hardware consists of control enclosures and expansion enclosures, connected with wide SAS cables (four lanes of 6 Gbit/s or 12 Gbit/s). Each enclosure houses 2.5" or 3.5" drives. The control enclosure contains two independent control units (node canisters) based on SAN Volume Controller technology, which are clustered via an internal network.
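Because the systems accept non-interactive ssh logins (noted in the feature list above), routine CLI tasks can be scripted from a management host. The following is only a minimal sketch: the hostname, credentials, and even the exact CLI command names (such as lssystem and lsvdisk) are illustrative assumptions and should be checked against the documentation for the installed software level.

```python
# Minimal sketch of driving the Storwize/SVC CLI over a non-interactive SSH
# session. Hostname, user, key file and command names are placeholders.
import paramiko

def run_cli(host: str, user: str, keyfile: str, command: str) -> str:
    """Run one CLI command on the storage system and return its text output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, key_filename=keyfile)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # Print cluster-level information, then list volumes in comma-delimited form.
    print(run_cli("storwize.example.com", "admin", "/home/admin/.ssh/id_rsa", "lssystem"))
    print(run_cli("storwize.example.com", "admin", "/home/admin/.ssh/id_rsa", "lsvdisk -delim ,"))
```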
Each enclosure also includes two power supply units (PSUs). Storwize V7000 family Eight available enclosure models: Type 2076-112 = Control enclosure – up to 12 3.5" drives (2010 - 2015) Type 2076-124 = Control enclosure – up to 24 2.5" drives (2010 - 2015) Type 2076-212 = Expansion enclosure – up to 12 3.5" drives (2010 - 2016) Type 2076-224 = Expansion enclosure – up to 24 2.5" drives (2010 - 2016) Type 2076-312 = Control enclosure – up to 12 3.5" drives + 10Gb iSCSI (2012 - 2018) Type 2076-324 = Control enclosure – up to 24 2.5" drives + 10Gb iSCSI (2012 - 2018) Type 2076-524 = Control enclosure – up to 24 2.5" drives + 10Gb iSCSI (known as V7000 Gen 2) (2014 - 2017) Type 2076-624 = Control enclosure – up to 24 2.5" drives + 10Gb iSCSI (known as V7000 Gen 2+) (2016 - current) Type 2076-724 = Control enclosure – up to 24 2.5" drives, NVMe flash drives or NVMe flash core modules + 10Gb iSCSI, 25 Gb iSER (known as V7000 Gen 3) (2018 - current) Storwize V7000 Gen 3 The IBM Storwize V7000 SFF Enclosure Model 724, announced November 6, 2018, supports NVMe and FC-NVMe (NVMe/FC) on 16 or 32 Gbit/s adapters, and iSER (iSCSI Extensions for RDMA) or iSCSI on 25GbE adapters. The Control Enclosure holds 24 2.5" NVMe flash drives or 24 2.5" NVMe FlashCore modules (FlashCore modules contain IBM MicroLatency technology with built-in hardware compression and encryption). Two node canisters, each with two 1.7 GHz 8-core Skylake processors and integrated hardware-assisted compression acceleration Cache: 128 GiB (per I/O group) standard, with optional 256 GiB or 1.15 TiB per I/O group (node pair) Connection: 32 Gbit/s fibre channel (FC) with NVMe support, 16 Gbit/s FC with NVMe support, 25 Gbit/s iSER and on board 10 Gbit/s iSCSI connectivity options Clustering support over Ethernet using RDMA as well as Fibre Channel Ability to cluster with older generations Storwize V7000 systems and with IBM FlashSystem 9100 Software Details: First supported on the Storwize V7000 8.2.0.2 software. Storwize V7000 Gen 2+ The IBM Storwize V7000 SFF Control Enclosure Model 624, announced 23 August 2016, features two node canisters and up to 256 GiB cache (system total) in a 2U, 19-inch rack mount enclosure. 1 Gbit/s iSCSI connectivity is standard, with options for 16 Gbit/s FC and 10 Gbit/s iSCSI/FCoE connectivity. It holds up to 24 2.5" SAS drives and supports the attachment of up to 20 Storwize V7000 expansion enclosures. IBM Storwize V7000 Gen 2+ is an updated Storwize V7000 Gen 2 with a newer CPU, doubled cache memory and faster FC options, integrated compression acceleration, and additional scalability with the following features: Two node canisters, each with a 10-core Broadwell processor and integrated hardware-assisted compression acceleration Cache: 128 GiB (per I/O group) standard, with optional 256 GiB per I/O group (node pair) for Real-time Compression workloads Connection: 16 Gbit/s Fibre Channel (FC), 10 Gbit/s iSCSI / Fibre Channel over Ethernet (FCoE), and 1 Gbit/s iSCSI connectivity options Software Details: First supported on the Storwize V7000 7.7.1 software. Storwize V7000 Gen 2 The IBM Storwize V7000 SFF Control Enclosure Model 524, announced 6 May 2014, features two node canisters and up to 128 GiB cache (system total) in a 2U, 19-inch rack mount enclosure. 1 Gbit/s iSCSI connectivity is standard, with options for 8 Gbit/s FC and 10 Gbit/s iSCSI/FCoE connectivity. It holds up to 24 2.5" SAS drives and supports the attachment of up to 20 Storwize V7000 expansion enclosures. 
IBM Storwize V7000 next-generation models offer increased performance and connectivity, integrated compression acceleration, and additional scalability with the following features: Chassis: 2U rack-mountable Two node canisters, each with an 8-core Haswell processor and integrated hardware-assisted compression acceleration Cache: 64 GiB (per I/O group) standard, with optional 128 GiB per I/O group (node pair) for Real-time Compression workloads Connection: 8 Gbit/s Fibre Channel (FC), 10 Gb iSCSI / Fibre Channel over Ethernet (FCoE), and 1 Gb iSCSI connectivity options Four 1 Gbit/s Ethernet ports standard for 1 Gb iSCSI connectivity and IP management per node canister. One 1 Gbit/s Ethernet port is dedicated for Technician access Storage: 12 Gbit/s SAS expansion enclosures supporting 12 3.5" large form factor (LFF) or 24 2.5" small form factor (SFF) drives Scaling for up to 504 drives per I/O group with the attachment of 20 Storwize V7000 expansion enclosures and up to 1,056 drives in a four-way clustered configuration The ability to be added into existing clustered systems with previous generation Storwize V7000 systems Compatibility with IBM Storwize V7000 Unified File Modules for unified storage capability All models include a three-year warranty with customer replaceable unit (CRU) and on-site service. Optional warranty service upgrades are available for enhanced levels of warranty service. Software Details: First supported on the Storwize V7000 7.3.0 software. The 7.4.0 software adds support for protection (SCSI T10 standard data integrity field (DIF)), encryption at rest and 4 KiB block drives. Storwize V7000 (Gen 1) Storwize V7000 consists of one to four control enclosures and up to 36 expansion enclosures, for a maximum of 40 enclosures altogether. It can scale up to 960 disks and 1.44PB raw internal capacity. Hardware details: Chassis: 2U rack-mountable Redundant dual-active intelligent controllers Cache: 16 GiB memory per control enclosure (8 GiB per internal controller) as a base Connection: For each control enclosure: Eight 8 Gbit/s Fibre Channel ports (four 8 Gbit/s FC ports per controller), four 1 Gbit/s iSCSI and optionally four 10 Gbit/s iSCSI/FCoE host ports (two 1 Gbit/s iSCSI and optionally two 10 Gbit/s iSCSI/FCoE ports per controller) or four 25 Gbit/s iSER host ports Storage: Up to 48 TB of physical storage per enclosure using 4 TB near-line SAS disk drives, or up to 28.8 TB physical storage per enclosure using 1.2 TB SAS 10K disk drives Control enclosures support attachment of up to 9 expansion enclosures with configurations up to 360 TB physical internal storage capacities (for Storwize V7000, up to 1.44 PB in clustered systems) Power: Dual power supplies and cooling components, the control enclosure PSUs each house a battery pack which retains cached data in the event of a power failure. File module - V7000 Unified - provides attachment to 1 Gbit/s and 10 Gbit/s network-attached storage (NAS) environments Storwize V5000 family Storwize V5100 Storwize V5100 consists of a control enclosure and up to 20 standard expansion enclosures or 8 high-density expansion enclosures. It supports NVMe and FC-NVMe (NVMe-oF) on 16 Gbit/s, 32 Gbit/s adapters, and iSCSI/iWARP/RoCE (iSCSI Extensions for RDMA) on 25GbE adapters. It can scale up to 760 disks and 23.34 PB raw internal capacity. 
Hardware details: Chassis: 2U rack-mountable Redundant dual-active intelligent controllers Cache: 64 GiB per control enclosure (32 GiB per internal controller) as a base, expandable up to 1152 GiB per control enclosure Connection: For each control enclosure: Eight 16 Gbit/s Fibre Channel ports (four 16 Gbit/s FC ports per controller), eight 32 Gbit/s FC-NVMe Fibre Channel ports, eight 10 Gbit/s iSCSI and optionally eight 25 Gbit/s iSCSI/iWARP/ROCE host ports Power: Dual power supplies and cooling components, the control enclosure PSUs each house a battery pack which retains cached data in the event of a power failure. Storwize V5030E Storwize V5030E consists of a control enclosure and up to 20 standard expansion enclosures or 8 high-density expansion enclosures. It can scale up to 760 disks and 23.34 PB raw internal capacity. Storwize V5010E Storwize V5010E consists of a control enclosure and up to 10 standard expansion enclosures or 4 high-density expansion enclosures. It can scale up to 392 disks and 12.04 PB raw internal capacity. Storwize V5030 Storwize V5030 consists of a control enclosure and up to 20 standard expansion enclosures or 8 high-density expansion enclosures. It can scale up to 760 disks and 23.34 PB raw internal capacity. Storwize V5020 Storwize V5020 consists of a control enclosure and up to 10 standard expansion enclosures or 4 high-density expansion enclosures. It can scale up to 392 disks and 12.04 PB raw internal capacity. Storwize V5010 Storwize V5010 consists of a control enclosure and up to 10 standard expansion enclosures or 4 high-density expansion enclosures. It can scale up to 392 disks and 12.04 PB raw internal capacity. Storwize V5000 Storwize V5000 consists of one to two control enclosures and up to 12 expansion enclosures, for a maximum of 18 enclosures altogether. It can scale up to 480 disks and 960 TB raw internal capacity. Hardware details: Chassis: 2U rack-mountable Six available enclosure models: Type 2078-12C = Control enclosure – up to 12 3.5" drives Type 2078-24C = Control enclosure – up to 24 2.5" drives Type 2078-212 = Expansion enclosure – up to 12 3.5" drives Type 2078-224 = Expansion enclosure – up to 24 2.5" drives Type 2078-12C = Control enclosure – up to 12 3.5" drives + 10Gb iSCSI Type 2078-24C = Control enclosure – up to 24 2.5" drives + 10Gb iSCSI Cache: 16 GiB per control enclosure (8 GiB per internal controller) as a base Storage: Up to 48 TB per enclosure using 4 TB near-line SAS disk drives, or up to 28.8 TB per enclosure using 1.2 TB 10K SAS disk drives Redundant dual-active intelligent controllers Connection: For each control enclosure: Eight 8 Gbit/s Fibre Channel ports (four 8 Gbit/s FC ports per controller), four 1 Gbit/s iSCSI and optionally four 10 Gbit/s iSCSI/FCoE host ports (two 1 Gbit/s iSCSI and optionally two 10 Gbit/s iSCSI/FCoE ports per controller) Control enclosures support attachment of up to six expansion enclosures Power: Dual power supplies and cooling components Storwize V3700 family Storwize V3700 Storwize V3700 consists of one control enclosure and up to 4 expansion enclosures. It can scale up to 240 2.5" disks or 120 3.5" disks and 480TB raw internal capacity. 
Hardware details: Chassis: 2U rack-mountable Four available enclosure models: 2072-12C - Dual Control Enclosure – up to 12 3.5" drives 2072-12E - Expansion Enclosure – up to 12 3.5" drives 2072-24C - Dual Control Enclosure – up to 24 2.5" drives 2072-24E - Expansion Enclosure up to 24 2.5" drives Cache: 8 GiB per control enclosure (4 GiB per internal controller) as a base, up to 16 GiB (8 GiB per internal controller) Connection: Control enclosure networking: 4 x 1 Gb iSCSI and 6 x 6 Gbit/s SAS host interfaces with optional 8 Gbit/s Fibre Channel, further 6 Gbit/s SAS or 10 Gb iSCSI/Fibre Channel over Ethernet host ports Storage: Dual-port, hot-swappable 6 Gb SAS disk drives - Up to 240 2.5" disks or 120 3.5" disks (with 9 expansion units) Power: Redundant, hot-swappable power supplies and fans, AC power (110 – 240 V) in October 2013, IBM announced DC powered models, NEBS and ETSI compliance and remote mirror over IP networks, integrating Bridgeworks SANrockIT technology to optimize the use of network bandwidth. Key hardware features also include: Dual controllers with up to 480 TB of capacity 8GB Cache (4GB per controller), with optional upgrade to 16GB RAID levels 0, 1, 5, 6, and 10 Storwize V3700 also offers management and interoperability features from previous Storwize systems, include simple management capabilities, virtualization of internal storage and thin provisioning for improved storage utilization and one-way data migration to easily move data onto Storwize V3700. SAN Volume Controller An entry-level SAN Volume Controller configuration contains a single I/O group, can scale out to support four I/O groups and can scale up to support 4,096 host servers, up to 8,192 volumes and up to 32 PB of virtualized storage capacity. Hardware details (per node - an I/O group consists of TWO nodes): Chassis: 2U rack-mountable (Based on IBM System x 3650 M4 server) Processor: one (optionally two) Intel Xeon E5-2650v2 2.6 GHz 8-core Optional compression accelerator cards (up to two) Memory: 32 GiB (optionally 64 GiB) Connection: Four (optionally, 8 or 12) 8 Gbit/s Fibre Channel ports Storage: Three 1 Gbit/s and optionally four 10 Gbit/s iSCSI/FCoE ports Supports up to two SSD expansion enclosures (up to 48 SSDs per I/O group) Power: Two power supplies and integrated battery units Flex System V7000 Storage Node Flex System V7000 released in 2012 and can scale up to 240 2.5" disks per control enclosure, or 960 2.5" disks per clustered system. Hardware details: Redundant dual-active intelligent controllers Cache: 16 GiB per control enclosure (8 GiB per internal controller) as a base Connection: SAN-attached 8 Gbit/s Fibre Channel, 10 Gigabit Ethernet (GbE) FCoE and iSCSI host connectivity Control enclosures support attachment of up to 9 expansion enclosures (internal and external mix) with configurations up to 960 drives. Software: This is not supported at v7.3.0. External links IBM Storage Virtualization IBM Storwize Products References IBM storage servers
300571
https://en.wikipedia.org/wiki/End-of-life%20product
End-of-life product
An end-of-life product (EOL product) is a product at the end of the product lifecycle which prevents users from receiving updates, indicating that the product is at the end of its useful life (from the vendor's point of view). At this stage, a vendor stops the marketing, selling, or provision of parts, services or software updates for the product. (The vendor may simply intend to limit or end support for the product.) In the specific case of product sales, a vendor may employ the more specific term "end-of-sale" ("EOS"). All users can continue to access discontinued products, but cannot receive security updates and technical support. The time-frame after the last production date depends on the product and relates to the expected product lifetime from a customer's point of view. Different lifetime examples include toys from fast food chains (weeks or months), mobile phones (3 years) and cars (10 years). Product support Product support during EOL varies by product. For hardware with an expected lifetime of 10 years after production ends, the support includes spare parts, technical support and service. Spare-part lifetimes are price-driven due to increasing production costs, as high-volume production sites are often closed when series production ends. Manufacturers may also continue to offer parts and services even when it is not profitable, to demonstrate good faith and to retain a reputation of durability. Minimum service lifetimes are also mandated by law for some products in some jurisdictions. Alternatively, some producers may discontinue maintenance of a product in order to force customers to upgrade to newer products. Computing In the computing field, the concept of end-of-life has significance in the production, supportability and purchase of software and hardware products. For example, Microsoft marked Windows 98 for end-of-life on June 30, 2006. Software produced after that date may not work for it. Microsoft's product Office 2007 (released on November 30, 2006), for instance, is not installable on Windows Me or any prior versions of Windows. Depending on the vendor, end-of-life may differ from end of service life, which has the added distinction that a vendor of systems or software will no longer provide maintenance, troubleshooting or other support. Such software which is abandoned service-wise by the original developers is also called abandonware. Sometimes, software vendors hand over software on end-of-life, end-of-sale or end-of-service to the user community, to allow them to provide service and further upgrades themselves. Notable examples are the web browser Netscape Communicator, which was released 1998 by Netscape Communications under an open-source license to the public, and the office suite StarOffice which was released by Sun Microsystems in October 2000 as OpenOffice.org (LibreOffice forked from this). Sometimes, software communities continue the support on end-of-official-support even without endorsement of the original developer, such developments are then called unofficial patches, existing for instance for Windows 98 or many PC games. See also Backward compatibility End of Life Vehicles Directive, a directive of the European Union Digital obsolescence Planned obsolescence Phase-out of fossil fuel vehicles#Unintended side-effects Product life cycle management Product lifetime Product change notification Service life Software release life cycle References JEDEC standard EOL : JESD48 JEDEC standard PCN : JESD46 Further reading Scharnhorst, W., Althaus, H.-J., Hilty, L. 
and Jolliet, O.: Environmental assessment of End-of-Life treatment options for an GSM 900 antenna rack, Int J LCA., 11 (6), pp: 426–436. 2006 Scharnhorst, W., Althaus, H.-J., Classen, M., Jolliet, O. and Hilty, L. M.: End of Life treatment of second generation mobile phone networks: strategies to reduce the environmental impact, Env Imp Ass Rev 25 (5), pp: 540–566. 2005 Product lifecycle management Software release
21870572
https://en.wikipedia.org/wiki/Joe%20Warren%20%28fighter%29
Joe Warren (fighter)
Joseph Ryan Warren (born October 31, 1976) is an American Greco-Roman wrestler and mixed martial artist who most recently competed for Bellator MMA. In Bellator, Warren became the first fighter in the promotion's history to become a world champion in two divisions, winning the Bellator Featherweight World Championship in 2010 and the Bantamweight World Championship in 2014. As a Greco-Roman wrestler, he won the 2006 Pan American and World Championships and was a favorite for the 2008 Olympics. He later participated in and won the gold medal at the 2007 World Cup. Toward the end of 2008 Warren started transitioning to mixed martial arts, and on March 8, 2009, he made his professional debut. He has competed for Bellator MMA, and for Dream in Japan. He is a former Bellator Bantamweight Champion and former Bellator Featherweight Champion. Warren is currently ranked as the #24 bantamweight in the world by fightmatrix.com.

Greco-Roman wrestling career
Warren practiced freestyle wrestling before switching to Greco-Roman. He began his career at East Kentwood High School, where he placed three times in Division One, with one state championship coming during his senior year, and held the national takedown record. He wrestled for the University of Michigan. He won his division of men's Greco-Roman wrestling at the 2006 FILA Wrestling World Championships and was a favorite for the 2008 Summer Olympics in Beijing, China. Other accomplishments include 6th at the 2000 World University Championship, 9th at the 2005 FILA Wrestling World Championships, 1st at the 2006 Pan American Championship and 1st at the 2007 World Cup, all at 60 kg. In 2007 he missed the Pan American Games in Rio de Janeiro because of a positive test for cannabis. In 2008 he received a two-year ban from international wrestling competition and missed the 2008 Summer Olympics. On December 18, 2010, it was reported that Warren would be making a return to wrestling to try to qualify for the 2012 Summer Olympics in London. This did not occur.

Mixed martial arts career
DREAM
Joe Warren started a transition to MMA in 2008 and joined Team Quest, where he trained with fellow Greco-Roman wrestler and Pride Fighting Championships champion Dan Henderson. His MMA debut was on March 8, 2009, at Dream.7, where he defeated former WEC Bantamweight Champion Chase Beebe by TKO (doctor stoppage) after the first round due to a cut Beebe received over his right eye. In the second round of the tournament, at Dream.9 on May 26, 2009, he was matched up with and defeated former K-1 Hero's Lightweight Grand Prix Champion, and formerly 17–1, Norifumi Yamamoto, in Yamamoto's first fight after a 512-day layoff officially due to elbow and knee injuries. In preparation for the bout, Warren trained with former WEC Featherweight Champion Urijah Faber and his Team Alpha Male after Faber called Warren and told him he knew how to defeat Yamamoto. Faber had previously prepared Joseph Benavidez to fight Yamamoto in July 2008, but that fight did not happen as Yamamoto pulled out three days beforehand. Warren's fight happened as planned, and after the allotted 15 minutes Warren was awarded a split decision victory. The final two rounds of the tournament took place at Dream 11 on October 6, 2009. In his semi-final bout Warren fought Brazilian jiu-jitsu expert Bibiano Fernandes, to whom he quickly lost via a controversial first-round armbar after securing a takedown.
Bellator Fighting Championships
2010
On February 1, 2010, Warren officially announced that he had signed with Bellator Fighting Championships and that he would compete in the featherweight tournament during Bellator's Season 2. At Bellator 13, Warren fought in a quarter-final bout against Eric Marriott. Warren dominated the fight with his wrestling and took the fight on all three judges' scorecards, giving him the unanimous decision win. Warren advanced to the semi-final round, where he defeated Georgi Karakhanyan via unanimous decision at Bellator 18. On June 24, 2010, Warren won the Bellator featherweight tournament by claiming a split decision over Patricio Freire. Warren was both dropped and caught in a rear-naked choke in the first round. He came back in rounds 2 and 3 by scoring takedowns followed by a ground-and-pound attack. The official scores were 29–28, 28–29, and 29–28. After the fight, Bellator Fighting Championships Featherweight Champion Joe Soto came into the ring and the two exchanged words, with Warren telling Soto "you've got my belt" and Soto promising to hold onto the title. The title fight took place on September 2, 2010 at Bellator 27, in the third season of Bellator Fighting Championships. This was Warren's first title shot. Following a dominant opening round by Soto, Warren won the fight via KO (strikes) in the opening minute of the second round to become the new Bellator Featherweight Champion.

2011
Warren faced Marcos Galvão in a non-title fight on April 16, 2011 at Bellator 41. In the fight, Galvão negated a majority of Warren's offense for the first two rounds by showing strong takedown defense, taking Warren down multiple times, taking Warren's back, and landing good knees from the clinch. In the third round he was taken down by Warren and controlled throughout the round. At the end of the fight, Bellator color commentator Jimmy Smith believed Galvão had won the fight 29–28. Along with Smith, many top MMA sites (MMAJunkie, Sherdog, MMAFighting, MMASpot) believed that Galvão had won the fight 29–28. It was then announced that Warren had won the fight via unanimous decision (30–27, 29–28, 29–28). Warren was expected to put his title on the line against Patricio Freire at Bellator 47 on July 23, 2011, but the bout was postponed due to Freire's unexpected injury. In the fall of 2011, Warren entered the Bellator Season 5 bantamweight tournament, hoping to become the promotion's first two-division champion. Warren faced fellow amateur wrestling world champion Alexis Vila at Bellator 51, in the quarterfinal round of Bellator's season five bantamweight tournament. He lost the fight via KO in the first round.

2012
Warren fought Pat Curran on March 9, 2012 at Bellator 60 in the first defense of his Bellator Featherweight Championship. He lost the fight via KO in the third round. After losing the title, Warren returned and faced Owen Evinger on November 9, 2012 at Bellator 80. He won the fight via unanimous decision.

2013
On February 5, 2013, it was announced that Warren would be one of the four coaches to appear on the promotion's reality series titled Fight Master: Bellator MMA. Warren was set to fight Nick Kirk at Bellator 98 in the semifinal match of the Bellator season nine bantamweight tournament; however, he was not cleared to fight because he had been knocked out in a sparring session prior to the fight. The fight was then rescheduled for Bellator 101 on September 27, 2013. Warren won via submission in the second round.
Warren faced Travis Marx in the finals on November 8, 2013 at Bellator 107. He won via TKO in the second round to win the Bellator season nine bantamweight tournament.

2014
Warren was scheduled to face Bellator Bantamweight Champion Eduardo Dantas at Bellator 118. However, on April 26, 2014 it was revealed that Dantas had suffered a head injury and had withdrawn from the fight. Warren was instead to face Rafael Silva in an interim bantamweight title fight. Silva, however, missed weight, and the promotion made the interim title available only if Warren were to win. Warren won the fight via unanimous decision to become the Bellator Interim Bantamweight Champion. Warren faced Eduardo Dantas in a title unification bout on October 10, 2014 at Bellator 128. He won the fight via unanimous decision to become the undisputed Bellator Bantamweight Champion.

2015
Warren made his first bantamweight title defense against Marcos Galvão in a rematch on March 27, 2015 at Bellator 135. He lost the fight and the title via verbal submission due to a kneebar in the second round. Warren faced WEC and Bellator veteran L.C. Davis in the main event of Bellator 143 on September 25, 2015. He won the fight via unanimous decision.

2016
Warren next faced undefeated prospect Darrion Caldwell in a title eliminator in the main event of Bellator 151 on March 4, 2016. He lost the fight via technical submission due to a rear-naked choke in the first round. Warren faced Sirwan Kakai at Bellator 161 on September 16, 2016. He won the fight via guillotine choke submission in the third round. Warren faced Eduardo Dantas in a rematch at Bellator 166 on December 2, 2016, for the Bellator bantamweight championship. He lost the fight via majority decision.

2017
Warren faced prospect Steve Garcia at Bellator 181 on July 14, 2017. He won the fight by unanimous decision.

2018
Warren faced Joe Taimanglo at Bellator 195 on March 2, 2018. He lost the fight via split decision. Warren next faced Shawn Bunch on November 30, 2018 at Bellator 210. He lost the bout via first-round technical knockout.

Departure
After two years of inactivity, Bellator MMA announced on October 27, 2020, that Warren had been released from the promotion.

Personal life
Warren and his wife have a son, born July 5, 2008, and a daughter, Maddox Reese Warren, born March 4, 2010.

Championships and accomplishments Mixed martial arts Bellator Fighting Championships Bellator Featherweight World Championship (One time) Bellator Bantamweight World Championship (One time) Interim Bellator Bantamweight Championship (One time) Bellator Season 2 Featherweight Tournament Championship Bellator Season 9 Bantamweight Tournament Championship First fighter to hold Championships in multiple weight classes Oldest Fighter to win a championship in Bellator History (37 years and 362 days) Most decision wins in Bellator History (8) DREAM DREAM 2009 Featherweight Grand Prix Semifinalist FIGHT! Magazine 2009 Upset of the Year vs.
Norifumi Yamamoto on May 26 Fight Matrix 2008 Rookie of the Year Professional wrestling Real Pro Wrestling RPW Season One 132 lb Championship Semifinalist Amateur wrestling International Federation of Associated Wrestling Styles 2007 World Cup Senior Greco-Roman Gold Medalist 2007 Dave Schultz Memorial International Open Senior Greco-Roman Silver Medalist 2006 FILA Wrestling World Championships Senior Greco-Roman Gold Medalist 2006 Pan American Championships Senior-Greco Roman Gold Medalist 2005 Sunkist Kids/ASU International Open Senior Greco-Roman Gold Medalist 2005 Dave Schultz Memorial International Open Senior Greco-Roman Gold Medalist 2004 Henri Deglane Challenge Senior Greco-Roman Bronze Medalist 2004 Dave Schultz Memorial International Open Senior Greco-Roman Silver Medalist 2003 NYAC Christmas International Open Senior Greco-Roman Gold Medalist 2003 Henri Deglane Challenge Senior Greco-Roman Silver Medalist 2003 Sunkist Kids International Open Senior Greco-Roman Gold Medalist 2002 Henri Deglane Challenge Senior Greco-Roman Silver Medalist 2002 Dave Schultz Memorial International Open Senior Greco-Roman Silver Medalist USA Wrestling FILA World Team Trials Senior Greco-Roman Winner (2005, 2006, 2007) USA Senior Greco-Roman National Championship (2005, 2006, 2007) USA Senior Greco-Roman National Championship 3rd Place (2004) USA Wrestling Greco-Roman Wrestler of the Year (2006) USA University Greco-Roman National Championship (1998) 2005 NYAC Holiday Tournament Senior Greco-Roman Gold Medalist 2004 NYAC Christmas Championships Senior Greco-Roman Gold Medalist 2004 USA Olympic Team Trials Senior Greco-Roman Runner-up 2002 NYAC Christmas Classic Senior Greco-Roman Silver Medalist National Wrestling Hall of Fame Dan Gable Museum Alan and Gloria Rice Greco-Roman Hall of Champions Inductee (2009) National Collegiate Athletic Association NCAA Division I Collegiate National Championship 3rd Place (2000) NCAA Division I All-American (2000) Big Ten Conference Championship Runner-up (1998, 1999) Michigan High School Athletic Association MHSAA Class A High School State Championship (1995) MHSAA Class A High School State Championship 3rd Place (1994) MHSAA Class A All-State (1993, 1994, 1995) World Championships Matches |- ! Res. ! Record ! Opponent ! Score ! Date ! Event ! Location ! Notes |- ! 
style=background:white colspan=9 | |- | Win | 7-2 | align=left | David Bedinadze | style="font-size:88%"|1-1, 4-1, 2-1 | style="font-size:88%"|2006-09-20 | style="font-size:88%"|2006 World Wrestling Championships | style="text-align:left;font-size:88%;" | Guangzhou, China | style="text-align:left;font-size:88%;" | Gold Medal |- | Win | 6-2 | align=left | Eusebiu Diaconu | style="font-size:88%"|1-1, 2-1 | style="font-size:88%"|2006-09-20 | style="font-size:88%"|2006 World Wrestling Championships | style="text-align:left;font-size:88%;" | Guangzhou, China | style="text-align:left;font-size:88%;" | |- | Win | 5-2 | align=left | Vyacheslav Djaste | style="font-size:88%"|4-1, 2-0 | style="font-size:88%"|2006-09-20 | style="font-size:88%"|2006 World Wrestling Championships | style="text-align:left;font-size:88%;" | Guangzhou, China | style="text-align:left;font-size:88%;" | |- | Win | 4-2 | align=left | Ali Ashkani | style="font-size:88%"|2-1, 1-1 | style="font-size:88%"|2006-09-20 | style="font-size:88%"|2006 World Wrestling Championships | style="text-align:left;font-size:88%;" | Guangzhou, China | style="text-align:left;font-size:88%;" | |- | Win | 3-2 | align=left | Dilshod Aripov | style="font-size:88%"|2-3, 3-1, 3-1 | style="font-size:88%"|2006-09-20 | style="font-size:88%"|2006 World Wrestling Championships | style="text-align:left;font-size:88%;" | Guangzhou, China | style="text-align:left;font-size:88%;" | |- ! style=background:white colspan=9 | |- | Loss | 2-2 | align=left | Vahan Juharyan | style="font-size:88%"|0-2, 1-1 | style="font-size:88%"|2005-09-30 | style="font-size:88%"|2005 World Wrestling Championships | style="text-align:left;font-size:88%;" | Budapest, Hungary | style="text-align:left;font-size:88%;" | |- | Loss | 2-1 | align=left | Ali Ashkani | style="font-size:88%"|1-2, 0-7 | style="font-size:88%"|2005-09-30 | style="font-size:88%"|2005 World Wrestling Championships | style="text-align:left;font-size:88%;" | Budapest, Hungary | style="text-align:left;font-size:88%;" | |- | Win | 2-0 | align=left | Luis Liendo | style="font-size:88%"|9-5, 7-0 | style="font-size:88%"|2005-09-30 | style="font-size:88%"|2005 World Wrestling Championships | style="text-align:left;font-size:88%;" | Budapest, Hungary | style="text-align:left;font-size:88%;" | |- | Win | 1-0 | align=left | Eric Buisson | style="font-size:88%"|Fall | style="font-size:88%"|2005-09-30 | style="font-size:88%"|2005 World Wrestling Championships | style="text-align:left;font-size:88%;" | Budapest, Hungary | style="text-align:left;font-size:88%;" | |- Mixed martial arts record |- | Loss | align=center | 15–8 |Shawn Bunch |TKO (submission to punches) |Bellator 210 | |align=center|1 |align=center|1:42 |Thackerville, Oklahoma, United States | |- | Loss | align=center | 15–7 | Joe Taimanglo | Decision (split) | Bellator 195 | | align=center | 3 | align=center | 5:00 | Thackerville, Oklahoma, United States | |- | Win | align=center | 15–6 | Steve Garcia | Decision (unanimous) | Bellator 181 | | align=center | 3 | align=center | 5:00 | Thackerville, Oklahoma, United States | |- | Loss | align=center | 14–6 | Eduardo Dantas | Decision (majority) | Bellator 166 | | align=center | 5 | align=center | 5:00 | Thackerville, Oklahoma, United States | |- | Win | align=center | 14–5 | Sirwan Kakai | Submission (guillotine choke) | Bellator 161 | | align=center | 3 | align=center | 1:04 | Cedar Park, Texas, United States | |- | Loss | align=center | 13–5 | Darrion Caldwell | Technical Submission (rear-naked choke) | Bellator 151 | | 
align=center | 1 | align=center | 3:23 | Thackerville, Oklahoma, United States | |- | Win | align=center | 13–4 | L.C. Davis | Decision (unanimous) | Bellator 143 | | align=center | 3 | align=center | 5:00 | Hidalgo, Texas, United States | |- | Loss | align=center | 12–4 | Marcos Galvão | Verbal Submission (kneebar) | Bellator 135 | | align=center | 2 | align=center | 0:45 | Thackerville, Oklahoma, United States | |- | Win | align=center | 12–3 | Eduardo Dantas | Decision (unanimous) | Bellator 128 | | align=center | 5 | align=center | 5:00 | Thackerville, Oklahoma, United States | |- | Win | align=center | 11–3 | Rafael Silva | Decision (unanimous) | Bellator 118 | | align=center | 5 | align=center | 5:00 | Atlantic City, New Jersey, United States | |- | Win | align=center | 10–3 | Travis Marx | TKO (knee and punches) | Bellator 107 | | align=center | 2 | align=center | 1:54 | Thackerville, Oklahoma, United States | |- | Win | align=center | 9–3 | Nick Kirk | Submission (reverse triangle armbar) | Bellator 101 | | align=center | 2 | align=center | 3:03 | Portland, Oregon, United States | |- | Win | align=center | 8–3 | Owen Evinger | Decision (unanimous) | Bellator 80 | | align=center | 3 | align=center | 5:00 | Hollywood, Florida, United States | |- | Loss | align=center | 7–3 | Pat Curran | KO (punches) | Bellator 60 | | align=center | 3 | align=center | 1:25 | Hammond, Indiana, United States | |- | Loss | align=center | 7–2 | Alexis Vila | KO (punch) | Bellator 51 | | align=center | 1 | align=center | 1:04 | Canton, Ohio, United States | |- | Win | align=center | 7–1 | Marcos Galvão | Decision (unanimous) | Bellator 41 | | align=center | 3 | align=center | 5:00 | Yuma, Arizona, United States | |- | Win | align=center | 6–1 | Joe Soto | KO (knee and punches) | Bellator 27 | | align=center | 2 | align=center | 0:33 | San Antonio, Texas, United States | |- | Win | align=center | 5–1 | Patricio Freire | Decision (split) | Bellator 23 | | align=center | 3 | align=center | 5:00 | Louisville, Kentucky, United States | |- | Win | align=center | 4–1 | Georgi Karakhanyan | Decision (unanimous) | Bellator 18 | | align=center | 3 | align=center | 5:00 | Monroe, Louisiana, United States | |- | Win | align=center | 3–1 | Eric Marriott | Decision (unanimous) | Bellator 13 | | align=center | 3 | align=center | 5:00 | Hollywood, Florida, United States | |- | Loss | align=center | 2–1 | Bibiano Fernandes | Submission (armbar) | Dream 11 | | align=center | 1 | align=center | 0:42 | Yokohama, Japan | |- | Win | align=center | 2–0 | Norifumi Yamamoto | Decision (split) | Dream 9 | | align=center | 2 | align=center | 5:00 | Yokohama, Japan | |- | Win | align=center | 1–0 | Chase Beebe | TKO (doctor stoppage) | Dream 7 | | align=center | 1 | align=center | 10:00 | Saitama, Saitama, Japan | See also List of current mixed martial arts champions List of male mixed martial artists References External links (archive) Joe Warren profile at the National Wrestling Hall of Fame 1976 births Living people American male mixed martial artists American male sport wrestlers Featherweight mixed martial artists Bantamweight mixed martial artists Mixed martial artists utilizing collegiate wrestling Mixed martial artists utilizing Greco-Roman wrestling Mixed martial artists utilizing boxing World Wrestling Championships medalists Bellator MMA champions Bellator male fighters University of Michigan alumni
54360501
https://en.wikipedia.org/wiki/Collaborator%20%28software%29
Collaborator (software)
Collaborator is a peer code review and document review software application by SmartBear Software, headquartered in Somerville, Massachusetts. The tool is used by teams to standardize their review process, reduce defects early, and speed up their development timelines. Companies in highly regulated industries such as automotive, healthcare, aerospace, finance, and embedded systems also use the detailed review reports in Collaborator to meet compliance requirements.

History
Collaborator was originally named Code Collaborator and was the original product of SmartBear Software, founded by Jason Cohen in 2003. Cohen sold SmartBear in 2010 as part of a merger of three different companies: AutomatedQA, Pragmatic Software and SmartBear. Code Collaborator won the 2008 Jolt Award for collaboration tools. References 2017 software Proprietary software
348147
https://en.wikipedia.org/wiki/On%20the%20fly
On the fly
On the fly is a phrase used to describe something that is being changed while the process that the change affects is ongoing. It is used in the automotive, computer, and culinary industries. In cars, on the fly can be used to describe changing the car's configuration while it is being driven. Processes that can occur while the car is still driving include switching between two-wheel drive and four-wheel drive on some cars and opening and closing the roof on some convertible cars. In computing, on-the-fly CD writers can read from one CD and write the data to another without first saving it to the computer's hard disk. Switching programs or applications on the fly in multitasking operating systems means the ability to switch between native or emulated programs that are running in parallel and performing their tasks, without pausing, freezing, or delaying any of them or causing other unwanted events. Switching computer parts on the fly means computer parts are replaced while the computer is still running. The term can also be used in programming to describe changing a program while it is still running. In restaurants and other places involved in the preparation of food, the term is used to indicate that an order needs to be made right away.

Colloquial usage
In colloquial use, "on the fly" means something created when needed. The phrase is used to mean:
something that was not planned ahead
changes that are made during the execution of the same activity: ex tempore, impromptu.

Automotive usage
In the automotive industry, the term refers to performing certain operations while a vehicle is driven by the engine and moving. In reference to four-wheel drive vehicles, the term describes the ability to change from two- to four-wheel drive while the car is in gear and moving. In some convertible models, the roof can be folded electrically on the fly, whereas in other cases the car must be stopped. In harvesting machines, newer monitoring systems let the driver track the quality of the grain while enabling them to adjust the rotor speed on the fly as harvesting progresses.

Computer usage
In multitasking computing, an operating system can handle several programs, both native applications and emulated software, that run independently and in parallel on the same device, using separate or shared resources and data and executing their tasks separately or together. A user can switch on the fly between them, or between groups of them, to use their results or to supervise them, without loss of time or performance. In operating systems with a graphical user interface, this is often done by switching from the active window of one program to a window of another.

A computer can compute results on the fly, or retrieve a previously stored result. On the fly can also mean making a copy of removable media (CD-ROM, DVD, etc.) directly, without first saving the source on an intermediate medium (a hard disk); for example, copying a CD-ROM from a CD-ROM drive to a CD writer drive. The copy process requires each block of data to be retrieved and immediately written to the destination, so that there is room in the working memory to retrieve the next block of data. When used for encrypted data storage, the data stream is automatically encrypted on the fly as it is written and decrypted when read back again, transparently to software. The acronym OTFE is typically used.
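As a concrete illustration of the block-by-block copying and transparent encryption described above, the following sketch streams data from a source to a destination while transforming each block as it passes through, so no intermediate file is needed and only one block is held in memory at a time. The file names are placeholders, and the XOR transform is a toy stand-in; real on-the-fly encryption (OTFE) uses proper ciphers.

```python
# Sketch of "on the fly" processing: transform data block by block while
# copying, without staging it in an intermediate file. The XOR transform is a
# toy stand-in for illustration only; real OTFE layers use proper ciphers.
def copy_on_the_fly(src_path, dst_path, block_size=64 * 1024, transform=lambda b: b):
    """Stream src to dst in fixed-size blocks, applying transform to each block."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(transform(block))  # only one block is ever held in memory
            copied += len(block)
    return copied

if __name__ == "__main__":
    toy_xor = lambda block: bytes(b ^ 0x5A for b in block)  # placeholder "encryption"
    print(copy_on_the_fly("source.bin", "copy.bin", transform=toy_xor), "bytes copied")
```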
On-the-fly programming is the technique of modifying a program without stopping it. A similar concept, hot swapping, refers to the on-the-fly replacement of computer hardware.

On-the-fly computing
On-the-fly computing (OTF computing) refers to automatically composing and customizing software tailored to the needs of a user. According to a requirements specification, the software is composed from basic components, so-called basic services, and a user-specific configuration of these components is made. The requested services are thus compiled only at the request of the user and then run in a specially designed data center, making the functions of the (on-the-fly) created service accessible to the user.

Restaurant usage
In restaurants, cafes, banquet halls, and other places involved in the preparation of food, the term is used to indicate that an order needs to be made right away. This is often because a previously served dish is inedible, because a waiter has made a mistake or caused a delay, or because a guest has to leave promptly.

Usage in sports
In ice hockey, it is both legal and common for teams to make line changes (player substitutions) while the puck is in play. Such line changes are referred to as being done "on the fly". References Computer jargon Restaurant terminology Technical terminology
24920873
https://en.wikipedia.org/wiki/Java%20Development%20Kit
Java Development Kit
The Java Development Kit (JDK) is a distribution of Java technology by Oracle Corporation. It implements the Java Language Specification (JLS) and the Java Virtual Machine Specification (JVMS) and provides the Standard Edition (SE) of the Java Application Programming Interface (API). It is a derivative of the community-driven OpenJDK, of which Oracle is the steward. It provides software for working with Java applications. Examples of included software are the virtual machine, a compiler, performance monitoring tools, a debugger, and other utilities that Oracle considers useful for a Java programmer. Oracle has released the current version of the software under the Oracle No-Fee Terms and Conditions (NFTC) license. Oracle releases binaries for the x86-64 architecture for Windows, macOS, and Linux-based operating systems, and for the aarch64 architecture for macOS and Linux. Previous versions have supported the Oracle Solaris operating system and the SPARC architecture. Oracle's primary implementation of the JVMS is known as the HotSpot virtual machine.

JDK contents
The JDK has as its primary components a collection of programming tools, including:
appletviewer – this tool can be used to run and debug Java applets without a web browser
apt – the annotation-processing tool
extcheck – a utility that detects JAR file conflicts
idlj – the IDL-to-Java compiler. This utility generates Java bindings from a given Java IDL file.
jabswitch – the Java Access Bridge. Exposes assistive technologies on Microsoft Windows systems.
java – the loader for Java applications. This tool is an interpreter and can interpret the class files generated by the javac compiler. A single launcher is now used for both development and deployment; the old deployment launcher, jre, came with the Sun JDK and has been replaced by this java loader.
javac – the Java compiler, which converts source code into Java bytecode
javadoc – the documentation generator, which automatically generates documentation from source code comments
jar – the archiver, which packages related class libraries into a single JAR file. This tool also helps manage JAR files.
javafxpackager – tool to package and sign JavaFX applications
jarsigner – the jar signing and verification tool
javah – the C header and stub generator, used to write native methods
javap – the class file disassembler
javaws – the Java Web Start launcher for JNLP applications
JConsole – the Java Monitoring and Management Console
jdb – the debugger
jhat – the Java Heap Analysis Tool (experimental)
jinfo – this utility gets configuration information from a running Java process or crash dump (experimental)
jmap – the memory map tool; it outputs the memory map for Java and can print shared object memory maps or heap memory details of a given process or core dump (experimental)
jmc – Java Mission Control
jpackage – a tool for generating self-contained application bundles (experimental)
jps – the Java Virtual Machine Process Status Tool, which lists the instrumented HotSpot Java Virtual Machines (JVMs) on the target system (experimental)
jrunscript – the Java command-line script shell
jshell – a read–eval–print loop, introduced in Java 9
jstack – utility that prints Java stack traces of Java threads (experimental) jstat – Java Virtual Machine statistics monitoring tool (experimental) jstatd – jstat daemon (experimental) keytool – tool for manipulating the keystore pack200 – JAR compression tool policytool – the policy creation and management tool, which can determine policy for a Java runtime, specifying which permissions are available for code from various sources. VisualVM – visual tool integrating several command-line JDK tools and lightweight performance and memory profiling capabilities (no longer included in JDK 9+) wsimport – generates portable JAX-WS artifacts for invoking a web service. xjc – Part of the Java API for XML Binding (JAXB) API. It accepts an XML schema and generates Java classes. Experimental tools may not be available in future versions of the JDK. The JDK also comes with a complete Java Runtime Environment, usually called a private runtime, due to the fact that it is separated from the "regular" JRE and has extra contents. It consists of a Java Virtual Machine and all of the class libraries present in the production environment, as well as additional libraries only useful to developers, such as the internationalization libraries and the IDL libraries. Copies of the JDK also include a wide selection of example programs demonstrating the use of almost all portions of the Java API. Other JDKs In addition to the most widely used JDK discussed in this article, there are other JDKs commonly available for a variety of platforms, some of which started from the Sun JDK source and some that did not. All adhere to the basic Java specifications, but often differ in explicitly unspecified areas, such as garbage collection, compilation strategies, and optimization techniques. They include: In development or in maintenance mode: Azul Systems Zing, low latency JDK for Linux; Azul Systems / OpenJDK-based Zulu for Linux, Windows, Mac OS X, embedded and the cloud; OpenJDK / IcedTea; Aicas JamaicaVM; IBM J9 JDK, for AIX, Linux, Windows, MVS, OS/400, Pocket PC, z/OS; Not being maintained or discontinued: Apache Harmony; Apple's Mac OS Runtime for Java JVM/JDK for Classic Mac OS; Blackdown Java – Port of Sun's JDK for Linux; GNU's Classpath and GCJ (The GNU Compiler for Java); Oracle Corporation's JRockit JDK, for Windows, Linux, and Solaris; See also Classpath (Java) Java platform Java version history References External links Oracle Java SE Oracle Java SE Support Roadmap Open source OpenJDK project OpenJDK builds from Oracle OpenJDK builds from AdoptOpenJDK IBM Java SDK Downloads Open source JDK 7 project GNU Classpath – a Free software JDK alternative JDK Software development kits Oracle software Sun Microsystems software
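As a concrete sketch of how the most commonly used of the tools listed above fit together, the following minimal program shows the typical compile, run, package and document cycle; the file name Hello.java and class name Hello are illustrative choices for this example, not names prescribed by the JDK.

```java
// Hello.java - a trivial program for exercising the core JDK tools described above.
// Typical command-line usage (names and paths are illustrative):
//   javac Hello.java                     -> compiles the source into Hello.class bytecode
//   java Hello                           -> the java launcher loads and runs the class file
//   jar cfe hello.jar Hello Hello.class  -> packages it into an executable JAR ("c" create, "f" file, "e" entry point)
//   java -jar hello.jar                  -> runs the packaged application
//   javadoc Hello.java                   -> generates HTML documentation from the doc comment below

/** A minimal class used only to demonstrate the javac, java, jar and javadoc workflow. */
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from the JDK tools");
    }
}
```

On Java 9 and later, the same code can also be evaluated interactively in jshell without an explicit compilation step.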
5816777
https://en.wikipedia.org/wiki/David%20Litchfield
David Litchfield
David Litchfield (born 1975) is a British security expert and the Director of Information Security Assurance for Apple. Anne Saita, writing for Information Security magazine, called him, along with his brother Mark Litchfield, the "World's Best Bug Hunters" in December 2003. Computer security Litchfield has found hundreds of vulnerabilities in many popular products, the most notable of them in products by Microsoft, Oracle and IBM. At the Blackhat Security Briefings in July 2002 he presented some exploit code to demonstrate a buffer overflow vulnerability he had discovered in Microsoft's SQL Server 2000. Six months later, on 25 January 2003, persons unknown used the code as the template for the SQL Slammer Worm. After several years in vulnerability research, Litchfield moved into Oracle forensics and has documented how to perform a forensic analysis of a compromised database server in a series of white papers – Oracle Forensics Parts 1 to 6. He is in the process of researching and developing an open source tool called the Forensic Examiner's Database Scalpel (F.E.D.S). Business and researcher Litchfield founded a company named Cerberus Information Security, which was acquired by @stake in July 2000. A year and a half later he founded Next Generation Security Software (NGS) with three colleagues from @stake, along with his brother Mark and his father. Under his leadership NGS won many top business and technical awards. These include the Queen's Award for Enterprise in 2007, presented at Buckingham Palace by the Queen; the International Trade Award for Innovation in 2008, presented at the House of Lords; and the SC Award for Best Security Company in Europe in 2008, after finishing as runner-up in 2007. As an individual, Litchfield won the Entrepreneur of South London award in 2007, among many other awards. He is the author of various software packages, and also of many technical documents on security issues. He is the author of the Oracle Hacker's Handbook and is a co-author of the Database Hacker's Handbook, the Shellcoder's Handbook and SQL Server Security. He was also a contributing author for Special Ops. David Litchfield is currently the Director of Information Security Assurance for Apple. References External links David Litchfield's White Papers Computer security specialists 1975 births Living people People educated at Glenalmond College British businesspeople British technology writers
16191273
https://en.wikipedia.org/wiki/Deborah%20Estrin
Deborah Estrin
Deborah Estrin (born December 6, 1959) is a Professor of Computer Science at Cornell Tech. She is co-founder of the non-profit Open mHealth and gave a TEDMED talk on small data in 2013. Estrin is known for her work on sensor networks, participatory sensing, mobile health, and small data. She is one of the most-referenced computer scientists of all time, with her work cited over 128,000 times according to Google Scholar. In 2009, Estrin was elected a member of the National Academy of Engineering for the pioneering design and application of heterogeneous wireless sensing systems for environmental monitoring. Career Estrin earned a PhD in Electrical Engineering and Computer Science from MIT in 1985, under the supervision of Jerry Saltzer. She has also received honorary degrees recognizing her work: a degree honoris causa from EPFL in 2008, and an honorary doctorate from Uppsala University, Sweden, in 2011. Estrin was a Professor of Computer Science at the University of Southern California between 1986 and 2001, and at UCLA between 2001 and 2013, where she was the founding director of the NSF-funded Center for Embedded Networked Sensing (CENS). In 2012, Cornell Tech announced Estrin as the first academic hire to the high-tech campus in New York City. At Cornell Tech, Estrin is the Robert V. Tishman '37 Professor of Computer Science. She is also the founder of the Health Tech hub and director of the Small Data Lab, and a member of the Connected Experiences Lab. Estrin's research has focused on using mobile devices and sensors to collect and analyze data, with applications to health and well-being. Her non-profit startup, Open mHealth, created open data sharing standards and tools that allow developers of health applications to store, process, and visualize data. Her research also explores immersive recommendation systems and the privacy implications of user modeling and data use. Estrin has received numerous academic and popular recognitions for her research. She was named one of Popular Science's "Brilliant 10" in 2003. In 2007, she was elected a Fellow of the American Academy of Arts and Sciences, and in 2009 was inducted into the National Academy of Engineering. She is a fellow of the ACM and the IEEE. In 2018 she was named a MacArthur Fellow for "Designing open-source platforms that leverage mobile devices and data to address socio-technological challenges such as personal health management". She is the daughter of the late Gerald Estrin, also a UCLA Computer Science professor, and of the late Thelma Estrin, a pioneering engineer and computer scientist also at UCLA. She is the sister of Judy Estrin and the wife of Ache Stokelman. She is known for being a great mentor to her students, and for her sense of humor. Awards 1987: National Science Foundation's Presidential Young Investigator Award 2007: Anita Borg Institute Women of Vision Award for Innovation 2008: Doctor Honoris Causa EPFL 2009: National Academy of Engineering 2011: Doctor Honoris Causa Uppsala University, Sweden 2017: IEEE Internet Award 2018: MacArthur Genius Grant Estrin is featured in the Notable Women in Computing cards. See also Henry Samueli School of Engineering and Applied Science References External links TEDMED talk on small data, 2013 Open mHealth Biography of Deborah Estrin Publications Video (6 min.) of Deborah Estrin being awarded the Anita Borg Institute's Women of Vision Award, 2007 Video (4 min.)
of Deborah Estrin's acceptance speech for the Anita Borg Institute's Women of Vision Award, 2007 Center for Embedded Networked Sensing (CENS) home page American computer scientists Internet pioneers Jewish American scientists 1959 births Living people American women computer scientists Women Internet pioneers Fellows of the Association for Computing Machinery Cornell University faculty Cornell Tech faculty UCLA Henry Samueli School of Engineering and Applied Science faculty University of Southern California faculty MIT School of Engineering alumni University of California, Berkeley alumni 20th-century American engineers 21st-century American engineers 20th-century American scientists 21st-century American scientists 20th-century American women scientists 21st-century American women scientists Fellow Members of the IEEE Members of the United States National Academy of Engineering Fellows of the American Academy of Arts and Sciences MacArthur Fellows American women academics 21st-century American Jews
461958
https://en.wikipedia.org/wiki/Megami%20Tensei
Megami Tensei
Megami Tensei, marketed internationally as Shin Megami Tensei (formerly Revelations), is a Japanese media franchise created by Aya Nishitani, Kouji "Cozy" Okada, Ginichiro Suzuki, and Kazunari Suzuki. Primarily developed and published by Atlus, and currently owned by Atlus (and Sega, after acquisition), the franchise consists of multiple subseries and covers multiple role-playing genres including tactical role-playing, action role-playing, and massively multiplayer online role-playing. The first two titles in the series were published by Namco (now Bandai Namco), but have been almost always published by Atlus in Japan and North America since the release of Shin Megami Tensei. For Europe, Atlus publishes the games through third-party companies. The series was originally based on Digital Devil Story, a science fiction novel series by Aya Nishitani. The series takes its name from the first book's subtitle. Most Megami Tensei titles are stand-alone entries with their own stories and characters. Recurring elements include plot themes, a story shaped by the player's choices, and the ability to fight using and often recruit creatures (demons, Personas) to aid the player in battle. Elements of philosophy, religion, occultism, and science fiction have all been incorporated into the series at different times. While not maintaining as high a profile as series such as Final Fantasy and Dragon Quest, it is highly popular in Japan and maintains a strong cult following in the West, finding critical and commercial success. The series has become well known for its artistic direction, challenging gameplay, and music, but raised controversy over its mature content, dark themes, and use of Christian religious imagery. Additional media include manga adaptations, anime films, and television series. In Japan, some games in the series do not use the "Megami Tensei" title, such as the Persona sub-series. However, English localizations have used the "Shin Megami Tensei" moniker since the release of Shin Megami Tensei: Nocturne in 2003, until being discarded in overseas territories beginning with Persona 4 Arena (2012). Most of the early games in the series were not localized due to them being on Nintendo platforms, which had strict guidelines about religious subjects and topics in the West at the time. Titles Games The first installment in the franchise, Digital Devil Story: Megami Tensei, was released on September 11, 1987. The following entries have nearly always been unrelated to each other except in carrying over thematic and gameplay elements. The Megami Tensei games, and the later Shin Megami Tensei titles form the core of the series, while other subseries such as Persona and Devil Summoner are spin-offs marketed as part of the franchise. There are also stand-alone spin-off titles. Main series Two entries have been released for the Famicom: Digital Devil Story: Megami Tensei in 1987, and Digital Devil Story: Megami Tensei II in 1990. The two titles are unrelated to each other in terms of story, and each introduced the basic gameplay and story mechanics that would come to define the series. Two entries were released for the Super Famicom: Shin Megami Tensei in 1992, and Shin Megami Tensei II in 1994. After a nine-year gap, Shin Megami Tensei III: Nocturne was released in 2003 for the PlayStation 2. Its Maniax Edition director's cut was released in Japan and North America in 2004, and in Europe in 2005. 
The numeral was dropped for its North American release, and its title changed to Shin Megami Tensei: Lucifer's Call in Europe. The next entry, Shin Megami Tensei: Strange Journey, was released for the Nintendo DS in 2009 in Japan and 2010 in North America. Shin Megami Tensei IV for the Nintendo 3DS was released in 2013 in Japan and North America, and a year later in Europe as a digital-only release. Another game set in the same universe, Shin Megami Tensei IV: Apocalypse, was released for the 3DS in February 2016 in Japan. Shin Megami Tensei V was released on the Nintendo Switch in 2021. In addition to the main series, there are Shin Megami Tensei spin-off games. The first is Shin Megami Tensei If..., released in the same year and on the same system as Shin Megami Tensei II. The second, Shin Megami Tensei: Nine, was released for the Xbox in 2002. Originally designed as a massively multiplayer online role-playing game (MMORPG), it was later split into a dual single-player and multiplayer package, and the single-player version released first. The online version was delayed and eventually cancelled as the developers could not manage the required online capacities using Xbox Live. A true MMORPG, Shin Megami Tensei: Imagine, was released for Microsoft Windows in 2007 in Japan, 2008 in North America, and 2009 in Europe. Western service was terminated in 2014 when Marvelous USA, the game's then-handlers, shut down their PC Online game department. Its Japanese service ended in May 2016. A smartphone game, Shin Megami Tensei: Liberation Dx2, was released in 2018. Persona Persona is the largest and most popular spin-off from the Megami Tensei series. The first entry in the series, Megami Ibunroku Persona (originally released overseas as Revelations: Persona), was released in 1996 in Japan and North America. The first Persona 2 title, Innocent Sin, was released in 1999 in Japan. The second game, Eternal Punishment, was released in 2000 in Japan and North America. Persona 3 was released in 2006 in Japan, 2007 in North America, and 2008 in Europe. Its sequel, Persona 4, was released in 2008 in Japan and North America, and in 2009 in Europe. A sixth entry in the series, Persona 5, was released in Japan on September 15, 2016, and was released in North America and Europe on April 4, 2017, to critical acclaim. In addition to the main Persona games are spin-offs, so far focused on Persona 3, 4 and 5: the canon spin-offs Persona Q: Shadow of the Labyrinth and Persona Q2: New Cinema Labyrinth, two fighting games Persona 4 Arena and its sequel Arena Ultimax as well as the crossover fighting game BlazBlue: Cross Tag Battle, and rhythm games Persona 4: Dancing All Night, Persona 3: Dancing in Moonlight, and Persona 5: Dancing in Starlight. While Persona 3 and 4 used the Shin Megami Tensei moniker in the West, it was dropped for the Persona 4 Arena duology and Persona 4 Golden as it would have made the titles too long to be practical. Devil Summoner The Devil Summoner subseries began in 1995 with the release of Shin Megami Tensei: Devil Summoner. It was followed by Devil Summoner: Soul Hackers in 1997, which itself is planned to be followed by Soul Hackers 2 in 2022. Two action role-playing prequels set in 1920s Tokyo were also developed, which revolve around demon summoner Raidou Kuzunoha: Raidou Kuzunoha vs. the Soulless Army was released in 2006, and Raidou Kuzunoha vs. King Abaddon was released in 2008. 
Other spin-offs Aside from Persona and Devil Summoner, there are other spin-off series covering multiple genres. After the release of Shin Megami Tensei II, Atlus began focusing work on building spin-offs and subseries that would form part of the Megami Tensei franchise. Shortly after Nocturnes release, a duology titled Digital Devil Saga (Digital Devil Saga: Avatar Tuner in Japan) was created based around similar systems to Nocturne, and was also intended as a more accessible gaming experience. Two tactical role-playing games have been developed by Atlus for the DS under the Devil Survivor moniker: the original Devil Survivor and Devil Survivor 2. Both have received expanded ports for the 3DS. Other subseries include Last Bible, a series aimed at a younger audience and using a pure fantasy setting; Devil Children, which was inspired by the popular Pokémon series; and Majin Tensei, a series of strategy games. Two notable stand-alone spin-offs are action spin-off Jack Bros. and Tokyo Mirage Sessions ♯FE, a crossover with Intelligent Systems' Fire Emblem series. Related media Several titles in the franchise have received anime and manga adaptations: Persona 3 received both a four-part theatrical adaptation (#1 Spring of Birth, #2 Midsummer Knight's Dream, #3 Falling Down, #4 Winter of Rebirth), and a spin-off series titled Persona: Trinity Soul. Persona 4 received two adaptations: Persona 4: The Animation, based on the original game, and Persona 4: The Golden Animation, based on its expanded PlayStation Vita port. A live-action television series based on the original Devil Summoner was broadcast between 1997 and 1998. Devil Survivor 2 also received an anime adaptation of the same name, while the Devil Children series received two anime adaptations. Multiple Shin Megami Tensei and Persona titles have received manga and CD drama adaptations. Action figures and merchandise related to Persona have also been produced. Common elements Despite most games in the series taking place in different continuities, they do share certain elements. One of its defining traits is it being set in a contemporary urban environment, specifically modern-day Tokyo. Post-apocalyptic elements are a recurring feature in settings and narratives. This choice was originally made to set the game apart from other fantasy-based gaming franchises of the time, as modern day Tokyo was rarely seen in games as opposed to versions of it from the past. The Persona series takes place exclusively within this setting, spanning a single continuity and mostly focusing on the exploits of a group of young people. Shin Megami Tensei II is one of the notable early exceptions to the series' common setting, as it is set in a science fiction-styled future despite still including fantasy elements. The Last Bible series also shifted to a full fantasy setting. Two more recent notable departures were Strange Journey, which shifted the focus to Antarctica to portray the threat on a global scale, and Shin Megami Tensei IV, which included a medieval-stage society existing separately from a modern-day Tokyo. The Devil Summoner games take the form of modern-day detective stories as opposed to post-apocalyptic settings. The series title translates as "Reincarnation of the Goddess": this has carried over into the current Shin Megami Tensei series, which has been officially translated as "True Goddess Metempsychosis". The word "Metempsychosis" refers to the cycle of reincarnation that ties into many Megami Tensei stories. 
The reborn goddess of the title has multiple meanings: it refers to a female character in each game that could be interpreted as the goddess, and is also representative of the drastic changes a location undergoes during a game. The concept of reincarnation was also included in narratives and gameplay mechanics to tie in with these themes. The series' overarching title has been truncated to "MegaTen" by series fans. Originating in Japan, the abbreviation has become a common term for the series among its fans. Gameplay The gameplay in the series has become notable for its high difficulty, along with several mechanics that have endured between games. A key element present since the first Megami Tensei is the ability to recruit demons to fight alongside the player in battle, along with the ability to fuse two different demons together to create a more powerful demon. Equivalents to these systems appear in the later Persona titles. The series' most recognizable battle system is the Press Turn system, first introduced in Nocturne. The Press Turn system is a turn-based battle mechanic governing both the player party and enemies, in which either side is rewarded an extra turn for striking an enemy's weakness. A Moon Phase System or equivalent, in which phases of the moon or changes in the weather affected the behavior of enemies, is also featured in multiple games. The layout of the first two Megami Tensei games was noticeably different from that of later games: Megami Tensei used a 3D first-person perspective, while Megami Tensei II used a combination of first-person 3D displays for battle and top-down 2D displays for navigation. The change was suggested by staff members who did not want players getting lost in a large 3D environment. The 2D/first-person viewpoint continued until Nocturne, which switched to a third-person perspective. This was done because of "3D sickness", a condition similar to car sickness associated with first-person shooters in Japan at the time: the developers wanted something for players to focus on. A first-person perspective was reintroduced in Strange Journey, and incorporated into IV's battles along with navigable 3D environments. Plots and themes Each title focuses on the extraordinary invading the ordinary world, though the two main Megami Tensei series focus on different things: Shin Megami Tensei focuses more on the main protagonist gaining the power needed to survive in a world ruled over by tyrannical deities, while Persona focuses on interpersonal relationships and the psychology of a group of people. The protagonist is generally male within the Shin Megami Tensei titles: while a female lead or the ability to choose a lead's gender is not out of the question, some staff feel that Shin Megami Tensei lead roles are better suited to a male character. Throughout its lifetime, the series has incorporated elements of Gnosticism, various world mythologies and religions including Christianity and Buddhism, early science fiction, Jungian psychology and archetypes, occultism, punk, and cyberpunk. The science fiction and fantasy elements are brought together and unified through the use of philosophical concepts, enabling a blending of concepts and aesthetics that might normally clash. The stories of the core Shin Megami Tensei titles frequently include fighting against a tyrannical God.
The method of story-telling in the series can involve traditional use of cutscenes and spoken dialogue (Persona, Digital Devil Saga), or a text-based minimalist approach that places emphasis on atmosphere (Nocturne). A tradition within the core Shin Megami Tensei series is to focus on a single playable character as opposed to a group. Alongside other recurring characters is Lucifer, the fallen angel who stands against God and is portrayed in multiple forms to represent his omnipotence. Since Megami Tensei II, the series has used a morality-based decision system, where the player's actions affect the outcome of the story. In Megami Tensei II, the alignments were first defined as "Law" (the forces of God) and "Chaos" (the army of Lucifer). In future games, an additional "Neutral" route was included where the player could reject both sides. Selected games have been thematically or otherwise linked to a particular alignment. Shin Megami Tensei II, due to events prior to the story, focuses on the "Law" alignment. For Nocturne, all the characters were roughly aligned with "Chaos", which was done both to bring variety to the series and allow the development team more creative freedom. Shin Megami Tensei IV: Apocalypse is restricted to a "Neutral" alignment while still having multiple endings. The three-tiered alignment was used in Strange Journey, and continued into Shin Megami Tensei IV. Development and history Origins The Megami Tensei series began life as a media expansion of the Digital Devil Story series, a set of science-fantasy novels written by Aya Nishitani during the 1980s. The series' creators were Kouji Okada (credited as Cozy Okada in English), Ginichiro Suzuki, and Ginichiro's son Kazunari. The first book in the Digital Devil Story series, , provided the title for the original game, while the game's story was based on both the first book and the third book . The game was developed at Atlus and published by Bandai Namco (then Namco). Although they wanted to incorporate as much of the original story as possible, the limited capabilities of the Famicom made this goal nearly impossible. The game proved popular in Japan, and effectively launched the Megami Tensei franchise, with its more ambitious direct sequel following in 1990. During the development of Shin Megami Tensei, which was driven by the concept of a Super Famicom game with the company's brand on it, the team slowly decided that they wanted to break the then-current gaming status quo using its aesthetic and content. Despite this attitude, the staff considered Shin Megami Tensei to be a remake of Megami Tensei II. In many of these earlier games, staff members at Atlus had cameos. The majority of the Megami Tensei series is developed by Atlus' R&D Department 1. Other developers have been involved with the series: these include Multimedia Intelligence Transfer (Last Bible series), Lancarse (Strange Journey), CAVE (Imagine) and Nex Entertainment (Nine), and Arc System Works (Persona 4 Arena). Most of the games up to 2003 were handled by Okada, but when he departed to form his own company Gaia, Kazuma Kaneko became the series' creative director. There are two main writers in the franchise: Shogo Isogai and Ryutaro Ito. Ito first worked on Megami Tensei II, joining the team after development to write the script, along with working with the script and being part of the debug team. Isogai's first work for the series was the script for Shin Megami Tensei II. The next entry If... 
was also written by Ito, and designed as a departure from the grand scale of previous games, instead being set within a cloistered school environment. His final work for the series was the first Devil Summoner. Isogai also worked on Shin Megami Tensei II and If..., and later worked on multiple Devil Summoner games, Nocturne and Strange Journey. The music for the first five main Megami Tensei titles was composed by Tsukasa Masuko. For Nocturne, Shoji Meguro, who had done work on earlier spin-off titles, was brought in. He later became well known for his work on the Persona titles. Art design The Shin Megami Tensei and Persona art styles have been defined by two different artists: Kazuma Kaneko and Shigenori Soejima. Kaneko had a long history with the series, having done some work on the original Megami Tensei titles. His first prominent work for the series was on Shin Megami Tensei, who worked on both the sprite art and promotional artwork for the game's characters and demons. He was also responsible for suggesting many of the game's darker features, defining the series' eventual identity. Before designing each demon, Kaneko looks up his chosen subject to get their mythological background, and uses that in their design. Many of Kaneko's demon designs were influenced by both creatures and deities from world mythology, and monsters from popular culture like Godzilla. Alongside working on Shin Megami Tensei II, If... and Nocturne, he also did character designs for the first three Persona games. Kaneko's style has been described as "cold [and] stoic", evolving into that state over time to keep the artwork as close as possible to the in-game render. He states that he mainly does line drawings for the artwork. He starts his artwork with pencil, and then scans them onto a computer so other artists can work on them digitally. Soejima's first work for the series was as part of the digital coloring team for the first Devil Summoner. He later had minor roles in artwork and character design in the first Persona and Soul Hackers. He later did the secondary characters for the Persona 2 duology, and was also part of the team checking over the PlayStation ports of the first three Shin Megami Tensei games, as well as minor work on Nocturne. Soejima was chosen as the lead designer for Persona 3 by Kaneko, as Kaneko wanted the younger staff members to gain experience. Persona 3 proved challenging for Soejima as he needed to refine his drawing style and take the expectations of series fans into account. He would go on to design for Persona 3/FES and Portable, Persona 4, and Persona 5. Soejima's drawing style is recognized as being lighter-toned than Kaneko's work on the Shin Megami Tensei games. Other designers have also worked on the series. For Nine, the developers wanted to have a new style to suit the game's original vision, so the characters were designed by animator Yasuomi Umetsu. Another designer for the series is Masayuki Doi, who had made a name for himself with the Trauma Center series; and designed the main characters for Shin Megami Tensei IV. Inspired in his work by Kaneko's designs, he created the main characters' clothing to be a blend of Japanese and western fashions while incorporating design elements from the Star Wars series. For the Devil Survivor games, Atlus were aiming to appeal to a wider audience and reinvigorate the Megami Tensei franchise, hiring Suzuhito Yasuda as character designer for this purpose. 
Some monsters in the second Devil Survivor were designed by manga artist Mohiro Kitoh. Localization For a long time, the Megami Tensei franchise was not exported to western territories despite there being a recognized market. The original reasons were the heavy religious themes and symbols used, which were considered taboo in western game markets, and Nintendo's strict content guidelines for overseas releases. Later, many of these early works were prevented from coming overseas due to their age, which would have put them at a disadvantage in the modern gaming market. Early entries on the PlayStation were also blocked by Sony of America's then-current approval policies. The first title in the franchise to be localized was Jack Bros.; the first role-playing game in the franchise to receive an overseas release was the first Persona game. This was done to give Atlus' North American branch a flagship RPG franchise that could compete with the likes of Final Fantasy, Suikoden and Breath of Fire. According to Okada, the naming of creatures and enemies was adjusted from the main series and original Japanese release of Persona to make it more acceptable for an overseas audience. Though it managed to establish the franchise overseas, the localization was a taxing task due to a small staff and the need to change multiple aspects to suit a North American audience, including removing references to Japanese culture and changing one character from Japanese to African-American. This, and other changes were fixed in the re-release on the PlayStation Portable. The first Persona 2 title, Innocent Sin, needed to be passed over due to shortage of manpower and the fact that development was focused on the second title, Eternal Punishment. Nocturne was the first release in the Shin Megami Tensei series to be released overseas. After the release of Nocturne, Atlus' overseas branches decided to add the Shin Megami Tensei moniker to future releases within the Megami Tensei franchise to help market the games. Despite many of the original games not bearing the moniker, it ultimately worked in Atlus' favor as, regardless of title differences, the games chosen for localization were all part of the larger Megami Tensei franchise, and using the core Shin Megami Tensei moniker kept all the titles under a single banner. Before this decision was made, the series was given the localized title Revelations, used for the first Persona and the first Last Bible. Later, changes to titles were made to make them less unwieldy, such as with Raidou Kuzunoha vs. the Soulless Army. Called Raidō Kuzunoha vs. The Super-Powered Army in Japan, the title was altered as it sounded "goofy" in English. By the time Strange Journey was in development, the franchise had a strong presence overseas, so the team created Strange Journey with localization in mind: the two aspects actively linked with this were the game's setting in Antarctica as opposed to modern-day Japan, and the fact that it was not given a numeral. Starting with Shin Megami Tensei IV, the company decided to actively promote the franchise overseas to North America, Europe and mainland Asia. After 2016, due to Atlus USA's merger with Sega of America, Sega took over North American publishing duties, although the Atlus brand remained intact. The overseas utilization of the Shin Megami Tensei moniker was eventually done away with for Persona 4 Arena (2012), a fighting game spin-off from the Persona series. 
The discarding of the SMT banner eventually extended to Megami Tensei RPG titles beginning with Persona 4 Golden (also 2012), despite the original release's usage of the moniker in its localization. In general, Atlus publishes Megami Tensei games in Japan and North America, but as they lack a European branch, they publish titles in the region through third-party companies such as Ghostlight and NIS America. Their latest partnership, after their deal with NIS America ended with the publication of Odin Sphere Leifthrasir, was with European publishing firm Deep Silver to publish multiple titles in the region, including Shin Megami Tensei IV: Apocalypse and Persona 5. Atlus has occasionally published titles digitally in Europe. Reception Prior to its popularity in the West, the series was already a major franchise in Japan, having sold over four million copies by 2003. Excluding the Persona series, the Megami Tensei series had sold approximately 7.2 million copies by October 2017. By October 2018, the Megami Tensei main series had shifted approximately 12.4 million packaged and digital copies (including downloads of free-to-play titles) of games worldwide. In addition, the Persona sub-series had sold 9.3 million copies, bringing total franchise numbers to 21.7 million units (including downloads of free-to-play titles). Excluding the Persona series, the Megami Tensei series had shipped 17.7 million copies by 2021, a figure that includes free-to-play titles. Japanese website 4Gamer.net referred to the series as one of Japan's biggest role-playing franchises. UGO Networks writer K. Thor Jensen cited the first Megami Tensei game as the first successful use of cyberpunk aesthetics in video games, saying that the series' mix of science fiction elements and the occult "create a truly unique fictional cyberpunk world". Nintendo Power has noted that Atlus always mixes "familiar gameplay" with surprising settings when creating games for the series, citing Persona, with its "modern-day horror stories" and "teams of Japanese high-school kids", as the perfect example. The editor also added that Strange Journey followed a similar system, calling it a "science-fiction makeover" of the series. In an article about the interaction of Japanese and Western gaming culture, 1UP.com mentioned the Shin Megami Tensei subseries alongside Nippon Ichi Software's Disgaea series. Kurt Kalata wrote: "[They] may not be big sellers, but they've garnered underground success and attracted thousands of obsessed fans." GameSpot writer Andrew Vestal referred to the series as the third biggest RPG series in Japan after Final Fantasy and Dragon Quest. IGN's Matt Coleman mentioned Nocturne in the article "A History of Console RPGs", referring to its content as "challenging stuff for a genre that used to be all about princess saving and evil cleansing". Digital Devil Story: Megami Tensei II and Shin Megami Tensei both appeared on Famitsu's 2006 "Top 100 Favorite Games of All Time" audience poll at No. 58 and No. 59, respectively. RPGFan's "Top 20 RPGs of the Past Decade" list was topped by the two Digital Devil Saga games, followed by Persona 3 in second place, while Persona 4 ranked fourth. Kalata, writing for Gamasutra, referred to Nocturne as one of the 20 essential RPGs for players of the genre. GameTrailers cited the Press Turn system as one of the best JRPG battle systems in existence, with particular reference to the version used in Shin Megami Tensei IV. Alongside its critical acclaim, the series has garnered controversy both in Japan and overseas.
Amongst the material cited are its demon negotiation mechanic, depictions of suicide and cannibalism, religious criticism, its use and mixture of Christian and occult imagery, political references, depictions of homosexuality, and its sometimes-strange demon designs. Specific examples have been cited by western journalists. The original release of Persona caused concern due to the title's religious implications. In 1UP.com's 2007 game awards, which ran in the March 2008 issue of Electronic Gaming Monthly, Persona 3 was given the "Most controversial game that created no controversy" award: the writers said "Rockstar's Hot Coffee sex scandal and Bully's boy-on-boy kissing's got nothing on this PS2 role-player's suicide-initiated battles or subplot involving student-teacher dating." GamesRadar included the series on its list of "Controversies Waiting to Happen", saying that the lack of public outcry was due to its niche status when compared to other series with similar content. Writing for 1UP.com in a later article, Kalata traced this use of controversial content back to the Digital Devil Story novels, which depicted violence and rape committed by demons, and said that "Such violence is not particularly rare in the land of Japanese animation, but it became even more disturbing in Megami Tensei II." Notes References External links Antitheism Atlus games Cyberpunk video games Role-playing video games Sega Games franchises Video game franchises Video game franchises introduced in 1987 Video games about demons Video games featuring parallel universes
1801308
https://en.wikipedia.org/wiki/GNU%20Data%20Language
GNU Data Language
The GNU Data Language (GDL) is a free alternative to IDL (Interactive Data Language), achieving full compatibility with IDL 7 and partial compatibility with IDL 8. Together with its library routines, GDL is developed to serve as a tool for data analysis and visualization in such disciplines as astronomy, geosciences, and medical imaging. GDL is licensed under the GPL. Other open-source numerical data analysis tools similar to GDL include Julia, Jupyter Notebook, GNU Octave, NCAR Command Language (NCL), Perl Data Language (PDL), R, Scilab, SciPy, and Yorick. GDL as a language is dynamically typed, vectorized, and has object-oriented programming capabilities. GDL library routines handle numerical calculations (e.g. FFT), data visualisation, signal/image processing, interaction with the host OS, and data input/output. GDL supports several data formats, such as NetCDF, HDF (v4 & v5), GRIB, PNG, TIFF, and DICOM. Graphical output is handled by X11, PostScript, SVG, or z-buffer terminals, the last of which allows output graphics (plots) to be saved in raster graphics formats. GDL features integrated debugging facilities, such as breakpoints. GDL has a Python bridge (Python code can be called from GDL; GDL can be compiled as a Python module). GDL uses the Eigen numerical C++ library (similar to Intel MKL) to offer high computing performance on multi-core processors. Packaged versions of GDL are available for several Linux and BSD flavours as well as Mac OS X. The source code compiles on Microsoft Windows and other UNIX systems, including Solaris. GDL is not an official GNU package. See also Interpreter (computing) IDL (programming language) References External links Running the GNU Data Language on coLinux Linux packages: ArchLinux, Debian, Fedora, Gentoo, Ubuntu, BSD/OSX ports: Fink, FreeBSD, Macports Free science software Free software programmed in C++ Data language Numerical programming languages Software that uses wxWidgets
43597591
https://en.wikipedia.org/wiki/List%20of%20proprietary%20source-available%20software
List of proprietary source-available software
This is a list of proprietary source-available software, software whose source code is available but which is not classified as free software or open-source software. In some cases, this type of software is originally sold and released without the source code, and the source code becomes available later. Sometimes, the source code is released under a liberal software license at its end of life as abandonware. This type of software can also have its source code leaked or reverse engineered. While such software often later becomes open-source software or passes into the public domain, other constructs and software licenses exist, for instance shared source or Creative Commons licenses. If the source code is given out without a specified license or public-domain waiver, it must legally be considered still proprietary because of the Berne Convention. For a list of video game software with available source code, see List of commercial video games with available source code. For formerly proprietary software which is now free software, see List of formerly proprietary software. See also Community source List of commercial video games with available source code List of formerly proprietary software Open-core model Source-available software References Free software lists and comparisons
9067357
https://en.wikipedia.org/wiki/Marshall%20Kirk%20McKusick
Marshall Kirk McKusick
Marshall Kirk McKusick (born January 19, 1954) is a computer scientist, known for his extensive work on BSD UNIX, from the 1980s to FreeBSD in the present day. He was president of the USENIX Association from 1990 to 1992 and again from 2002 to 2004, and still serves on the board. He is on the editorial board of ACM Queue Magazine. He is known to friends and colleagues as "Kirk". McKusick received his B.S. in electrical engineering from Cornell University, and two M.S. degrees (in 1979 and 1980 respectively) and a Ph.D. in computer science from the University of California, Berkeley in 1984. McKusick is openly gay and lives in California with Eric Allman, who has been his domestic partner since graduate school and whom he married in October, 2013. BSD McKusick started with BSD by virtue of the fact that he shared an office at Berkeley with Bill Joy, who spearheaded the beginnings of the BSD system. Some of his largest contributions to BSD have been to the file system. He helped to design the original Berkeley Fast File System (FFS). In the late 1990s, he implemented soft updates, an alternative approach to maintaining disk integrity after a crash or power outage, in FFS, and a revised version of Unix File System (UFS) known as "UFS2". The magic number used in the UFS2 super block structure reflects McKusick's birth date: #define FS_UFS2_MAGIC 0x19540119 (as found in /usr/include/ufs/ffs/fs.h on FreeBSD systems). It is included as an easter egg. He was also primarily responsible for creating the complementary features of filesystem snapshots and background fsck (file system check and repair), which both integrate closely with soft updates. After the filesystem snapshot, the filesystem can be brought up immediately after a power outage, and fsck can run as a background process. The Design and Implementation series of books are regarded as very high-quality works in computer science. They have been influential in the development of the BSD descendants. The BSD Daemon, often used to identify BSD, is copyrighted by Marshall Kirk McKusick. Bibliography S. Leffler, M. McKusick, M. Karels, J. Quarterman: The Design and Implementation of the 4.3BSD UNIX Operating System, Addison-Wesley, January 1989, . German translation published June 1990, . Japanese translation published June 1991, (out of print). S. Leffler, M. McKusick: The Design and Implementation of the 4.3BSD UNIX Operating System Answer Book, Addison-Wesley, April 1991, . Japanese translation published January 1992, M. McKusick, K. Bostic, M. Karels, J. Quarterman: The Design and Implementation of the 4.4BSD Operating System, Addison-Wesley, April 1996, . French translation published 1997, International Thomson Publishing, Paris, France, . McKusick, 1999 Twenty Years of Berkeley Unix (from the book Open Sources: Voices from the Open Source Revolution ) M. McKusick, George Neville-Neil: The Design and Implementation of the FreeBSD Operating System, Addison-Wesley, July 2004, M. McKusick, George Neville-Neil, R. Watson: The Design and Implementation of the FreeBSD Operating System, Second Edition, Addison-Wesley, September 2014, References External links McKusick's home page 1954 births BSD people American computer scientists Cornell University College of Engineering alumni University of California, Berkeley alumni Free software programmers FreeBSD people LGBT people from Delaware Living people People from Wilmington, Delaware
21439992
https://en.wikipedia.org/wiki/John%20E.%20Dwyer%20Technology%20Academy
John E. Dwyer Technology Academy
The John E. Dwyer Technology Academy is a four-year comprehensive public high school serving students in ninth through twelfth grades in Elizabeth, in Union County, New Jersey, United States, as part of the Elizabeth Public Schools. The Technology Academy shares one large building with the Admiral William Halsey Leadership Academy, the Peter B. Gold Administration Building, and the Thomas Dunn Sports Center, which together form the Main Complex most commonly known as "The Main" to students and teachers. The Main complex holds more students, teachers, and administrators than the other high school in the city. It is known as the heart of all Elizabeth Academies. The school was named in honor of John E. Dwyer, an Elizabeth educator for many years who served as a teacher, guidance counselor, Vice Principal, Principal and as Superintendent of Schools. The school has been accredited by the Middle States Association of Colleges and Schools Commission on Elementary and Secondary Schools since 2013. As of the 2019–20 school year, the school had an enrollment of 1,313 students and 104.5 classroom teachers (on an FTE basis), for a student–teacher ratio of 12.6:1. There were 930 students (70.8% of enrollment) eligible for free lunch and 127 (9.7% of students) eligible for reduced-cost lunch. Awards, recognition and rankings The school was the 329th-ranked public high school in New Jersey out of 339 schools statewide in New Jersey Monthly magazine's September 2014 cover story on the state's "Top Public High Schools", using a new ranking methodology. The school had been ranked 306th in the state of 328 schools in 2012. This school has several national honor societies including: National Honor Society, National English Honor Society, Mu Alpha Theta Math Honor Society, National Art Honor Society, and Rho Kappa Social Studies Honor Society. Curriculum Students enrolled in the John E. Dwyer Technology Academy, in addition to completing a rigorous college preparatory program, will also be given the opportunity to explore careers in areas such as computer science, architectural design, and urban planning. The school program will also guide the participating students in pioneering the ways and means by which to integrate environmental technology into the world we live in. Honors and AP classes will be offered to all interested students. All John E. Dwyer Academy students will be expected to comply with the rules, regulations, and policies of the Academy and the school district. Each John E. Dwyer Academy student will be given the opportunity to participate in one of the two strands of study offered: Students enrolling in the Industrial Technology Strand will participate in school and community activities that will give them a strong knowledge base regarding various fields that are relying more and more on technology and innovation every day. Courses offered will include classes in process technology, mechanical drafting, and architectural drafting Students enrolling in the Information Technology Strand will participate in classes and real-world activities dealing with such fields as robotics, computer science, and Cisco, Microsoft and other language and software systems. Participants in this strand will also learn about hardware design and computer infrastructure design and implementation. 
They will master the latest in computer management systems and data management operations. The Elizabeth Public Schools is partnering with The National Academy Foundation and is currently engaged in a Year of Planning - Academy Development Process to establish the Academy of Information Technology as a career academy at the John E. Dwyer Technology Academy in September 2010. The Academy of Information Technology prepares students for career opportunities in programming, database administration, web design and administration, digital networks, and other areas in the expanding digital workplace. Further, the district will be submitting a proposal to The National Academy Foundation to implement a new Academy theme in engineering in September 2011. The Academy of Engineering educates high school students in the principles of engineering, and provides content in the fields of electronics, biotech, aerospace, civil engineering, and architecture. Graduation requirements (Class of 2010 / Class of 2007/2008/2009): Language Arts – 4 years / 4 years. Social Studies – 4 years (World History, U.S. History I, U.S. History II, 1 elective) / 3 years (World History, U.S. History I, U.S. History II). Math – 4 years (Algebra I, Geometry, Algebra II, higher math) / 3 years (Algebra I, Geometry, Algebra II or higher). Science – 4 years (minimum of one science with a lab) / 3 years. World Language – 3 years of World Languages / 1 year of World Languages. Literature – 2 years. Fine/Performing/Practical Arts – 2 years / 2 years. Phys Ed & Health – 16 credits / 4 years (one year for each year in attendance). Personal Finance – 1 credit. Career and Consumer, Family, Life Skills or Vocational-Technical Education – 5 credits. Community Service – 60 hours, 25 hours of which are due during senior year. Latin (Gifted and Talented only) – 10 credits. Credits – 160 / 130. Testing – pass the HSPA/SRA (both classes). College requirements: All students who graduate from EHS will have taken the appropriate coursework to meet college entrance requirements. Four-year colleges require students to take 16 Carnegie Units in high school. A unit is considered a year of study. Units are taken from the major subject areas of Language Arts, Social Studies, Mathematics, Science and World Language: 4 years of English = 4 units; 3 years of History = 3 units; 3 years of Mathematics (Algebra I, Geometry, Algebra II) = 3 units; 3 years of Science = 3 units; 2 years of World Language = 2 units; total = 15 units. The additional unit is obtained by taking elective courses in Language Arts, Social Studies, Mathematics, Science or World Language. EHS students are encouraged to participate in extra-curricular activities and to take college entrance exams and Advanced Placement tests to strengthen their college applications. Connection to the Main Complex Some extracurricular activities and sports teams are found in Dwyer Academy. The building functions as a central hub, as students from the other Elizabeth Academies and Elizabeth High School come here during after-school hours. The Main Complex also holds Elizabeth High School's swimming pool, where the swim team practices and meets are held. The Main Complex campus is also famous in the student body for having a unique courtyard, being the only campus among the Elizabeth Academies to have one accessible to its students. Athletics The John E. Dwyer Technology Academy does not have its own athletic teams; instead, students participate in sports that represent the whole city of Elizabeth, not just one academy.
Students from all the Elizabeth Academies including Elizabeth High School compete together on one consolidated sports team against other schools outside the city. References External links John E. Dwyer Technology Academy Website Elizabeth Public Schools Data for the Elizabeth Public Schools, National Center for Education Statistics 2009 establishments in New Jersey Education in Elizabeth, New Jersey Educational institutions established in 2009 Public high schools in Union County, New Jersey
867112
https://en.wikipedia.org/wiki/HP%20OpenView
HP OpenView
HP OpenView is the former name for a Hewlett-Packard product family that consisted of network and systems management products. In 2007, HP OpenView was rebranded as HP BTO (Business Technology Optimization) Software when it became part of the HP Software Division. The products are now available as various HP products, marketed through the HP Software Division. HP OpenView software provided large-scale system and network management of an organization's IT infrastructure. It included optional modules from HP as well as third-party management software, which connected within a common framework and communicated with one another. History The foundational OpenView product was Network Node Manager (NNM), network monitoring software based on SNMP. NNM was used to manage networks and could be used in conjunction with other management software, such as CiscoWorks. In April 2004, HP bought Novadigm and its Radia suite. In December 2005, it acquired Peregrine Systems with its IT asset and service management software and integrated it into HP OpenView. In November 2006, HP completed its purchase of Mercury Interactive Corp., subsequently integrated Mercury application and software life-cycle management products (QualityCenter, LoadRunner/PerformanceCenter, WinRunner/QTP) into the HP Software & Solutions portfolio. In September 2007, HP acquired Opsware. In early 2007, alongside the integration of Mercury Interactive, Peregrine Systems and Opsware products, HP OpenView products were rebranded under the HP Software Division business, and the OpenView and Mercury names were phased out. The HP OpenCall name continues as part of the HP Software Division business. Products HP OpenView Network Node Manager (OV NNM) HP Operations Manager (OM) — monitor systems and applications using agents HP OMW - Operations Manager (Windows) (formerly OVOW, formerly VantagePoint Operations for Windows) HP OMU - Operations Manager (Unix) (formerly OVOU, formerly VantagePoint Operations for Unix, sometimes referenced as ITO, formerly Operation Center " OPC") HP OpenView ServiceManager (formerly Peregrine ServiceCenter)- now HP Software Service Manager HP OpenView AssetManager (formerly Peregrine AssetCenter) HP OpenView Connect-It - A data and process integration tool HP OpenView Service Desk (OVSD) - migration to HP Software Service Manager is optional HP OpenView Internet Services (OVIS) - Discontinued HP OpenView Service Navigator (integrated in HP Operations Manager for Unix since 1996) HP OpenView Transaction Analyzer (OVTA) - Discontinued HP OpenView SOA Manager HP OpenView Select Identity (OVSI) - Discontinued HP OpenView Select Access (OVSA) - Discontinued HP OpenView Select Audit - Discontinued HP OpenView Select Federation - Discontinued HP OpenView Service Information Portal - Discontinued HP Software Universal CMDB (UCMDB) Enterprise Systems Management (ESM) Performance HP OpenView Performance Agent (OVPA) HP OpenView Performance Insight (OVPI) HP OpenView Performance Manager (OVPM) HP OpenView Reporter (OVR) 2.0, 3.0, 3.5, 3.6 and 3.80 HP OpenView GlancePlus Fault/Resource Status Monitoring HP OpenView TeMIP — A Telecoms OSS service (formerly just known as TeMIP when owned by Compaq-Digital Equipment Corporation) Provisioning/fulfillment HP OpenView Service Activator (OVSA) - rebranded as HP Service Activator (HPSA) Storage HP Software Storage Essentials HP OpenView Storage Data Protector HP OpenView Storage Mirroring HP OpenView Storage Mirroring Exchange Failover Utility HP OpenView Dashboard (formerly Service 
Information Portal (SIP) — provides a web portal for HP OpenView management products) HP OpenView Storage Area Manager (OV SAM) - Discontinued HP OpenView Smart Plug-ins (SPIs) — These are add-ons products for OpenView Operations for additional management capabilities: HP OpenView SPI for BEA Tuxedo HP OpenView SPI for BEA WebLogic HP OpenView SPI for BEA WebLogic Integration HP OpenView SPI for BMC CONTROL-M Job Scheduling / Workload Automation Solution HP OpenView SPI for BlackBerry Enterprise Server (BES) HP OpenView SPI for Citrix HP OpenView SPI for Databases (Oracle, Microsoft SQL Server, Sybase, and Informix) HP OpenView SPI for Documentum HP OpenView SPI for IBM DB2 HP OpenView SPI for IBM WebSphere Application Server HP OpenView SPI for Lotus Domino/Notes HP OpenView SPI for Microsoft Exchange HP OpenView SPI for Microsoft Windows HP OpenView SPI for MySQL Databases HP OpenView SPI for OpenVMS HP OpenView SPI for Oracle Application Server HP OpenView SPI for PeopleSoft HP OpenView SPI for Remedy ARS Integration HP OpenView SPI for SAP HP OpenView SPI for Dollar Universe HP OpenView SPI for Siebel HP OpenView SPI for HP Service Manager HP OpenView SPI for Storage Area Manager HP OpenView SPI for Terminal Server HP OpenView SPI for TIBCO HP OpenView SPI for UNIX OS HP OpenView SPI for Web Servers HP OpenView SPI for VMware HP OpenView SPI for WebSPOC ITSM Integration Network Node Manager SPIs Network Node Manager for Advanced Routing (for NNM version 7 only) Network Node Manager SPI for Quality Assurance (for NNMi version 9 and later) Network Node Manager SPI for Traffic (for NNMi version 9 and later) Network Node Manager SPI for IP Telephony Network Node Manager SPI for LAN/WAN Edge (for NNM version 7 only) Network Node Manager SPI for MPLS VPN (for NNMi version 9 and later) Network Node Manager SPI for IP Multicast (for NNMi version 9 and later) HP OpenView Configuration Management HP Configuration Management software, formerly Radia from Novadigm, is now part of HP Client Automation Software which is now owned by Accelerite, a product division of Persistent Systems. The following products were part of the OpenView product set: HP OpenView Configuration Management Application Self-Service Manager HP OpenView Configuration Management Application Manager HP OpenView Configuration Management Inventory Manager HP OpenView Configuration Management OS Manager HP OpenView Configuration Management Patch Manager HP OpenView Configuration Management Application Usage Manager HP OpenView Client Configuration Manager IUM HP OpenView Internet Usage Manager (IUM) provides convergent mediation. It collects, processes and correlates usage records and events from network elements and service applications across voice and data services for prepaid, post-paid and real-time charging in wireless, wireline and cable networks. This usage data can be passed on to business support systems for usage-based billing systems, capacity management and analysis of subscriber behavior. 
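To make the mediation idea above concrete, the following is a minimal, hypothetical sketch in Python of how usage records from different network sources might be correlated per subscriber before being handed to a billing system; the record fields, source names and aggregation rule are illustrative assumptions, not the actual IUM data model.

from collections import defaultdict

# Hypothetical raw usage records collected from different network elements.
raw_records = [
    {"source": "voice_switch", "subscriber": "alice", "units": 120},    # call seconds
    {"source": "data_gateway", "subscriber": "alice", "units": 5000},   # kilobytes
    {"source": "data_gateway", "subscriber": "bob",   "units": 750},
]

def correlate(records):
    """Group usage by subscriber, then by originating source."""
    usage = defaultdict(lambda: defaultdict(int))
    for record in records:
        usage[record["subscriber"]][record["source"]] += record["units"]
    return usage

# In a real deployment the correlated summary would be exported to billing or
# capacity-management systems rather than printed.
for subscriber, per_source in correlate(raw_records).items():
    print(subscriber, dict(per_source))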
HP Software Business Availability Center (BAC) Formerly products from Mercury Interactive and now integrated into the HP portfolio: HP Software Business Availability Center / Business Process Monitor (BAC/BPM) HP Software Business Availability Center / Real User Management (BAC/RUM) HP Software Business Availability Center / System Availability Management (BAC/SAM) HP Software Business Availability Center / Diagnostics (BAC/Diags) HP Software Business Availability Center / Universal CMDB (BAC/uCMDB) HP Software Business Availability Center / Service Level Management (BAC/SLM) HP Software Business Availability Center / Problem Isolation (BAC/PI) HP Software Business Availability Center / Business Process Insight (BAC/BPI) HP Software SiteScope (SiS) HP Software Data Center Automation (DCA) Formerly products from Opsware and now integrated into the HP portfolio: HP Software Server Automation (SA) (formerly Opsware Server Automation System (SAS)) HP Software Storage Essentials (SE) (HP existing product Storage Essentials has been merged with former Opsware Application Storage Automation System (ASAS)) HP Software Operations Orchestration (OO) (formerly Opsware Process Automation System (PAS) (formerly iConclude Orchestrator)) HP Software Network Automation (NA) (formerly Opsware Network Automation System NAS)) User Groups The OpenView Forum International was an OpenView user group. It became Vivit (in early 2007) and organized the yearly HP Software Forum. In 2007, HP took over responsibility for the HP Software Forum and renamed it HP Software Universe. While it is not an actual "user group", the ITRC is an online forum about former OpenView and HP Software products. A new user group, the HP Software Solutions Community, officially launched publicly in April 2010 and includes all former software-related communities. References External links HP Software Solutions Community HP Software Universe OpenView solution list HP OpenView Dashboard Forum HP Software User Group Partners Building on HP Software Solutions A Collection of White Papers for HP Openview Service Desk and other products Network management System administration OpenView
4359898
https://en.wikipedia.org/wiki/UDP-based%20Data%20Transfer%20Protocol
UDP-based Data Transfer Protocol
UDP-based Data Transfer Protocol (UDT) is a high-performance data transfer protocol designed for transferring large volumetric datasets over high-speed wide area networks. Such settings are typically disadvantageous for the more common TCP protocol. Initial versions were developed and tested on very high-speed networks (1 Gbit/s, 10 Gbit/s, etc.); however, recent versions of the protocol have been updated to support the commodity Internet as well. For example, the protocol now supports rendezvous connection setup, which is a desirable feature for traversing NAT firewalls using UDP. UDT has an open source implementation which can be found on SourceForge. It is one of the most popular solutions for supporting high-speed data transfer and is part of many research projects and commercial products. Background UDT was developed by Yunhong Gu during his PhD studies at the National Center for Data Mining (NCDM) of the University of Illinois at Chicago, in the laboratory of Dr. Robert Grossman. Gu has continued to maintain and improve the protocol since graduating. The UDT project started in 2001, when inexpensive optical networks became popular and triggered a wider awareness of TCP efficiency problems over high-speed wide area networks. The first version of UDT, also known as SABUL (Simple Available Bandwidth Utility Library), was designed to support bulk data transfer for scientific data movement over private networks. SABUL used UDP for data transfer and a separate TCP connection for control messages. In October 2003, the NCDM achieved a 6.8 gigabits per second transfer from Chicago, United States to Amsterdam, Netherlands. During the 30-minute test they transmitted approximately 1.4 terabytes of data. SABUL was later renamed UDT starting with version 2.0, which was released in 2004. UDT2 removed the TCP control connection of SABUL and used UDP for both data and control information. UDT2 also introduced a new congestion control algorithm that allowed the protocol to run "fairly and friendly" with concurrent UDT and TCP flows. UDT3 (2006) extended the usage of the protocol to the commodity Internet. Congestion control was tuned to support relatively low bandwidth as well. UDT3 also significantly reduced the use of system resources (CPU and memory). Additionally, UDT3 allows users to easily define and install their own congestion control algorithms. UDT4 (2007) introduced several new features to better support high concurrency and firewall traversal. UDT4 allowed multiple UDT connections to bind to the same UDP port, and it also supported rendezvous connection setup for easier UDP hole punching. A fifth version of the protocol is currently in the planning stage. Possible features include the ability to support multiple independent sessions over a single connection. Moreover, since the absence of a security feature for UDT has been an issue with its initial implementation in a commercial environment, Bernardo (2011) has developed a security architecture for UDT as part of his PhD studies. This architecture, however, is undergoing enhancement to support UDT in various network environments (i.e., optical networks). Protocol architecture UDT is built on top of the User Datagram Protocol (UDP), adding congestion control and reliability control mechanisms. UDT is an application-level, connection-oriented, duplex protocol that supports both reliable data streaming and partially reliable messaging.
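As a rough illustration of the architecture just described — reliability and rate control layered over plain UDP datagrams — the following Python sketch keeps sequence-numbered packets until they are acknowledged and applies an AIMD-style rate update whose random decrease factor mirrors the congestion control described in the sections below. It is a conceptual sketch only, not the actual UDT implementation (which is a C++ library); the packet header layout and the constants are assumptions made for the example.

import random
import socket
import struct

HEADER = struct.Struct("!IB")   # assumed header: 32-bit sequence number, 1-byte type
DATA, ACK = 0, 1

class MiniReliableSender:
    """Toy sender: reliable, rate-limited data transfer over a UDP socket."""

    def __init__(self, sock, peer, rate_pps=100.0):
        self.sock, self.peer = sock, peer
        self.next_seq = 0
        self.unacked = {}            # seq -> packet, kept until acknowledged
        self.rate_pps = rate_pps     # sending rate in packets per second

    def send(self, payload):
        packet = HEADER.pack(self.next_seq, DATA) + payload
        self.unacked[self.next_seq] = packet
        self.next_seq += 1
        self.sock.sendto(packet, self.peer)

    def handle_ack(self, seq):
        self.unacked.pop(seq, None)  # delivery confirmed, stop retransmitting
        # Additive increase: grow the sending rate by a fixed step.
        self.rate_pps += 1.0

    def handle_loss(self):
        # Multiplicative decrease by a random factor between 1/8 and 1/2,
        # echoing the decrease rule described for UDT below.
        self.rate_pps *= 1.0 - random.uniform(0.125, 0.5)

    def retransmit_unacked(self):
        for packet in self.unacked.values():
            self.sock.sendto(packet, self.peer)

# Example wiring; a full receiver that returns ACKs and loss reports is omitted.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender = MiniReliableSender(sock, ("127.0.0.1", 9000))
sender.send(b"hello")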
Acknowledging UDT uses periodic acknowledgments (ACK) to confirm packet delivery, while negative ACKs (loss reports) are used to report packet loss. Periodic ACKs help to reduce control traffic on the reverse path when the data transfer speed is high, because in these situations the number of ACKs is proportional to time rather than to the number of data packets. AIMD with decreasing increase UDT uses an AIMD (additive-increase/multiplicative-decrease) style congestion control algorithm. The increase parameter is inversely proportional to the available bandwidth (estimated using the packet-pair technique), thus UDT can probe high bandwidth rapidly and can slow down for better stability when it approaches maximum bandwidth. The decrease factor is a random number between 1/8 and 1/2. This helps reduce the negative impact of loss synchronization. In UDT, packet transmission is limited by both rate control and window control. The sending rate is updated by the AIMD algorithm described above. The congestion window, as a secondary control mechanism, is set according to the data arrival rate on the receiver side. Configurable congestion control The UDT implementation exposes a set of variables related to congestion control in a C++ class and allows users to define a set of callback functions to manipulate these variables. Thus, users can redefine the control algorithm by overriding some or all of these callback functions. Most TCP control algorithms can be implemented using this feature with fewer than 100 lines of code. Rendezvous connection setup Besides the traditional client/server connection setup (also known as caller/listener, where a listener waits for a connection and potentially accepts multiple connecting callers), UDT also supports a rendezvous connection setup mode. In this mode both sides listen on their port and connect to the peer simultaneously, that is, they both connect to one another. Therefore, both parties must use the same port for the connection, and both parties are role-equivalent (in contrast to the listener/caller roles of the traditional setup). Rendezvous is widely used for firewall traversal when both peers are behind firewalls. Use scenarios UDT is widely used in high-performance computing to support high-speed data transfer over optical networks. For example, GridFTP, a popular data transfer tool in grid computing, has UDT available as a data transfer protocol. Over the commodity Internet, UDT has been used in many commercial products for fast file transfer over wide area networks. Because UDT is purely based on UDP, it has also been used in many situations where TCP is at a disadvantage to UDP. These scenarios include peer-to-peer applications, video and audio communication, and many others. Evaluation of feasible security mechanisms UDT is considered a state-of-the-art protocol, addressing infrastructure requirements for transmitting data in high-speed networks. Its development, however, creates new vulnerabilities because, like many other protocols, it relies solely on the existing security mechanisms for current protocols such as the Transmission Control Protocol (TCP) and UDP. Research conducted by Dr. Danilo Valeros Bernardo of the University of Technology Sydney, a member of the Australian Technology Network, focusing on practical experiments on UDT using their proposed security mechanisms and exploring the use of other existing security mechanisms used on TCP/UDP for UDT, gained interesting reviews in various network and security scientific communities.
To analyze the security mechanisms, they carry out a formal proof of correctness to assist them in determining their applicability by using protocol composition logic (PCL). This approach is modular, comprising a separate proof of each protocol section and providing insight into the network environment in which each section can be reliably employed. Moreover, the proof holds for a variety of failure recovery strategies and other implementation and configuration options. They derive their technique from the PCL on TLS and Kerberos in the literature. They work on developing and validating its security architecture by using rewrite systems and automata. The result of their work, which is first in the literature, is a more robust theoretical and practical representation of a security architecture of UDT, viable to work with other high-speed network protocols. Derivative works UDT project has been a base for SRT project, which uses the transmission reliability for live video streaming over public internet. Awards The UDT team has won the prestigious Bandwidth Challenge three times during the annual ACM/IEEE Supercomputing Conference, the world's premier conference for high-performance computing, networking, storage, and analysis. At SC06 (Tampa, FL), the team transferred an astronomy dataset at 8 Gbit/s disk-to-disk from Chicago, IL to Tampa, FL using UDT. At SC08 (Austin, TX), the team demonstrated the use of UDT in a complex high-speed data transfer involving various distributed applications over a 120-node system, across four data centers in Baltimore, Chicago (2), and San Diego. At SC09 (Portland, OR), a collaborative team from NCDM, Naval Research Lab, and iCAIR showcased UDT-powered wide area data intensive cloud computing applications. See also Tsunami UDP Protocol Fast and Secure Protocol (FASP) QUIC Literature Bernardo, D.V and Hoang, D. B; "Empirical Survey: Experimentation and Implementations of High Speed Protocol Data Transfer for GRID " Proceedings of IEEE 25th International Conference on Advance Information Networking and Application Workshops, March 2011, Singapore. Yunhong Gu and Robert L. Grossman, UDT: UDP-based Data Transfer for High-Speed Wide Area Networks, Computer Networks (Elsevier). Volume 51, Issue 7. May 2007. References External links UDT Project on SourceForge UDT.Net wrapper around the native UDT protocol library UdtSharp: .NET library written in 100% managed code (C#) IETF Draft from October 12, 2010 (expired) run HTTP over UDP in Node.js with UDT Application layer protocols Free software programmed in C++ Internet protocols Software using the BSD license
4502286
https://en.wikipedia.org/wiki/ISYS%20Search%20Software
ISYS Search Software
ISYS Search Software was an Australian supplier of enterprise search software for information access, management, and re-use. The company marketed and sold a suite of embedded search, mobile access and information management infrastructure technologies. History Established in 1988 by Ian Davies, ISYS previously marketed and sold enterprise search applications. Davies developed prototype text-retrieval software that would be suitable for use in large databases. The Australian market for the prototype software developed, and in 1991 PC Week Magazine wrote a favourable review that assisted in attempts to break into the US market. ISYS products competed with Verity Ultraseek and Google search services, while its infrastructure and embedded search applications competed with Autonomy and FAST Search & Transfer (now a subsidiary of Microsoft). In 2007, ISYS entered the Linux marketplace with the release of the ISYS:sdk and ISYS:web server for Linux platforms, the company's first foray into non-Windows environments. In 2009, ISYS released several new applications and a new suite for information access. Acquisition In March 2012, ISYS Search Software Pty Ltd was acquired by Lexmark International and integrated into its Perceptive Software division. In July 2017, Hyland Software acquired the Enterprise Software division of Lexmark and integrated Enterprise Search into its product suite. Functionality ISYS Search is capable of searching multiple disparate data sources, including Microsoft Office, WordPerfect, OpenOffice, HTML files, ZIP files, all major e-mail products, all SQL data sources, SharePoint, Lotus Notes, and Lotus Domino. Some of ISYS's content-mining capabilities include automatic categorisation, entity extraction, parametric search, hit-highlighting and navigation, relevance ranking, and multiple query methods. References External links Internet search engines Technology companies of Australia Lexmark
16423749
https://en.wikipedia.org/wiki/17314%20Aisakos
17314 Aisakos
17314 Aisakos is a Jupiter trojan from the Trojan camp, approximately 36 kilometers in diameter. It was discovered at the Palomar Observatory during the first Palomar–Leiden Trojan survey in 1971. The dark Jovian asteroid has a rotation period of 9.7 hours. It was named after the Trojan prince Aesacus from Greek mythology. Discovery Aisakos was discovered on 25 March 1971, by Dutch astronomer couple Ingrid and Cornelis van Houten at Leiden, on photographic plates taken by Dutch–American astronomer Tom Gehrels at Palomar Observatory in the Palomar Mountain Range, southeast of Los Angeles. The body's observation arc begins with a precovery taken at Palomar in November 1954, more than 16 years prior to its official discovery observation. Palomar–Leiden Trojan survey The survey designation "T-1" stands for the first Palomar–Leiden Trojan survey, named after the fruitful collaboration of the Palomar and Leiden observatories in the 1960s and 1970s. Gehrels used Palomar's Samuel Oschin telescope (also known as the 48-inch Schmidt Telescope), and shipped the photographic plates to Ingrid and Cornelis van Houten at Leiden Observatory where astrometry was carried out. The trio are credited with the discovery of several thousand asteroids. Naming This minor planet was named from Greek mythology after the Trojan prince Aesacus (Aisakos), son of King Priam and his first wife Arisbe. Like his maternal grandfather Merops, he was a seer and foresaw the downfall of Troy, brought upon by Hecuba's future son, Paris. The official naming citation was published by the Minor Planet Center on 9 March 2001. Orbit and classification Aisakos is a Jupiter trojan in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the Gas Giant's L5 Lagrangian point, 60° behind its orbit. It is also a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.8–5.6 AU once every 11 years and 9 months (4,288 days; semi-major axis of 5.17 AU). Its orbit has an eccentricity of 0.07 and an inclination of 11° with respect to the ecliptic. Physical characteristics Aisakos is an assumed C-type asteroid, while most larger Jupiter trojans are D-types. Rotation period In October 2014, a rotational lightcurve of Aisakos was obtained from photometric observations over three consecutive nights by Robert Stephens at the Center for Solar System Studies in Landers, California. Lightcurve analysis gave a well-defined rotation period of 9.7 hours with a brightness amplitude of 0.34 magnitude. Diameter and albedo According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Aisakos measures 35.76 kilometers in diameter and its surface has an albedo of 0.072, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 36.78 kilometers based on an absolute magnitude of 10.9. Notes References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (15001)-(20000) – Minor Planet Center Asteroid 17314 Aisakos at the Small Bodies Data Ferret Discoveries by Cornelis Johannes van Houten Discoveries by Ingrid van Houten-Groeneveld Discoveries by Tom Gehrels Minor planets named from Greek mythology Named minor planets
13758099
https://en.wikipedia.org/wiki/MIMIC%20Simulator
MIMIC Simulator
MIMIC Simulator is a product suite from Gambit Communications consisting of simulation software in the network and systems management space. The MIMIC Simulator Suite has several components related to simulation of managed networks and data centers for the purposes of software development, software testing or training, and the sales and marketing of network management applications. MIMIC SNMP simulator solves a classical simulation problem: network management or operations support system software typically manages large networks. Traditionally, in order to set up such networks for the above purposes, physical equipment had to be separately purchased and assembled in laboratories. To reduce the expense, most of the network can be simulated. The principle behind SNMP simulation is that the SNMP protocol is a well-defined interface that can be simulated. SNMP requests carry data values for MIB objects, which can be shaped at will by the simulator, thus representing any device which has an SNMP interface. In contrast to network simulation, where the entire network is modeled within a computer, this type of empirical simulation is visible on the network, and one can communicate with the simulator over the network. The concept can be extended to other protocols, such as those for cable modems, command-line interfaces such as Cisco IOS or TL1, flow-based monitoring such as NetFlow or sFlow, server management based on IPMI or DMTF Redfish, and IoT protocols such as MQTT and CoAP. Modern management software typically uses multiple protocols to manage networks. The simulator should thus integrate the required protocols to present authentic instrumentation. Components MIMIC IOS Simulator allows simulating the CLI protocols encountered with Cisco IOS, JUNOS, and TL1. The low-end MIMIC Virtual Lab products can be used for training for Cisco CCNA. MIMIC NetFlow Simulator creates many custom NetFlow exporters, and MIMIC sFlow Simulator does the same for sFlow. MIMIC IPMI Simulator simulates the IPMI RMCP-over-LAN interface for high-end servers. MIMIC Web Simulator handles HTTP / SOAP / XML / WSDL / WSMAN / Redfish interfaces for management via Web services. MIMIC IoT Simulator creates large IoT environments based on the standard protocols MQTT and CoAP. Sources Virtual router labs Gambit simulates the network with virtual software Advanced HP Network Node Manager Software 12th Annual Well-Connected Awards: Network Infrastructure References Internet protocols Internet Standards Network management System administration Application layer protocols Multi-agent systems Simulation software
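As a conceptual illustration of the simulation principle described above, the Python sketch below serves a table of MIB object values for one simulated device over UDP. It is only a toy: a real SNMP agent must encode requests and responses in ASN.1/BER (typically on UDP port 161), whereas this sketch substitutes a trivial text encoding, and nothing in it is part of the actual MIMIC product.

import socket

# OID -> value table for one simulated device; the values are made up.
MIB = {
    "1.3.6.1.2.1.1.1.0": "Simulated access router",   # sysDescr
    "1.3.6.1.2.1.1.3.0": "123456",                    # sysUpTime (ticks)
}

def serve(host="127.0.0.1", port=16161):
    """Answer requests (plain-text OIDs) from the value table."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        oid, address = sock.recvfrom(1024)
        value = MIB.get(oid.decode(errors="replace").strip(), "noSuchObject")
        sock.sendto(value.encode(), address)   # response for the requested object

if __name__ == "__main__":
    serve()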
30063003
https://en.wikipedia.org/wiki/Odyssey%20Software%20%28mobile%20device%20management%29
Odyssey Software (mobile device management)
Odyssey Software provided mobile device management and software development tools to enterprise companies either directly (primarily through its Athena product) or through partner solutions. Its technology allowed companies to manage multiple mobile operating systems at a detailed level, including functions such as inventory collection, software management, remote control, and device configuration. It was bought by Symantec in 2012. History Odyssey Software was founded in 1996 by Mark Gentile and originally focused on building software development tools. However, it now focuses on developing software products that enable developers to architect, build, deploy, and manage enterprise applications for managing mobile and embedded devices as well as mobile device management solutions it can deliver to enterprises directly. The company has been sold to Symantec in a deal completed on March 2, 2012. Products Athena : device management software that extends Microsoft System Center solutions, adding the ability to manage, support, and control mobile and embedded devices, such as smartphones and ruggedized handhelds. AppCenter : an application manager that restricts end-user activity to a set of “authorized only” applications, preventing non-productive or unauthorized device utilization. ViaXML : a mobile and wireless application infrastructure that enables web services (which provide access to data, business logic, knowledge, and application components) to be exposed and called over the Internet and corporate intranet using open Internet standards – XML, HTTP and HTTPS. CEfusion : a set of mobile and wireless application data access infrastructure for rapidly building and deploying rich mobile enterprise applications. It extends the core Windows DNA data access technologies — ADO (ActiveX Data Objects), MTS (Microsoft Transaction Services), and MSMQ (Microsoft Message Queuing) — to the mobile application environment. Technology Mobile Device Management capabilities such as remote provisioning, inventory collection (including location-based data), software management, remote wipe, device lock, remote control, device configuration, and Microsoft Exchange ActiveSync management. Management of multiple mobile operating systems through pre-defined or administrator-defined device groups. Support for a wide range of mobile operating systems, such as iOS, BlackBerry, Android, Symbian, Windows Mobile, Windows Phone 7, Windows Embedded, and Palm webOS. The ability to easily integrate with and help power most device management solutions (including BlackBerry Enterprise Server), such as those of its partners (see below). Application management capabilities such as restricting end-user activity to enterprise-approved applications, controlling the availability of applications on a device, automatically launching applications, and restricting the use of certain functions of the Operating System. Customers Coca-Cola (Freestyle), Wegmans, OfficeMax, DHL, Intel, J. C. Penney, Plymouth Foundry, others. Partners Motorola, Microsoft, HTC MobilityNow, Trust Digital, AirWatch, Conceivium, CloudSync, Optical Phusion, Stay-Linked, JANAM, others. References Mobile device management Mobile software programming tools
59659209
https://en.wikipedia.org/wiki/Journal%20of%20Cybersecurity
Journal of Cybersecurity
The Journal of Cybersecurity is an open-access, peer-reviewed academic journal covering cybersecurity. It is published by Oxford University Press and was first issued in 2015. Its editors-in-chief are Tyler Moore and David Pym. The journal is a member of the Committee on Publication Ethics (COPE). The journal's editorial position is that computer science approaches are critical but not sufficient to tackle cybersecurity threats, and that interdisciplinary academic contributions are needed to understand the different facets of cybersecurity. References Oxford University Press academic journals Open access journals Computer science journals
3175125
https://en.wikipedia.org/wiki/Free%20Software%20Foundation%20Latin%20America
Free Software Foundation Latin America
Free Software Foundation Latin America (FSFLA) is the Latin American sister organisation of the Free Software Foundation. It is the fourth sister organisation of the FSF, after Free Software Foundation Europe and Free Software Foundation India. It was launched on November 23, 2005, in Rosario, Argentina. The founding general assembly of FSFLA elected Federico Heinz as President, Alexandre Oliva as Secretary and Beatriz Busaniche as Treasurer. The Administrative Council consisted of them as well as Enrique A. Chaparro, Mario M. Bonilla, Fernanda G. Weiden and Juan José Ciarlante. In 2006, Beatriz Busaniche, Enrique A. Chaparro, Federico Heinz, Juan José Ciarlante and Mario M. Bonilla left FSFLA's Council. After that, the original directives were modified into the current ones. A new position of “Observer of the Council” was created in the organisation to allow other important people in the free software community to participate, observe and advise the Council. The current members of the Council are Alexandre Oliva, Andres Ricardo Castelblanco, Exal Garcia-Carrillo, Octavio Rossell, Oscar Valenzuela, J. Esteban Saavedra L., Luis Alberto Guzmán García, Quiliro Ordóñez and Tomás Solar Castro. The current Observers of the Council are Richard Stallman, Georg Greve, Adriano Rafael Gomes, Franco Iacomella, Alejandro Forero Cuervo, Alvaro Fuentes, Anahuac de Paula Gil, Christiano Anderson, Eder L. Marques, Elkin Botero, J. Esteban Saavedra L., Fabianne Balvedi, Felipe Augusto van de Wiel, Nagarjuna G., Glauber de Oliveira Costa, Gustavo Sverzut Barbieri, Henrique de Andrade, Harold Rivas, Jansen Sena, Marcelo Zunino, Mario Bonilla, Daniel Yucra, NIIBE Yutaka, Beatriz Busaniche, Octavio H. Ruiz Cervera, Omar Kaminski, and Roberto Salomon. Projects Linux-libre References External links Free Software Foundation History of computing in South America
17406344
https://en.wikipedia.org/wiki/Aufs
Aufs
aufs (short for advanced multi-layered unification filesystem) implements a union mount for Linux file systems. The name originally stood for AnotherUnionFS until version 2. Developed by Junjiro Okajima in 2006, aufs is a complete rewrite of the earlier UnionFS. It aimed to improve reliability and performance, but also introduced some new concepts, like writable branch balancing, and other improvements – some of which are now implemented in the UnionFS 2.x branch. aufs was rejected for merging into mainline Linux. Its code was criticized for being "dense, unreadable, [and] uncommented". Instead, OverlayFS was merged in the Linux kernel. After several attempts to merge aufs into mainline kernel, the author has given up. Use Aufs is included in Debian "Jessie" (v8) and Ubuntu 16.04 out of the box. Debian "Stretch" (v9) does not include aufs anymore, but provides a package aufs-dkms, which auto-compiles the aufs kernel module using Dell's dkms. Docker originally used aufs for container filesystem layers. It is still available as one of the storage backends but is deprecated in favour of the backend which uses OverlayFS. Several Linux distributions have chosen aufs as a replacement for UnionFS, including: Knoppix live CD Linux distribution – since the end of 2006, "for better stability and performance" NimbleX since version 2008. Switched simultaneously with Linux-Live Porteus LiveCD, run fully in RAM Slax (and Linux-Live scripts in general) since version 6 Xandros Linux distribution, available in the ASUS Eee PC model 901 Ubuntu 10.04 LTS Live CD Debian 6.0 Live media Gentoo Linux LiveDVD 11.0 Gentoo Linux LiveDVD 11.2 Gentoo Linux LiveDVD 12.0 Salix Live via Linux-Live scripts until version 13.1.1 and via SaLT from version 13.37 Puppy Linux versions can run fully in RAM with changes saved to disk on shutdown. For example, Slacko 5.3.3 running as a LiveCD. Manjaro Linux via their patched official kernels See also OverlayFS, the competing project that was chosen for merger to the Linux core File system Union mount, describing the concept of merging file system branches UnionFS, an older union mount project Syslinux References External links AuFS project homepage A simple example Free special-purpose file systems File systems supported by the Linux kernel Union file systems
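The union-mount behaviour described above can be illustrated with a small conceptual sketch in Python: reads fall through a stack of branches from top to bottom, while writes land in the topmost writable branch, leaving lower read-only branches untouched. This models only the basic idea; the real aufs implementation is kernel code and handles far more (whiteouts for deletions, branch balancing, metadata, and so on).

class UnionMount:
    """Toy model of a union mount: a writable top branch over read-only branches."""

    def __init__(self, writable_branch, *readonly_branches):
        # Each branch is modeled as a dict mapping path -> file contents.
        self.branches = [writable_branch, *readonly_branches]

    def read(self, path):
        for branch in self.branches:      # topmost branch wins on lookup
            if path in branch:
                return branch[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        self.branches[0][path] = data     # all changes go to the writable branch

# A live-CD-like setup: a read-only system image under a writable RAM overlay.
system_image = {"/etc/hostname": "livecd\n"}
ram_overlay = {}
root = UnionMount(ram_overlay, system_image)

root.write("/etc/hostname", "mybox\n")    # the change lands in the overlay only
assert root.read("/etc/hostname") == "mybox\n"
assert system_image["/etc/hostname"] == "livecd\n"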
2958015
https://en.wikipedia.org/wiki/Philosophy%20of%20artificial%20intelligence
Philosophy of artificial intelligence
The philosophy of artificial intelligence is a branch of the philosophy of technology that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life) so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental. The philosophy of artificial intelligence attempts to answer such questions as follows: Can a machine act intelligently? Can it solve any problem that a person would solve by thinking? Are human intelligence and machine intelligence the same? Is the human brain essentially a computer? Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are? Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion. Important propositions in the philosophy of AI include some of the following: Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being. The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..." Can a machine display general intelligence? Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking. The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible. 
It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal, essentially achieves the desired feature of intelligence without a precise design-time description as to how it would exactly work. The account on robot tacit knowledge eliminates the need for a precise description altogether. The first step to answering the question is to clearly define "intelligence". Intelligence Turing test Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". Turing's test extends this polite convention to machines: If a machine acts as intelligently as a human being, then it is as intelligent as a human being. One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'". Intelligent agent definition Twenty-first century AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent. "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent." Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes or the ability to be insulted. They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence. Arguments that a machine can display general intelligence The brain can be simulated Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988, is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain (10^11 neurons) was performed in 2005, and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
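For a sense of scale, the figures just quoted imply a slowdown of roughly four million times relative to real time:

\text{slowdown} = \frac{50\ \text{days}}{1\ \text{s}} = \frac{50 \times 86\,400\ \text{s}}{1\ \text{s}} \approx 4.3 \times 10^{6}

That is, the 2005 run on 27 processors computed brain dynamics about 4.3 million times slower than they unfold in a real brain, which gives a rough measure of the gap that estimates such as Kurzweil's assume will be closed by further growth in computing power.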
Even AI's harshest critics (such as Hubert Dreyfus and John Searle) agree that a brain simulation is possible in theory. However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind, like trying to build an jet airliner by copying a living bird precisely, feather by feather, with no theoretical understanding of aeronautical engineering. Human thinking is symbol processing In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote: "A physical symbol system has the necessary and sufficient means of general intelligent action." This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence). Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules." The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high level symbols that directly correspond with objects in the world, such as <dog> and <tail>. Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed. Arguments against symbol processing These arguments show that human thinking does not consist (solely) of high level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required. Gödelian anti-mechanist arguments In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (such as a high-level symbol manipulation program) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. (The truth of the constructed Gödel statement is contingent on the consistency of the given system; applying the same process to a subtly inconsistent system will appear to succeed, but will actually yield a false "Gödel statement" instead.) More speculatively, Gödel conjectured that the human mind can correctly eventually determine the truth or falsity of any well-grounded mathematical statement (including any possible Gödel statement), and that therefore the human mind's power is not reducible to a mechanism. Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have championed this philosophical anti-mechanist argument. Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians (or some idealization of human mathematicians) is both consistent (completely free of error) and believes fully in its own consistency (and can make all logical inferences that follow from its own consistency, including belief in its Gödel statement) . 
This is provably impossible for a Turing machine to do (see Halting problem); therefore, the Gödelian concludes that human reasoning is too powerful to be captured by a Turing machine, and by extension, any digital mechanical device. However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis." Stuart Russell and Peter Norvig agree that Gödel's argument doesn't consider the nature of real-world human reasoning. It applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to be able to prove everything in order to be an intelligent person. Less formally, Douglas Hofstadter, in his Pulitzer prize winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether they are machines or humans, even Lucas himself. Consider: Lucas can't assert the truth of this statement. This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless. After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. . By Penrose and Lucas's arguments, existing quantum computers are not sufficient , so Penrose seeks for some other process involving new physics, for instance quantum gravity which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing. Dreyfus: the primacy of implicit skills Hubert Dreyfus argued that human intelligence and expertise depended primarily on implicit skill rather than explicit symbolic manipulation, and argued that these skills would never be captured in formal rules. 
Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing machinery and intelligence, where he had classified this as the "argument from the informality of behavior." Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'" Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning and learning. Statistical approaches to AI can make predictions which approach the accuracy of human intuitive guesses. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high level symbol manipulation, towards new models that are intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier." Can a machine have a mind, consciousness, and mental states? This is a philosophical question, related to the problem of other minds and the hard problem of consciousness. The question revolves around a position defined by John Searle as "strong AI": A physical symbol system can have a mind and mental states. Searle distinguished this position from what he called "weak AI": A physical symbol system can act intelligently. Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He argued that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered. Neither of Searle's two positions are of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." There are a few researchers who believe that consciousness is an essential element in intelligence, such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of "consciousness" strays very close to "intelligence." (See artificial consciousness.) Before we can answer this question, we must be clear what we mean by "minds", "mental states" and "consciousness". 
Consciousness, minds, mental states, meaning The words "mind" and "consciousness" are used by different communities in different ways. Some new age thinkers, for example, use the word "consciousness" to describe something similar to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the mind. Science fiction writers use the word to describe some essential property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience," "self-awareness" or "ghost" - as in the Ghost in the Shell manga and anime series - to describe this essential human property). For others , the words "mind" or "consciousness" are used as a kind of secular synonym for the soul. For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we see something, know something, mean something or understand something . "It's not hard to give a commonsense definition of consciousness" observes philosopher John Searle. What is mysterious and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking? Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem in the philosophy of mind called the "mind-body problem." A related problem is the problem of meaning or understanding (which philosophers call "intentionality"): what is the connection between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do they have the same experience? Or are there things "inside their head" (called "qualia") that can be different from person to person? Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates of consciousness: the actual relationship between the machinery in our heads and its collective properties; such as the mind, experience and understanding. Some of the harshest critics of artificial intelligence agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain. The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the neurons to create minds, with mental states (like understanding or perceiving), and ultimately, the experience of consciousness? Arguments that a computer cannot have a mind and mental states Searle's Chinese room John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates general intelligent action. Suppose, specifically that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. 
From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room cannot be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind. Searle goes on to argue that actual mental states and consciousness require (yet to be described) "actual physical-chemical properties of actual human brains." He argues there are special "causal properties" of brains and neurons that gives rise to minds: in his words "brains cause minds." Related arguments: Leibniz' mill, Davis's telephone exchange, Block's Chinese nation and Blockhead Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought experiment of expanding the brain until it was the size of a mill. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym". Ned Block also proposed his Blockhead argument, which is a version of the Chinese room in which the program has been re-factored into a simple set of rules of the form "see this, do that", removing all mystery from the program. Responses to the Chinese room Responses to the Chinese room emphasize several different points. The systems reply and the virtual mind reply: This reply argues that the system, including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be two minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a Macintosh) and one "virtual" (like a word processor). Speed, power and complexity replies: Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt. Robot reply: To truly understand, some believe the Chinese Room needs eyes and hands. Hans Moravec writes: 'If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world." Brain simulator reply: What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese. Other minds reply and the epiphenomena reply: Several people have noted that Searle's argument is just a version of the problem of other minds, applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines. 
A related question is whether "consciousness" (as Searle understands it) exists. Searle argues that the experience of consciousness can't be detected by examining the behavior of a machine, a human being or any other animal. Daniel Dennett points out that natural selection cannot preserve a feature of an animal that has no effect on the behavior of the animal, and thus consciousness (as Searle understands it) can't be produced by natural selection. Therefore, either natural selection did not produce consciousness, or "strong AI" is correct in that consciousness can be detected by suitably designed Turing test. Is thinking a kind of computation? The computational theory of mind or "computationalism" claims that the relationship between mind and brain is similar (if not identical) to the relationship between a running program and a computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). The latest version is associated with philosophers Hilary Putnam and Jerry Fodor. This question bears on our earlier questions: if the human brain is a kind of computer then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote): Reasoning is nothing but reckoning. In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is the physical symbol system hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of computationalism claim that (as Stevan Harnad characterizes it): Mental states are just implementations of (the right) computer programs. This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese room argument (according to Harnad). Other related questions Can a machine have emotions? If "emotions" are defined only in terms of their effect on behavior or on how they function inside an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people". Fear is a source of urgency. Empathy is a necessary component of good human computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love." Daniel Crevier writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species." Can a machine be self-aware? "Self-awareness", as noted above, is sometimes used by science fiction writers as a name for the essential human property that makes a character fully human. Turing strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it think about itself? 
Viewed in this way, a program can be written that can report on its own internal states, such as a debugger; a short illustrative sketch of this kind of self-reporting appears at the end of this passage.

Can a machine be original or creative?
Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest. He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways. It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. (Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific creativity, while it seems likely that humans will have the upper hand where artistic creativity is concerned. In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge designed a robot called Adam that they believe to be the first machine to independently come up with new scientific findings. Also in 2009, researchers at Cornell developed Eureqa, a computer program that extrapolates formulas to fit the data it is given, such as finding the laws of motion from a pendulum's motion.

Can a machine be benevolent or hostile?
This question (like many others in the philosophy of artificial intelligence) can be presented in two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile" becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine "deliberately" set out to do harm? The latter is the question "can a machine have conscious states?" (such as intentions) in another form. The question of whether highly intelligent and completely autonomous machines would be dangerous has been examined in detail by futurists and research organizations (such as the Machine Intelligence Research Institute). The obvious element of drama has also made the subject popular in science fiction, which has considered many different possible scenarios where intelligent machines pose a threat to mankind; see Artificial intelligence in fiction. One issue is that machines may acquire the autonomy and intelligence required to be dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will suddenly become thousands or millions of times more intelligent than humans. He calls this "the Singularity" and suggests that it may be somewhat, or possibly very, dangerous for humans; the concern is taken up by a philosophy called Singularitarianism. In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.
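The subsection on self-awareness above notes that a program can be written to report on its own internal states, much as a debugger does. The short Python sketch below is an added illustration of that narrow claim, not part of the original discussion: it inspects its own call stack and local variables at runtime. Whether such self-reporting amounts to being "the subject of its own thought" in any philosophically interesting sense is exactly what the surrounding debate is about.

# A minimal illustration of a program reporting on its own internal state,
# in the limited, debugger-like sense discussed above.
import inspect


def report_self():
    """Print the active call stack and the caller's local variables."""
    caller = inspect.currentframe().f_back                # frame of the caller
    stack = [frame.function for frame in inspect.stack()] # names of active functions
    print("active call stack:", " -> ".join(reversed(stack)))
    print("caller's local variables:", dict(caller.f_locals))


def compute(x):
    y = x * x
    report_self()   # the program examines itself mid-computation
    return y


if __name__ == "__main__":
    compute(7)

Run directly, the script prints something like "active call stack: <module> -> compute -> report_self" along with the values of x and y, which is all that is meant here by a machine reporting on its own states.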
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions. The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue. They point to programs like the Language Acquisition Device which can emulate human interaction. Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.

Can a machine imitate all human characteristics?
Turing said: "It is customary... to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. ... I cannot offer any such comfort, for I believe that no such bounds can be set." Turing noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as: Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new. Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression." All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

Can a machine have a soul?
Finally, those who believe in the existence of a soul may argue that "Thinking is a function of man's immortal soul." Alan Turing called this "the theological objection". He writes: "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates."

Views on the role of philosophy
Some scholars argue that the AI community's dismissal of philosophy is detrimental. In the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in AI is underappreciated. Physicist David Deutsch argues that without an understanding of philosophy or its concepts, AI development would suffer from a lack of progress.

Conferences
The main conference series on the issue is "Philosophy and Theory of AI" (PT-AI), run by Vincent C. Müller. The main bibliography on the subject, with several sub-sections, is on PhilPapers.
See also
AI takeover
Artificial brain
Artificial consciousness
Artificial intelligence
Artificial neural network
Chatterbot
Chinese room
Computational theory of mind
Computing Machinery and Intelligence
Dreyfus' critique of artificial intelligence
Existential risk from advanced artificial intelligence
Functionalism
Multi-agent system
Philosophy of computer science
Philosophy of information
Philosophy of mind
Physical symbol system
Simulated reality
Superintelligence: Paths, Dangers, Strategies
Synthetic intelligence

Philosophy of science
Philosophy of technology
Philosophy of mind
Open problems
Articles containing video clips
Artificial intelligence
30759885
https://en.wikipedia.org/wiki/TeraChem
TeraChem
TeraChem is a computational chemistry software program designed for CUDA-enabled Nvidia GPUs. Initial development started at the University of Illinois at Urbana-Champaign, and the software was subsequently commercialized. It is currently distributed by PetaChem, LLC, located in Silicon Valley. As of 2020, the software package is still under active development.

Core features
TeraChem is capable of fast ab initio molecular dynamics and can utilize density functional theory (DFT) methods for nanoscale biomolecular systems with hundreds of atoms. All the methods used are based on Gaussian orbitals, in order to improve performance on contemporary (2010s) computer hardware. (A brief illustrative sketch of why Gaussian basis functions are computationally convenient appears at the end of this article.)

Press coverage
Chemical and Engineering News (C&EN) magazine of the American Chemical Society first mentioned the development of TeraChem in Fall 2008. C&EN later published a feature article covering molecular modeling on GPUs and TeraChem. According to a post on the Nvidia blog, TeraChem has been tested to deliver 8-50 times better performance than the General Atomic and Molecular Electronic Structure System (GAMESS). In that benchmark, TeraChem was executed on a desktop machine with four (4) Tesla GPUs, while GAMESS was running on a cluster of 256 quad-core CPUs. TeraChem is available for free via GPU Test Drive.

Media
The software is featured in a series of clips on its own YouTube channel under the "GPUChem" user name.
TeraChem v1.5 release link
New kinds of science enabled: ab initio dynamics of proton transfer link
Discovery mode: reactions in nanocavities link
TeraChem performance on 4 GPUs: video

Major release history
2017
TeraChem version 1.93P
Support for Maxwell and Pascal GPUs (e.g. Titan X-Pascal, P100)
Use of multiple basis sets for different elements $multibasis
Use of polarizable continuum methods for ground and excited states
2016
TeraChem version 1.9
Support for Maxwell cards (e.g., GTX980, TitanX)
Effective core potentials (and gradients)
Time-dependent density functional theory
Continuum solvation models (COSMO)
2012
TeraChem version 1.5
Full support of polarization functions: energy, gradients, ab initio dynamics and range-corrected DFT functionals (CAMB3LYP, wPBE, wB97x)
2011
TeraChem version 1.5a (pre-release)
Alpha version with the full support of d-functions: energy, gradients, ab initio dynamics
TeraChem version 1.43b-1.45b
Beta version with polarization functions for energy calculation (HF/DFT levels) as well as other improvements.
TeraChem version 1.42
This version was first deployed at the National Center for Supercomputing Applications' (NCSA) Lincoln supercomputer for National Science Foundation (NSF) TeraGrid users, as announced in an NCSA press release.
2010
TeraChem version 1.0
TeraChem version 1.0b
The initial beta release was reportedly downloaded more than 4,000 times.

Publication list
Charge Transfer and Polarization in Solvated Proteins from Ab Initio Molecular Dynamics I. S. Ufimtsev, N. Luehr and T. J. Martinez Journal of Physical Chemistry Letters, Vol. 2, 1789-1793 (2011) Excited-State Electronic Structure with Configuration Interaction Singles and Tamm-Dancoff Time-Dependent Density Functional Theory on Graphical Processing Units C. M. Isborn, N. Luehr, I. S. Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 7, 1814-1823 (2011) Dynamic Precision for Electron Repulsion Integral Evaluation on Graphical Processing Units (GPUs) N. Luehr, I. S. Ufimtsev, and T. J. Martinez Journal of Chemical Theory and Computation, Vol.
7, 949-954 (2011) Quantum Chemistry on Graphical Processing Units. 3. Analytical Energy Gradients and First Principles Molecular Dynamics I. S. Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 5, 2619-2628 (2009) Quantum Chemistry on Graphical Processing Units. 2. Direct Self-Consistent Field Implementation I. S. Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 5, 1004-1015 (2009) Quantum Chemistry on Graphical Processing Units. 1. Strategies for Two-Electron Integral Evaluation I. S. Ufimtsev and T. J. Martinez Journal of Chemical Theory and Computation, Vol. 4, 222-231 (2008) Graphical Processing Units for Quantum Chemistry I. S. Ufimtsev and T. J. Martinez Computing in Science and Engineering, Vol. 10, 26-34 (2008) Preparation and characterization of stable aqueous higher-order fullerenes Nirupam Aich, Joseph R V Flora and Navid B Saleh Nanotechnology, Vol. 23, 055705 (2012) Filled Pentagons and Electron Counting Rule for Boron Fullerenes Kregg D. Quarles, Cherno B. Kah, Rosi N. Gunasinghe, Ryza N. Musin, and Xiao-Qian Wang Journal of Chemical Theory Computation, Vol. 7, 2017–2020 (2011) Sensitivity Analysis of Cluster Models for Calculating Adsorption Energies for Organic Molecules on Mineral Surfaces M. P. Andersson and S. L. S. Stipp Journal of Physical Chemistry C, Vol. 115, 10044–10055 (2011) Dispersion corrections in the boron buckyball and nanotubes Rosi N. Gunasinghe, Cherno B. Kah, Kregg D. Quarles, and Xiao-Qian Wang Applied Physics Letters 98, 261906 (2011) Structural and electronic stability of a volleyball-shaped B80 fullerene Xiao-Qian Wang Physical Review B 82, 153409 (2010) Ab Initio Molecular Dynamics Simulations of Ketocyanine Dyes in Organic Solvents Andrzej Eilmes Lecture Notes in Computer Science, 7136/2012, 276-284 (2012) State Equation of a Model Methane Clathrate Cage Ruben Santamaria, Juan-Antonio Mondragon-Sanchez and Xim Bokhimi J. Phys. Chem. A, ASAP (2012) See also Quantum chemistry computer programs Molecular design software Molecule editor Comparison of software for molecular mechanics modeling List of software for Monte Carlo molecular modeling References Molecular modelling Computational chemistry Computational chemistry software Electronic structure methods
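As flagged in the Core features section above, TeraChem's methods are built on Gaussian-type orbitals because they are cheap to integrate on modern hardware. The sketch below is an added illustration and is not TeraChem code or part of its documentation: it uses the Gaussian product theorem to evaluate the overlap of two s-type Gaussian primitives in closed form and checks the result numerically. The exponents and centres are arbitrary toy values, not taken from any real basis set.

# Illustrative sketch only (not TeraChem code): the Gaussian product theorem.
# Two s-type Gaussian primitives multiply into a single shifted Gaussian, so
# their overlap integral has a simple closed form, which is one reason
# Gaussian-type orbitals are convenient basis functions for electronic
# structure codes.
import numpy as np


def norm_s(alpha):
    """Normalization constant of an s-type primitive exp(-alpha * |r|^2)."""
    return (2.0 * alpha / np.pi) ** 0.75


def overlap_analytic(alpha, A, beta, B):
    """Closed-form overlap of two normalized s-type Gaussians centred at A and B."""
    p = alpha + beta
    ab2 = float(np.dot(A - B, A - B))
    return norm_s(alpha) * norm_s(beta) * (np.pi / p) ** 1.5 * np.exp(-alpha * beta / p * ab2)


def overlap_numeric(alpha, A, beta, B, half_width=8.0, points=121):
    """Brute-force the same integral on a cubic grid, as a sanity check."""
    x = np.linspace(-half_width, half_width, points)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    r = np.stack([X, Y, Z], axis=-1)                       # shape (n, n, n, 3)
    phi_a = norm_s(alpha) * np.exp(-alpha * np.sum((r - A) ** 2, axis=-1))
    phi_b = norm_s(beta) * np.exp(-beta * np.sum((r - B) ** 2, axis=-1))
    dV = (x[1] - x[0]) ** 3
    return float(np.sum(phi_a * phi_b) * dV)


if __name__ == "__main__":
    A = np.array([0.0, 0.0, 0.0])
    B = np.array([0.0, 0.0, 1.4])      # roughly an H-H bond length in bohr
    alpha, beta = 1.24, 0.45           # toy exponents, not from a real basis set
    print("analytic overlap:", overlap_analytic(alpha, A, beta, B))
    print("numeric overlap: ", overlap_numeric(alpha, A, beta, B))

In a full electronic structure code, analogous closed forms are applied to very large numbers of two-electron repulsion integrals; mapping that part of the calculation onto graphics hardware is the subject of the GPU integral-evaluation papers in the publication list above.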
53984289
https://en.wikipedia.org/wiki/Contributory%20copyright%20infringement
Contributory copyright infringement
Contributory copyright infringement is a way of imposing secondary liability for infringement of a copyright. It is a means by which a person may be held liable for copyright infringement even though he or she did not directly engage in the infringing activity. In the United States, the Copyright Act does not itself impose liability for contributory infringement expressly. It is one of the two forms of secondary liability apart from vicarious liability. Contributory infringement is understood to be a form of infringement in which a person is not directly violating a copyright but, induces or authorises another person to directly infringe the copyright. This doctrine is a development of general tort law and is an extension of the principle in tort law that in addition to the tortfeasor, anyone who contributed to the tort should also be held liable. Requirements The requirements for fulfilling the threshold of contributory infringement and imposing liability for copyright infringement on a party are- The defendant having knowledge of a direct infringement; and The defendant materially contributing to that infringement. Contributory infringement leads to imposition of liability in two situations. First situation is when the defendant, through his conduct, assists in the infringement, and the second situation is when the means for facilitating the infringement such as machinery is provided by the defendant. Knowledge The knowledge requirement for contributory infringement is an objective assessment and stands fulfilled if the defendant has actual or constructive knowledge of an infringement, i.e., if he or she has reason to believe that an infringement is taking place. But, constructive knowledge need not be imputed to the defendant if the product was capable of significant noninfringing uses. Material contribution Material contribution is the second requirement of contributory infringement. For instance, merely providing facilities or the site for an infringement might amount to material contribution. But, some courts put emphasis on the contribution to be 'substantial' and therefore, would hold that providing equipment and facilities for infringement is not in itself determinative of material contribution. Difference from vicarious liability Vicarious liability is another form of secondary liability for copyright infringement through which a person who himself has not directly infringed a copyright can, nevertheless, be held liable. The requirements for attracting vicarious liability under copyright law are- The defendant had the right to control the infringing activity; and The defendant derives a financial or commercial benefit from the infringement Unlike contributory infringement, vicarious liability can be imposed even in the absence of any intent or knowledge on part of the defendant. In the Napster case, the Court of Appeals for the Ninth Circuit observed: "In the context of copyright law, vicarious liability extends beyond an employer/employee relationship to cases in which a defendant "has the right and ability to supervise the infringing activity and also has a direct financial interest in such activities." In the United States In the United States of America, the doctrine of contributory infringement is based on the 1911 case of Kalem v Harper Brothers. The ingredients of contributory infringement were laid down in the Second Circuit Court of Appeals decision in Gershwin Publishing Corp v Columbia Artists Management Inc. 
in which the court said that contributory infringement is said to happen when someone, with knowledge of the infringing activity, induces, causes, or materially contributes to the infringing conduct of another. This doctrine was developed in the context of the 1909 Copyright Act, which did not have any reference to contributory infringement. But the 1976 Act recognised the exclusive right of a copyright owner 'to do and to authorize' the rights attached to a copyright enumerated in the Act. The words 'to authorize' were meant to bring contributory infringements within the purview of the Act. Still, the Act did not specify the requirements of such forms of infringement and left their application to the discretion of courts. Sony Betamax case The case of Sony Corp v Universal City Studios Inc, commonly known as the Betamax case, gave the United States Supreme Court its first opportunity to comprehensively examine and interpret the rules regarding secondary liability and contributory infringement in the context of the 1976 Copyright statute. The primary issue in this case was whether a VCR manufacturing company could be held liable for copyright infringements committed by its customers. The court held that secondary liability for copyright infringements was not a foreign concept to US copyright law; it was well enshrined in the copyright law of the United States. In this case, however, the Court held that Sony did not have actual knowledge of the infringing activities of its customers. At most, it could be argued that Sony had constructive knowledge of the fact that "its customers may use that equipment to make unauthorised copies of copyrighted material." The court then relied on the "staple article of commerce" doctrine of patent law and applied it to copyrights. The 'staple article of commerce' defence is available under patent law in the United States and lays down that when an infringing article is capable of 'substantial non-infringing uses', it becomes a 'staple article of commerce' and therefore does not attract any liability for infringement. On this reasoning, since the Betamax was capable of "significant noninfringing uses", Sony was not held liable for contributory infringement. Contributory infringement in P2P services Contributory infringement has been the central issue in the cases involving 'peer-to-peer' services such as Napster, Aimster, Grokster and Morpheus. The courts have applied the Sony Betamax ratio differently in all these cases. For instance, Napster was held liable for contributory infringement. By contrast, a similar service, Grokster, was not held liable for contributory infringement: in that case, a district court, grounding its reasoning in the Sony Betamax decision, held that secondary liability could not be applied to peer-to-peer services. A&M Records v Napster Napster was the first peer-to-peer service to be subject to copyright infringement litigation. In the Napster case, the issue was the infringement of copyrights through Napster's 'Music Share' software. Whenever this software was used on a computer system, it would collect information about the MP3 files stored on the computer and send it to Napster servers. Based on this information, Napster created a centralized index of files available for download on the Napster network.
When someone wanted to download a file, the Music Share software would use the Napster index to locate a user who already had that file on their system and then connect the two users directly to facilitate the download of the MP3 file, without routing the file through Napster's servers. The Ninth Circuit Court of Appeals found Napster liable for both "contributory infringement" and "vicarious infringement". Regarding the issue of contributory infringement, the court held that Napster had "actual knowledge" of infringing activity, and that providing its software and services to the infringers meant that it had "materially contributed" to the infringement. It was held that the defense in Sony was of "limited assistance to Napster". The test of whether a technology is capable of substantial non-infringing uses was relevant only for imputing knowledge of infringement to the technology provider. But in Napster's case, it was found that Napster had "actual, specific knowledge of direct infringement", and therefore the Sony test would not be applicable. In Re Aimster In In re Aimster, the Seventh Circuit was called upon to decide the question of liability for peer-to-peer sharing of music files through the instant messaging services provided by Aimster. Aimster had argued that the transmission of files between its users was encrypted and that, because of this, Aimster could not possibly know the nature of the files being transmitted using its services. But the Seventh Circuit Court of Appeals affirmed the decision of the district court, which had issued a preliminary injunction against Aimster. It was found that Aimster had knowledge of the infringing activity. Its tutorial showed examples of copyrighted music files being shared. Also, the 'Club Aimster' service provided a list of the 40 most popular songs made available on the service. It was also held that the encrypted nature of the transmission was not a valid defence, as it was merely a means to avoid liability by purposefully remaining ignorant, and that "wilful blindness is knowledge, in copyright law." The Sony defence raised by Aimster was also rejected because of Aimster's inability to bring on record any evidence showing that its service could be used for non-infringing uses. Lastly, Aimster could not get the benefit of the DMCA 'safe harbor' provisions because it had not done anything to comply with the requirements of Section 512. Instead, it encouraged infringement. MGM Studios v Grokster The District Court for the Central District of California, in MGM Studios v Grokster, held that the peer-to-peer services Morpheus and Grokster were not liable for copyright infringements carried out by their users. Unlike Napster, these services did not maintain a centralised index. Instead, they created ad hoc indices known as supernodes on the users' computers. Sometimes, the software operated without creating any index at all. Thus, it was held that Grokster and Morpheus had no way of controlling the behaviour of their users once their software had been sold, just as Sony could not with the Betamax. It was found that the defendants did have knowledge of infringement because of the legal notices sent to them. But it was also held that to attract liability for contributory infringement, there should be knowledge of a specific infringement at the precise moment when it would be possible for the defendant to limit such infringement. Also, it was found that there was no material contribution.
For this, the court relied on Sony and compared the technology to that of a VCR or a photocopier, holding that the technology was capable of both infringing and non-infringing uses. Grokster differs from Sony in that it looks at the intent of the defendant rather than just the design of the system. As per Grokster, a plaintiff must show that the defendant actually induced the infringement. The test was reformulated as "one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties." Digital Millennium Copyright Act The Digital Millennium Copyright Act's Title II, known as the Online Copyright Infringement Liability Limitation Act, provides a safe harbor for online service providers and internet service providers against secondary liability for copyright infringements, provided that certain requirements are met. Most importantly, the service provider must expeditiously take down or limit access to infringing material on its network if it receives a notification of an infringement. Communications Decency Act Immunity under the Communications Decency Act does not apply to copyright infringement as a cause of action. Inducing Infringements of Copyright Bill The Inducing Infringement of Copyrights Act, or the INDUCE Act, was a 2004 proposal in the United States Senate which sought to insert a new subsection '(g)' into the existing Section 501 of the Copyright Act, which defines 'infringement'. The proposed amendment would provide that whoever intentionally induces a violation of subsection (a) would be liable as an infringer. The term 'intentionally induces' is defined in the bill as "intentionally aids, abets, induces, or procures, and intent may be shown by acts from which a reasonable person would find intent to induce infringement based upon all relevant information about such acts then reasonably available to the actor, including whether the activity relies on infringement for its commercial viability". In the European Union In the European Union, the European Court of Justice has issued several rulings on related matters, mainly based on the InfoSoc Directive and the E-Commerce Directive, and focused on what constitutes an act of "communication to the public" or of "making available". In India Section 51 of the Copyright Act, 1957 deals with copyright infringement in India. Section 51(a)(i) provides for when an infringement of copyright is deemed to have taken place. It states that when somebody does anything, the exclusive right to which is conferred on a copyright owner, without first securing a license to do so from the copyright owner, or in contravention of a license, the copyright shall be deemed to have been infringed. The basis for contributory infringement under Indian copyright law can be found in Section 51(a)(ii), which states that when someone 'permits for profit any place to be used for the communication of the work to the public where such communication constitutes an infringement of the copyright in the work, unless he was not aware and had no reasonable ground for believing that such communication to the public would be an infringement of copyright', then also the copyright shall be deemed to have been infringed. Secondary infringement itself can be subdivided into two categories: activities that assist primary infringements, and activities that accentuate the effects of the primary infringement.
Section 51(a)(ii) deals with cases in which somebody assist the primary infringement. Section 51(a)(ii) itself gives the defense which can be taken by a defendant to avoid liability under this provision, i.e., the defendant was not aware or had no reasonable ground for believing that the communication to the public would be an infringement of the copyright. Section 51(b) deals with situations in which the effects of an already existing primary infringement are accentuated by the actions of the defendant. Section 51(b) provides that a copyright infringement will also be deemed to have taken place if a person sells, distributes, imports or exhibits in public by way of trade an infringing copy of a copyright-protected work. Therefore, Section 51(a)(ii) and Section 51(b) are the statutory basis for secondary liability in India including contributory infringement. Information Technology Act, 2000 The Information Technology Act, 2000 ("IT Act") contains specific provisions dealing with liabilities of Internet Service Providers. These provisions provide for 'safe harbors' for Internet Service Providers. Section 2(w) of the IT Act defines an 'intermediary' as 'intermediary with respect to any particular electronic message means any person who on behalf of another person receives, stores or transmits that message or provides any service with respect to that message'. Due to this wide definition, almost every entity, including ISPs, search engines and online service providers can get the benefit of the safe harbor provisions in the IT Act. Section 79 of the IT Act provides that an intermediary shall not be liable for any third-party information, data, or communication link made available or hosted by the intermediary. But, an intermediary will get the benefit of the safe harbor provisions only if it satisfies certain conditions. The intermediaries function should be limited to providing access to a communication system, the intermediary should not initiate the transmission, select the receiver or modify the transmission and should observe the guidelines formulated by the Central Government in this regard. The 'IT (Intermediary guidelines) Rules 2011' have been formulated to specify the conditions that an intermediary must satisfy to get the protection of safe harbor provisions. As per these guidelines, the intermediary must observe due diligence measures specified under Rule 3 of the guidelines. For instance, the intermediary should take down any infringing material on its network within thirty-six hours of the infringement being brought to its notice. My Space Inc. vs Super Cassettes Industries Ltd. In December, 2016, the Delhi High Court reversed the judgment passed by a single judge bench earlier to hold that unless 'actual knowledge' was proved, an intermediary could not be held liable for contributory copyright infringement. In 2008, T-Series (Super Cassettes) had instituted a copyright infringement suit against MySpace for hosting infringing material in which Super Cassettes was the copyright owner, without first obtaining a license. The infringing material primarily consisted of sound recordings. It was alleged that MySpace was commercially exploiting the works of T Series by including advertisements with the works made available by it. The Single Judge had held that MySpace was guilty of copyright infringement under Section 51 of the Copyright Act and the benefit of safe harbor provisions under Section 79 of the IT Act were not available to it in light of Section 81 of the IT Act. 
The judgment of the single judge was reversed on the following grounds- No Actual Knowledge Liability under Section 51(a)(ii) can be avoided by the defendant if he or she is able to show that he or she did not have any knowledge of the infringing act or that he or she did not have any reason to believe that the communication would amount to an infringement. Super Cassettes had argued that 'place' under Section 51(a)(ii) includes a virtual space similar to the one provided by MySpace. It was argued that MySpace had knowledge of the infringement based on the fact that it had incorporated safeguard tools to weed out infringing material and that it invited users to upload and share content. Therefore, it was argued that there was implied knowledge. The Court held that to qualify as knowledge there should be awareness in the form of "actual knowledge" as opposed to just general awareness. Without specific knowledge of infringements, the intermediary could not be said to have reason to believe that it was carrying infringing material. Therefore, there was a duty on the plaintiff to first identify specific infringing material before knowledge could be imputed to the defendant. Safe Harbor under Section 79 of IT Act Section 79 of the IT Act provides safe harbor to intermediaries provided certain conditions are met by them. But, Section 81 of the IT Act also states that nothing in the IT Act shall restrict the rights of any person under the Copyright Act, 1957. The single judge had interpreted Section 81 to mean that safe harbor under IT Act is not applicable in cases of Copyright Infringement. The Court reversed this and held that Section 79 starts with a non obstante clause and precludes the application of any other law including Copyright law. Thus, any restriction on safe harbor provisions such as Section 81 can be read-only within the limits of Section 79. Also, the IT Act and the Copyright Act should be construed harmoniously given their complementary nature. Further, MySpace's role was limited to providing access to a communication system. It only modified the format and not the content and even this was an automated process. Therefore, there was no material contribution also. To amount to an infringement under Section 51 of the Copyright Act, the authorization to do something which was part of an owner's exclusive rights requires more than merely providing the means or place for communication. To be held liable for being an infringer on the grounds of authorization, it was necessary to show active participation or inducement. Therefore, Section 79 is available in cases of copyright infringement also provided the conditions under the Act and Intermediary Guidelines, 2011 are fulfilled. Since MySpace had fulfilled these requirements, it was given the protection of Section 79 of IT Act. See also Copyright law of India Copyright law of the United States of America Secondary Liability Copyright Act, 1976 Digital Millennium Copyright Act Information technology Act, 2000 References Infringement Organized crime Organized crime activity