36818944 | https://en.wikipedia.org/wiki/Chematica | Chematica | Chematica is software that uses algorithms and a collective database to predict synthesis pathways for molecules. Development of the software, led by Bartosz A. Grzybowski, was publicized in August 2012.
In 2017, the software and database were purchased outright by Merck KGaA. Since the acquisition, the software has been sold commercially as Synthia.
Features
The software was designed to combine long synthesis paths into shorter and more economical paths.
The software complements other attempts such as manual searching or semi-automated search tools.
A molecule can be specified in multiple ways, including by Beilstein Registry Number, CAS Registry Number, chemical name, or SMILES string, or by drawing the molecular structure directly. The software supports optimization of reactions by cost, 3D modeling of individual molecules, and labeling of functional groups.
The program also flags regulated and unregulated compounds, uses specialized algorithms to avoid the regulated ones in proposed pathways, and gives the classification and reasons for regulation.
References
Chemical databases |
56150501 | https://en.wikipedia.org/wiki/Military%20Planning%20and%20Conduct%20Capability | Military Planning and Conduct Capability | The Military Planning and Conduct Capability (MPCC) is a permanent operational headquarters (OHQ) at the military strategic level for military operations of up to 2500 troops (i.e. the size of one battle group) deployed as part of the Common Security and Defence Policy (CSDP) of the European Union (EU) by the end of 2020. Since its inception in 2017, the MPCC has commanded three non-executive training missions in Somalia, Mali and the Central African Republic.
The MPCC is part of the EU Military Staff (EUMS), a directorate-general of the European External Action Service (EEAS), and the Director General of the EUMS also serves as Director of the MPCC - exercising command and control over the operations.
Through the Joint Support Coordination Cell (JSCC), the MPCC cooperates with its civilian counterpart, the Civilian Planning and Conduct Capability (CPCC).
The MPCC is situated in the Kortenberg building in Brussels, Belgium, along with a number of other CSDP bodies.
History
2016-2020: MPCC established for non-executive missions
In 2016, the European Union Global Strategy was adopted, and a British referendum resulted in a vote in favour of UK withdrawal (Brexit). In its November 2016 Conclusions on implementing the Global Strategy in the area of security and defence, the Council of the EU invited High Representative Mogherini to propose ‘a permanent operational planning and conduct capability at the strategic level for non-executive military missions’ under the political control and strategic direction of the Political and Security Committee (PSC).
On 8 June 2017, the Council of the European Union (EU) decided to establish the Military Planning and Conduct Capability (MPCC), albeit one not permitted to run executive missions, in order to avoid a British veto.
A non-executive military mission is an operation conducted in support of a host nation in an advisory role only, in contrast to an executive military operation, which is mandated to act in place of the host nation; combat operations fall into the latter category.
2020-: First mandate extension
EU officials had indicated that a review in 2018 might extend the MPCC's mandate to include operations with combat elements, the so-called executive missions. Diplomats had also indicated that the MPCC would be 'rebranded' as the EU's Operational Headquarters (OHQ) after the British withdrawal from the Union, then scheduled for 31 October 2019. On 20 November 2018 the MPCC's mandate was duly expanded to include executive operations (i.e. those with combat elements) by the end of 2020. As such, the MPCC took over the role of the previous European Union Operations Centre (EU OPCEN).
Second mandate extension
A further review of the MPCC's roles and responsibilities has also been agreed with a view to completion by the end of 2020. It is expected that the review will recommend the expansion of the MPCC's role even further and establish it as the EU military planning HQ that several member states have long hoped for and the UK has always opposed.
This should be seen in connection with a Permanent Structured Cooperation project titled Strategic Command and Control System for CSDP Missions and Operations. This aims to "improve the command and control systems of EU missions and operations at the strategic level. Once implemented, the project will enhance the military decision-making process, improve the planning and conduct of missions, and the coordination of EU forces. The Strategic Command and Control (C2) System for CSDP Missions will connect users by delivering information systems and decision-making support tools that will assist strategic commanders carry out their missions. Integration of information systems would include intelligence, surveillance, command and control, and logistics systems."
Structure
The MPCC is a single military strategic command and control structure, responsible for the operational planning and conduct of military missions of up to 2500 troops. This includes the building up, deployment, sustaining and recovery of EU forces. At present the MPCC controls the three EU training missions in the Central African Republic, Mali and Somalia.
The MPCC reports to the Political and Security Committee (PSC) and keeps the EU Military Committee (EUMC) informed. It also cooperates with its existing civilian counterpart, the Civilian Planning and Conduct Capability (CPCC), through a Joint Support Coordination Cell (JSCC).
The MPCC has a maximum of 60 personnel, in addition to personnel seconded from member states.
Director
The Director General of the EUMS also serves as the Director of the MPCC and in that capacity assumes the function of the single commander for all non-executive military missions, exercising command and control over the current three training missions and any possible future non-executive military missions.
The current three Mission Commanders become 'Mission Force Commanders', who act under the command of the Director of the MPCC and remain responsible for exercising military command authority on the ground. The Director of the MPCC assumes the same role, tasks and command relationships as those attributed to a military Operation Commander (OpCdr). He also exercises the responsibilities related to deployment and recovery of the missions, as well as overall budgeting, auditing and reporting.
See also
Common Security and Defence Policy
European External Action Service
European Union Military Staff
Civilian Planning and Conduct Capability
List of military and civilian missions of the European Union
Allied Command Operations of the North Atlantic Treaty Organization
Supreme Headquarters Allied Powers Europe
References
Further reading
A permanent headquarters under construction? The Military Planning and Conduct Capability as a proximate principal, Yf Reykers
The EU Military Staff: a frog in boiling water?, Militaire Spectator
External links
EU defence cooperation: Council establishes a Military Planning and Conduct Capability (MPCC), Council of the European Union
Factsheet 2017
Factsheet 2018
EU military HQ to take charge of Africa missions, EUobserver
European Union Military Staff
Joint military headquarters |
5059992 | https://en.wikipedia.org/wiki/Slingsby%20T67%20Firefly | Slingsby T67 Firefly | The Slingsby T67 Firefly, originally produced as the Fournier RF-6, is a two-seat aerobatic training aircraft, built by Slingsby Aviation in Kirkbymoorside, Yorkshire, England.
It has been used as a trainer aircraft by several armed forces, as well as civilian operators. In the mid-1990s, the aircraft became controversial in the United States after three fatal accidents during US Air Force training operations. The Firefly has poor spin recovery, and has been involved in at least 36 fatal accidents.
Development
The RF-6 was designed by René Fournier, and first flew on 12 March 1974. An all-wooden construction, it featured a high aspect-ratio wing echoing his earlier motorglider designs. Fournier set up his own factory at Nitray to manufacture the design, but after only around 40 had been built, the exercise proved financially unviable, and he was forced to close down production. A four-seat version was under development by Sportavia as the RF-6C, but this demonstrated serious stability problems that eventually led to an almost complete redesign as the Sportavia RS-180.
In 1981, Fournier sold the development rights of the RF-6B to Slingsby, which renamed it the T67. The earliest examples, the T67A, were virtually identical to the Fournier-built aircraft, but the design was soon revised to replace the wooden structure with one of composite material. Slingsby produced several versions developing the airframe and adding progressively larger engines. The Slingsby T67M, aimed at the military (hence "M") training market, was the first to include a constant-speed propeller and inverted fuel and oil systems. Over 250 aircraft have been built, mainly the T67M260 and closely related T-3A variants.
Operational history
The largest Firefly operator was the United States Air Force, which gave the type the designation T-3A Firefly. The Firefly was selected in 1992 to replace the Cessna T-41 introductory trainer and to meet the requirements of the command's Enhanced Flight Screening Program (EFSP), which included aerobatic maneuvers. From 1993 to 1995, 113 aircraft were purchased and delivered to Hondo Municipal Airport in Texas and the U.S. Air Force Academy in Colorado. The Commander of the Air Education and Training Command stood down the entire T-3A fleet in July 1997 as a result of uncommanded engine stoppages during flight and ground operations. A major factor driving the decision was the three T-3A Class A mishaps, in which three Air Force Academy cadets and three instructors were killed. The US Air Force has no replacement for the type, as it no longer provides flight screening to non-fliers. The aircraft were eventually declared in excess of need in the early 2000s and disposed of by scrapping in 2006.
The Royal Air Force used 22 Slingsby T67M260s as their basic trainer between 1995 and 2010. Over 100,000 flight hours were flown out of RAF Barkston Heath by Army, Royal Navy and Royal Marines students, and at RAF Church Fenton with RAF and foreign students. It was flown by Prince Harry as a basic trainer during his Army Air Corps flying training course, based at RAF Barkston Heath, including his first solo flight in Slingsby T67M260 registration G-BWXG in 2009.
The Firefly has also been used by the Royal Hong Kong Auxiliary Air Force and the Royal Jordanian Air Force, where it remains in use.
The Firefly was used in Britain for basic aerobatic training in the 2000s. In December 2012, the National Flying Laboratory Centre at Cranfield University in the UK acquired a T67M260 to supplement its Scottish Aviation Bulldog aerobatic trainer for MSc student flight experience and training. As of 2019 the Firefly is used in upset prevention and recovery training (UPRT) courses.
Variants
RF-6B
Main Fournier production series with Rolls-Royce-built Continental O-200 engine (43 built)
RF-6B/120
RF-6B with Lycoming O-235 engine, one built
RF-6C
Four-seat version of RF-6B built by Sportavia with Lycoming O-320 engine, four built, developed into Sportavia RS-180
T67A
Slingsby-built RF-6B/120 certified on 1 October 1981, O-235 118hp engine, wooden construction, 2 blade fixed prop, fuel in firewall tank, single piece canopy, ten built
T67M Firefly
First flown on 5 December 1982 and certified on 2 August 1983, the T67M was developed from the T67A as a glass-reinforced plastic aircraft for a role as a military trainer. The T67M has a fuel-injected Lycoming AEIO320-D1B and a two-blade Hoffman HO-V72L-V/180CB constant-speed propeller, single piece canopy, fuel in firewall tank. The fuel-injected engine with inverted fuel and oil systems allowed the aircraft to perform sustained negative-G (inverted) aerobatics, although inverted spins were never formally approved. A total of 32 T67Ms (including the later T67M MkII) were produced.
T67B
First flown on 16 April 1981 and certified on 18 September 1984, the T67B was effectively the T67A made, like the T67M, in glassfibre reinforced plastic, but without the up-rated engine and propeller. O-235 118hp engine, 2 blade fixed prop, fuel in firewall tank, single piece canopy. A total of 14 T67Bs were produced.
T67M MkII Firefly
Certified on 20 December 1985, AEIO-320 fuel injected 160hp engine, 2 blade constant speed prop, inverted fuel and oil systems. The T67M MkII replaced the single-piece canopy of the T67M with a two-piece design, and the single fuselage fuel tank with two, larger tanks in the wings.
T67M200 Firefly
Certified on 19 June 1987, the T67M200 had a more powerful Lycoming AEIO360-A1E engine with a three-bladed Hoffman propeller, plus inverted fuel and oil systems. A total of 26 T67M200s were produced.
T67C Firefly
Certified on 15 December 1987, the T67C was the last of the "civilian" variants, based on the T67B with an uprated Lycoming O-320 engine, but without fuel injection and inverted-flight systems found on the T67M variants. Two blade constant speed prop. Two further sub-versions of the T67C copied the two-piece canopy (T67C-2) and wing tanks (T67C-3, sometimes known as the T67D) from the T67M MkII. A total of 28 T67Cs were produced across the three versions.
T67M260 Firefly
Certified on 11 November 1993, the T67M260 added even more power with the six-cylinder Lycoming AEIO540-D4A5 engine, three blade constant speed prop. Unusually for side-by-side light aircraft, the T67M260 was built to be flown solo from the right-hand seat to allow student pilots to immediately get used to the left-hand throttle found in most military aircraft – earlier models of the T67M had a second throttle on the left-hand sidewall of the cabin. A total of 51 T67M-260s were produced. They were used to successfully train hundreds of RAF, RN, British Army, and foreign and Commonwealth pilots through Joint Elementary Flying Training School until late 2010.
T67M260-T3A Firefly
Certified on 15 December 1993, the last military version of the T67 family was the T67M260-T3A, of which the entire production run of 114 was purchased by the United States Air Force, where it was known as the T-3A. The T-3A was essentially the T67M260 with the addition of air conditioning. Although the US media claimed the aircraft was to blame after four aircraft were destroyed in accidents, no engine stoppages or vapour-lock problems with the fuel system were found during very thorough tests at Edwards AFB. All three instructors killed in the accidents came from the C-141, a large transport aircraft; their only prior aerobatic experience was in Air Force pilot training in the T-37 and T-38 jet trainers. This, combined with the thinner air at the higher density altitude of the Academy airfield and training areas, meant spin recovery was delayed and/or improper spin prevention and recovery techniques were used. Parachutes were not worn in the first fatal accident but were worn in the second and third, both of which were caused by low-altitude spins. Following the three fatal accidents and an engine failure in the Academy landing pattern, the fleet was grounded in 1997 and stored without maintenance until being destroyed in 2006.
CT-111 Firefly
Designation used internally by the Canadian Forces only, as the aircraft are registered as civilian aircraft
Operators
Military operators
Bahrain Air Force – three T67M-260
Belize Defence Force Air Wing – one T67M-260
Royal Jordanian Air Force – 14 T67M-260
Dutch pilot selection centre:
The Firefly is used by the Royal Netherlands Air Force during pilot selection which is contracted out to TTC at Seppe Airport.
Former military operators
Canadian Forces
The Firefly was used as a basic military training aircraft in Canada. The Canadian Fireflies entered service in 1992 replacing the CT 134 Musketeer. They were, in turn, replaced in 2006 by the German-made Grob G-120 when the contract ended. The aircraft were owned and operated by Bombardier Aerospace under contract to the Canadian Forces.
Royal Hong Kong Auxiliary Air Force
Royal Air Force, Royal Navy, British Army
The Firefly was used as a basic military trainer in the United Kingdom until spring 2010, when they were replaced by Grob Tutor aircraft. The aircraft are owned and operated under contract by a civilian company on behalf of the military. In the UK, it was under a scheme known as "Contractor Owned Contractor Operated" (CoCo).
United States Air Force
Civil operators
Royal Hong Kong Auxiliary Air Force/Hong Kong Government Flying Service – retired all four T-67M-200 aircraft after 1996
Hong Kong Aviation Club – used for pilot aerobatics training
New Zealand
Auckland Aero Club – one T67B – used for pilot aerobatics training and high-visibility scenic flight.
North Shore Aero Club – one T67M200 – Used for pilot aerobatics training.
Spain
FTEJerez – one T67M MarkII – used to provide upset training to graduates
Turkey
Turkish Aeronautical Association (Türk Hava Kurumu) – used to give basic flight training to ATPL trainees (T67M200)
United Kingdom
Swift Aircraft purchased 21 Slingsby T67M260 aircraft from Babcock Defense Services in June 2011, to be offered for sale or lease.
Cranfield University operates one T67 to provide flight initiations for aerospace engineers.
Leading Edge Aviation operates one ex-Royal Hong Kong Auxiliary Air Force T67 to provide A-UPRT flights for its student pilots.
Specifications (T-3A)
See also
References
Hoyle, Craig. "World Air Forces Directory". Flight International, Vol. 180, No. 5231, 13–19 December 2011, pp. 26–52. ISSN 0015-3710.
External links
https://www.ntsb.gov/_layouts/ntsb.aviation/brief.aspx?ev_id=20141024X52246&key=1
Official Canadian Forces T67 Firefly page
UK Type Certificate for the T67 Firefly family
Marshall Slingsby's Retrieved 18 June 2012
Tom Cassell's Retrieved 18 June 2012
Air Force Scrapping Troubled Plane Retrieved 9 September 2006
AF link: Officials announce T-3A Firefly final disposition with public domain picture.
1970s French sport aircraft
Fournier aircraft
Firefly
Single-engined tractor aircraft
Low-wing aircraft
Aircraft first flown in 1974 |
57202948 | https://en.wikipedia.org/wiki/ACCS | ACCS | ACCS may refer to:
Adarsh Credit Cooperative Society, an Indian credit society
Air Command and Control System, a NATO project to replace outdated technology
Association of Classical and Christian Schools, an organization that encourages the formation of Christian schools using a model of classical education
See also
List of United States Air Force airborne command and control squadrons for Airborne Command and Control Squadron
American Association of Christian Colleges and Seminaries for American Christian College and Seminary, any school that is part of the association |
5087621 | https://en.wikipedia.org/wiki/Python%20License | Python License | The Python License is a deprecated permissive computer software license created by the Corporation for National Research Initiatives (CNRI). It was used for versions 1.6 and 2.0 of the Python programming language, both released in the year 2000.
The Python License is similar to the BSD License and, while it is a free software license, its wording in some versions meant that it was incompatible with the GNU General Public License (GPL) used by a great deal of free software including the Linux kernel. For this reason CNRI retired the license in 2001, and the license of current releases is the Python Software Foundation License.
Origin
Python was created by Guido van Rossum and the initial copyright was held by his employer, the Centrum Wiskunde & Informatica (CWI). During this time Python was distributed under a GPL-compatible variant of the Historical Permission Notice and Disclaimer license. CNRI obtained ownership of Python when Van Rossum became employed there, and after some years they drafted a new license for the language.
Retirement
The Python License includes a clause stating that the license is governed by the laws of the State of Virginia, United States. This choice-of-law clause was a key reason the Free Software Foundation considered the license incompatible with the GPL. The conflict was resolved with the release of Python 1.6.1: it differs from Python 1.6 only in some minor bug fixes and new, GPL-compatible licensing terms.
References
Python (programming language)
Free and open-source software licenses
Permissive software licenses |
481929 | https://en.wikipedia.org/wiki/Not%20a%20typewriter | Not a typewriter | In computing, "Not a typewriter" or ENOTTY is an error code defined in the errno.h header file found on many Unix systems. This code is now used to indicate that an invalid ioctl (input/output control) number was specified in an ioctl system call.
Details
This error originated in early UNIX. In Version 6 UNIX and earlier, I/O control was limited to serial-connected terminal devices, typically a teletype (abbreviated TTY), through the gtty and stty system calls. If an attempt was made to use these calls on a non-terminal device, the error generated was ENOTTY. When the stty/gtty system calls were replaced with the more general ioctl (I/O control) call, the ENOTTY error code was retained.
Early computers and Unix systems used electromechanical typewriters as terminals. The abbreviation TTY, which occurs widely in modern UNIX systems, stands for "Teletypewriter." For example, the original meaning of the SIGHUP signal is that it Hangs UP the phone line on the teletypewriter which uses it. The generic term "typewriter" was probably used because "Teletype" was a registered trademark of AT&T subsidiary Teletype Corporation and was too specific. The name "Teletype" was derived from the more general term, "teletypewriter"; using "typewriter" was a different contraction of the same original term.
POSIX sidesteps this issue by describing ENOTTY as meaning "not a terminal".
Because ioctl is now supported on other devices than terminals, some systems display a different message such as "Inappropriate ioctl for device" instead.
Occurrence
In some cases, this message will occur even when no ioctl has been issued by the program. This is due to the way the isatty() library routine works. The error code errno is only set when a system call fails. One of the first system calls made by the C standard I/O library is in an isatty() call used to determine if the program is being run interactively by a human (in which case isatty() will succeed and the library will write its output a line at a time so the user sees a regular flow of text) or as part of a pipeline (in which case it writes a block at a time for efficiency). If a library routine fails for some reason unrelated to a system call (for example, because a user name wasn't found in the password file) and a naïve programmer blindly calls the normal error reporting routine perror() on every failure, the leftover ENOTTY will result in an utterly inappropriate "Not a typewriter" (or "Not a teletype", or "Inappropriate ioctl for device") being delivered to the user.
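The mechanism described above is easy to reproduce. In the sketch below (a hedged illustration using Python's standard library rather than the C library directly), a pipe stands in for a non-terminal device: isatty() reports false, and a terminal-only query fails with ENOTTY.

```python
import errno
import os

# A pipe is a file descriptor that is definitely not a terminal.
r, w = os.pipe()

# isatty() reports that the pipe is not a terminal...
print(os.isatty(r))  # False

# ...and a terminal-only query fails; the error code is ENOTTY,
# whose message text varies by system ("Not a typewriter",
# "Inappropriate ioctl for device", ...).
try:
    os.ttyname(r)
except OSError as e:
    print(e.errno == errno.ENOTTY)  # True

os.close(r)
os.close(w)
```

Running the same check on a real terminal device succeeds, which is exactly the distinction the standard I/O library exploits when choosing between line and block buffering.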
For many years the UNIX mail program sendmail contained this bug: when mail was delivered from another system, the mail program was being run non-interactively. If the destination address was local, but referred to a user name not found in the local password file, the message sent back to the originator of the email was the announcement that the person they were attempting to communicate with was not a typewriter.
See also
lp0 on fire
References
External links
Computer error messages
Input/output
POSIX error codes |
41087955 | https://en.wikipedia.org/wiki/Accelerator%20physics%20codes | Accelerator physics codes | A charged particle accelerator is a complex machine that takes elementary charged particles and accelerates them to very high energies. Accelerator physics is a field of physics encompassing all the aspects required to design and operate the equipment and to understand the resulting dynamics of the charged particles. There are software packages associated with each such domain. There are a large number of such codes. The 1990 edition of the Los Alamos Accelerator Code Group's compendium provides summaries of more than 200 codes. Certain of those codes are still in use today although many are obsolete. Another index of existing and historical accelerator simulation codes is located at
Single particle dynamics codes
For many applications it is sufficient to track a single particle through the relevant electric and magnetic fields.
Old codes no longer maintained by their original authors or home institutions include: BETA, AGS, ALIGN, COMFORT, DESIGN, DIMAD, HARMON, LEGO, LIAR, MAGIC, MARYLIE, PATRICIA, PETROS, RACETRACK, SYNCH, TRANSPORT, TURTLE, and UAL. Some legacy codes are maintained by commercial organizations for academic, industrial and medical accelerator facilities that continue to use those codes. TRANSPORT, TRACE 3-D and TURTLE are among the historic codes that are commercially maintained.
Major maintained codes include:
Columns
Spin Tracking
Tracking of a particle's spin.
Taylor Maps
Construction of Taylor series maps to high order that can be used for simulating particle motion and also can be used for such things as extracting single particle resonance strengths.
Weak-Strong Beam-Beam Interaction
Can simulate the beam-beam interaction with the simplification that one beam is essentially fixed in size. See below for a list of strong-strong interaction codes.
Electromagnetic Field Tracking
Can track (ray trace) a particle through arbitrary electromagnetic fields.
Higher Energy Collective effects
The interactions between the particles in the beam can have important effects on its behavior, control and dynamics. Collective effects take different forms, from intrabeam scattering (IBS), which is a direct particle-particle interaction, to wakefields, which are mediated by the vacuum chamber wall of the machine through which the particles travel. In general, the effect of direct particle-particle interactions is smaller for higher-energy particle beams. At very low energies, space charge has a large effect on a particle beam and thus becomes hard to calculate. See below for a list of programs that can handle low-energy space charge forces.
Synchrotron radiation tracking
Ability to track the synchrotron radiation (mainly X-rays) produced by the acceleration of charged particles.
Wakefields
The electro-magnetic interaction between the beam and the vacuum chamber wall enclosing the beam are known as wakefields. Wakefields produce forces that affect the trajectory of the particles of the beam and can potentially destabilize the trajectories.
Extensible
Open source and object oriented coding to make it relatively easy to extend the capabilities.
Space Charge Codes
The self-interaction (e.g. space charge) of the charged particle beam can cause growth of the beam, such as bunch lengthening, or intrabeam scattering. Additionally, space charge effects may cause instabilities and associated beam loss. Typically, at relatively low energies (roughly where the relativistic gamma factor is less than 10 or so), the Poisson equation is solved at intervals during the tracking using particle-in-cell algorithms. Because space charge effects lessen at higher energies, they may there be modeled with simpler algorithms that are computationally much faster than those used at lower energies.
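The step described above, depositing charge on a grid and then solving the Poisson equation, can be sketched in one dimension in a few lines (an illustration only: unit constants, a periodic domain and a neutralizing background are assumed, and real particle-in-cell codes are far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_part, length = 64, 10_000, 1.0
dx = length / n_grid

# A Gaussian bunch of macro-particles on a periodic domain.
x = rng.normal(0.5, 0.05, n_part) % length

# Cloud-in-cell deposition: each particle splits its charge between
# the two nearest grid points, weighted by distance.
idx = (x / dx).astype(int)
frac = x / dx - idx
rho = np.zeros(n_grid)
np.add.at(rho, idx % n_grid, 1.0 - frac)
np.add.at(rho, (idx + 1) % n_grid, frac)
rho = rho / (n_part * dx) - 1.0 / length  # neutralizing background

# FFT-based solve of d^2(phi)/dx^2 = -rho on the periodic grid.
k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=dx)
rho_k = np.fft.fft(rho)
phi_k = np.zeros_like(rho_k)
phi_k[1:] = rho_k[1:] / k[1:] ** 2
phi = np.real(np.fft.ifft(phi_k))

# The field that would kick the particles on the next tracking step.
E = -np.gradient(phi, dx)
print(abs(rho.sum()) < 1e-6)  # total charge is neutral -> True
```

Higher-energy codes can replace this self-consistent solve with cheaper analytic models, which is why the space charge code lists are split by energy regime.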
Codes that handle low energy space charge effects include:
ASTRA
Bmad
CST Studio Suite
GPT
IMPACT
mbtrack
ORBIT, PyORBIT
OPAL
PyHEADTAIL
Synergia
TraceWin
Tranft
VSim
Warp
At higher energies, space charge effects include Touschek scattering and coherent synchrotron radiation (CSR). Codes that handle higher energy space charge include:
Bmad
ELEGANT
MaryLie
SAD
"Strong-Strong" Beam-beam effects codes
When two beams collide, the electromagnetic field of one beam will then have strong effects on the other one, called beam-beam effects. So called "weak-strong" simulations model one beam (called the "strong" beam since it affects the other beam) as a fixed distribution (typically a Gaussian distribution) which interacts with the particles of the other "weak" beam. This greatly simplifies the simulation. A full "strong-strong" simulation is more complicated and takes more simulation time. Strong-strong codes include
GUINEA-PIG
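For a round Gaussian strong beam, the weak-strong kick described above has a simple closed form: the deflection at radius r is proportional to (1 - exp(-r^2/2*sigma^2))/r. A minimal sketch follows; the physical prefactor, which depends on bunch charge, beam energy and the classical particle radius, is folded into an illustrative strength parameter (an assumption of this sketch):

```python
import math

def weak_strong_kick(x, y, sigma, strength=1e-3):
    """Transverse deflection (dx', dy') from a round Gaussian strong beam.

    sigma is the rms beam size; 'strength' stands in for the physical
    prefactor, which is omitted here (an assumption of this sketch).
    """
    r2 = x * x + y * y
    if r2 == 0.0:
        return 0.0, 0.0  # an on-axis particle feels no net kick
    factor = strength * (1.0 - math.exp(-r2 / (2.0 * sigma ** 2))) / r2
    return factor * x, factor * y

# Inside the beam the kick grows roughly linearly with offset;
# far outside it falls off like 1/r, as for a line charge.
print(weak_strong_kick(0.0, 0.0, sigma=1e-4))  # (0.0, 0.0)
print(weak_strong_kick(1e-3, 0.0, sigma=1e-4)[0])
```

A strong-strong code must instead recompute both beams' fields self-consistently on every collision, which is what makes those simulations so much more expensive.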
Impedance computation codes
An important class of collective effects may be summarized in terms of the beam's response to an "impedance". A key task is thus the computation of this impedance for the machine. Codes for this computation include
ABCI
ACE3P
CST Studio Suite
GdfidL
TBCI
VSim
Magnet and other hardware-modeling codes
To control the charged particle beam, appropriate electric and magnetic fields must be created. There are software packages to help in the design and understanding of the magnets, RF cavities, and other elements that create these fields. Codes include
ACE3P
COMSOL Multiphysics
CST Studio Suite
OPERA
VSim
Lattice file format and data interchange issues
Given the variety of modelling tasks, there is not one common data format that has developed.
For describing the layout of an accelerator and the corresponding elements, one uses a so-called "lattice file".
There have been numerous attempts at unifying the lattice file formats used in different codes. One unification attempt is the Accelerator Markup Language, and the Universal Accelerator Parser. Another attempt at a unified approach to accelerator codes is the UAL or Universal Accelerator Library.
The file formats used in MAD may be the most common, with translation routines available to convert to the input form needed by a different code.
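As an illustration of what such a lattice file contains, a minimal FODO cell in MAD-X-style syntax might look like the following (the element names and strengths are purely illustrative, not taken from any real machine):

```
QF: QUADRUPOLE, L=0.5, K1=0.2;
QD: QUADRUPOLE, L=0.5, K1=-0.2;
D:  DRIFT, L=1.0;
FODO: LINE=(QF, D, QD, D);
```

Translation tools typically map declarations like these element by element onto the target code's own element classes and attribute names.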
Associated with the ELEGANT code is a data format called SDDS, with an associated suite of tools. If one uses a Matlab-based code, such as Accelerator Toolbox, all of Matlab's tools are available.
Codes in applications of particle accelerators
There are many applications of particle accelerators. For example, two important applications are elementary particle physics and synchrotron radiation production. When performing a modeling task for any accelerator operation, the results of charged particle beam dynamics simulations must feed into the associated application. Thus, for a full simulation, one must include the codes in associated applications. For particle physics, the simulation may be continued in a detector with a code such as Geant4.
For a synchrotron radiation facility, for example, the electron beam produces an x-ray beam that then travels down a beamline before reaching the experiment. Thus, the electron beam modeling software must interface with the x-ray optics modelling software such as SRW, Shadow, McXTrace, or Spectra. Bmad can model both X-rays and charged particle beams. The x-rays are used in an experiment which may be modeled and analyzed with various software, such as the DAWN science platform. OCELOT also includes both synchrotron radiation calculation and x-ray propagation models.
Industrial and medical accelerators represent another area of important applications. A 2013 survey estimated that there were about 27,000 industrial accelerators and another 14,000 medical accelerators worldwide, and those numbers have continued to increase since that time. Codes used at those facilities vary considerably and often include a mix of traditional codes and custom codes developed for specific applications. The Advanced Orbit Code (AOC) developed at Ion Beam Applications is an example.
See also
List of codes from UCLA Particle Beam Physics Laboratory
Comparison of Accelerator Codes
References
Accelerator physics
Scientific simulation software |
15684278 | https://en.wikipedia.org/wiki/Zarafa%20%28software%29 | Zarafa (software) | Zarafa was an open-source groupware application that originated in the city of Delft in the Netherlands. The company that developed Zarafa, previously known as Connectux, is also called Zarafa. The Zarafa groupware provided email storage on the server side and offered its own Ajax-based mail client called WebAccess and an HTML5-based client called WebApp. Advanced features were available in commercially supported versions ("Small Business", "Professional" and "Enterprise" (different feature levels)). Zarafa has been superseded by Kopano.
Zarafa was originally designed to integrate with Microsoft Office Outlook and was intended as an alternative to Microsoft Exchange Server. Connectivity with Microsoft Outlook was provided via a proprietary client-side plugin. Support for the plugin was discontinued after Q1/2016, though Outlook can since use its own ActiveSync implementation instead. The WebApp (and WebAccess) has the same "look and feel" as Outlook Web App (OWA). The software handles a personal address book, calendar, notes and tasks, "Public Folders", a shared calendar (inviting internal and external users, resource management), exchange of files, and video chat. The open-source edition does not support any MAPI-based Outlook users, while the community edition supports three Outlook users.
All server-side components and the WebApp/WebAccess of Zarafa are published under the Affero General Public License (AGPL), which is based on the GNU General Public License, version 2 (GPLv2). Introducing and maintaining a dual-licensing strategy, Zarafa released the full core software, that is, the server-side software stack, under the GNU Affero General Public License, version 3 (AGPLv3) on 18 September 2008.
Technology
Zarafa provides its groupware functionality by connecting the Linux-based server with Outlook clients using MAPI. The communication between server and client is based upon SOAP technology. The connection to Outlook clients can be secured using TLS/SSL, either directly between the Zarafa server program and the client, or via an HTTPS proxy.
All data is generally stored in a MySQL database, although attachments can be saved on the filesystem. The Zarafa server can get its user information from LDAP, Active Directory, Unix user accounts or the MySQL database.
The webmail is based on HTML5 (WebApp) and AJAX technology (WebAccess), with a PHP backend using a MAPI PHP extension.
Other clients can connect via POP3, IMAP and iCalendar/CalDAV.
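Because other clients can reach the server over iCalendar/CalDAV, any tool able to emit a VEVENT can publish to a Zarafa calendar. A minimal sketch in Python; the UID, PRODID, and timestamps are invented for illustration, and the CalDAV collection URL a client would PUT this to depends on the deployment:

```python
def make_vevent(uid, summary, dtstart, dtend):
    """Build a minimal iCalendar object containing one VEVENT.

    A CalDAV client would PUT this text to a calendar collection;
    the exact collection URL is deployment-specific and not shown.
    """
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//demo//EN",   # placeholder product identifier
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{dtstart}",            # iCalendar UTC form, e.g. 20230501T090000Z
        f"DTEND:{dtend}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines) + "\r\n"   # iCalendar mandates CRLF line endings
```

The same text validates against ordinary iCalendar consumers, which is what makes the format a useful lowest common denominator between clients.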
Zarafa initiated a project called Z-push in October 2007. It supports Exchange ActiveSync compatible devices (Symbian, Pocket PC, iPhone (firmware 2.0 and higher), Android (version 2.1 and higher), Nokia (mail4Exchange)) implementing the ActiveSync protocol and using the Incremental Change System (ICS) provided by the PHP-MAPI extension.
See also
List of collaborative software
List of applications with iCalendar support
References
Publications
Peter van Wijngaarden: Linux Magazine NL, Sep/2006, nr 4 -- Zarafa extended with real-time LDAP coupling
Sebastian Kummer und Manfred Kutas: Linux Magazine PRO (USA) Feb/2008 -- Zarafa - Exchange Alternative, Linux New Media AG, München, 2007
Roberto Galoppini and Davide Galletti: Open Source Messaging & Collaboration: Zarafa, SOS Open Source 2011
External links
Official website
Free email software
Free groupware
Wireless email
Data synchronization
Collaborative software
Groupware
Ajax (programming)
Groupware
Software using the GNU AGPL license |
594109 | https://en.wikipedia.org/wiki/Wolfram%20Research | Wolfram Research | Wolfram Research, Inc. ( ) is an American multinational company that creates computational technology. Wolfram's flagship product is the technical computing program Wolfram Mathematica, first released on June 23, 1988. Other products include WolframAlpha, Wolfram SystemModeler, Wolfram Workbench, gridMathematica, Wolfram Finance Platform, webMathematica, the Wolfram Cloud, and the Wolfram Programming Lab. Wolfram Research founder Stephen Wolfram is the CEO. The company is headquartered in Champaign, Illinois, United States.
History
The company launched Wolfram Alpha, an answer engine on May 16, 2009. It brings a new approach to knowledge generation and acquisition that involves large amounts of curated computable data in addition to semantic indexing of text.
Wolfram Research acquired MathCore Engineering AB on March 30, 2011.
On July 21, 2011, Wolfram Research launched the Computable Document Format (CDF). CDF is an electronic document format designed to allow easy authoring of dynamically generated interactive content.
In June 2014, Wolfram Research officially introduced the Wolfram Language as a new general multi-paradigm programming language. It is the primary programming language used in Mathematica.
On April 15, 2020, Wolfram Research received $5,575,000 to help pay its employees during the COVID-19 pandemic as part of the U.S. government's Paycheck Protection Program administered by the Small Business Administration.
Products and resources
Mathematica
Mathematica is technical computing software with features for neural networks, machine learning, image processing, geometry, data science, and visualization. Mathematica includes a notebook interface and can produce slides for presentations.
Wolfram Alpha
Wolfram Alpha is a free online service that answers factual queries directly by computing the answer from externally sourced curated data, rather than providing a list of documents or web pages that might contain the answer as a search engine might. Users submit queries and computation requests via a text field and Wolfram Alpha then computes answers and relevant visualizations.
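Programmatic access follows the same pattern: the public Full Results API takes the query text and an application key as URL parameters. A hedged sketch that only builds the request URL (the app ID below is a placeholder, and fetching and parsing the response is omitted):

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://api.wolframalpha.com/v2/query"

def build_query_url(query, app_id, fmt="plaintext"):
    """Construct a Wolfram|Alpha Full Results API request URL.

    'input', 'appid', and 'format' are documented query parameters.
    """
    params = {"input": query, "appid": app_id, "format": fmt}
    return API_ENDPOINT + "?" + urlencode(params)

url = build_query_url("population of France", "DEMO-APPID")  # placeholder key
```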
On February 8, 2012, Wolfram Alpha Pro was released, offering users additional features (e.g., the ability to upload many common file types and data — including raw tabular data, images, audio, XML, and dozens of specialized scientific, medical, and mathematical formats — for automatic analysis) for a monthly subscription fee.
In 2016, Wolfram Alpha Enterprise, a business-focused analytics tool, was launched. The program combines data supplied by a corporation with the algorithms from Wolfram Alpha to answer questions related to that corporation.
Wolfram SystemModeler
Wolfram SystemModeler is a platform for engineering as well as life-science modeling and simulation based on the Modelica language. Its primary interface, ModelCenter, is an interactive graphical modeling and simulation environment that includes a customizable set of component libraries. The software also provides tight integration with Mathematica: users can develop, simulate, document, and analyze their models within Mathematica notebooks.
Publishing
Wolfram Research publishes several free websites including the MathWorld and ScienceWorld encyclopedias. ScienceWorld, which launched in 2002, is divided into sites on chemistry, physics, astronomy and scientific biography. In 2005, the physics site was deemed a "valuable resource" by American Scientist magazine. However, by 2009, the astronomy site was said to suffer from outdated information, incomplete articles and link rot.
The Wolfram Demonstrations Project is a collaborative site hosting interactive technical demonstrations powered by a free Mathematica Player runtime.
Wolfram Research publishes The Mathematica Journal. Wolfram has also published several books via Wolfram Media, Wolfram's publishing arm. In addition, they have experimented with electronic textbook creation.
Media activities
Wolfram Research served as the mathematical consultant for the CBS television series Numb3rs, a show about the mathematical aspects of crime-solving.
See also
A New Kind of Science
Ed Pegg, Jr.
Eric W. Weisstein
References
External links
Official Wolfram Research Twitter Account
Hoovers Fact Sheet on Wolfram Research, Inc.
The Mathematics Behind NUMB3RS, Wolfram's site on NUMB3RS mathematics.
Champaign, Illinois
Cloud computing providers
Data companies
Mathematical software
Multinational companies headquartered in the United States
Software companies based in Illinois
Software companies established in 1987
Software companies of the United States |
29598441 | https://en.wikipedia.org/wiki/1985%20Aloha%20Bowl | 1985 Aloha Bowl | The 1985 Aloha Bowl, part of the 1985 bowl game season, took place on December 28, 1985, at Aloha Stadium in Honolulu, Hawaii. The competing teams were the Alabama Crimson Tide, representing the Southeastern Conference (SEC), and the USC Trojans of the Pacific-10 Conference (Pac-10). Alabama was victorious by a final score of 24–3. Alabama running back Gene Jelks and linebacker Cornelius Bennett were named the game's co-MVPs.
Teams
Alabama
The 1985 Alabama squad finished the regular season with losses to Penn State and Tennessee and a tie against LSU to compile an 8–2–1 record. Following their victory over Auburn, the Crimson Tide accepted an invitation to play in the Aloha Bowl on November 30 after Tennessee defeated Vanderbilt to clinch a berth in the 1986 Sugar Bowl. The appearance marked the first for Alabama in the Aloha Bowl, and their 38th overall bowl game.
USC
The 1985 USC squad finished the regular season with losses to Baylor, Arizona State, Notre Dame, California and Washington to finish with a record of 6–5. Following their victory over UCLA, the Trojans accepted an invitation to play in the Aloha Bowl on November 25. The appearance marked the first for USC in the Aloha Bowl, and their 29th overall bowl game.
Game summary
In a first half dominated by both defenses, the Crimson Tide and the Trojans traded field goals, resulting in a 3–3 tie at halftime. Van Tiffin hit a 48-yard field goal in the first quarter for Alabama, and Don Shafer answered with a 24-yard field goal in the second quarter. Craig Turner scored the first touchdown of the contest on a 1-yard run to complete a 10-play, 58-yard drive in the third quarter. The Crimson Tide closed out the game with a pair of fourth-quarter touchdowns. The first came on a 24-yard pass from Mike Shula to Clay Whitehurst and the second on a 14-yard Al Bell run. For their performances, running back Gene Jelks and linebacker Cornelius Bennett were named the game's co-MVPs.
References
1985–86 NCAA football bowl games
1985
1985
1985
December 1985 sports events in the United States
Aloha |
24976836 | https://en.wikipedia.org/wiki/Global%20Technology%20Associates | Global Technology Associates | Global Technology Associates, Inc. (GTA) is a developer and pioneer of Internet firewalls. The company is privately held with its headquarters, development and support facilities based in Orlando, Florida.
History
GTA was founded by a group of software engineers in 1992, and in 1994 was one of the first companies to introduce a commercial firewall. The original firewall they introduced was the GFX-94, a stateful firewall with a unique dual-walled design consisting of two separate hardware devices, an inner and an outer firewall. The GFX-94 was in the first group of firewalls certified by the NCSA (now ICSA). In 1996 GTA introduced the first tiny-footprint firewall, the GNAT Box firewall, which fit entirely on a 3.5" floppy diskette. The GNAT Box firewall evolved into the current GB-OS. GB-OS is the operating system for all GTA firewalls and carries the ICSA Firewall Certification.
External links
GTA Web Site
GTA Forum
References
ItDefense Magazine March 2006
"Stop 'em with a box", Network World Aug 8, 2000 David Strom
"Tiny Firewalls Fill a Niche." Network World, November 30, 1998 Christopher Null
Companies based in Orlando, Florida
Companies established in 1992
Networking companies of the United States
Networking hardware companies |
9637706 | https://en.wikipedia.org/wiki/Unicru | Unicru | Unicru was a United States computer software company which produced a human resources software line built to aid companies in evaluating job applicants and their suitability for particular positions by giving them personality tests. Many of their customers were large retailers such as Big Y, Lowe's, Hollywood Video, Hastings Entertainment, Albertsons, Toys R Us, PetSmart, Best Buy, and Blockbuster Video. According to its vendor, Unicru was used in 16% of major retail hiring in the United States as of early 2009.
History
Unicru was founded in 1987 as Decision Point Data and is headquartered in Beaverton, Oregon. It acquired two other software companies: Guru.com in 2003 and Xperius (formerly Personic) in 2004. The Guru.com URL and logo were subsequently sold to eMoonlighter.com which now operates under the Guru.com brand. In August 2006, Kronos announced it had acquired Unicru.
According to the Wall Street Journal, cheating on the tests, using answer keys available online, became more common during the late-2000s recession, though Kronos denies that cheating is common or significantly affects the test's validity.
See also
Industrial and organizational psychology
References
2006 mergers and acquisitions
Defunct software companies of the United States
Software companies established in 1987
Software companies disestablished in 2006 |
173231 | https://en.wikipedia.org/wiki/UUCP | UUCP | UUCP is an acronym of Unix-to-Unix Copy. The term generally refers to a suite of computer programs and protocols allowing remote execution of commands and transfer of files, email and netnews between computers.
A command named uucp is one of the programs in the suite; it provides a user interface for requesting file copy operations. The UUCP suite also includes uux (user interface for remote command execution), uucico (the communication program that performs the file transfers), uustat (reports statistics on recent activity), uuxqt (execute commands sent from remote machines), and uuname (reports the UUCP name of the local system). Some versions of the suite include uuencode/uudecode (convert 8-bit binary files to 7-bit text format and vice versa).
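The 8-bit-to-7-bit conversion the suite performs for binary files survives in Python's standard library, which makes it easy to demonstrate (the payload below is arbitrary):

```python
import binascii

data = b"hello, uucp!"            # any 8-bit payload, at most 45 bytes per line
line = binascii.b2a_uu(data)      # one uuencoded text line, 7-bit safe

# The length byte at the start of the line lets the decoder recover
# the exact original bytes.
assert binascii.a2b_uu(line) == data
```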
Although UUCP was originally developed on Unix in the 1970s and 1980s, and is most closely associated with Unix-like systems, UUCP implementations exist for several non-Unix-like operating systems, including DOS, OS/2, OpenVMS (for VAX hardware only), AmigaOS, classic Mac OS, and even CP/M.
History
UUCP was originally written at AT&T Bell Laboratories by Mike Lesk. By 1978 it was in use on 82 UNIX machines inside the Bell system, primarily for software distribution. It was released in 1979 as part of Version 7 Unix. The original UUCP was rewritten by AT&T researchers Peter Honeyman, David A. Nowitz, and Brian E. Redman around 1983. The rewrite is referred to as HDB or HoneyDanBer uucp, which was later enhanced, bug fixed, and repackaged as BNU UUCP ("Basic Network Utilities").
Each of these versions was distributed as proprietary software, which inspired Ian Lance Taylor to write a new free software version from scratch in 1991.
Taylor UUCP was released under the GNU General Public License. Taylor UUCP addressed security holes which allowed some of the original network worms to remotely execute unexpected shell commands. Taylor UUCP also incorporated features of all previous versions of UUCP, allowing it to communicate with any other version and even use similar config file formats from other versions.
UUCP was also implemented for non-UNIX operating systems, most notably DOS systems. Packages such as UUSLAVE/GNUUCP (John Gilmore, Garry Paxinos, Tim Pozar), UUPC/extended (Drew Derbyshire of Kendra Electronic Wonderworks) and FSUUCP (Christopher Ambler of IODesign) brought early Internet connectivity to personal computers, expanding the network beyond the interconnected university systems. FSUUCP formed the basis for many bulletin board system (BBS) packages such as Galacticomm's Major BBS and Mustang Software's Wildcat! BBS to connect to the UUCP network and exchange email and Usenet traffic. As an example, UFGATE (John Galvin, Garry Paxinos, Tim Pozar) was a package that provided a gateway between networks running Fidonet and UUCP protocols.
FSUUCP was the only other implementation of Taylor's enhanced 'i' protocol, a significant improvement over the standard 'g' protocol used by most UUCP implementations.
Technology
Before the widespread availability of Internet access, computers were only connected by smaller local area networks within a company or organization. They were also often equipped with modems so they could be used remotely from character-mode terminals via dial-up telephone lines. UUCP used the computers' modems to dial out to other computers, establishing temporary, point-to-point links between them. Each system in a UUCP network has a list of neighbor systems, with phone numbers, login names and passwords, etc. When work (file transfer or command execution requests) is queued for a neighbor system, the uucico program typically calls that system to process the work. The uucico program can also poll its neighbors periodically to check for work queued on their side; this permits neighbors without dial-out capability to participate.
Over time, dial-up links were replaced by Internet connections, and UUCP added a number of new link layer protocols. These newer connections also reduced the need for UUCP at all, as newer application protocols developed to take advantage of the new networks. Today, UUCP is rarely used over dial-up links, but is occasionally used over TCP/IP.
The number of systems involved, as of early 2006, was between 1500 and 2000 sites across 60 enterprises. UUCP's longevity can be attributed to its low cost, extensive logging, native failover to dialup, and persistent queue management.
Sessions
UUCP is normally started by having a user log into the target system and then running the uucico program. In most cases, this is automated by logging into a known user account used for transfers, whose login shell has been set to uucico. Thus, for automated transfers, another machine simply has to open a modem connection to the called machine and log into the known account.
When uucico runs, it will expect to receive commands from another UUCP program on the caller's machine and begin a session. The session has three distinct stages:
Initial handshake
File request(s)
Final handshake
Initial handshake
On starting, uucico will respond by sending an identification string, \20Shere=hostname\0, where \20 is the control-P character, and \0 is a trailing null. The caller's UUCP responds with \20Scallername options\0, where options is a string containing zero or more Unix-like option switches. These can include packet and window sizes, the maximum supported file size, debugging options, and others.
Depending on the setup of the two systems, the call may end here. For instance, when the caller responds with their system name, the called system may optionally hang up if it does not recognize the caller, sending the RYou are unknown to me\0 response string and then disconnecting.
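The banner exchange above is plain byte-string work; a sketch of both sides in Python (the option switches in the caller's reply are illustrative, not a fixed set):

```python
def parse_shere(pkt):
    """Parse the called system's \\x10Shere=hostname\\x00 banner.

    \\x10 is control-P; the layout follows the description above,
    and error handling here is minimal.
    """
    if not (pkt.startswith(b"\x10S") and pkt.endswith(b"\x00")):
        raise ValueError("not a UUCP handshake banner")
    body = pkt[2:-1].decode("ascii")     # e.g. "here=barbox"
    if not body.startswith("here="):
        raise ValueError("missing here= field")
    return body[5:]                      # the remote hostname

def caller_response(name, options="-Q0 -x9"):
    """Build the caller's \\x10Scallername options\\x00 reply.

    The default option string is an invented example.
    """
    return b"\x10S" + f"{name} {options}".encode("ascii") + b"\x00"
```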
File requests
If the two systems successfully handshake, the caller will now begin to send a series of file requests. There are four types:
S causes a file to be Sent from the caller to the called system (upload). The from and to names are provided, allowing the filename to be changed on the receiver. When the S command is received on the called system, it responds with SY if it succeeded and it is ready to accept the file, or SNx if it failed, where x is a failure reason. If an SY is received by the caller, it begins uploading the file using the protocol selected during the initial handshake (see below). When the transfer is complete, the called system responds with CY if it successfully received the file, or CN5 if it failed.
R is a Request for the called system to send a file to the caller (download). It is otherwise similar to S, using RY and RN to indicate the command was accepted and it will begin to send data or had a problem, and expecting a CY and CN5 from the caller at the end of the transfer.
X uploads commands to be eXecuted on the called system. This can be used to make that system call another and deliver files to it. The called system responds with XY if it succeeded, or XN if it failed.
H, for Hangup, indicates the caller is done. The called system responds with HY if it succeeded, or HN if it failed.
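A toy responder for the four request types makes the call-and-answer structure concrete; the policy flags and the failure reason codes below are illustrative, not taken from any real uucico:

```python
def handle_request(line, allow_upload=True, allow_download=True):
    """Answer one file-request line with the response codes above.

    Only the leading command letter is dispatched on; a real
    implementation would also check paths, permissions, and spool
    space, and the numeric reason codes here are examples.
    """
    cmd = line[:1]
    if cmd == "S":                        # caller uploads a file to us
        return "SY" if allow_upload else "SN2"
    if cmd == "R":                        # caller asks us to send a file
        return "RY" if allow_download else "RN2"
    if cmd == "X":                        # remote execution request
        return "XY"
    if cmd == "H":                        # caller is done
        return "HY"
    return "HN"                           # fallback for unknown commands
```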
Final handshake
After sending an H command, the calling system sends a final packet (control-P, six ohs, null terminator) and the called system responds with one of its own (control-P, seven ohs, null terminator). Some systems will simply hang up on successful reception of the H command and not bother with the final handshake.
g-protocol
Within the suite of protocols in UUCP, the underlying g-protocol is responsible for transferring information in an error-free form. The protocol originated as a general-purpose system for packet delivery, and thus offers a number of features that are not used by the UUCP package as a whole. These include a secondary channel that can send command data interspersed with a file transfer, and the ability to renegotiate the packet and window sizes during transmission. These extra features may not be available in some implementations of the UUCP stack.
The packet format consisted of a 6-byte header and then between zero and 4096 bytes in the payload. The packet starts with a single \020 (control-P). This is followed by a single byte, known as "K", containing a value of 1 to 8 indicating a packet size from 32 to 4096 bytes, or a 9 indicating a control packet. Many systems only supported K=2, meaning 64 bytes. The next two bytes were a 16-bit checksum of the payload, not including the header. The next byte is the data type and finally, the last byte is the XOR of the header, allowing it to be checked separately from the payload.
The control byte consists of three bit-fields in the format TTXXXYYY. TT is the packet type, 0 for control packets (which also requires K=9 to be valid), 1 for alternate data (not used in UUCP), 2 for data, and 3 indicates a short packet that re-defines the meaning of K. In a data packet, XXX is the packet number for this packet from 0 to 7, and YYY is the last that was received correctly. This provides up to 8 packets in a window. In a control packet, XXX indicates the command and YYY is used for various parameters. For instance, transfers are started by sending a short control packet with TT=0 (control), XXX=7 and YYY the number of packets in a window, then sending another packet with XXX=6 and YYY as the packet length (encoded as it would be in K) and then a third packet that is identical to the first but XXX=5.
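The header layout can be sketched directly from the description above. Two details are assumptions in this sketch: the byte order of the 16-bit checksum, and reading "XOR of the header" as the XOR of the K, checksum, and control bytes:

```python
DLE = 0x10  # the control-P framing byte

def build_header(k, checksum, control):
    """Assemble the 6-byte g-protocol header described above.

    The final byte XORs the K, checksum, and control bytes so the
    header can be validated independently of the payload.
    """
    body = bytes([k, checksum & 0xFF, (checksum >> 8) & 0xFF, control & 0xFF])
    xor = 0
    for b in body:
        xor ^= b
    return bytes([DLE]) + body + bytes([xor])

def header_ok(hdr):
    """Check the framing byte and the XOR byte, ignoring the payload."""
    if len(hdr) != 6 or hdr[0] != DLE:
        return False
    xor = 0
    for b in hdr[1:5]:
        xor ^= b
    return xor == hdr[5]
```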
g-protocol uses a simple sliding window system to deal with potentially long latencies between endpoints. The protocol allows packet sizes from 64 to 4096 8-bit bytes, and windows of 1 to 7 packets. In theory, a system using 4k packets and 7-packet windows (4096x7) would offer performance matching or beating the best file-transfer protocols like ZMODEM. In practice, many implementations only supported a single setting of 64x3. As a result, the g-protocol has an undeserved reputation for poor performance. Confusion over the packet and window sizes led to the G-protocol, differing only in that it always used 4096x3. Taylor UUCP did not support G, but did support any valid requested window or packet size, so remote systems starting G would work fine with Taylor's g, while two Taylor systems could negotiate even faster connections.
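A back-of-the-envelope model shows why the common 64x3 setting earned g its reputation: the window caps how many bytes can be in flight per round trip. The model below ignores header overhead and ACK turnaround, so it is an upper bound only:

```python
def effective_throughput(packet_bytes, window_packets, rtt_s, line_bps):
    """Upper bound on throughput: either the line rate, or one full
    window of payload per round-trip time, whichever is smaller."""
    window_bits = packet_bytes * window_packets * 8
    return min(line_bps, window_bits / rtt_s)

# At a one-second round trip, 64x3 caps out at 1536 bit/s even on a
# 9600 bit/s line, while 4096x7 leaves the line itself as the limit.
slow = effective_throughput(64, 3, 1.0, 9600)
fast = effective_throughput(4096, 7, 1.0, 9600)
```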
Telebit modems used protocol spoofing to improve the performance of g-protocol transfers by noticing end-of-packet markers being sent to the remote system and immediately sending an ACK back to the local host, pretending that the remote system had already received the packet and decoded it correctly. This triggered the software stack to send the next packet, so rapidly that the transfer became almost continuous. The data between the two modems was error-corrected using a proprietary protocol based on MNP that ran over Telebit's half-duplex connections much better than g-protocol would normally, because in the common 64x3 case the remote system would be sending a constant stream of ACKs that would overflow the low-speed return channel. Combined with the modem's naturally higher data rates, they greatly improved overall throughput and generally performed at about seven times the speed of a 2400 bps modem. They were widely used on UUCP hosts as they could quickly pay for themselves in reduced long-distance charges.
Other protocols
UUCP implementations also include other transfer protocols for use over certain links.
f-protocol is designed to run over 7-bit error-corrected links. This was originally intended for use on X.25 links, which were popular for a time in the 1980s. It does not packetize data; instead, the entire file is sent as a single long string followed by a whole-file checksum. The similar x-protocol appears to have seen little or no use. d-protocol was similar to x, but intended for use on the Datakit networks that connected many of Bell Labs' offices.
t-protocol originated in the BSD versions of UUCP and is designed to run over 8-bit error-free TCP/IP links. It has no error correction at all, and the protocol consists simply of breaking up command and file data into 512 or 1024-byte packets to fit easily within typical TCP frames. The less-used e-protocol, which originated in the HoneyDanBer versions as opposed to t from BSD, differs only in that commands are not packetized and are instead sent as normal strings, while files are padded to the nearest 20 bytes.
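The fixed framing of the t-protocol amounts to simple chunking. In this sketch the final short packet is zero-padded to the full frame size, which is an assumption made for illustration rather than a documented detail of the protocol:

```python
def t_packets(data, size=512):
    """Split file data into fixed-size t-protocol-style packets.

    The last chunk is padded with zero bytes to the frame size;
    the padding byte is an assumption, not a protocol requirement.
    """
    packets = []
    for i in range(0, len(data), size):
        chunk = data[i:i + size]
        packets.append(chunk.ljust(size, b"\x00"))
    return packets
```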
Mail routing
The uucp and uuxqt capabilities could be used to send email between machines, with suitable mail user interfaces and delivery agent programs. A simple UUCP mail address was formed from the adjacent machine name, an exclamation mark (often pronounced bang), followed by the user name on the adjacent machine. For example, the address barbox!user would refer to user user on the adjacent machine barbox.
Mail could furthermore be routed through the network, traversing any number of intermediate nodes before arriving at its destination. Initially, this had to be done by specifying the complete path, with a list of intermediate host names separated by bangs. For example, if machine barbox is not connected to the local machine, but it is known that barbox is connected to machine foovax which does communicate with the local machine, the appropriate address to send mail to would be foovax!barbox!user.
User barbox!user would generally publish their UUCP email address in a form such as …!bigsite!foovax!barbox!user. This directs people to route their mail to machine bigsite (presumably a well-known and well-connected machine accessible to everybody) and from there through the machine foovax to the account of user user on barbox. Publishing a full path would be pointless, because it would be different, depending on where the sender was. (e.g. Ann at one site may have to send via path gway!tcol!canty!uoh!bigsite!foovax!barbox!user, whereas from somewhere else, Bill has to send via the path pdp10!router22!bigsite!foovax!barbox!user). Many users would suggest multiple routes from various large well-known sites, providing even better and perhaps faster connection service from the mail sender.
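Composing and consuming such addresses is straightforward string handling, as this sketch shows:

```python
def bang_path(hops, user):
    """Join relay host names and a final user into a bang-path address."""
    return "!".join(list(hops) + [user])

def next_hop(address):
    """Split off the first relay: the neighbor to forward to, and the
    remainder of the path that neighbor should see."""
    first, _, rest = address.partition("!")
    return first, rest
```

For the example above, `bang_path(["foovax", "barbox"], "user")` yields foovax!barbox!user, and each relay peels one hop off the front before forwarding.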
Bang path
An email address of this form was known as a bang path.
Bang paths of eight to ten machines (or hops) were not uncommon in 1981, and late-night dial-up UUCP links could cause week-long transmission times. Bang paths were often selected by both transmission time and reliability, as messages would often get lost. Some hosts went so far as to try to "rewrite" the path, sending mail via "faster" routes—this practice tended to be frowned upon.
The "pseudo-domain" ending .uucp was sometimes used to designate a hostname as being reachable by UUCP networking, although this was never formally registered in the domain name system (DNS) as a top-level domain. The uucp community administered itself and did not mesh well with the administration methods and regulations governing the DNS; .uucp works where it needs to; some hosts punt mail out of SMTP queue into uucp queues on gateway machines if a .uucp address is recognized on an incoming SMTP connection.
Usenet traffic was originally transmitted over the UUCP protocol using bang paths. These are still in use within Usenet message format Path header lines. They now have only an informational purpose, and are not used for routing, although they can be used to ensure that loops do not occur.
In general, like other older e-mail address formats, bang paths have now been superseded by the "@ notation", even by sites still using UUCP. A UUCP-only site can register a DNS domain name, and have the DNS server that handles that domain provide MX records that cause Internet mail to that site to be delivered to a UUCP host on the Internet that can then deliver the mail to the UUCP site.
UUCPNET and mapping
UUCPNET was the name for the totality of the network of computers connected through UUCP. This network was very informal, maintained in a spirit of mutual cooperation between systems owned by thousands of private companies, universities, and so on. Often, particularly in the private sector, UUCP links were established without official approval from the companies' upper management. The UUCP network was constantly changing as new systems and dial-up links were added, others were removed, etc.
The UUCP Mapping Project was a volunteer, largely successful effort to build a map of the connections between machines that were open mail relays and establish a managed namespace. Each system administrator would submit, by e-mail, a list of the systems to which theirs would connect, along with a ranking for each such connection. These submitted map entries were processed by an automatic program that combined them into a single set of files describing all connections in the network. These files were then published monthly in a newsgroup dedicated to this purpose. The UUCP map files could then be used by software such as "pathalias" to compute the best route path from one machine to another for mail, and to supply this route automatically. The UUCP maps also listed contact information for the sites, and so gave sites seeking to join UUCPNET an easy way to find prospective neighbors.
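At its core, what pathalias computed was a cheapest-path search over the ranked links in the map files. A sketch with invented hosts and costs (real pathalias also handled domains, aliases, and dead links):

```python
import heapq

def best_path(links, src, dst):
    """Cheapest relay path over ranked UUCP links, pathalias-style.

    links maps each host to {neighbor: cost}; returns the hop list
    from src to dst, or None if dst is unreachable.
    """
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, host, path = heapq.heappop(heap)
        if host == dst:
            return path
        if host in seen:
            continue
        seen.add(host)
        for nbr, c in links.get(host, {}).items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + c, nbr, path + [nbr]))
    return None

# Invented example map: lower cost = better-ranked link.
links = {
    "here":    {"bigsite": 100, "gway": 400},
    "bigsite": {"foovax": 200},
    "gway":    {"foovax": 100},
    "foovax":  {"barbox": 300},
}
route = best_path(links, "here", "barbox")
```

Joining the resulting hops (minus the local host) with bangs and appending the user yields the route a mailer would actually use.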
Connections with the Internet
Many UUCP hosts, particularly those at universities, were also connected to the Internet in its early years, and e-mail gateways between Internet SMTP-based mail and UUCP mail were developed. A user at a system with UUCP connections could thereby exchange mail with Internet users, and the Internet links could be used to bypass large portions of the slow UUCP network. A "UUCP zone" was defined within the Internet domain namespace to facilitate these interfaces.
With this infrastructure in place, UUCP's strength was that it permitted a site to gain Internet e-mail and Usenet connectivity with only a dial-up modem link to another cooperating computer. This was at a time when true Internet access required a leased data line providing a connection to an Internet Point of Presence, both of which were expensive and difficult to arrange. By contrast, a link to the UUCP network could usually be established with a few phone calls to the administrators of prospective neighbor systems. Neighbor systems were often close enough to avoid all but the most basic charges for telephone calls.
Remote commands
uux is remote command execution over UUCP. The uux command is used to execute a command on a remote system, or to execute a command on the local system using files from remote systems. The command is queued for the uucico daemon, which handles remote execution requests as simply another kind of file to batch-send to the remote system whenever a next-hop node is available. The remote system will then execute the requested command and return the result when the original system is available. Both of these transfers may be indirect, via multi-hop paths, with arbitrary windows of availability. Even when executing a command on an always-available neighbor, uux is not instant.
Decline
UUCP usage began to die out with the rise of Internet service providers offering inexpensive SLIP and PPP services. The UUCP Mapping Project was formally shut down in late 2000.
The UUCP protocol has now mostly been replaced by the Internet TCP/IP based protocols SMTP for mail and NNTP for Usenet news.
In July 2012, Dutch Internet provider XS4ALL closed down its UUCP service, claiming it was "probably one of the last providers in the world that still offered it"; it had only 13 users at that time, and had already stopped accepting new users several years before the shutdown.
Current uses and legacy
One surviving feature of UUCP is the chat file format, largely inherited by the Expect software package.
UUCP was in use over special-purpose, high-cost links (e.g. marine satellite links) long after its disappearance elsewhere, and remains in legacy use. Beyond legacy use, new applications have appeared since about 2021, especially telecommunications in the HF band: for example, communities in the Amazon rainforest use it for e-mail exchange and other purposes. A patch to Taylor UUCP was contributed to the Debian Linux uucp package to adapt it for the HERMES (High-Frequency Emergency and Rural Multimedia Exchange System) project, which provides UUCP connectivity over HF.
In the mid 2000s, UUCP over TCP/IP (often encrypted, using the SSH protocol) was proposed for use when a computer does not have a fixed IP address but is still willing to run a standard mail transfer agent (MTA) like Sendmail or Postfix.
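With Taylor UUCP, such a setup can be sketched by defining a pipe-type port that runs ssh, roughly as follows. The host names, account, and paths are assumptions for illustration, not a tested configuration:

```
# /etc/uucp/sys -- hypothetical neighbor reached over SSH
system      mailhub
time        any
port        ssh-pipe

# /etc/uucp/port -- the matching port definition
port        ssh-pipe
type        pipe
command     /usr/bin/ssh -q -x uucp@mailhub.example.org
```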
Bang-like paths are still in use within the Usenet network, though not for routing; they are used to record, in the header of a message, the nodes through which that message has passed, rather than to direct where it will go next. "Bang path" is also used as an expression for any explicitly specified routing path between network hosts. That usage is not necessarily limited to UUCP, IP routing, email messaging, or Usenet.
The concept of delay-tolerant networking protocols was revisited in the early 2000s. Similar techniques as those used by UUCP can apply to other networks that experience delay or significant disruption.
See also
Routing
Sitename
Mesh networking
FidoNet
Berknet
References
External links
Using & Managing UUCP. Ed Ravin, Tim O'Reilly, Dale Dougherty, and Grace Todino. 1996, O'Reilly & Associates, Inc.
Mark Horton (1986). UUCP Mail Interchange Format Standard. Internet Engineering Task Force Requests for Comment.
Setting up Taylor UUCP + qmail on FreeBSD 5.1
Taylor UUCP is a GPL licensed UUCP package.
Taylor UUCP Documentation – useful information about UUCP in general and various uucp protocols.
The UUCP Project:
The UUCP Mapping Project
UUHECNET - Hobbyist UUCP network that offers free feeds
Network file transfer protocols
Network protocols
Usenet
Unix SUS2008 utilities
File transfer software
PL/I (Programming Language One, pronounced "PL one" and sometimes written PL/1) is a procedural, imperative computer programming language developed and published by IBM. It is designed for scientific, engineering, business and system programming. It has been used by academic, commercial and industrial organizations since it was introduced in the 1960s, and is still used.
PL/I's main domains are data processing, numerical computation, scientific computing, and system programming. It supports recursion, structured programming, linked data structure handling, fixed-point, floating-point, complex, character string handling, and bit string handling. The language syntax is English-like and suited for describing complex data formats with a wide set of functions available to verify and manipulate them.
Early history
In the 1950s and early 1960s, business and scientific users programmed for different computer hardware using different programming languages. Business users were moving from Autocoders via COMTRAN to COBOL, while scientific users programmed in Fortran, ALGOL, GEORGE, and others. The IBM System/360 (announced in 1964 and delivered in 1966) was designed as a common machine architecture for both groups of users, superseding all existing IBM architectures. Similarly, IBM wanted a single programming language for all users. It hoped that Fortran could be extended to include the features needed by commercial programmers. In October 1963 a committee was formed, composed originally of three IBMers from New York and three members of SHARE, the IBM scientific users group, to propose these extensions to Fortran. Given the constraints of Fortran, they were unable to do this and embarked on the design of a new programming language based loosely on ALGOL, labeled NPL. This acronym conflicted with that of the UK's National Physical Laboratory and was replaced briefly by MPPL (MultiPurpose Programming Language) and, in 1965, with PL/I (with a Roman numeral "I"). The first definition appeared in April 1964.
IBM took NPL as a starting point and completed the design to a level that the first compiler could be written: the NPL definition was incomplete in scope and in detail. Control of the PL/I language was vested initially in the New York Programming Center and later at the IBM UK Laboratory at Hursley. The SHARE and GUIDE user groups were involved in extending the language and had a role in IBM's process for controlling the language through their PL/I Projects. The experience of defining such a large language showed the need for a formal definition of PL/I. A project was set up in 1967 in IBM Laboratory Vienna to make an unambiguous and complete specification. This led in turn to one of the first large scale Formal Methods for development, VDM.
Fred Brooks is credited with ensuring PL/I had the CHARACTER data type.
The language was first specified in detail in the manual "PL/I Language Specifications. C28-6571", written in New York in 1965, and superseded by "PL/I Language Specifications. GY33-6003", written by Hursley in 1967. IBM continued to develop PL/I in the late sixties and early seventies, publishing it in the GY33-6003 manual. These manuals were used by the Multics group and other early implementers.
The first compiler was delivered in 1966. The Standard for PL/I was approved in 1976.
Goals and principles
The goals for PL/I evolved during the early development of the language. Competitiveness with COBOL's record handling and report writing was required. The language's scope of usefulness grew to include system programming and event-driven programming. Additional goals for PL/I were:
Performance of compiled code competitive with that of Fortran (but this was not achieved)
Extensibility for new hardware and new application areas
Improved productivity of the programming process, transferring effort from the programmer to the compiler
Machine independence to operate effectively on the main computer hardware and operating systems
To achieve these goals, PL/I borrowed ideas from contemporary languages while adding substantial new capabilities and casting it with a distinctive concise and readable syntax. Many principles and capabilities combined to give the language its character and were important in meeting the language's goals:
Block structure, with underlying semantics (including recursion), similar to Algol 60. Arguments are passed using call by reference, using dummy variables for values where needed (call by value).
A wide range of computational data types, program control data types, and forms of data structure (strong typing).
Dynamic extents for arrays and strings with inheritance of extents by procedure parameters.
Concise syntax for expressions, declarations, and statements with permitted abbreviations. Suitable for a character set of 60 glyphs and sub-settable to 48.
An extensive structure of defaults in statements, options, and declarations to hide some complexities and facilitate extending the language while minimizing keystrokes.
Powerful iterative processing with good support for structured programming.
There were to be no reserved words (although this goal initially proved impossible to meet for the function names DATE and TIME). New attributes, statements and statement options could be added to PL/I without invalidating existing programs. Not even IF, THEN, ELSE, and DO were reserved.
Orthogonality: each capability to be independent of other capabilities and freely combined with other capabilities wherever meaningful. Each capability to be available in all contexts where meaningful, to exploit it as widely as possible and to avoid "arbitrary restrictions". Orthogonality helps make the language "large".
Exception handling capabilities for controlling and intercepting exceptional conditions at run time.
Programs divided into separately compilable sections, with extensive compile-time facilities (a.k.a. macros), not part of the standard, for tailoring and combining sections of source code into complete programs. External names to bind separately compiled procedures into a single program.
Debugging facilities integrated into the language.
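The "no reserved words" principle above means even the keywords of the IF statement can double as identifiers. The following contrived fragment is legal PL/I; the variable names are chosen purely for illustration:

```
DECLARE (IF, THEN, ELSE) FIXED BINARY;
IF IF = THEN THEN ELSE = IF;   /* IF, THEN and ELSE are ordinary variables */
ELSE THEN = ELSE;
```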
Language summary
The language is designed to be all things to all programmers. The summary is extracted from the ANSI PL/I Standard
and the ANSI PL/I General-Purpose Subset Standard.
A PL/I program consists of a set of procedures, each of which is written as a sequence of statements. The %INCLUDE construct is used to include text from other sources during program translation. All of the statement types are summarized here in groupings which give an overview of the language (the Standard uses this organization).
(Features such as multi-tasking and the PL/I preprocessor are not in the Standard but are supported in the PL/I F compiler and some other implementations are discussed in the Language evolution section.)
Names may be declared to represent data of the following types, either as single values, or as aggregates in the form of arrays, with a lower-bound and upper-bound per dimension, or structures (comprising nested structure, array and scalar variables):
The arithmetic type comprises these attributes:
The base, scale, precision and scale factor of the Picture-for-arithmetic type is encoded within the picture-specification. The mode is specified separately, with the picture specification applied to both the real and the imaginary parts.
Values are computed by expressions written using a specific set of operations and builtin functions, most of which may be applied to aggregates as well as to single values, together with user-defined procedures which, likewise, may operate on and return aggregate as well as single values. The assignment statement assigns values to one or more variables.
There are no reserved words in PL/I. A statement is terminated by a semi-colon. The maximum length of a statement is implementation defined. A comment may appear anywhere in a program where a space is permitted; it is preceded by the characters forward slash, asterisk and terminated by the characters asterisk, forward slash (i.e. /* and */). Statements may have a label-prefix introducing an entry name (ENTRY and PROCEDURE statements) or label name, and a condition prefix enabling or disabling a computational condition (e.g. (NOSIZE)). Entry and label names may be single identifiers or identifiers followed by a subscript list of constants (as in L(12,2):A=0;).
A sequence of statements becomes a group when preceded by a DO statement and followed by an END statement. Groups may include nested groups and begin blocks. The IF statement specifies a group or a single statement as the THEN part and the ELSE part (see the sample program). The group is the unit of iteration. The begin block (BEGIN; stmt-list END;) may contain declarations for names and internal procedures local to the block. A procedure starts with a PROCEDURE statement and is terminated syntactically by an END statement. The body of a procedure is a sequence of blocks, groups, and statements and contains declarations for names and procedures local to the procedure or EXTERNAL to the procedure.
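A minimal sketch of these constructs follows; the procedure, its names, and details such as precision are invented for illustration:

```
SUMPOS: PROCEDURE (A, N) RETURNS (FIXED BINARY (31));
   DECLARE A(*) FIXED BINARY (31),   /* parameter array inherits its extent */
           N    FIXED BINARY (31),
           (I, S) FIXED BINARY (31);
   S = 0;
   DO I = 1 TO N;                    /* a group: DO ... END */
      IF A(I) > 0 THEN S = S + A(I);
   END;
   RETURN (S);
END SUMPOS;
```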
An ON-unit is a single statement or block of statements written to be executed when one or more of these conditions occur:
a computational condition,
or an Input/Output condition,
or one of the conditions:
AREA, CONDITION (identifier), ERROR, FINISH
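ON-units are established dynamically by ON statements. A hedged sketch, with the condition choices and message text chosen purely for illustration:

```
ON ENDFILE (SYSIN) GO TO EOJ;      /* an input/output condition     */
ON ZERODIVIDE BEGIN;               /* a computational condition     */
   PUT SKIP LIST ('division by zero');
END;
ON ERROR SIGNAL FINISH;            /* one of the general conditions */
```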
A declaration of an identifier may contain one or more of the following attributes (but they need to be mutually consistent):
Current compilers from Micro Focus and, particularly, from IBM implement many extensions over the standardized version of the language. The IBM extensions are summarised in the Implementation sub-section for the compiler later. Although there are some extensions common to these compilers, the lack of a current standard means that compatibility is not guaranteed.
Standardization
Language standardization began in April 1966 in Europe with ECMA TC10. In 1969 ANSI established a "Composite Language Development Committee", nicknamed "Kludge", later renamed X3J1 PL/I. Standardization became a joint effort of ECMA TC/10 and ANSI X3J1. A subset of the GY33-6003 document was offered to the joint effort by IBM and became the base document for standardization. The major features omitted from the base document were multitasking and the attributes for program optimization (e.g. NORMAL and ABNORMAL).
Proposals to change the base document were voted upon by both committees. In the event that the committees disagreed, the chairs, initially Michael Marcotty of General Motors and C.A.R. Hoare representing ICL had to resolve the disagreement. In addition to IBM, Honeywell, CDC, Data General, Digital Equipment Corporation, Prime Computer, Burroughs, RCA, and Univac served on X3J1 along with major users Eastman Kodak, MITRE, Union Carbide, Bell Laboratories, and various government and university representatives. Further development of the language occurred in the standards bodies, with continuing improvements in structured programming and internal consistency, and with the omission of the more obscure or contentious features.
As language development neared an end, X3J1/TC10 realized that there were a number of problems with a document written in English text. Discussion of a single item might appear in multiple places which might or might not agree. It was difficult to determine if there were omissions as well as inconsistencies. Consequently, David Beech (IBM), Robert Freiburghouse (Honeywell), Milton Barber (CDC), M. Donald MacLaren (Argonne National Laboratory), Craig Franklin (Data General), Lois Frampton (Digital Equipment Corporation), and editor, D.J. Andrews of IBM undertook to rewrite the entire document, each producing one or more complete chapters. The standard is couched as a formal definition using a "PL/I Machine" to specify the semantics. It was the first, and possibly the only, programming language standard to be written as a semi-formal definition.
A "PL/I General-Purpose Subset" ("Subset-G") standard was issued by ANSI in 1981 and a revision published in 1987. The General Purpose subset was widely adopted as the kernel for PL/I implementations.
Implementations
IBM PL/I F and D compilers
PL/I was first implemented by IBM, at its Hursley Laboratories in the United Kingdom, as part of the development of System/360. The first production PL/I compiler was the PL/I F compiler for the OS/360 Operating System, built by John Nash's team at Hursley in the UK: the runtime library team was managed by I.M. (Nobby) Clarke. The PL/I F compiler was written entirely in System/360 assembly language. Release 1 shipped in 1966. OS/360 is a real-memory environment and the compiler was designed for systems with as little as 64 kilobytes of real storage – F being 64 kB in S/360 parlance. To fit a large compiler into the 44 kilobytes of memory available on a 64-kilobyte machine, the compiler consists of a control phase and a large number of compiler phases (approaching 100). The phases are brought into memory from disk, one at a time, to handle particular language features and aspects of compilation. Each phase makes a single pass over the partially-compiled program, usually held in memory.
Aspects of the language were still being designed as PL/I F was implemented, so some were omitted until later releases. PL/I RECORD I/O was shipped with PL/I F Release 2. The list processing functions Based Variables, Pointers, Areas and Offsets and LOCATE-mode I/O were first shipped in Release 4. In a major attempt to speed up PL/I code to compete with Fortran object code, PL/I F Release 5 does substantial program optimization of DO-loops facilitated by the REORDER option on procedures.
A version of PL/I F was released on the TSS/360 timesharing operating system for the System/360 Model 67, adapted at the IBM Mohansic Lab. The IBM La Gaude Lab in France developed "Language Conversion Programs" to convert Fortran, Cobol, and Algol programs to the PL/I F level of PL/I.
The PL/I D compiler, using 16 kilobytes of memory, was developed by IBM Germany for the DOS/360 low end operating system. It implements a subset of the PL/I language requiring all strings and arrays to have fixed extents, thus simplifying the run-time environment. Reflecting the underlying operating system, it lacks dynamic storage allocation and the controlled storage class. It was shipped within a year of PL/I F.
Multics PL/I and derivatives
Compilers were implemented by several groups in the early 1960s. The Multics project at MIT, one of the first to develop an operating system in a high-level language, used Early PL/I (EPL), a subset dialect of PL/I, as their implementation language in 1964. EPL was developed at Bell Labs and MIT by Douglas McIlroy, Robert Morris, and others. The influential Multics PL/I compiler was the source of compiler technology used by a number of manufacturers and software groups. EPL was a system programming language and a dialect of PL/I that had some capabilities absent in the original PL/I.
The Honeywell PL/I compiler (for Series 60) is an implementation of the full ANSI X3J1 standard.
IBM PL/I optimizing and checkout compilers
The PL/I Optimizer and Checkout compilers produced in Hursley support a common level of PL/I language and aimed to replace the PL/I F compiler. The checkout compiler is a rewrite of PL/I F in BSL, IBM's PL/I-like proprietary implementation language (later PL/S). The performance objectives set for the compilers are shown in an IBM presentation to the BCS. The compilers had to produce identical results: the Checkout Compiler is used to debug programs that would then be submitted to the Optimizer. Given that the compilers had entirely different designs and were handling the full PL/I language, this goal was challenging: it was achieved.
IBM introduced new attributes and syntax including BUILTIN, case statements (SELECT/WHEN/OTHERWISE), loop controls (ITERATE and LEAVE) and null argument lists to disambiguate, e.g., DATE().
The PL/I optimizing compiler took over from the PL/I F compiler and was IBM's workhorse compiler from the 1970s to the 1990s. Like PL/I F, it is a multiple pass compiler with a 44 kilobyte design point, but it is an entirely new design. Unlike the F compiler, it has to perform compile time evaluation of constant expressions using the run-time library, reducing the maximum memory for a compiler phase to 28 kilobytes. A second-time around design, it succeeded in eliminating the annoyances of PL/I F such as cascading diagnostics. It was written in S/360 Macro Assembler by a team, led by Tony Burbridge, most of whom had worked on PL/I F. Macros were defined to automate common compiler services and to shield the compiler writers from the task of managing real-mode storage, allowing the compiler to be moved easily to other memory models. The gamut of program optimization techniques developed for the contemporary IBM Fortran H compiler were deployed: the Optimizer equaled Fortran execution speeds in the hands of good programmers. Announced with IBM S/370 in 1970, it shipped first for the DOS/360 operating system in August 1971, and shortly afterward for OS/360, and the first virtual memory IBM operating systems OS/VS1, MVS, and VM/CMS. (The developers were unaware that while they were shoehorning the code into 28 kb sections, IBM Poughkeepsie was finally ready to ship virtual memory support in OS/360). It supported the batch programming environments and, under TSO and CMS, it could be run interactively. This compiler went through many versions covering all mainframe operating systems including the operating systems of the Japanese plug-compatible machines (PCMs).
The compiler has been superseded by "IBM PL/I for OS/2, AIX, Linux, z/OS" below.
The PL/I checkout compiler (colloquially "The Checker"), announced in August 1970, was designed to speed and improve the debugging of PL/I programs. The team was led by Brian Marks. The three-pass design cut the time to compile a program to 25% of that taken by the F Compiler. It can be run from an interactive terminal, converting PL/I programs into an internal format, "H-text". This format is interpreted by the Checkout compiler at run-time, detecting virtually all types of errors. Pointers are represented in 16 bytes, containing the target address and a description of the referenced item, thus permitting "bad" pointer use to be diagnosed. In a conversational environment, when an error is detected, control is passed to the user, who can inspect any variables, introduce debugging statements and edit the source program. Over time the debugging capability of mainframe programming environments developed most of the functions offered by this compiler and it was withdrawn (in the 1990s?).
DEC PL/I
Perhaps the most commercially successful implementation aside from IBM's was Digital Equipment Corporation's VAX PL/I, later known as DEC PL/I. The implementation is "a strict superset of the ANSI X3.4-1981 PL/I General Purpose Subset and provides most of the features of the new ANSI X3.74-1987 PL/I General Purpose Subset", and was first released in 1988. It originally used a compiler backend named the VAX Code Generator (VCG) created by a team led by Dave Cutler. The front end was designed by Robert Freiburghouse, and was ported to VAX/VMS from Multics. It runs on VMS on VAX and Alpha, and on Tru64. During the 1990s, Digital sold the compiler to UniPrise Systems, who later sold it to a company named Kednos. Kednos marketed the compiler as Kednos PL/I until October 2016 when the company ceased trading.
Teaching subset compilers
In the late 1960s and early 1970s, many US and Canadian universities were establishing time-sharing services on campus and needed conversational compiler/interpreters for use in teaching science, mathematics, engineering, and computer science. Dartmouth was developing BASIC, but PL/I was a popular choice, as it was concise and easy to teach. As the IBM offerings were unsuitable, a number of schools built their own subsets of PL/I and their own interactive support. Examples are:
In the 1960s and early 1970s, Allen-Babcock implemented the Remote Users of Shared Hardware (RUSH) time-sharing system for an IBM System/360 Model 50 with custom microcode, and subsequently implemented IBM's CPS, an interactive time-sharing system for OS/360 aimed at teaching computer science basics, which offered a limited subset of the PL/I language in addition to BASIC and a remote job entry facility.
PL/C, a dialect for teaching, a compiler developed at Cornell University, had the unusual capability of never failing to compile any program through the use of extensive automatic correction of many syntax errors and by converting any remaining syntax errors to output statements. The language was almost all of PL/I as implemented by IBM. PL/C was a very fast compiler.
SL/1 (Student Language/1, Student Language/One or Subset Language/1) was a PL/I subset, initially available in the late 1960s, that ran interpretively on the IBM 1130; instructional use was its strong point.
PLAGO, created at the Polytechnic Institute of Brooklyn, used a simplified subset of the PL/I language and focused on good diagnostic error messages and fast compilation times.
The Computer Systems Research Group of the University of Toronto produced the SP/k compilers which supported a sequence of subsets of PL/I called SP/1, SP/2, SP/3, ..., SP/8 for teaching programming. Programs that ran without errors under the SP/k compilers produced the same results under other contemporary PL/I compilers such as IBM's PL/I F compiler, IBM's checkout compiler or Cornell University's PL/C compiler.
Other examples are PL0 by P. Grouse at the University of New South Wales, PLUM by Marvin Zelkowitz at the University of Maryland, and PLUTO from the University of Toronto.
IBM PL/I for OS/2, AIX, Linux, z/OS
In a major revamp of PL/I, IBM Santa Teresa in California launched an entirely new compiler in 1992. The initial shipment was for OS/2 and included most ANSI-G features and many new PL/I features. Subsequent releases provided additional platforms (MVS, VM, OS/390, AIX and Windows), but as of 2021, the only supported platforms are z/OS and AIX. IBM continued to add functions to make PL/I fully competitive with other languages (particularly C and C++) in areas where it had been overtaken. The corresponding "IBM Language Environment" supports inter-operation of PL/I programs with Database and Transaction systems, and with programs written in C, C++, and COBOL, the compiler supports all the data types needed for intercommunication with these languages.
The PL/I design principles were retained and withstood this major extension, comprising several new data types, new statements and statement options, new exception conditions, and new organisations of program source. The resulting language is a compatible super-set of the PL/I Standard and of the earlier IBM compilers. Major topics added to PL/I were:
New attributes for better support of user-defined data types – the DEFINE ALIAS, ORDINAL, and DEFINE STRUCTURE statement to introduce user-defined types, the HANDLE locator data type, the TYPE data type itself, the UNION data type, and built-in functions for manipulating the new types.
Additional data types and attributes corresponding to common PC data types (e.g. UNSIGNED, VARYINGZ).
Improvements in readability of programs – often rendering implied usages explicit (e.g. BYVALUE attribute for parameters)
Additional structured programming constructs.
Interrupt handling additions.
Compile time preprocessor extended to offer almost all PL/I string handling features and to interface with the Application Development Environment
The latest series of PL/I compilers for z/OS, called Enterprise PL/I for z/OS, leverages code generation for the latest z/Architecture processors (z14, z13, zEC12, zBC12, z196, z114) via the ARCHLVL parameter passed during compilation, and was the second high-level language supported by z/OS Language Environment to do so (XL C/C++ being the first, and Enterprise COBOL v5 the last).
Data types
ORDINAL is a new computational data type. The ordinal facilities are like those in Pascal,
e.g. DEFINE ORDINAL Colour (red, yellow, green, blue, violet);
but in addition the name and internal values are accessible via built-in functions. Built-in functions provide access to an ordinal value's predecessor and successor.
The DEFINE-statement (see below) allows additional TYPEs to be declared composed from PL/I's built-in attributes.
The HANDLE(data structure) locator data type is similar to the POINTER data type, but strongly typed to bind only to a particular data structure. The => operator is used to select a data structure using a handle.
The UNION attribute (equivalent to CELL in early PL/I specifications) permits several scalar variables, arrays, or structures to share the same storage in a unit that occupies the amount of storage needed for the largest alternative.
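A sketch of the new types described above; the names are invented, with syntax in the style of the IBM compilers discussed here:

```
DEFINE STRUCTURE 1 Node,
                   2 Value FIXED BINARY (31),
                   2 Next  HANDLE (Node);   /* locator bound to Node only */
DECLARE Head HANDLE (Node);

DECLARE 1 Overlay UNION,                    /* alternatives share storage */
          2 AsFixed FIXED BINARY (31),
          2 AsBits  BIT (32);
```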
Competitiveness on PC and with C
These attributes were added:
The string attributes VARYINGZ (for zero-terminated character strings), HEXADEC, WIDECHAR, and GRAPHIC.
The optional arithmetic attributes UNSIGNED and SIGNED, BIGENDIAN and LITTLEENDIAN. UNSIGNED necessitated the UPTHRU and DOWNTHRU option on iterative groups enabling a counter-controlled loop to be executed without exceeding the limit value (also essential for ORDINALs and good for documenting loops).
The DATE(pattern) attribute for controlling date representations and additions to bring time and date to best current practice. New functions for manipulating dates include DAYS and DAYSTODATE for converting between dates and number of days, and a general DATETIME function for changing date formats.
New string-handling functions were added to centre text, to edit using a picture format, and to trim blanks or selected characters from the head or tail of text, VERIFYR to VERIFY from the right, and SEARCH and TALLY functions.
Compound assignment operators à la C, e.g. +=, &=, -=, ||=, were added. A+=1 is equivalent to A=A+1.
Additional parameter descriptors and attributes were added for omitted arguments and variable length argument lists.
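The UPTHRU option mentioned above lets a counter-controlled loop cover the full range of an UNSIGNED type without the counter ever being incremented beyond (and wrapping past) the limit. A sketch, with names invented:

```
DECLARE I UNSIGNED FIXED BINARY (8);   /* values 0..255 */
DO I = 0 UPTHRU 255;                   /* body runs for I = 255, but I is
                                          never incremented past the limit */
   PUT SKIP LIST (I);
END;
```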
Program readability – making intentions explicit
The VALUE attribute declares an identifier as a constant (derived from a specific literal value or restricted expression).
Parameters can have the BYADDR (pass by address) or BYVALUE (pass by value) attributes.
The ASSIGNABLE and NONASSIGNABLE attributes prevent unintended assignments.
DO FOREVER; obviates the need for the contrived construct DO WHILE ( '1'B );.
The DEFINE-statement introduces user-specified names (e.g. INTEGER) for combinations of built-in attributes (e.g. FIXED BINARY(31,0)). Thus DEFINE ALIAS INTEGER FIXED BINARY(31,0) creates the TYPE name INTEGER as an alias for the set of built-in attributes FIXED BINARY(31,0). DEFINE STRUCTURE applies to structures and their members; it provides a TYPE name for a set of structure attributes and corresponding substructure member declarations for use in a structure declaration (a generalisation of the LIKE attribute).
Structured programming additions
A LEAVE statement to exit a loop, and an ITERATE to continue with the next iteration of a loop.
UPTHRU and DOWNTHRU options on iterative groups.
The package construct consisting of a set of procedures and declarations for use as a unit. Variables declared outside of the procedures are local to the package, and can use STATIC, BASED or CONTROLLED storage. Procedure names used in the package also are local, but can be made external by means of the EXPORTS option of the PACKAGE-statement.
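The LEAVE and ITERATE statements above can be sketched as follows; the array and variable names are invented:

```
SCAN: DO I = 1 TO N;
   IF A(I) = 0 THEN ITERATE SCAN;   /* skip this element, next iteration  */
   IF A(I) < 0 THEN LEAVE SCAN;     /* first negative value ends the loop */
   TOTAL = TOTAL + A(I);
END SCAN;
```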
Interrupt handling
The RESIGNAL-statement executed in an ON-unit terminates execution of the ON-unit, and raises the condition again in the procedure that called the current one (thus passing control to the corresponding ON-unit for that procedure).
The INVALIDOP condition handles invalid operation codes detected by the PC processor, as well as illegal arithmetic operations such as subtraction of two infinite values.
The ANYCONDITION condition is provided to intercept conditions for which no specific ON-unit has been provided in the current procedure.
The STORAGE condition is raised when an ALLOCATE statement is unable to obtain sufficient storage.
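A sketch of RESIGNAL inside an ON-unit; the message text is invented:

```
ON ERROR BEGIN;
   PUT SKIP LIST ('logged here, then passed on');
   RESIGNAL;        /* re-raise the condition in the calling procedure */
END;
```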
Other mainframe and minicomputer compilers
A number of vendors produced compilers to compete with IBM PL/I F or Optimizing compiler on mainframes and minicomputers in the 1970s. In the 1980s the target was usually the emerging ANSI-G subset.
In 1974 Burroughs Corporation announced PL/I for the B6700 and B7700.
UNIVAC released a UNIVAC PL/I, and in the 1970s also used a variant of PL/I, PL/I PLUS, for system programming.
From 1978 Data General provided PL/I on its Eclipse and Eclipse MV platforms running the AOS, AOS/VS & AOS/VS II operating systems. A number of operating system utility programs were written in the language.
Paul Abrahams of NYU's Courant Institute of Mathematical Sciences wrote CIMS PL/I in 1972 in PL/I, bootstrapping via PL/I F. It supported "about 70%" of PL/I, compiling to the CDC 6600.
CDC delivered an optimizing subset PL/I compiler for Cyber 70, 170 and 6000 series.
Fujitsu delivered a PL/I compiler equivalent to the PL/I Optimizer.
Stratus Technologies PL/I is an ANSI G implementation for the VOS operating system.
IBM Series/1 PL/I is an extended subset of ANSI Programming Language PL/I (ANSI X3.53-1976) for the IBM Series/1 Realtime Programming System.
PL/I compilers for Microsoft .NET
In 2011, Raincode designed a full legacy compiler for the Microsoft .NET and .NET Core platforms, named the Raincode PL/I compiler.
PL/I compilers for personal computers and Unix
In the 1970s and 1980s Digital Research sold a PL/I compiler for CP/M (PL/I-80), CP/M-86 (PL/I-86) and Personal Computers with DOS. It was based on Subset G of PL/I and was written in PL/M.
Micro Focus implemented Open PL/I for Windows and UNIX/Linux systems, which they acquired from Liant.
IBM delivered PL/I for OS/2 in 1994, and PL/I for AIX in 1995.
Iron Spring PL/I for OS/2 and later Linux was introduced in 2007.
PL/I dialects
PL/S, a dialect of PL/I, initially called BSL was developed in the late 1960s and became the system programming language for IBM mainframes. Almost all IBM mainframe system software in the 1970s and 1980s was written in PL/S. It differed from PL/I in that there were no data type conversions, no run-time environment, structures were mapped differently, and assignment was a byte by byte copy. All strings and arrays had fixed extents, or used the REFER option. PL/S was succeeded by PL/AS, and then by PL/X, which is the language currently used for internal work on current operating systems, OS/390 and now z/OS. It is also used for some z/VSE and z/VM components. Db2 for z/OS is also written in PL/X.
PL/C is an instructional dialect of the PL/I computer programming language, developed at Cornell University in the 1970s.
Two dialects of PL/I named PL/MP (Machine Product) and PL/MI (Machine Interface) were used by IBM in the system software of the System/38 and AS/400 platforms. PL/MP was used to implement the so-called Vertical Microcode of these platforms, and targeted the IMPI instruction set. PL/MI targets the Machine Interface of those platforms, and is used in the Control Program Facility and the XPF layer of OS/400. The PL/MP code was mostly replaced with C++ when OS/400 was ported to the IBM RS64 processor family, although some was retained and retargeted for the PowerPC/Power ISA architecture. The PL/MI code was not replaced, and remains in use in IBM i.
PL/8 (or PL.8), so-called because it was about 80% of PL/I, was originally developed by IBM Research in the 1970s for the IBM 801 architecture. It later gained support for the Motorola 68000 and System/370 architectures. It continues to be used for several IBM internal systems development tasks (e.g. millicode and firmware for z/Architecture systems) and has been re-engineered to use a 64-bit gcc-based backend.
Honeywell, Inc. developed PL-6 for use in creating the CP-6 operating system.
Prime Computer used two different PL/I dialects as the system programming language of the PRIMOS operating system: PL/P, starting from version 18, and then SPL, starting from version 19.
XPL is a dialect of PL/I used to write other compilers using the XPL compiler techniques. XPL added a heap string datatype to its small subset of PL/I.
HAL/S is a real-time aerospace programming language, best known for its use in the Space Shuttle program. It was designed by Intermetrics in the 1970s for NASA. HAL/S was implemented in XPL.
IBM and various subcontractors also developed another PL/I variant in the early 1970s to support signal processing for the Navy called SPL/I.
SabreTalk is a real-time dialect of PL/I used to program the Sabre airline reservation system.
Usage
PL/I implementations were developed for mainframes from the late 1960s, mini computers in the 1970s, and personal computers in the 1980s and 1990s. Although its main use has been on mainframes, there are PL/I versions for DOS, Microsoft Windows, OS/2, AIX, OpenVMS, and Unix.
It has been widely used in business data processing and for system use for writing operating systems on certain platforms. Very complex and powerful systems have been built with PL/I:
The SAS System was initially written in PL/I; the SAS data step is still modeled on PL/I syntax.
The pioneering online airline reservation system Sabre was originally written for the IBM 7090 in assembler. The S/360 version was largely written using SabreTalk, a purpose built subset PL/I compiler for a dedicated control program.
The Multics operating system was largely written in PL/I.
PL/I was used to write an executable formal definition to interpret IBM's System Network Architecture.
PL/I did not fulfill its supporters' hopes that it would displace Fortran and COBOL and become the major player on mainframes. It remained a minority but significant player. There cannot be a definitive explanation for this, but some trends in the 1970s and 1980s militated against its success by progressively reducing the territory on which PL/I enjoyed a competitive advantage.
First, the nature of the mainframe software environment changed. Application subsystems for database and transaction processing (CICS and IMS and Oracle on System 370) and application generators became the focus of mainframe users' application development. Significant parts of the language became irrelevant because of the need to use the corresponding native features of the subsystems (such as tasking and much of input/output). Fortran was not used in these application areas, confining PL/I to COBOL's territory; most users stayed with COBOL. But as the PC became the dominant environment for program development, Fortran, COBOL and PL/I all became minority languages overtaken by C++, Java and the like.
Second, PL/I was overtaken in the system programming field. The IBM system programming community was not ready to use PL/I; instead, IBM developed and adopted a proprietary dialect of PL/I for system programming, PL/S. With the success of PL/S inside IBM, and of C outside IBM, the unique PL/I strengths for system programming became less valuable.
Third, the development environments grew capabilities for interactive software development that, again, made the unique PL/I interactive and debugging strengths less valuable.
Fourth, features such as structured programming, character string operations, and object orientation were added to COBOL and Fortran, which further reduced PL/I's relative advantages.
On mainframes there were substantial business issues at stake too. IBM's hardware competitors had little to gain and much to lose from success of PL/I. Compiler development was expensive, and the IBM compiler groups had an in-built competitive advantage. Many IBM users wished to avoid being locked into proprietary solutions. With no early support for PL/I by other vendors it was best to avoid PL/I.
Evolution of the PL/I language
This article uses the PL/I standard as the reference point for language features. But a number of features of significance in the early implementations were not in the Standard; and some were offered by non-IBM compilers. And the de facto language continued to grow after the standard, ultimately driven by developments on the Personal Computer.
Significant features omitted from the standard
Multi tasking
Multi tasking was implemented by PL/I F, the Optimizer and the newer AIX and z/OS compilers. It comprised the data types EVENT and TASK, the TASK-option on the CALL-statement (Fork), the WAIT-statement (Join), the DELAY(delay-time), EVENT-options on the record I/O statements and the UNLOCK statement to unlock locked records on EXCLUSIVE files. Event data identify a particular event and indicate whether it is complete ('1'B) or incomplete ('0'B); task data items identify a particular task (or process) and indicate its priority relative to other tasks.
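A sketch of the fork/join pattern these statements support (PAYROLL is a hypothetical procedure):

```pli
DCL Done EVENT;
DCL T1   TASK;
CALL PAYROLL TASK(T1) EVENT(Done); /* fork: PAYROLL runs as a subtask */
/* the attaching task continues concurrently ... */
WAIT (Done);                       /* join: block until PAYROLL completes */
```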
Preprocessor
The first IBM Compile time preprocessor was built by the IBM Boston Advanced Programming Center located in Cambridge, Mass., and shipped with the PL/I F compiler. The %INCLUDE statement was in the Standard, but the rest of the features were not. The DEC and Kednos PL/I compilers implemented much the same set of features as IBM, with some additions of their own. IBM has continued to add preprocessor features to its compilers. The preprocessor treats the written source program as a sequence of tokens, copying them to an output source file or acting on them. When a % token is encountered, the following compile time statement is executed; when an identifier token is encountered and the identifier has been DECLAREd, ACTIVATEd, and assigned a compile time value, the identifier is replaced by this value. Tokens are added to the output stream if they do not require action (e.g. +), as are the values of ACTIVATEd compile time expressions. Thus a compile time variable PI could be declared, activated, and assigned using %PI='3.14159265'. Subsequent occurrences of PI would be replaced by 3.14159265.
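The PI example from the text, spelled out as a sketch (compile time statements begin with %):

```pli
%DECLARE PI CHARACTER;
%PI = '3.14159265';      /* assign a compile time value */
%ACTIVATE PI;
AREA = PI * R * R;       /* emitted as: AREA = 3.14159265 * R * R; */
```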
The data types supported are FIXED DECIMAL integers and CHARACTER strings of varying length with no maximum length. The structure statements are:
%[label-list:]DO iteration: statements; %[label-list:]END;
%procedure-name: PROCEDURE (parameter list) RETURNS (type); statements...;
%[label-list:]END;
%[label-list:]IF...%THEN...%ELSE..
and the simple statements, which also may have a [label-list:]
%ACTIVATE(identifier-list) and %DEACTIVATE
assignment statement
%DECLARE identifier-attribute-list
%GO TO label
%INCLUDE
null statement
The feature allowed programmers to use identifiers for constants e.g. product part numbers or mathematical constants and was superseded in the standard by named constants for computational data. Conditional compiling and iterative generation of source code, possible with compile-time facilities, was not supported by the standard. Several manufacturers implemented these facilities.
Structured programming additions
Structured programming additions were made to PL/I during standardization but were not accepted into the standard. These features were the LEAVE-statement to exit from an iterative DO, the UNTIL-option and REPEAT-option added to DO, and a case statement of the general form:
SELECT (expression) {WHEN (expression) group}... OTHERWISE group
These features were all included in IBM's PL/I Checkout and Optimizing compilers and in DEC PL/I.
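A sketch of the case statement in that form (the called procedure names are invented):

```pli
SELECT (Code);
   WHEN (1)      CALL Add_Record;
   WHEN (2, 3)   CALL Change_Record;
   OTHERWISE     CALL Reject;
END;
```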
Debug facilities
PL/I F had offered some debug facilities that were not put forward for the standard but were implemented by others, notably the CHECK(variable-list) condition prefix, CHECK on-condition and the SNAP option. The IBM Optimizing and Checkout compilers added additional features appropriate to the conversational mainframe programming environment (e.g. an ATTENTION condition).
Significant features developed since the standard
Several attempts had been made to design a structure member type that could have one of several datatypes (CELL in early IBM). With the growth of classes in programming theory, approaches to this became possible on a PL/I base; UNION, TYPE, etc. have been added by several compilers.
PL/I had been conceived in a single-byte character world. With support for Japanese and Chinese language becoming essential, and the developments on International Code Pages, the character string concept was expanded to accommodate wide non-ASCII/EBCDIC strings.
Time and date handling were overhauled to deal with the millennium problem, with the introduction of the DATETIME function that returned the date and time in one of about 35 different formats. Several other date functions deal with conversions to and from days and seconds.
Criticisms
Implementation issues
Though the language is easy to learn and use, implementing a PL/I compiler is difficult and time-consuming. A language as large as PL/I needed subsets that most vendors could produce and most users master. This was not resolved until "ANSI G" was published. The compile time facilities, unique to PL/I, took added implementation effort and additional compiler passes. A PL/I compiler was two to four times as large as comparable Fortran or COBOL compilers, and also that much slower—supposedly offset by gains in programmer productivity. This was anticipated in IBM before the first compilers were written.
Some argue that PL/I is unusually hard to parse. The PL/I keywords are not reserved so programmers can use them as variable or procedure names in programs. Because the original PL/I(F) compiler attempts auto-correction when it encounters a keyword used in an incorrect context, it often assumes it is a variable name. This leads to "cascading diagnostics", a problem solved by later compilers.
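A classic illustration of the non-reserved keywords is valid PL/I (the variables are deliberately named after keywords):

```pli
DCL (IF, THEN, ELSE) FIXED BINARY;
IF IF = THEN THEN
   THEN = ELSE;
ELSE
   ELSE = IF;
```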
The effort needed to produce good object code was perhaps underestimated during the initial design of the language. Program optimization (needed to compete with the excellent program optimization carried out by available Fortran compilers) is unusually complex owing to side effects and pervasive problems with aliasing of variables. Unpredictable modification can occur asynchronously in exception handlers, which may be provided by "ON statements" in (unseen) callers. Together, these make it difficult to reliably predict when a program's variables might be modified at runtime. In typical use, however, user-written error handlers (the ON-unit) often do not make assignments to variables. In spite of the aforementioned difficulties, IBM produced the PL/I Optimizing Compiler in 1971.
PL/I contains many rarely used features, such as multitasking support (an IBM extension to the language) which add cost and complexity to the compiler, and its co-processing facilities require a multi-programming environment with support for non-blocking multiple threads for processes by the operating system. Compiler writers were free to select whether to implement these features.
An undeclared variable is, by default, declared by first occurrence—thus misspelling might lead to unpredictable results. This "implicit declaration" is no different from FORTRAN programs. For PL/I(F), however, an attribute listing enables the programmer to detect any misspelled or undeclared variable.
Programmer issues
Many programmers were slow to move from COBOL or Fortran due to a perceived complexity of the language and immaturity of the PL/I F compiler. Programmers were sharply divided into scientific programmers (who used Fortran) and business programmers (who used COBOL), with significant tension and even dislike between the groups. PL/I syntax borrowed from both COBOL and Fortran syntax. So instead of noticing features that would make their job easier, Fortran programmers of the time noticed COBOL syntax and had the opinion that it was a business language, while COBOL programmers noticed Fortran syntax and looked upon it as a scientific language.
Both COBOL and Fortran programmers viewed it as a "bigger" version of their own language, and both were somewhat intimidated by the language and disinclined to adopt it. Another factor was pseudo-similarities to COBOL, Fortran, and ALGOL. These were PL/I elements that looked similar to one of those languages, but worked differently in PL/I. Such frustrations left many experienced programmers with a jaundiced view of PL/I, and often an active dislike for the language. An early UNIX fortune file contained the following tongue-in-cheek description of the language:
Speaking as someone who has delved into the intricacies of PL/I, I am sure that only Real Men could have written such a machine-hogging, cycle-grabbing, all-encompassing monster. Allocate an array and free the middle third? Sure! Why not? Multiply a character string times a bit string and assign the result to a float decimal? Go ahead! Free a controlled variable procedure parameter and reallocate it before passing it back? Overlay three different types of variable on the same memory location? Anything you say! Write a recursive macro? Well, no, but Real Men use rescan. How could a language so obviously designed and written by Real Men not be intended for Real Man use?
On the positive side, full support for pointers to all data types (including pointers to structures), recursion, multitasking, string handling, and extensive built-in functions meant PL/I was indeed quite a leap forward compared to the programming languages of its time. However, these were not enough to persuade a majority of programmers or shops to switch to PL/I.
The PL/I F compiler's compile time preprocessor was unusual (outside the Lisp world) in using its target language's syntax and semantics (e.g. as compared to the C preprocessor's "#" directives).
Special topics in PL/I
Storage classes
PL/I provides several 'storage classes' to indicate how the lifetime of variables' storage is to be managed: STATIC, AUTOMATIC, CONTROLLED and BASED. The simplest to implement is STATIC, which indicates that memory is allocated and initialized at load-time, as is done in COBOL "working-storage" and early Fortran. This is the default for EXTERNAL variables.
PL/I's default storage class for INTERNAL variables is AUTOMATIC, similar to that of other block-structured languages influenced by ALGOL, like the "auto" storage class in the C language, and default storage allocation in Pascal and "local-storage" in IBM COBOL. Storage for AUTOMATIC variables is allocated upon entry into the BEGIN-block, procedure, or ON-unit in which they are declared. The compiler and runtime system allocate memory for a stack frame to contain them and other housekeeping information. If a variable is declared with an INITIAL-attribute, code to set it to an initial value is executed at this time. Care is required to manage the use of initialization properly. Large amounts of code can be executed to initialize variables every time a scope is entered, especially if the variable is an array or structure. Storage for AUTOMATIC variables is freed at block exit: STATIC, CONTROLLED or BASED variables are used to retain variables' contents between invocations of a procedure or block. CONTROLLED storage is also managed using a stack, but the pushing and popping of allocations on the stack is managed by the programmer, using ALLOCATE and FREE statements. Storage for BASED variables is managed using ALLOCATE/FREE, but instead of a stack these allocations have independent lifetimes and are addressed through OFFSET or POINTER variables.
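The four classes side by side, as a sketch (the names are invented):

```pli
DCL Grand_Total FIXED BINARY STATIC INIT(0); /* allocated at load time */
DCL Scratch     FLOAT BINARY;                /* AUTOMATIC: lives with its block */
DCL Save(100)   FIXED BINARY CONTROLLED;     /* programmer-managed stack */
DCL P           POINTER;
DCL Node        FIXED BINARY BASED(P);       /* independent lifetime, via P */

ALLOCATE Save;   /* push a generation of Save */
ALLOCATE Node;   /* heap-like allocation; P is set to address it */
FREE Node;
FREE Save;       /* pop: any earlier generation reappears */
```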
The AREA attribute is used to declare programmer-defined heaps. Data can be allocated and freed within a specific area, and the area can be deleted, read, and written as a unit.
Storage type sharing
There are several ways of accessing allocated storage through different data declarations. Some of these are well defined and safe, some can be used safely with careful programming, and some are inherently unsafe and/or machine dependent.
Passing a variable as an argument to a parameter by reference allows the argument's allocated storage to be referenced using the parameter. The DEFINED attribute (e.g. DCL A(10,10), B(2:9,2:9) DEFINED A) allows part or all of a variable's storage to be used with a different, but consistent, declaration. The language definition includes a CELL attribute (later renamed UNION) to allow different definitions of data to share the same storage. This was not supported by many early IBM compilers. These usages are safe and machine independent.
Record I/O and list processing produce situations where the programmer needs to fit a declaration to the storage of the next record or item, before knowing what type of data structure it has. Based variables and pointers are key to such programs. The data structures must be designed appropriately, typically using fields in a data structure to encode information about its type and size. The fields can be held in the preceding structure or, with some constraints, in the current one. Where the encoding is in the preceding structure, the program needs to allocate a based variable with a declaration that matches the current item (using expressions for extents where needed). Where the type and size information are to be kept in the current structure ("self defining structures") the type-defining fields must be ahead of the type dependent items and in the same place in every version of the data structure. The REFER-option is used for self-defining extents (e.g. string lengths, as in DCL 1 A BASED, 2 N BINARY, 2 B CHAR(LENGTH REFER(A.N))), where LENGTH is used to allocate instances of the data structure. For self-defining structures, any typing and REFERed fields are placed ahead of the "real" data. If the records in a data set, or the items in a list of data structures, are organised this way they can be handled safely in a machine independent way.
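A sketch of a self-defining string record using the REFER-option (the names are invented):

```pli
DCL Length FIXED BINARY;
DCL P POINTER;
DCL 1 Rec BASED(P),
      2 N   FIXED BINARY,              /* extent field comes first */
      2 Txt CHAR(Length REFER(N));
Length = 40;
ALLOCATE Rec;  /* Txt gets 40 characters; 40 is also stored into Rec.N */
FREE Rec;      /* the extent is taken from Rec.N, not from Length */
```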
PL/I implementations do not (except for the PL/I Checkout compiler) keep track of the data structure used when storage is first allocated. Any BASED declaration can be used with a pointer into the storage to access the storage; this is inherently unsafe and machine dependent. However, this usage has become important for "pointer arithmetic" (typically adding a certain amount to a known address). This has been a contentious subject in computer science. In addition to the problem of wild references and buffer overruns, issues arise due to the alignment and length for data types used with particular machines and compilers. Many cases where pointer arithmetic might be needed involve finding a pointer to an element inside a larger data structure. The ADDR function computes such pointers, safely and machine independently.
Pointer arithmetic may be accomplished by aliasing a binary variable with a pointer as in
DCL P POINTER, N FIXED BINARY(31) BASED(ADDR(P));
N=N+255;
It relies on pointers being the same length as FIXED BINARY(31) integers and aligned on the same boundaries.
With the prevalence of C and its free and easy attitude to pointer arithmetic, recent IBM PL/I compilers allow pointers to be used with the addition and subtraction operators, giving the simplest syntax (but compiler options can disallow these practices where safety and machine independence are paramount).
ON-units and exception handling
When PL/I was designed, programs only ran in batch mode, with no possible intervention from the programmer at a terminal. An exceptional condition such as division by zero would abort the program yielding only a hexadecimal core dump. PL/I exception handling, via ON-units, allowed the program to stay in control in the face of hardware or operating system exceptions and to recover debugging information before closing down more gracefully. As a program became properly debugged, most of the exception handling could be removed or disabled: this level of control became less important when conversational execution became commonplace.
Computational exception handling is enabled and disabled by condition prefixes on statements, blocks (including ON-units) and procedures, e.g. (SIZE, NOSUBSCRIPTRANGE): A(I)=B(I)*C;. Operating system exceptions for Input/Output and storage management are always enabled.
The ON-unit is a single statement or BEGIN-block introduced by an ON-statement. Executing the ON statement enables the condition specified, e.g. ON ZERODIVIDE establishes an ON-unit for the ZERODIVIDE condition. When the exception for this condition occurs and the condition is enabled, the ON-unit for the condition is executed. ON-units are inherited down the call chain. When a block, procedure or ON-unit is activated, the ON-units established by the invoking activation are inherited by the new activation. They may be over-ridden by another ON-statement and can be reestablished by the REVERT-statement. The exception can be simulated using the SIGNAL-statement, e.g. to help debug the exception handlers. The dynamic inheritance principle for ON-units allows a routine to handle the exceptions occurring within the subroutines it uses.
If no ON-unit is in effect when a condition is raised a standard system action is taken (often this is to raise the ERROR condition). The system action can be reestablished using the SYSTEM option of the ON-statement. With some conditions it is possible to complete executing an ON-unit and return to the point of interrupt (e.g., the STRINGRANGE, UNDERFLOW, CONVERSION, OVERFLOW, AREA and FILE conditions) and resume normal execution. With other conditions such as (SUBSCRIPTRANGE), the ERROR condition is raised when this is attempted. An ON-unit may be terminated with a GO TO preventing a return to the point of interrupt, but permitting the program to continue execution elsewhere as determined by the programmer.
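A sketch of establishing and then dismissing an ON-unit (the label is invented):

```pli
ON ZERODIVIDE
   BEGIN;
      PUT SKIP LIST ('Zero divide trapped');
      GO TO Skip;            /* abandon the point of interrupt */
   END;
X = A / B;                   /* if B = 0, control passes to the ON-unit */
Skip: REVERT ZERODIVIDE;     /* restore the inherited handling */
```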
An ON-unit needs to be designed to deal with exceptions that occur in the ON-unit itself. The ON ERROR SYSTEM; statement allows a nested error trap; if an error occurs within an ON-unit, control might pass to the operating system where a system dump might be produced, or, for some computational conditions, continue execution (as mentioned above).
The PL/I RECORD I/O statements have relatively simple syntax as they do not offer options for the many situations from end-of-file to record transmission errors that can occur when a record is read or written. Instead, these complexities are handled in the ON-units for the various file conditions. The same approach was adopted for AREA sub-allocation and the AREA condition.
The existence of exception handling ON-units can have an effect on optimization, because variables can be inspected or altered in ON-units. Values of variables that might otherwise be kept in registers between statements, may need to be returned to storage between statements. This is discussed in the section on Implementation Issues above.
GO TO with a non-fixed target
PL/I has counterparts for COBOL and FORTRAN's specialized GO TO statements.
Syntax for both COBOL and FORTRAN exists for coding two special types of GO TO, each of which has a target that is not always the same.
ALTER (COBOL), ASSIGN (FORTRAN):
ALTER paragraph_name_xxx TO PROCEED TO para_name_zzz.
There are other/helpful restrictions on these, especially "in programs ... RECURSIVE attribute, in methods, or .. THREAD option."
ASSIGN 1860 TO IGOTTAGO
GO TO IGOTTAGO
One enhancement, which adds built-in documentation, is
GO TO IGOTTAGO (1860, 1914, 1939)
(which restricts the variable's value to "one of the labels in the list.")
GO TO ... based on a variable's subscript-like value.
GO TO (1914, 1939, 2140), MYCHOICE
GO TO para_One para_Two para_Three DEPENDING ON IDECIDE.
PL/I has statement label variables (with the LABEL attribute), which can store the value of a statement label, and later be used in a GOTO statement.
LABL1: ....
.
.
LABL2: ...
.
.
.
MY_DEST = LABL1;
.
GO TO MY_DEST;
GO TO HERE(LUCKY_NUMBER); /* minus 1, zero, or ... */
HERE(-1): PUT LIST ("I O U"); GO TO Lottery;
HERE(0): PUT LIST ("No Cash"); GO TO Lottery;
HERE(1): PUT LIST ("Dollar Bill"); GO TO Lottery;
HERE(2): PUT LIST ("TWO DOLLARS"); GO TO Lottery;
Statement label variables can be passed to called procedures, and used to return to a different statement in the calling routine.
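A sketch of that use of label parameters (the procedure and label names are invented):

```pli
Caller: PROCEDURE;
   CALL Work (Bail_Out);        /* pass one of Caller's labels */
   PUT SKIP LIST ('Normal path');
   RETURN;
Bail_Out:
   PUT SKIP LIST ('Work gave up');
END Caller;

Work: PROCEDURE (Fail);
   DCL Fail LABEL;
   /* ... on a fatal error, transfer straight back into the caller: */
   GO TO Fail;
END Work;
```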
Sample programs
Hello world program
Hello2: proc options(main);
put list ('Hello, World!');
end Hello2;
Search for a string
/* Read in a line, which contains a string,                        */
/* and then print every subsequent line that contains that string. */
find_strings: procedure options (main);
declare pattern character (100) varying;
declare line character (100) varying;
declare line_no fixed binary;
on endfile (sysin) stop;
get edit (pattern) (L);
line_no = 1;
do forever;
get edit (line) (L);
if index(line, pattern) > 0 then
put skip list (line_no, line);
line_no = line_no + 1;
end;
end find_strings;
See also
List of programming languages
Timeline of programming languages
Notes
References
Textbooks
Standards
ANSI ANSI X3.53-1976 (R1998) Information Systems - Programming Language - PL/I
ANSI ANSI X3.74-1981 (R1998) Information Systems - Programming Language - PL/I General-Purpose Subset
ANSI ANSI X3.74-1987 (R1998) Information Systems - Programming Language - PL/I General-Purpose Subset
ECMA 50 Programming Language PL/I, 1st edition, December 1976
ISO 6160:1979 Programming languages—PL/I
ISO/IEC 6522:1992 Information technology—Programming languages—PL/I general purpose subset
Reference manuals
Burroughs Corporation, "B 6700 / B 7700 PL/I Language Reference", 5001530. Detroit, 1977.
CDC. R. A. Vowels, "PL/I for CDC Cyber". Optimizing compiler for the CDC Cyber 70 series.
Digital Equipment Corporation, "decsystem10 Conversational Programming Language User's Manual", DEC-10-LCPUA-A-D. Maynard, 1975.
Fujitsu Ltd, "Facom OS IV PL/I Reference Manual", 70SP5402E-1,1974. 579 pages. PL/I F subset.
Honeywell, Inc., "Multics PL/I Language Specification", AG94-02. 1981.
IBM, IBM Operating System/360 PL/I: Language Specifications, C28-6571. 1965.
IBM, OS PL/I Checkout and Optimizing Compilers: Language Reference Manual, GC33-0009. 1976.
IBM, IBM, "NPL Technical Report", December 1964.
IBM, Enterprise PL/I for z/OS Version 4 Release 1 Language Reference Manual, SC14-7285-00. 2010.
IBM, OS/2 PL/I Version 2: Programming: Language Reference, 3rd Ed., Form SC26-4308, San Jose. 1994.
Kednos PL/I for OpenVMS Systems. Reference Manual, AA-H952E-TM. Nov 2003.
Liant Software Corporation (1994), Open PL/I Language Reference Manual, Rev. Ed., Framingham (Mass.).
Nixdorf Computer, "Terminalsystem 8820 Systemtechnischer Teil PL/I-Subset", 05001.17.8.93-01, 1976.
Ing. C. Olivetti, "Mini PL/I Reference Manual", 1975, No. 3970530 V
Q1 Corporation, "The Q1/LMC Systems Software Manual", Farmingdale, 1978.
External links
IBM PL/I Compilers for z/OS, AIX, MVS, VM and VSE
Iron Spring Software, PL/I for Linux and OS/2
Micro Focus’ Mainframe PL/I Migration Solution
OS PL/I V2R3 grammar Version 0.1
Pliedit, PL/I editor for Eclipse
Power vs. Adventure - PL/I and C, a side-by-side comparison of PL/I and C.
Softpanorama PL/1 page
The PL/I Language
PL1GCC project in SourceForge
PL/1 program to print signs
An open source PL/I Compiler for Windows NT
Procedural programming languages
PL/I programming language family
Structured programming languages
Concurrent programming languages
Systems programming languages
IBM software
Programming languages created in 1964
Programming languages with an ISO standard |
21385561 | https://en.wikipedia.org/wiki/Application%20service%20management | Application service management | Application service management (ASM) is an emerging discipline within systems management that focuses on monitoring and managing the performance and quality of service of business transactions.
ASM can be defined as a well-defined process and use of related tools to detect, diagnose, remedy and report the service quality of complex business transactions to ensure that they meet or exceed end-users’ expectations. Performance measurements relate to how fast transactions are completed or information is delivered to the end user by the aggregate of applications, operating systems, hypervisors (if applicable), hardware platforms, and network interconnects. The critical components of ASM include application discovery & mapping, application "health" measurement & management, transaction-level visibility, and incident-related triage. Thus, the ASM tools and processes are commonly used by such roles as Sysop, DevOps, and AIOps.
ASM is related to application performance management (APM), but serves as a more pragmatic, "top-down" approach that focuses on delivery of business services. In strict definition, ASM differs from APM in two critical ways.
APM focuses exclusively on the performance of an instance of an application, ignoring the complex set of interdependencies that may exist behind that application in the data center. ASM specifically mandates that each application or infrastructure software, operating system, hardware platform, and transactional "hop" be discretely measurable, even if that measurement is inferential. This is critical to ASM's requirement to be able to isolate the source of service-impacting conditions.
APM often requires instrumentation of the application for management and measurability. ASM advocates an application-centric approach, asserting that the application and operating system have comprehensive visibility of an application's transactions, dependencies, whether on-machine or off-machine, as well as the operating system itself and the hardware platform it is running on. Further, an in-context agent can also infer network latencies with a high degree of accuracy, and with a lesser degree of accuracy when the transaction occurs between instrumented and non-instrumented platforms.
Application service management extends the concepts of end-user experience management and real user monitoring in that measuring the experience of real users is a critical data point. However, ASM also requires the ability to quickly isolate the root cause of those slow-downs, thereby expanding the scope of real user monitoring/management.
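The isolation step described above can be sketched in a few lines; the tier names and timings here are entirely hypothetical, and real ASM tools correlate far richer telemetry than a single timing map:

```python
# Hypothetical sketch: given per-hop timings (ms) recorded for one business
# transaction, report the total response time and the hop most likely to be
# the source of a slow-down -- the isolation step ASM requires.
def slowest_hop(hop_timings):
    total = sum(hop_timings.values())
    worst = max(hop_timings, key=hop_timings.get)
    return total, worst

timings = {  # illustrative numbers, not from any real monitoring tool
    'web tier': 12.0,
    'app tier': 48.0,
    'database': 310.0,
    'network (inferred)': 9.5,
}
total_ms, culprit = slowest_hop(timings)
print(f'{total_ms:.1f} ms total; slowest hop: {culprit}')
```

In practice the per-hop values would come from in-context agents or inferred network latencies, as described above, rather than from a static dictionary.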
The use of application service management is common for complex, multi-tier transactional applications. Further, the introduction of service-oriented architecture and microservices approaches together with hypervisor-based virtualization technologies have proven a catalyst for the adoption of ASM technologies, as complex applications are disproportionately impacted by the introduction of hypervisors into an existing environment. A study by the Aberdeen Group indicates that most deployments of virtualization technologies are hampered by their impact on complex transactional applications.
Increasingly, ASM approaches are equipped with automated adaptive controllers that take into account service-level agreements, cloud computing, and real-time and energy-aware application controller targets.
References
See also
Application performance management
Business transaction management
Systems management
Integrated business planning
Network management
System administration
Software performance management |
34358 | https://en.wikipedia.org/wiki/Yacc | Yacc | Yacc (Yet Another Compiler-Compiler) is a computer program for the Unix operating system developed by Stephen C. Johnson. It is a Look Ahead Left-to-Right Rightmost Derivation (LALR) parser generator, generating a LALR parser (the part of a compiler that tries to make syntactic sense of the source code) based on a formal grammar, written in a notation similar to Backus–Naur Form (BNF). Yacc is supplied as a standard utility on BSD and AT&T Unix. GNU-based Linux distributions include Bison, a forward-compatible Yacc replacement.
History
In the early 1970s, Stephen C. Johnson, a computer scientist at Bell Labs / AT&T, developed Yacc because he wanted to insert an exclusive-or operator into a B language compiler (developed using McIlroy's TMG compiler-compiler), but it turned out to be a hard task. As a result, his Bell Labs colleague Al Aho directed him to Donald Knuth's work on LR parsing, which served as the basis for Yacc. Yacc was influenced by, and received its name in reference to, the TMG compiler-compiler.
Yacc was originally written in the B programming language, but was soon rewritten in C. It appeared as part of Version 3 Unix, and a full description of Yacc was published in 1975.
Johnson used Yacc to create the Portable C Compiler. Bjarne Stroustrup also attempted to use Yacc to create a formal specification of C++, but "was defeated by C's syntax". While finding it unsuitable for a formal specification of the language, Stroustrup did proceed to use Yacc to implement Cfront, the first implementation of C++.
In a 2008 interview, Johnson reflected that "the contribution Yacc made to the spread of Unix and C is what I'm proudest of".
Description
The input to Yacc is a grammar with snippets of C code (called "actions") attached to its rules. Its output is a shift-reduce parser in C that executes the C snippets associated with each rule as soon as the rule is recognized. Typical actions involve the construction of parse trees. Using an example from Johnson, if the call node(label, left, right) constructs a binary parse tree node with the specified label and children, then the rule
expr : expr '+' expr { $$ = node('+', $1, $3); }
recognizes summation expressions and constructs nodes for them. The special identifiers $$, $1 and $3 refer to items on the parser's stack.
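Since Yacc's actions are C snippets, the tree-building they perform can be illustrated outside of Yacc. The following Python sketch is not generated by Yacc: `node` stands in for Johnson's call, and the parser is a simple hand-written loop rather than the LALR automaton Yacc would emit, but it builds the same left-associative nodes that the action `node('+', $1, $3)` produces for each reduction:

```python
# Illustrative sketch only: a hand-written parser that mimics what the
# Yacc action { $$ = node('+', $1, $3); } builds for summation expressions.
def node(label, left, right):
    # Stands in for Johnson's node() call; here a node is just a tuple.
    return (label, left, right)

def parse_sum(text):
    """Parse e.g. '1+2+3' into a left-associative parse tree."""
    tokens = text.split('+')
    tree = int(tokens[0])          # first operand
    for tok in tokens[1:]:
        # Each '+' combines the tree so far with the next operand, just as
        # the rule expr : expr '+' expr is reduced bottom-up by the parser.
        tree = node('+', tree, int(tok))
    return tree

print(parse_sum('1+2+3'))  # ('+', ('+', 1, 2), 3)
```

The nesting of the output tuple shows why left association matters: `1+2` is reduced first, and its node then becomes the left child of the outer `+` node.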
Yacc produces only a parser (phrase analyzer); for full syntactic analysis this requires an external lexical analyzer to perform the first tokenization stage (word analysis), which is then followed by the parsing stage proper. Lexical analyzer generators, such as Lex or Flex, are widely available. The IEEE POSIX P1003.2 standard defines the functionality and requirements for both Lex and Yacc.
Some versions of AT&T Yacc have become open source. For example, source code is available with the standard distributions of Plan 9.
Impact
Yacc and similar programs (largely reimplementations) have been very popular. Yacc itself used to be available as the default parser generator on most Unix systems, though it has since been supplanted by more recent, largely compatible, programs such as Berkeley Yacc, GNU Bison, MKS Yacc, and Abraxas PCYACC. An updated version of the original AT&T Yacc is included as part of Sun's OpenSolaris project. Each offers slight improvements and additional features over the original Yacc, but the concept and basic syntax have remained the same.
Among the languages that were first implemented with Yacc are AWK, C++, eqn and Pic. Yacc was also used on Unix to implement the Portable C Compiler, as well as parsers for such programming languages as FORTRAN 77, Ratfor, APL, bc, m4, etc.
Yacc has also been rewritten for other languages, including OCaml, Ratfor, ML, Ada, Pascal, Java, Python, Ruby, Go, Common Lisp and Erlang.
Berkeley Yacc: The Berkeley implementation of Yacc quickly became more popular than AT&T Yacc itself because of its performance and lack of reuse restrictions.
LALR parser: The underlying parsing algorithm in Yacc-generated parsers.
Bison: The GNU version of Yacc.
Lex (and Flex lexical analyser), a token parser commonly used in conjunction with Yacc (and Bison).
BNF is a metasyntax used to express context-free grammars: that is a formal way to describe context-free languages.
PLY (Python Lex-Yacc) is an alternative implementation of Lex and Yacc in Python.
See also
Compiler-compiler
hoc (programming language)
References
External links
Compiling tools
Parser generators
Unix programming tools
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands |
1429083 | https://en.wikipedia.org/wiki/Comparison%20of%20file%20transfer%20protocols | Comparison of file transfer protocols | This article lists communication protocols that are designed for file transfer over a telecommunications network.
Protocols for shared file systems—such as 9P and the Network File System—are beyond the scope of this article, as are file synchronization protocols.
Protocols for packet-switched networks
A packet-switched network transmits data that is divided into units called packets. A packet comprises a header (which describes the packet) and a payload (the data). The Internet is a packet-switched network, and most of the protocols in this list are designed for its protocol stack, the IP protocol suite.
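The header-plus-payload structure of a packet can be sketched with a toy header layout (the three fields below are entirely made up for illustration; every real protocol defines its own header format):

```python
import struct

# Toy illustration of "a packet comprises a header and a payload": a made-up
# 4-byte header (1-byte version, 1-byte flags, 2-byte payload length) followed
# by the payload bytes, packed in network byte order.
HEADER = struct.Struct('!BBH')

def make_packet(payload: bytes, version=1, flags=0) -> bytes:
    return HEADER.pack(version, flags, len(payload)) + payload

def parse_packet(packet: bytes):
    version, flags, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return version, flags, payload

pkt = make_packet(b'hello')
print(parse_packet(pkt))  # (1, 0, b'hello')
```

The receiver reads the fixed-size header first, learns the payload length from it, and then knows how many bytes of data follow; most of the protocols listed below layer their own framing on top of TCP or UDP in a broadly similar way.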
They use one of two transport layer protocols: the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). In the tables below, the "Transport" column indicates which protocol(s) the transfer protocol uses at the transport layer. Some protocols designed to transmit data over UDP also use a TCP port for oversight.
The "Server port" column indicates the port from which the server transmits data. In the case of FTP, this port differs from the listening port. Some protocols—including FTP, FTP Secure, FASP, and Tsunami—listen on a "control port" or "command port", at which they receive commands from the client.
Similarly, the encryption scheme indicated in the "Encryption" column applies to transmitted data only, and not to the authentication system.
Overview
Features
The "Managed" column indicates whether the protocol is designed for managed file transfer (MFT). MFT protocols prioritise secure transmission in industrial applications that require such features as auditable transaction records, monitoring, and end-to-end data security. Such protocols may be preferred for electronic data interchange.
Ports
In the table below, the data port is the network port or range of ports through which the protocol transmits file data. The control port is the port used for the dialogue of commands and status updates between client and server.
The column "Assigned by IANA" indicates whether the port is listed in the Service Name and Transport Protocol Port Number Registry, which is curated by the Internet Assigned Numbers Authority (IANA). IANA devotes each port number in the registry to a specific service with a specific transport protocol. The table below lists the transport protocol in the "Transport" column.
Serial protocols
The following protocols were designed for serial communication, mostly for the RS-232 standard. They are used for uploading and downloading computer files via modem or serial cable (e.g., by null modem or direct cable connection). UUCP is one protocol that can operate with either RS-232 or the Transmission Control Protocol as its transport. The Kermit protocol can operate over any computer-to-computer transport: direct serial, modem, or network (notably TCP/IP, including on connections secured by SSL, SSH, or Kerberos). OBject EXchange is a protocol for binary object wireless transfer via the Bluetooth standard. Bluetooth was conceived as a wireless replacement for RS-232.
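As a concrete example of this family of protocols, the classic XMODEM frame layout can be sketched as follows (a simplified illustration: real implementations also handle ACK/NAK retransmission, timeouts, and CRC variants):

```python
# Sketch of an XMODEM-style frame, the classic serial file-transfer framing:
# SOH, block number, its complement (255 - block), 128 data bytes (padded),
# and a single additive checksum byte over the data.
SOH, PAD = 0x01, 0x1A

def xmodem_frame(block_no: int, data: bytes) -> bytes:
    data = data.ljust(128, bytes([PAD]))[:128]   # pad short blocks to 128 bytes
    checksum = sum(data) & 0xFF                  # simple arithmetic checksum
    return (bytes([SOH, block_no & 0xFF, 0xFF - (block_no & 0xFF)])
            + data + bytes([checksum]))

frame = xmodem_frame(1, b'hello')
print(len(frame))  # 132 bytes: 3 header + 128 data + 1 checksum
```

The fixed 132-byte frame and one-byte checksum explain both the robustness and the overhead of these serial protocols on noisy modem links.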
Overview
Features
See also
Notes
References
Further reading
Lists of software
File transfer
File transfer protocols |
15444639 | https://en.wikipedia.org/wiki/Katrina%20Hacker | Katrina Hacker | Katrina Hacker (born May 31, 1990) is an American former figure skater. She placed sixth at the 2008 Four Continents and fifth at the 2009 World Junior Championships.
Career
Hacker won the novice-level bronze medal at the 2005 U.S. Championships and was then sent to the 2005 Triglav Trophy where she won the junior gold medal.
In the 2006–07 season, Hacker placed fifth at a Junior Grand Prix competition in Romania, her only JGP event. After not qualifying for the 2007 U.S. Championships, she decided to move to Boston in order to train with coaches Mark Mitchell and Peter Johansson at the Skating Club of Boston. She had a hip injury in summer 2007.
She subsequently won the 2008 New England Regionals and 2008 Eastern Sectionals. At the 2008 U.S. Championships, she placed 6th and was the third-highest-placing age-eligible skater for the senior World Championships. Hacker was not selected for Worlds—former World champion Kimmie Meissner received the third spot—but was selected for the 2008 Four Continents Championships, where she made her senior international debut. She was the top finisher among the American ladies at Four Continents.
In the 2008–09 season, Hacker received two senior Grand Prix assignments, the 2008 Cup of China and the 2008 NHK Trophy. She placed eighth in China and sixth in Japan. Hacker was assigned to the 2009 World Junior Championships and placed fifth. In May 2009, she said she would not compete in the 2009–10 season, and would instead focus on her studies.
Personal life
In January 2008, Hacker was selected for the U.S. Figure Skating Scholastics Honors Team. She graduated from high school in spring 2008. She deferred her admission into Princeton University for a year to focus on her skating career. She is pursuing a PhD in clinical psychology at The New School. She identifies as queer.
Programs
Results
Detailed results
2008–2009 season
References
External links
Katrina Hacker at Tracings.net
1990 births
American female single skaters
Living people
Sportspeople from New York City
21st-century American women |
59248142 | https://en.wikipedia.org/wiki/SimThyr | SimThyr | SimThyr is a free continuous dynamic simulation program for the pituitary-thyroid feedback control system. The open-source program is based on a nonlinear model of thyroid homeostasis. In addition to simulations in the time domain the software supports various methods of sensitivity analysis. Its simulation engine is multi-threaded and supports multiple processor cores. SimThyr provides a GUI, which allows for visualising time series, modifying constant structure parameters of the feedback loop (e.g. for simulation of certain diseases), storing parameter sets as XML files (referred to as "scenarios" in the software) and exporting results of simulations in various formats that are suitable for statistical software. SimThyr is intended for both educational purposes and in-silico research.
Mathematical model
The underlying model of thyroid homeostasis is based on fundamental biochemical, physiological and pharmacological principles, e.g. Michaelis–Menten kinetics, non-competitive inhibition and empirically justified kinetic parameters. The model has been validated in healthy controls and in cohorts of patients with hypothyroidism and thyrotoxicosis.
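The two kinetic building blocks named above can be sketched as follows; the constants are arbitrary illustrative values, not SimThyr's calibrated parameters:

```python
# Sketch of the model's kinetic building blocks (arbitrary units).
def michaelis_menten(S, Vmax, Km):
    # Saturation kinetics: the rate is half-maximal when S == Km.
    return Vmax * S / (Km + S)

def noncompetitive_inhibition(S, Vmax, Km, I, Ki):
    # A non-competitive inhibitor lowers the apparent Vmax without
    # changing Km.
    return (Vmax / (1 + I / Ki)) * S / (Km + S)

print(michaelis_menten(S=2.0, Vmax=10.0, Km=2.0))                # 5.0 (= Vmax/2)
print(noncompetitive_inhibition(2.0, 10.0, 2.0, I=1.0, Ki=1.0))  # 2.5
```

In the full model, terms of this form are combined into coupled differential equations for the hormone concentrations, which the simulation engine integrates over time.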
Scientific uses
Multiple studies have employed SimThyr for in silico research on the control of thyroid function.
The original version was developed to check hypotheses about the generation of pulsatile TSH release. Later and expanded versions of the software were used to develop the hypothesis of the TSH-T3 shunt in the hypothalamus-pituitary-thyroid axis, to assess the validity of calculated parameters of thyroid homeostasis (including SPINA-GT and SPINA-GD) and to study allostatic mechanisms leading to non-thyroidal illness syndrome.
SimThyr was also used to show that the release rate of thyrotropin is controlled by multiple factors other than T4 and that the relation between free T4 and TSH may be different in euthyroidism, hypothyroidism and thyrotoxicosis.
Public perception, reception and discussion of the software
SimThyr is free and open-source software. This ensures that the source code is available, which facilitates scientific discussion and review of the underlying model. Additionally, the fact that it is freely available may yield economic benefits.
The software provides an editor that enables users to modify most structure parameters of the information processing structure. This functionality fosters simulation of several functional diseases of the thyroid and the pituitary gland. Parameter sets may be stored as MIRIAM- and MIASE-compliant XML files.
On the other hand, the complexity of the user interface and the lack of the ability to model treatment effects have been criticized.
See also
Hypothalamic–pituitary–thyroid axis
Thyroid function tests
References
External links
Official website of the SimThyr project
Curated information at Zenodo
Curated information at SciCrunch
Free science software
Free biosimulation software
Medical simulation
Free software programmed in Pascal
Scientific simulation software
Mathematical and theoretical biology
Computational biology
Cross-platform software
Biomedical cybernetics
Human homeostasis
Thyroid homeostasis |
1752006 | https://en.wikipedia.org/wiki/QuickBooks | QuickBooks | QuickBooks is an accounting software package developed and marketed by Intuit. First introduced in 1983, QuickBooks products are geared mainly toward small and medium-sized businesses and offer on-premises accounting applications as well as cloud-based versions that accept business payments, manage and pay bills, and payroll functions.
History
Intuit was founded in 1983 by Scott Cook and Tom Proulx in Mountain View, California, USA. After the success of its Quicken product for individual financial management, the company developed similar services for small business owners.
Initial release
The initial Quicken software did not function as a "double-entry" accounting package. The initial release of QuickBooks was the DOS version that was based on the Quicken codebase. The Windows and Mac versions shared a different codebase that was based on In-House Accountant, which Intuit had acquired. The software was popular among small business owners who had no formal accounting training. As such, the software soon claimed up to 85 percent of the US small business accounting software market. It continued to command the vast majority of this market as of 2013. Professional accountants, however, were not satisfied with early versions of the system, citing poor security controls, such as no audit trail, as well as non-conformity with traditional accounting standards.
Subsequent releases
Intuit sought to bridge the gap with these accounting professionals, eventually providing full audit trail capabilities, double-entry accounting functions and increased functions. By 2000, Intuit had developed Basic and Pro versions of the software and, in 2003, started offering industry-specific versions, with workflow processes and reports designed for each of these business types along with terminology associated with the trades.
Options now include versions for manufacturers, wholesalers, professional service firms, contractors, non-profit entities and retailers, in addition to one specifically designed for professional accounting firms who service multiple small business clients. In May 2002 Intuit launched QuickBooks Enterprise Solutions for medium-sized businesses.
In September 2005, QuickBooks had 74% of the market in the US. A June 19, 2008 Intuit press release said that as of March 2008, QuickBooks' share of retail units in the business accounting category reached 94.2 percent, according to NPD Group. It also said that more than 50,000 accountants, CPAs and independent business consultants were members of the QuickBooks ProAdvisor program. By then Brad Smith was the new CEO, though former CEO Steve Bennett had nearly tripled Intuit revenue and quadrupled earnings in eight years.
On September 22, 2014, Intuit announced the release of QuickBooks 2015 with features that users have been requesting from the past versions. The release includes improved income tracker, pinned notes, improved registration process and insights on homepage.
In September 2015, Intuit released QuickBooks 2016 that contains several improvements to the existing ones and new features such as batch transaction, bill tracking, continuous feed label printer support, batch delete/void transactions etc.
In September 2016, Intuit released QuickBooks 2017 with several improvements like automated reports, smart search and improved viewing of report filters among other things.
In 2017, Intuit released QuickBooks 2018 to give its users a better experience by adding features such as mobile inventory barcode scanning, multi-monitor support, and search in the chart of accounts.
On September 17, 2018, Intuit announced the release of QuickBooks 2019 with some unique features requested by its users, including history tracker for customer invoices, ability to transfer credits between other jobs of the same customer, payroll adjustment feature, and more.
On September 16, 2019, QuickBooks 2020 was launched with the aim of improving the reliability and experience of using the software. All the desktop versions - Pro, Premier, Accountant, and Enterprise - are packed with new features like the ability to add customer PO numbers in email subject lines, send batch invoices to customers, automatic payment reminders, collapsible and expandable columns, easy QuickBooks version updates, etc.
On September 4, 2020, Intuit rolled out QuickBooks 2021 with improved payment processing and automated features. All the desktop editions in this version have streamlined bank feeds, automated receipt management, rule-based customer groups, payment reminders, customized payment receipts, data level permissions, and batch delete sales orders.
International versions
Versions of this product are available in many different markets. Intuit's Canadian, British and Australian divisions offer versions of QuickBooks that support the unique tax calculation needs of each region, such as Canada's GST, HST or PST sales tax, VAT for the United Kingdom edition and Australia's GST sales tax. The QuickBooks UK edition also includes support for Irish and South African VAT.
QuickBooks Enterprise was withdrawn from the UKI market in 2014.
QuickBooks Desktop is only available on a rental/subscription basis for users in the UK and Ireland, and is to be withdrawn from sale with no desktop software replacement; the final version is the 2021 edition.
The Mac (macOS) version is available only in the United States.
Features
Intuit has integrated several web-based features into QuickBooks, including remote access capabilities, remote payroll assistance and outsourcing, electronic payment functions, online banking and reconciliation, mapping features through integration with Google Maps, marketing options through Google, and improved e-mail functionality through Microsoft Outlook and Outlook Express. For the 2008 version, the company has also added import from Excel spreadsheets, additional employee time tracking options, pre-authorization of electronic funds and new Help functions. In June 2007, Intuit announced that QuickBooks Enterprise Solutions would run on Linux servers, whereas previously it required a Windows server to run.
QuickBooks Online
Intuit also offers a cloud service called QuickBooks Online (QBO). The user pays a monthly subscription fee rather than an upfront fee and accesses the software exclusively through a secure logon via a Web browser. Intuit provides patches, and regularly upgrades the software automatically, but also includes pop-up ads within the application for additional paid services.
QuickBooks Online had the most subscribers for an online accounting platform, with 624,000 subscribers, compared to Xero, which reported 284,000 customers as of July 2014.
The cloud version is a distinct product from the desktop version of QuickBooks, and has many features that work differently than they do in desktop versions.
In 2013, Intuit announced that it had rebuilt QuickBooks Online "from the ground up" with a platform that allows third parties to create small business applications and gives customers the ability to customize the online version of QuickBooks.
QuickBooks Online is supported on Chrome, Firefox, Internet Explorer 10, Safari 6.1, and also accessible via Chrome on Android and Safari on iOS 7. One may also access QuickBooks Online via an iPhone, a BlackBerry, and an Android web app.
In 2011, Intuit introduced a UK-specific version of QuickBooks Online to address the specific VAT and European tax system. There are also versions customized for the Canadian, Indian, and Australian markets, as well as a global version that can be customized by the user.
QuickBooks Online offers integration with other third-party software and financial services, such as banks, payroll companies, and expense management software.
QuickBooks Desktop also supports a migration feature whereby customers can migrate their desktop data from Pro or Premier SKUs to QuickBooks Online.
QuickBooks Point of Sale
QuickBooks Point of Sale is software that replaces a retailer's cash register, tracks its inventory, sales, and customer information, and provides reports for managing its business and serving its customers.
Add-on programs
Through the Solutions Marketplace, Intuit encouraged third-party software developers to create programs that fill niche areas for specific industries and integrate with QuickBooks. Intuit partnered with Lighter Capital to create a $15 million fund for developers designing apps for Intuit Quickbooks. The Intuit Developer Network provides marketing and technical resources, including software development kits (SDKs).
Intuit's Lacerte and ProConnect Tax Online tax preparation software for professional accountants who prepare tax returns for a living integrates with QuickBooks in this way. Microsoft Office also integrates with QuickBooks.
Criticism
As of November 2014, users of QuickBooks for OSX had reported compatibility issues with Apple's new operating system, OS X Yosemite.
See also
Comparison of accounting software
References
External links
Accounting software
Intuit software |
4664172 | https://en.wikipedia.org/wiki/CoreAVC | CoreAVC | CoreAVC was a proprietary codec for decoding the H.264/MPEG-4 AVC (Advanced Video Coding) video format.
In 2010, when CoreAVC was a software-only decoder, it was one of the fastest software decoders, but still slower than hardware-based ones. CoreAVC supports all H.264 Profiles except for 4:2:2 and 4:4:4.
From 2009, CoreAVC introduced support to two forms of GPU hardware acceleration for H.264 decoding on Windows: CUDA (Nvidia only, in 2009) and DXVA (Nvidia and ATI GPUs, in 2011).
CoreAVC was included as part of the CorePlayer Multimedia Framework and was used in the now-defunct desktop client by Joost, a system that distributed videos over the Internet using peer-to-peer TV technology.
CoreAVC-For-Linux DMCA complaint
An open-source project named CoreAVC-For-Linux hosted at Google Code patches the loader code in the open source media player program MPlayer and allows it to use the Windows only CoreAVC DirectShow filter in free software environments. It does not include CoreAVC, but simply allows MPlayer to make use of it. This project also contains patches to use the proprietary codec in MythTV, open source software for Home Theater Personal Computers and the media player xine.
In May 2008 the CoreAVC-For-Linux project was taken down by Google due to a DMCA complaint. There was speculation about this complaint: as a wrapper, the project did not use any copyrighted material, but reverse-engineering techniques may have been used without prior permission, which CoreCodec, Inc. interpreted as a violation of the DMCA. CoreCodec has since stated that reverse engineering was the reason, acknowledged that the takedown was in error, and apologized to the community.
CoreAVC-For-Linux is now back online and is recognized and supported by CoreCodec. Despite this, the project's future is currently in doubt, as the developer has stated that they are quite busy and do not have enough time to continue working on it; help is being requested from any developers interested in contributing to the project.
Multi-platform support
In early 2008, due to popular demand, CoreCodec ported the until-then Windows-only codec to a plethora of platforms and CPU architectures. CoreAVC is now supported on the operating systems Windows, macOS and Linux, as well as mobile-embedded operating systems like Palm OS, Symbian, Windows CE and Windows Mobile - although the Linux version is not available as retail but only for OEMs. CoreAVC runs not only on 32-bit and 64-bit x86, but also on PowerPC (including AltiVec support), ARM9, ARM11 and MIPS. As for GPUs, supported are Intel 2700G, ATI Imageon, Marvell Monahan, (limited) Qualcomm QTv.
In February 2009, CoreCodec released an update to CoreAVC that implemented support for Nvidia CUDA. CUDA allows selected Nvidia graphics cards to assist in the decoding of video. In March 2011, CoreCodec introduced support for DXVA. Like CUDA, DXVA allows ATI and Nvidia based graphics cards to assist in the decoding of video.
References
External links
CoreCodec, Inc.
CorePlayer (multi-platform)
Doom9.org Discussion on CoreAVC
Openlaw - the current US law and Reverse Engineering
coreavc-for-linux - Google Code
Codecs
Multimedia
Linux media players
MacOS media players
Windows media players
Symbian software |
57575221 | https://en.wikipedia.org/wiki/Perlaggen | Perlaggen | Perlaggen (regionally also Perlåggen), formerly Perlagg-Spiel ("game of Perlagg"), is a traditional card game which is mainly played in the regions of South Tyrol in Italy, the Tyrolean Oberland and the Innsbruck areas of Austria. It is the only card game to have been recognised by UNESCO as an item of Intangible Cultural Heritage.
Origin
Perlaggen originated in the South Tyrolean valleys of the Etschtal and Eisacktal when South Tyrol was part of Austria. Its beginnings go back to the 19th century. The oldest known record of the game of Perlaggen comes from an 1853 booklet, Das Tiroler National- oder Perlagg-Spiel, which describes the origins of the game in Giltspiel as well as its rules. At the first Perlaggen Congress, which took place on 19 April 1890 in Innsbruck, the game's inventors, its place and year of origin were confirmed. Its inventors were chancery clerks, Alois von Perkhammer and Josef Pfonzelter, and forestry officials, Ferdinand Gile and Johann Sarer. They created the game at the inn of Zum Pfau in Bozen (now Bolzano), South Tyrol, in the year 1833. Also confirmed at this congress were the rules of the game, albeit they were not always strictly observed; in fact in most places the rules were modified.
Name
Originally the game had no name. It was only several years later that the term Perlagg emerged in the region around Salurn, where the devil was referred to as the Berlicche. Like the devil, the Perlagg can appear in any possible, suitable form of cards. In the game, the name Perlagg (plural: Perlaggen in German, Perlaggs in English) is applied to a card of a high rank. Auer, the Austrian card expert, uses "Perlaggen" for the game and "Perlåggen" for the special trump cards to distinguish them, although he says the pronunciation is the same – the "å" sounding halfway between a German "a" and German "o".
Playing
Cards
The game is played with the well-known German-suited deck and with 33 cards, i.e. it includes the wild card known as the Weli. In South Tyrol the single-headed Salzburg pattern cards are used; in Austrian Tyrol they generally use the double-headed William Tell pattern.
The game has four suits: Acorns, Leaves, Hearts and Bells.
The ranking of the suits is the same. Each suit consists of eight cards in the usual order: Ace/Deuce/Sow (Ass/Daus/Sau), King (König), Ober, Unter, Ten, Nine, Eight, Seven; plus the Weli, a 6 of Bells.
Players
The game may be played between 2, 4 or 6 players, but it is usually played in fours, two against two. The players sit in a 'cross' with partners facing each other across the table. Each player is dealt five cards. There is also a variation for two players in which they receive seven cards each.
Dealing
At the start of the game, the player who cuts the highest card deals. Before dealing, he invites the neighbour on his right (from the other team) to cut. If the cutter cuts to one of the Perlagg cards, he may keep it. If there is another one under the first Perlagg, this also belongs to the cutter, as does a third or fourth. It is therefore possible for the cutter to capture all four permanent Perlaggs. The cutter is obliged to show all players the drawn Perlagg(s). Once he has fulfilled this obligation, he no longer has to inform anyone during the game what he has drawn. After cutting, the cards are dealt to the left. Each player is dealt five cards, two in the first packet and three in the second. The dealer must check if and how many Perlaggs the cutter has captured; if he has taken one, he gets only one card the first time; if he has taken two, none at all; if he has lifted three, he gets none the first time and two the second time; if he has captured all four Perlaggs, he is dealt no cards in the first packet and one in the second.
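The dealing adjustment described above can be summarized as a small lookup table; this is only a sketch of the rule as stated, not part of any official implementation:

```python
# How many cards the cutter receives in the first and second packets,
# depending on how many Perlaggs he captured while cutting.
# (Every other player always gets 2 cards, then 3.)
def cutter_packets(perlaggs_captured: int):
    deals = {0: (2, 3), 1: (1, 3), 2: (0, 3), 3: (0, 2), 4: (0, 1)}
    return deals[perlaggs_captured]

for n in range(5):
    first, second = cutter_packets(n)
    # Captured Perlaggs plus dealt cards always give a five-card hand.
    assert n + first + second == 5
print(cutter_packets(3))  # (0, 2)
```

The invariant checked in the loop is the point of the rule: whatever the cutter captures counts toward his hand, so every player ends up with exactly five cards.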
If the dealer makes a mistake and, for example, deals two cards in the first packet to the cutter, even though the latter has taken a Perlagg, the opponents may demand that the hand be reshuffled and redealt. In this case, the person on the right of the dealer has the advantage of being allowed to cut a second time. He may keep any Perlaggs that he takes the second time as well.
Trump suit
If the dealer has dealt the cards correctly so that each of the four players has five cards in his hand, he flips the next card, the twenty-first, as a trump, i.e. the suit he flips becomes the trump suit and beats the cards of the other three suits. So there are 21 cards in play and twelve remain hidden on the table. The dealer is obliged to show all other players the top and bottom (Luck und Boden) cards once. After that, no-one may look at these cards. Three cards of the trump suit, the 7, the Unter and the Ober, now have the rank of a Perlagg and are therefore called Trump Perlaggs. These three Trump Perlaggs have the same attributes as the four permanent Perlaggs, except that they are lower in rank than the permanent ones. The ranking among them is 7, Unter, Ober.
Perlaggs
Permanent Perlaggs
Among the 33 cards which may be 'perlagged', there are four so-called permanent Perlaggs. These four cards are superior to all the others in that, firstly, they beat all other cards in a trick and, secondly, that they may be turned into one of four other cards, a move known as christening (taufen). Naturally, these christened cards may only be used once. These cards are therefore:
K - Maxl, the King of Hearts, is the highest card, called Maxl after King Maximilian I of Bavaria.
6 - Weli, the 6 of Bells, is the second highest. It is also called the geschriebene Weli ("written Weli")
7 - Bell Spitz (Schellspitz), the 7 of Bells, is the third highest card, also called Little Weli
7 - Acorn Spitz (Eichelspitz), the 7 of Acorns, is the fourth-highest card
If the turned-up trump is a 7, Unter or Ober and the dealer has another trump card in his five-card hand, he has the right to exchange the turned-up trump Perlagg for the trump card in his hand. If one of the four permanent Perlaggs is turned up, its suit becomes the trump suit, i.e. if the King of Hearts is turned up, Hearts becomes the trump suit, etc., and it too can be exchanged for a card of the same suit. The Weli is considered to be a Bell. If the dealer has no trump card, i.e. he cannot exchange, the right to exchange passes to his partner. If he has no trump either, the Perlagg remains in place. Under no circumstances may the other team exchange it. Once the first card has been led and has been accepted or beaten, the team entitled to exchange has lost that right. The other team may therefore not anticipate the exchange by throwing the card out. If the team entitled to exchange forgets or overlooks the exchange, that is their own fault; they have only penalised themselves. It is very important for the other team to remember who has exchanged and what kind of Perlagg has been exchanged. Once the game is underway, players are under no obligation to say what has been exchanged and who has exchanged. Once the dealing and exchanging is over, the game can begin.
Perlaggs in play
During play, a Perlagg may be christened as any card, even if that card has already been played as a natural card or as another Perlagg. If two Perlaggs are christened as the same card, they rank in the order: K 6 7 7 T7 TU TO. If they are not christened, they count as their natural card. If a Perlagg is led, players must follow suit according to the suit the Perlagg was christened as.
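The precedence among identically christened Perlaggs is a fixed ordering, which can be sketched as a simple lookup (a toy illustration; the card labels are mine, not from the source):

```python
# Ranking when two Perlaggs are christened as the same card:
# the four permanent Perlaggs first (King of Hearts, Weli,
# Bell 7, Acorn 7), then the trump 7, trump Unter and trump Ober.
ORDER = ["heart_king", "weli", "bell_7", "acorn_7",
         "trump_7", "trump_unter", "trump_ober"]

def higher_perlagg(a, b):
    """Return whichever of two identically christened Perlaggs wins."""
    return a if ORDER.index(a) < ORDER.index(b) else b

print(higher_perlagg("weli", "trump_7"))   # the permanent Weli wins
```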
Deuten
An important feature of Perlaggen, allowed under the rules, is deuten, whereby partners may communicate the cards they hold to one another by means of signals, gestures and words. They may indicate the number and rank of their Perlaggs and trumps, but also ask openly. As a rule, players signal. Showing one's cards is not allowed. Playing partners may develop their own signals, but in order to help those who have not played together before, there are 'standard' signals which were described by Schwaighofer as early as 1926. These are:
Martl held: rolling the eyes (upwards)
Weli held: forming a kiss, pointing the tip of the tongue
Bell Spitz held: forming a kiss, pointing the tip of the tongue to the left or right
Acorn Spitz held: winking the right eye
Trump Perlagg held: winking the left eye
Number of trump cards held: tapping the middle finger on the tabletop
No Perlaggs and no trumps held: shaking the head
Quartet held: move the hand from the right corner of the mouth to the right ear
Run of four cards held: move the hand from the middle of the mouth down to the neck
Figures
There are three so-called 'figures' (Figuren) in Perlaggen: the Gleich, the Hanger and the Spiel. The game turns on these three figures, for which points are scored.
Gleich: A Gleich is two or more cards of the same rank e.g. two 10s, three kings, etc. Two aces are known as the "highest Gleich" (Gleich höchst or höchstes Gleich). Two equally high cards are known as a "pair" or "simple Gleich" (einfaches Gleich), three as a "triplet" (dritziges Gleich), four as a "quartet" (viertiges Gleich) and so on. If these multiple Gleichs consist of aces, they are called "highest triplet" (höchst dritzigen), etc.
Hanger: The Hanger comprises two or more consecutive cards of the same suit e.g. the 10 of Acorns, Unter of Acorns, Ober of Acorns and King of Acorns. The Hanger is called "highest" if it consists of the highest cards - the King and Deuce - of the same suit.
Spiel: Because each player has five cards, one team will take three or more tricks and the other only two or fewer. The team that wins three tricks has won the Spiel ("game") and earns one or more points (Gutpunkte), according to whether the opponents conceded the game or held it. A team will often concede the game to the opponents, that is, gift it to them, in order not to have to play out cards that could be important for the other two figures, the Gleich and the Hanger, and thereby reveal their hand.
Trump suit
Trumps beat all other suits. Suit must be followed under all circumstances, even if the led suit has already been beaten by an intermediate player with a trump or a Perlagg. All Perlaggs are independent cards and count neither as part of a suit nor as trump cards. If a Perlagg is played or put down during the game, it must be christened immediately by its owner, i.e. he must state which card it is to be considered as. If the Perlagg's owner does not name a card, the Perlagg becomes a simple card. Once a Perlagg has been christened, it must not be renamed under any circumstances. A Perlagg need not be declared as a trump or as belonging to a suit, because in a sense Perlaggs belong to no particular suit.
Bidding
To bid in Perlaggen players will say something like: "I bid my Gleich!" or "I bid the Spiel!". It is essential that the player bidding actually uses the words "I bid..." or "... bid!" For example, if he says only "Gleich!" or "Spiel!", he has not bid according to the rules of the game and experienced players may use this rule to catch newcomers out. If a bid has been offered, the other team must reply, for example, with "good!" indicating they don't wish to contest the bid, "hold!" meaning that they accept the bid or "three!" to raise the bid further. Typical bids and responses are as follows:
Bids may be raised up to a total of 7 points.
The two players who form a team are partners and count as one person to the opposition. One is responsible for the other and both are jointly responsible.
The game is usually played for 18 points. The pair that reach 18 first, win the game.
Variants
Two-hand Perlaggen
Perlaggen may be played by two people using the same rules as the four-hand game, but without deuten. One author suggests that it is more interesting to deal seven cards to each player: a packet of two, a packet of three, then the face-up trump, then another packet of two. Alternatively the dealer may deal three cards each, then the trump, then four each. Four tricks are needed to win the Spiel figure.
Recognition
Since 2004 there has been a Perlaggen Circle (Förderkreis Perlaggen) in South Tyrol which holds an annual championship, the Meisterschaft in Perlaggen. 2015 saw the 6th All-Tyrol Perlaggen Championship (Gesamt-Tiroler Meisterschaft in Perlaggen), in which North Tyrolese Perlaggen players took part.
In June 2016, this traditional Tyrolean card game was declared by the Austrian UNESCO Commission to be an item of intangible cultural heritage. At the same time, a display cabinet devoted to Perlaggen was set up in the Tyrolean Folk Art Museum in Innsbruck.
References
Literature
_ (1853). Das Tiroler National- oder Perlagg-Spiel, Wagner, Innsbruck.
Auer, Hubert. Watten, Bieten und Perlaggen. Perlen-Reihe, Vol. 659. Vienna: Perlen-Reihe (2015).
Förderkreis Perlaggen Südtirol. Perlåggen in Südtirol mit Watten & Bieten. Bozen: Raetia (2014).
Schwaighofer, Hermann. Die Tiroler Kartenspiele Bieten, Watten, Perlaggen. Innsbruck: Wagner (1926), 95 pp.
External links
Perlaggenförderkreis
Austrian card games
Trump group
William Tell deck card games
Two-player card games
Four-player card games
Six-player card games
German deck card games
Card games introduced in the 1830s
Jan Bosch

Jan Bosch (born 1967) is a Dutch computer scientist, Professor of Software Engineering at the University of Groningen and at Chalmers University of Technology, and IT consultant, particularly known for his work on software architecture.
Biography
Bosch received his MSc in computer science in 1991 from the University of Twente, and in 1995 his PhD degree in computer science from Lund University.
In 1994 Bosch was appointed Professor of Software Engineering at the Blekinge Institute of Technology, and in 2000 he moved to the University of Groningen, where he became Professor of Software Engineering. Since 2011 he has also been Professor of Software Engineering at Chalmers University of Technology.
In 2004 Bosch also became Vice President and Head of Laboratory at the Nokia Research Center, and from 2007 to 2011 he was Vice President of Engineering Process at Intuit. In 2011 he co-founded the consultancy firm Boschonian AB, where he is a partner.
Selected publications
Bosch, Jan. Design and use of software architectures: adopting and evolving a product-line approach. Pearson Education, 2000.
Bosch, Jan. Speed, Data, and Ecosystems: Excelling in a Software-Driven World. CRC Press, 2016.
Articles, a selection:
Aksit, M., Wakita, K., Bosch, J., Bergmans, L., & Yonezawa, A. (1994). "Abstracting object interactions using composition filters." In Object-Based Distributed Programming (pp. 152-184). Springer Berlin Heidelberg.
Van Gurp, Jilles, Jan Bosch, and Mikael Svahnberg. "On the notion of variability in software product lines." Proceedings of the Working IEEE/IFIP Conference on Software Architecture. IEEE, 2001.
Bengtsson, P., Lassing, N., Bosch, J., & van Vliet, H. (2004). Architecture-level modifiability analysis (ALMA). Journal of Systems and Software, 69(1), 129-147.
Svahnberg, Mikael, Jilles Van Gurp, and Jan Bosch. "A taxonomy of variability realization techniques." Software: Practice and Experience 35.8 (2005): 705-754.
Hartmann, H., Trew, T., Bosch, J., 2012. The changing industry structure of software development for consumer electronics and its consequences for software architectures. Journal of Systems and Software 85, 178–192. doi:10.1016/j.jss.2011.08.007.
References
External links
Jan Bosch homepage
1967 births
Living people
Dutch computer scientists
Neural Engineering Object

Neural Engineering Object (Nengo) is graphical and scripting software for simulating large-scale neural systems. As neural network software, Nengo is a tool for modelling neural networks with applications in cognitive science, psychology, artificial intelligence and neuroscience.
History
Some form of Nengo has existed since 2003. Originally developed as a Matlab script under the name NESim (Neural Engineering Simulator), it was later moved to a Java implementation under the name NEO, and then eventually Nengo. The first three generations of Nengo were developed with a focus on providing a powerful modelling tool with a simple interface and scripting system. As the tool became increasingly useful, the speed limitations of the system led to the development of a back-end-agnostic API. This most recent iteration of Nengo defines a specific Python-based scripting API with back-ends targeting NumPy, OpenCL and neuromorphic hardware such as SpiNNaker. This newest iteration also comes with an interactive GUI to help with the quick prototyping of neural models.
As open-source software, Nengo uses a custom license which allows free personal and research use, but requires a license for commercial purposes.
Theoretical Background
Nengo is built upon two theoretical underpinnings: the Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA).
Neural Engineering Framework
Nengo differs from other modelling software primarily in the way it models connections between neurons and their strengths. Using the NEF, Nengo allows connection weights between populations of spiking neurons to be defined by specifying the function to be computed, instead of forcing the weights to be set manually or configured from a random start by a learning rule. That said, these traditional modelling methods are still available in Nengo.
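The core idea — solving for decoding weights by least squares over the neurons' tuning curves, rather than hand-tuning or learning them — can be illustrated with a dependency-free toy (this is only a sketch of the NEF principle, not the Nengo API; the rectified-linear tuning model and all names here are illustrative assumptions):

```python
import random

random.seed(0)

# Toy NEF-style decoding: each "neuron" has a random rectified-linear
# tuning curve over a 1-D value x; we solve regularized least squares
# for decoders d so that sum_i d[i] * rate_i(x) approximates f(x) = x**2.
N = 30
xs = [i / 20 - 1 for i in range(41)]      # sample points in [-1, 1]
target = [x * x for x in xs]              # function to decode

neurons = [(random.uniform(0.5, 2) * random.choice([-1, 1]),
            random.uniform(-1, 1)) for _ in range(N)]

def rate(neuron, x):
    gain_e, bias = neuron
    return max(0.0, gain_e * x + bias)    # rectified-linear response

A = [[rate(n, x) for n in neurons] for x in xs]   # activity matrix

# Normal equations (A^T A + reg*I) d = A^T f, solved by elimination.
reg = 1e-3
M = [[sum(A[k][i] * A[k][j] for k in range(len(xs)))
      + (reg if i == j else 0.0) for j in range(N)] for i in range(N)]
b = [sum(A[k][i] * target[k] for k in range(len(xs))) for i in range(N)]

for col in range(N):                      # Gaussian elimination w/ pivoting
    piv = max(range(col, N), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, N):
        f = M[r][col] / M[col][col]
        for c in range(col, N):
            M[r][c] -= f * M[col][c]
        b[r] -= f * b[col]
d = [0.0] * N
for r in range(N - 1, -1, -1):
    d[r] = (b[r] - sum(M[r][c] * d[c] for c in range(r + 1, N))) / M[r][r]

def decode(x):
    """Estimate f(x) from the weighted sum of firing rates."""
    return sum(di * rate(n, x) for di, n in zip(d, neurons))

err = max(abs(decode(x) - x * x) for x in xs)
print("max decode error:", err)
```

In actual Nengo the user never sees this solve: passing a function to a connection triggers it internally, which is what lets weights be "defined by specifying the function to be computed".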
Semantic Pointer Architecture
To represent symbols in Nengo, the SPA is used. Many aspects of human cognition are easier to model using symbols. In Nengo, symbols are represented as vectors with a set of operations associated with them; these vectors and their operations make up the SPA. The SPA has been used to model human linguistic search and task planning.
Applications
Notable developments accomplished using the Nengo software have occurred in many fields, and Nengo has been used and cited in over 100 publications. An important development to note is Spaun, a network of 6.6 million artificial spiking neurons (a small number compared to the number in the human brain), which uses groups of these neurons to complete cognitive tasks via flexible coordination. Spaun is the world's largest functional brain model, and can be used to test hypotheses in neuroscience.
References
Further reading
Neural network software
Pyraechmes

In Greek mythology, Pyraechmes (Ancient Greek: Πυραίχμης Puraíkhmēs) was, along with Asteropaeus, a leader of the Paeonians in the Trojan War.
Mythology
Pyraechmes came from the city of Amydon. Although Homer mentions Pyraechmes as the leader of the Paeonians early on in the Iliad, in the Trojan Catalogue, Pyraechmes plays a minor role compared to the more illustrious Asteropaeus, a later arrival to the front. Unlike Asteropaeus, Homer does not provide a pedigree for Pyraechmes (although Dictys Cretensis says his father was Axius - also the name of a river in Paeonia). Pyraechmes was killed in battle by Patroclus: dressed in Achilles' armor, Patroclus routed the panicked Trojans, and the first person he killed was Pyraechmes.
References
Dictionary of Greek and Roman Biography and Mythology
People of the Trojan War
Characters in Greek mythology
Paeonian people
Paeonian mythology
Fighting game

A fighting game (also known as versus fighting game) is a video game genre that involves combat between two (or more) players. Fighting game combat features mechanics such as blocking, grappling, counter-attacking, and chaining attacks together into "combos". Characters generally engage in battle using hand to hand combat—often some form of martial arts. The fighting game genre is related to, but distinct from, the beat 'em up genre, which pits large numbers of computer-controlled enemies against one or more player characters.
Battles in fighting games usually take place in a fixed-size arena, along a two-dimensional plane to which the characters' movement is restricted. Characters can navigate this plane horizontally by walking or dashing and vertically by jumping. Additionally, some games, such as Tekken, allow limited movement in 3D space.
The first video game to feature fist fighting was Heavyweight Champ in 1976, but it was Karate Champ which popularized one-on-one fighting game genre in arcades in 1984. Released later the same year, Yie Ar Kung-Fu featured antagonists with differing fighting styles and introduced health meters, while The Way of the Exploding Fist released the following year further popularized the genre on home systems. In 1987, Street Fighter introduced special attacks. In 1991, Capcom's highly successful Street Fighter II refined and popularized many of the conventions of the genre, including the introduction of the concept of combos. Fighting games subsequently became the preeminent genre for competitive video gaming in the early to mid-1990s, particularly in arcades. This period spawned dozens of other popular fighting games, including franchises like Street Fighter, Mortal Kombat, Super Smash Bros., Tekken, and Virtua Fighter.
Definition
Fighting games are a type of action game where two (one-on-one fighting games) or more than two (platform fighters) on-screen characters fight each other. These games typically feature special moves that are triggered using rapid sequences of carefully timed button presses and joystick movements. Games traditionally show fighters from a side-view, even as the genre has progressed from two-dimensional (2D) to three-dimensional (3D) graphics. Street Fighter II, though not the first fighting game, is considered to have standardized the genre, and similar games released prior to Street Fighter II have since been more explicitly classified as fighting games. Fighting games typically involve hand-to-hand combat, though many games also feature characters with melee weapons.
This genre is related but distinct from beat 'em ups, another action genre involving combat, where the player character must fight many enemies at the same time. Beat 'em ups, like traditional fighting games, display player and enemy health in a bar, generally located at the top of the screen. However, beat 'em ups generally do not feature combat divided into separate "rounds". During the 1980s to 1990s, publications used the terms "fighting game" and "beat 'em up" interchangeably, along with other terms such as "martial arts simulation" (or more specific terms such as "judo simulator") and "punch-kick" games. Fighting games were still being called "beat 'em up" games in video game magazines up until the end of the 1990s.
With hindsight, critics have argued that the two types of game gradually became dichotomous as they evolved, though the two terms may still be conflated. Sports-based combat games are those that feature boxing, mixed martial arts (MMA), or wrestling. Serious boxing games belong more to the sports game genre than the action game genre, as they aim for a more realistic model of boxing techniques, whereas moves in fighting games tend to be either highly exaggerated or outright fantastical models of Asian martial arts techniques. As such, boxing games, mixed martial arts games and wrestling games are often described as distinct genres, without comparison to fighting games, and belong more in the sports game genre.
Game design
Fighting games involve combat between pairs of fighters using highly exaggerated martial arts moves. They typically revolve around primarily brawling or combat sport, though some variations feature weaponry. Games usually display on-screen fighters from a side view, and even 3D fighting games play largely within a 2D plane of motion. Games usually confine characters to moving left and right and jumping, although some games such as Fatal Fury: King of Fighters allow players to move between parallel planes of movement. Recent games tend to be rendered in three dimensions and allow side-stepping, but otherwise play like those rendered in two dimensions.
Tactics and combos
Aside from moving around a restricted space, fighting games limit the player's actions to different offensive and defensive maneuvers. Players must learn which attacks and defenses are effective against each other, often by trial and error. Blocking is a basic technique that allows a player to defend against basic attacks. Some games feature more advanced blocking techniques: for example, Capcom's Street Fighter III features a move termed "parrying" which causes the parried attacker to become momentarily incapacitated (a similar state is termed "just defended" in SNK's Garou: Mark of the Wolves).
Counterplay
Predicting opponents' moves and counter-attacking, known as "countering", is a common element of gameplay. Fighting games also emphasize the difference between the height of blows, ranging from low to jumping attacks. Thus, strategy becomes important as players attempt to predict each other's moves, similar to rock–paper–scissors.
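The rock–paper–scissors comparison can be made concrete with a toy resolution table (the triangle used here — strikes beat throws, throws beat blocks, blocks beat strikes — is a common convention among players, not a rule of any specific game):

```python
# Toy rock-paper-scissors layer: each option beats exactly one other.
BEATS = {"strike": "throw", "throw": "block", "block": "strike"}

def resolve(p1_choice, p2_choice):
    """Return the winner of a single guessing interaction."""
    if p1_choice == p2_choice:
        return "trade"
    return "p1" if BEATS[p1_choice] == p2_choice else "p2"

print(resolve("strike", "throw"))   # p1's strike beats the throw attempt
```

Because no option dominates, each player must predict what the opponent will pick, which is exactly the mind game the genre is built around.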
Grappling / Takedowns
In addition to blows such as punches and kicks, players can utilize throwing or "grappling" to circumvent "blocks". Most fighting games allow the player to execute a grapple by pressing two or more buttons together, or simply by pressing punch or kick while extremely close to the opponent. Other fighting games, such as Dead or Alive, have a dedicated button for throws and takedowns.
Projectiles
Used primarily in 2D fighting games, projectiles are objects that a fighter can launch at another fighter to attack from a distance. While they can be used to simply inflict damage, projectiles are most often used to maneuver opponents into disadvantageous positions. The most notable projectile is Ryu and Ken's Hadoken from Street Fighter.
Rushdown
The opposite of turtling, Rushdown refers to a number of specific, aggressive strategies, philosophies and play styles across all fighting games. The general goal of a rushdown player is to overwhelm the opponent and force costly mistakes either by using fast, confusing setups or by taking advantage of an impatient opponent as they are forced to play defense for prolonged periods of time. Rushdown players often favor attacking opponents in the corner or as they get up from a knockdown; both situations severely limit the options of the opponent and often allow the attacking player to force high-risk guessing scenarios.
Spacing / Zoning
Zoning is the footwork and set of tactics a player uses to keep their opponent at a specific distance: keeping balance, closing or extending the distance, controlling spatial positioning, and creating additional momentum for strikes. What exactly that distance is depends on the characters both the "zoner" and the opponent are using, differing based on the tools at the zoner's disposal versus the tools of the opposing player.
Turtling
Turtling refers to the fighting game tactic of playing very defensively. Especially in 2D fighting games, a turtle style of play is a defensive style that focuses on patience, positioning, timing, and relatively safe attack options to slow down the pace of the game and minimize the number of punishable mistakes made during the course of the match. This style can be very useful in timed matches, as it allows a player to deal a small amount of damage to an opponent and then win the match by running down the clock. If available, players can turn off the timer to prevent such a strategy.
Special attacks
An integral feature of fighting games includes the use of "special attacks", also called "secret moves", that employ complex combinations of button presses to perform a particular move beyond basic punching and kicking. Combos, in which several attacks are chained together using basic punches and kicks, are another common feature in fighting games and have been fundamental to the genre since the release of Street Fighter II. Some fighting games display a "combo meter" that displays the player's progress through a combo. The effectiveness of such moves often relate to the difficulty of execution and the degree of risk. These moves are often beyond the ability of a casual gamer and require a player to have both a strong memory and excellent timing. Taunting is another feature of some fighting games and was originally introduced by Japanese company SNK in their game Art of Fighting. It is used to add humor to games, but can also have an effect on gameplay such as improving the strength of other attacks. Sometimes, a character can even be noted especially for taunting (for example, Dan Hibiki from Street Fighter Alpha). Super Smash Bros. Brawl introduced a new special attack that is exclusive to the series known as a Final Smash.
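The "complex combinations of button presses" behind special moves are typically recognized by scanning a short buffer of recent inputs for a command sequence. A minimal sketch (the quarter-circle-forward motion and input names are illustrative, not any game's actual scheme):

```python
from collections import deque

# Hypothetical command: quarter-circle forward, then punch.
QCF_PUNCH = ["down", "down-forward", "forward", "punch"]

def command_entered(recent_inputs, command=QCF_PUNCH):
    """True if the command's steps appear in order within the input
    buffer, possibly with other inputs interleaved between them."""
    it = iter(recent_inputs)
    return all(step in it for step in command)   # subsequence check

buffer = deque(maxlen=10)                        # keep last 10 inputs
for inp in ["down", "down-forward", "jump", "forward", "punch"]:
    buffer.append(inp)
print(command_entered(buffer))                   # motion completed
```

The bounded buffer is what makes timing matter: if the player enters the motion too slowly, earlier steps fall out of the window before the final button press arrives.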
Matches and rounds
Fighting game matches generally consist of a set number of rounds (typically "best-of-three") and once the in-game announcer gives the signal (typically "ROUND 1… FIGHT!"), the match will officially begin. If the score is tied after an even number of rounds, then the winner will be decided in the final round. Round decisions can also be determined by time over (if a timer is present), which judge players based on remaining vitality to declare a winner. In the Super Smash Bros. series, the rules are different. Instead of rounds, they usually have a set number of stocks for each player (usually three) and if the score is tied between two or more fighters when time expires, then a "sudden death" match will decide the winner by delivering a single hit with 300% damage.
Fighting games widely feature life bars, introduced in Yie Ar Kung-Fu in 1984, which are depleted as characters sustain blows. Each successful attack will deplete a character's health, and the round continues until a fighter's stamina reaches zero. Hence, the main goal is to completely deplete the life bar of one's opponent, thus achieving a "knockout". Games such as Virtua Fighter also allow a character to be defeated by forcing them outside of the arena, awarding a "ring-out" to the victor. The Super Smash Bros. series also allows fighters to be sent off the stage once a character has reached a high percentage total; however, the gameplay objective differs from that of traditional fighters in that the aim is to increase damage counters and knock opponents off the stage instead of depleting life bars.
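The knockout and time-over rules described above amount to a small decision procedure (a generic sketch, not any particular game's logic; names are illustrative):

```python
def round_result(hp1, hp2, timer_expired):
    """Decide a round from remaining vitality, or None if it continues."""
    if hp1 <= 0 or hp2 <= 0:                 # a life bar is empty: knockout
        if hp1 <= 0 and hp2 <= 0:
            return "double KO"
        return "p2 wins" if hp1 <= 0 else "p1 wins"
    if timer_expired:                        # time over: judge by vitality
        if hp1 == hp2:
            return "draw"
        return "p1 wins" if hp1 > hp2 else "p2 wins"
    return None                              # round continues

print(round_result(40, 0, False))    # p1 wins by knockout
print(round_result(55, 30, True))    # p1 wins on time over
```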
Beginning with Midway's Mortal Kombat released in 1992, the Mortal Kombat series introduced "Fatalities", a gameplay feature in which the victor of the final round in a match inflicts a brutal and gruesome finishing move onto their defeated opponent. Prompted by the in-game announcer saying "Finish Him/Her!", players have a short time window to execute a Fatality by entering a specific button and joystick combination, while positioned at a specific distance from the opponent. The Fatality and its derivations are arguably the most notable features of the Mortal Kombat series and have caused a large cultural impact and controversies.
Fighting games often include a single-player campaign or tournament, where the player must defeat a sequence of several computer-controlled opponents. Winning the tournament often reveals a special story–ending cutscene, and some games also grant access to hidden characters or special features upon victory.
Character selection
In most fighting games, players may select from a variety of playable characters who have unique fighting styles and special moves and personalities. This became a strong convention for the genre with the release of Street Fighter II, and these character choices have led to deeper game strategy and replay value.
Custom creation, or "create–a–fighter", is a feature of some fighting games which allows a player to customize the appearance and move set of their own character. Super Fire Pro Wrestling X Premium was the first game to include such a feature.
Multiplayer modes
Fighting games may also offer a multiplayer mode in which players fight each other, sometimes by letting a second player challenge the first at any moment during a single-player match. Some titles allow up to four players to compete simultaneously. Uniquely, the Super Smash Bros. series has allowed eight-player local and online multiplayer matches, beginning with Super Smash Bros. for Wii U, though many classify Super Smash Bros. under the platform fighter subgenre due to its deviation from traditional fighting game rules and design. Several games have also featured modes that involve teams of characters; players form "tag teams" to fight matches in which combat is one-on-one, but a character may leave the arena to be replaced by a teammate. Some fighting games have also offered the challenge of fighting against multiple opponents in succession, testing the player's endurance. Newer titles take advantage of online gaming services, although lag created by slow data transmission can disrupt the split-second timing involved in fighting games. The impact of lag in some fighting games has been reduced by using technology such as GGPO, which keeps the players' games in sync by quickly rolling back to the most recent accurate game state, correcting errors, and then jumping back to the current frame. Games using this technology include Skullgirls and Street Fighter III: 3rd Strike Online Edition.
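The rollback approach attributed to GGPO above can be illustrated with a toy simulation: keep a snapshot of the game state at every frame, and when a remote input arrives late for an old frame, restore that frame's snapshot and re-simulate forward. This is only a sketch of the idea, not GGPO's actual implementation, and the "simulation" here is a stand-in addition:

```python
class Game:
    def __init__(self):
        self.state = 0
        self.inputs = {}          # frame number -> input value
        self.snapshots = {0: 0}   # frame number -> saved state
        self.frame = 0

    def step(self, inp):
        """Advance one frame with (possibly predicted) input."""
        self.inputs[self.frame] = inp
        self.state += inp         # stand-in for real game simulation
        self.frame += 1
        self.snapshots[self.frame] = self.state

    def rollback(self, frame, corrected_input):
        """A late remote input arrived: restore and re-simulate."""
        self.inputs[frame] = corrected_input
        self.state = self.snapshots[frame]
        for f in range(frame, self.frame):
            self.state += self.inputs[f]
            self.snapshots[f + 1] = self.state

g = Game()
for inp in [1, 0, 1]:     # frame 1 used a predicted remote input of 0
    g.step(inp)
g.rollback(1, 2)          # the real input (2) for frame 1 arrives late
print(g.state)            # state as if frame 1 had been correct all along
```

Because correction happens by replay rather than by pausing, both players keep seeing smooth motion, which is why the technique reduces the perceived impact of lag on split-second timing.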
History
Origins (1970s to early 1980s)
Fighting games find their origins in martial arts films, especially Bruce Lee's Hong Kong martial arts films which featured concepts that would be foundational to fighting games, such as Game of Death (1972) which had Lee fighting a series of boss battles and Enter the Dragon (1973) which was about an international martial arts tournament. The genre also drew inspiration from Japanese martial arts works, including the manga and anime series Karate Master (1971–1977) as well as Sonny Chiba's The Street Fighter (1974).
The earliest video games which involved fist-fighting were boxing games, before martial arts fighting games later emerged featuring battles between characters with fantastic abilities and complex special maneuvers. Sega's black-and-white boxing game Heavyweight Champ, released for arcades in 1976, is considered the first video game to feature fist fighting. Vectorbeam's arcade video game Warrior (1979) is another title sometimes credited as one of the first fighting games; in contrast to Heavyweight Champ and most later titles, Warrior was based on sword fighting duels and used a bird's eye view. Sega's jidaigeki-themed arcade action game Samurai, released in March 1980, featured a boss battle where the player samurai confronts a boss samurai in one-on-one sword-fighting combat.
One-on-one boxing games appeared on consoles with Activision's Atari VCS game Boxing, released in July 1980, and Sega's SG-1000 game Champion Boxing (1983), which was Yu Suzuki's debut title at Sega. Nintendo's arcade game Punch-Out, developed in 1983 and released in February 1984, was a boxing game that featured a behind-the-character perspective, maneuvers such as blocking and dodging, and stamina meters that deplete when getting hit and replenish with successful strikes.
Emergence of fighting game genre (mid-to-late 1980s)
Karate Champ, developed by Technōs Japan and released by Data East in May 1984, is credited with establishing and popularizing the one-on-one fighting game genre. A variety of moves could be performed using the dual-joystick controls, it used a best-of-three matches format like later fighting games, and it featured training bonus stages. The Player vs Player edition of Karate Champ, released later the same year, was also the first fighting game to allow two players to fight each other. It went on to influence Konami's Yie Ar Kung-Fu, released in October 1984. The game drew heavily from Bruce Lee films, with the main player character Oolong modelled after Lee (like Bruceploitation films). In contrast to the grounded realism of Karate Champ, Yie Ar Kung-Fu moved the genre towards more fantastical, fast-paced action, with a variety of special moves and high jumps, establishing the template for subsequent fighting games. It expanded on Karate Champ by pitting the player against a variety of opponents, each with a unique appearance and fighting style. The player could also perform up to sixteen different moves, including projectile attacks, and it replaced the point-scoring system of Karate Champ with a health meter system, becoming the standard for the genre.
Irem's Kung-Fu Master, designed by Takashi Nishiyama and released in November 1984, was a side-scrolling beat 'em up that, at the end of each level, featured one-on-one boss battles that resemble fighting games. It was based on Hong Kong martial arts films, specifically Jackie Chan's Wheels on Meals (1984) and Bruce Lee's Game of Death. Nishiyama later used its one-on-one boss battles as the basis for his fighting game Street Fighter. Nintendo's boxing sequel Super Punch-Out, released for arcades in late 1984 and ported by Elite to home computers as Frank Bruno's Boxing in 1985, featured martial arts elements, high and low guard, ducking, lateral dodging, and a KO meter that is built up with successful attacks, and when full enables a special, more powerful punch to be thrown. Broderbund's Karateka, designed by Jordan Mechner and released at the end of 1984, was a one-on-one fighting game for home computers that successfully experimented with adding plot to its fighting action, like the beat 'em up Kung-Fu Master.
By early 1985, martial arts games had become popular in arcades. On home computers, the Japanese MSX version of Yie Ar Kung-Fu was released in January 1985, and Beam Software's The Way of the Exploding Fist was released for PAL regions in May 1985; The Way of the Exploding Fist borrowed heavily from Karate Champ, but nevertheless achieved critical success and afforded the burgeoning genre further popularity on home computers in PAL regions, becoming the UK's best-selling computer game of 1985. In North America, Data East ported Karate Champ to home computers in October 1985, becoming one of the best-selling computer games of the late 1980s. Other game developers also imitated Karate Champ, notably System 3's computer game International Karate, released in Europe in November 1985; after Epyx released it in North America in April 1986, Data East took unsuccessful legal action against Epyx over the game. Yie Ar Kung-Fu went on to become the UK's best-selling computer game of 1986, the second year in a row for fighting games. The same year, Martech's Uchi Mata for home computers featured novel controller motions for grappling maneuvers, but they were deemed too difficult.
In the late 1980s, side-scrolling beat 'em ups became considerably more popular than one-on-one fighting games, with many arcade game developers focusing more on producing beat 'em ups and shoot 'em ups. Takashi Nishiyama used the one-on-one boss battles of his earlier beat 'em up Kung-Fu Master as the template for Capcom's fighting game Street Fighter, combined with elements of Karate Champ and Yie Ar Kung Fu. Street Fighter found its own niche in the gaming world, which was dominated by beat 'em ups and shoot 'em ups at the time. Part of the game's appeal was the use of special moves that could only be discovered by experimenting with the game controls, which created a sense of mystique and invited players to practice the game. Following Street Fighter's lead, the use of command-based hidden moves began to pervade other games in the rising fighting game genre. Street Fighter also introduced other staples of the genre, including the blocking technique as well as the ability for a challenger to jump in and initiate a match against a player at any time. The game also introduced pressure-sensitive controls that determine the strength of an attack, though because they damaged arcade cabinets, Capcom soon replaced them with a six-button control scheme offering light, medium and hard punches and kicks, which became another staple of the genre.
In 1988, Home Data released Reikai Dōshi: Chinese Exorcist, also known as Last Apostle Puppet Show, the first fighting game to use digitized sprites and motion capture animation. Meanwhile, home game consoles largely ignored the genre. Budokan: The Martial Spirit was one of few releases for the Sega Genesis but was not as popular as games in other genres. Technical challenges limited the popularity of early fighting games. Programmers had difficulty producing a game that could recognize the fast motions of a joystick, and so players had difficulty executing special moves with any accuracy.
Mainstream success (early 1990s)
The release of Street Fighter II in 1991 is considered a revolutionary moment in the fighting game genre. Yoshiki Okamoto's team developed the most accurate joystick and button scanning routine in the genre thus far. This allowed players to reliably execute multi-button special moves, which had previously required an element of luck. The graphics took advantage of Capcom's CPS arcade chipset, with highly detailed characters and stages. Whereas previous games allowed players to combat a variety of computer-controlled fighters, Street Fighter II allowed players to play against each other. The popularity of Street Fighter II surprised the gaming industry, as arcade owners bought more machines to keep up with demand. Street Fighter II was also responsible for popularizing the combo mechanic, which came about when skilled players learned that they could combine several attacks that left no time for the opponent to recover if they timed them correctly. Its success made fighting games the dominant genre of the early 1990s arcade industry and spurred that industry's resurgence. The popularity of Street Fighter II also led to its release on home game consoles, where it became the defining template for fighting games.
SNK released Fatal Fury shortly after Street Fighter II in 1991. It was designed by Takashi Nishiyama, the creator of the original Street Fighter, which it was envisioned as a spiritual successor to. Fatal Fury placed more emphasis on storytelling and the timing of special moves, and added a two-plane system where characters could step into the foreground or background. Meanwhile, Sega experimented with Dark Edge, an early attempt at a 3D fighting game where characters could move in all directions. Sega, however, never released the game outside Japan because it felt that "unrestrained" 3D fighting games were unenjoyable. Sega also attempted to introduce 3-D holographic technology to the genre with Holosseum in 1992, though it was unsuccessful. Several fighting games achieved commercial success, including SNK's Art of Fighting and Samurai Shodown as well as Sega's Eternal Champions. Nevertheless, Street Fighter II remained the most popular, spawning a Champion Edition that improved game balance and allowed players to use characters that were unselectable in the previous version.
Chicago's Midway Games achieved unprecedented notoriety when they released Mortal Kombat in 1992. The game featured digital characters drawn from real actors, numerous secrets, and a "Fatality" system of finishing maneuvers with which the player's character kills their opponent. The game earned a reputation for its gratuitous violence, and was adapted for home game consoles. The home version of Mortal Kombat was released on September 13, 1993, a day promoted as "Mortal Monday". The advertising resulted in line-ups to purchase the game and a subsequent backlash from politicians concerned about the game's violence. The Mortal Kombat franchise would achieve iconic status similar to that of Street Fighter with several sequels as well as movies, television series, and extensive merchandising. Numerous other game developers tried to imitate Street Fighter II and Mortal Kombat's financial success with similar games; Capcom USA took unsuccessful legal action against Data East over the 1993 arcade game Fighter's History. Data East's largest objection in court was that their 1984 arcade game Karate Champ, which predated the original Street Fighter by three years, was the true originator of the competitive fighting game genre, but the reason the case was decided against Capcom was that the copied elements were scenes a faire and thus excluded from copyright.
Emergence of 3D fighting games (mid-to-late 1990s)
Sega AM2's first attempt in the genre was the 1993 arcade game Burning Rival, but the studio gained renown with the release of Virtua Fighter for the same platform that year. It was the first fighting game with 3D polygon graphics and a viewpoint that zoomed and rotated with the action. Despite the graphics, players were confined to back-and-forth motion as seen in other fighting games. With only three buttons, it was easier to learn than Street Fighter and Mortal Kombat, which used six and five buttons respectively. By the time the game was released for the Sega Saturn in Japan, the game and system were selling at almost a one-to-one ratio.
The 1995 PlayStation title Battle Arena Toshinden is credited for taking the genre into "true 3-D" due to its introduction of the sidestep maneuver, which IGN described as "one little move" that "changed the fighter forever." In 1994, SNK released The King of Fighters '94 in arcades, where players choose from teams of three characters to eliminate each other one by one. Eventually, Capcom released further updates to Street Fighter II, including Super Street Fighter II and Super Street Fighter II Turbo. These games featured more characters and new moves, some of which were a response to people who had hacked the original Street Fighter II game to add new features themselves. However, criticism of these updates grew as players demanded a true sequel. By 1995, the dominant franchises were the Mortal Kombat series in America and Virtua Fighter series in Japan, with Street Fighter Alpha unable to match the popularity of Street Fighter II. Throughout this period, the fighting game was the dominant genre in competitive video gaming, with enthusiasts attending arcades to find human opponents. The genre was also very popular on home consoles. At the beginning of 1996, GamePro (a magazine devoted chiefly to home console and handheld gaming) reported that for the last several years, their reader surveys had consistently seen 4 out of 5 respondents name fighting games as their favorite genre.
In the latter part of the 1990s, traditional 2D fighting games began to decline in popularity, with specific franchises falling into difficulty. Electronic Gaming Monthly awarded the excess of fighting games the "Most Appalling Trend" award of 1995. Although the release of Street Fighter EX introduced 3D graphics to the series, both it and Street Fighter: The Movie flopped in arcades. While a home video game also titled Street Fighter: The Movie was released for the PlayStation and Sega Saturn, it is not a port but a separately produced game based on the same premise. Capcom released Street Fighter III in 1997 which featured improved 2D visuals, but was also unable to match the impact of earlier games. Excitement stirred in Japan over Virtua Fighter 3 in arcades, and Sega eventually ported the game to its Dreamcast console. Meanwhile, SNK released several fighting games on their Neo-Geo platform, including Samurai Shodown II in 1994, Real Bout Fatal Fury in 1995, The Last Blade in 1997, and annual updates to their The King of Fighters franchise. Garou: Mark of the Wolves from 1999 (part of the Fatal Fury series) was considered one of SNK's last great games; the company announced that it would close its doors in 2001. Electronic Gaming Monthly reported that in 1996, U.S. gamers spent nearly $150 million on current generation fighting games, and in Japan, fighting games accounted for over 80% of video game sales.
The fighting game genre continued to evolve, with several strong 3D fighting games emerging in the late 1990s. Namco's Tekken (released in arcades in 1994 and on the PlayStation in 1995) proved critical to the PlayStation's early success, with its sequels also becoming some of the console's most important titles. The Soul series of weapon-based fighting games also achieved considerable critical success, ranging from 1995's Soul Edge (known as Soul Blade outside Japan) to Soulcalibur VI in 2018. Tecmo released Dead or Alive in the arcades in 1996, porting it for the PlayStation in 1998. It spawned a long-running franchise, known for its fast-paced control system, innovative counterattacks, and environmental hazards. The series again included titles important to the success of their respective consoles, such as Dead or Alive 3 for the Xbox and Dead or Alive 4 for the Xbox 360. In 1998, Bushido Blade, published by Square, introduced a realistic fighting engine that featured three-dimensional environments while abandoning time limits and health bars in favour of an innovative Body Damage System, where a sword strike to a certain body part can amputate a limb or decapitate the head.
Video game enthusiasts took an interest in fictional crossovers which feature characters from multiple franchises in a particular game. An early example of this type of fighting game was the 1996 arcade release X-Men vs. Street Fighter (Marvel vs. Capcom), featuring comic book superheroes as well as characters from other Capcom games. In 1999, Nintendo released the first game in the Super Smash Bros. series, which allowed match-ups such as Pikachu vs. Mario.
Decline (early 2000s)
In the early 2000s, fighting games declined in popularity. In retrospect, multiple developers attribute the decline of the fighting genre to its increasing complexity and specialization. This complexity shut out casual players, and the market for fighting games became smaller and more specialized. Even as far back as 1997, many in the industry said that the fighting game market's growing inaccessibility to newcomers was bringing an end to the genre's dominance. Furthermore, arcades gradually became less profitable throughout the late 1990s to early 2000s due to the increased technical power and popularity of home consoles. The early 2000s is considered to be the "Dark Age" of fighting games.
In 2000, Italian studio NAPS team released Gekido for the PlayStation console, which used a fast-paced beat 'em up system with many bosses and colorful graphics. Several more fighting game crossovers were released in the new millennium. The two most prolific developers of 2D fighting games, Capcom and SNK, combined intellectual property to produce SNK vs. Capcom games. SNK released the first game of this type, SNK vs. Capcom: The Match of the Millennium, for its Neo Geo Pocket Color handheld at the end of 1999. GameSpot regarded the game as "perhaps the most highly anticipated fighter ever" and called it the best fighting game ever to be released for a handheld console. Capcom released Capcom vs. SNK: Millennium Fight 2000 for arcades and the Dreamcast in 2000, followed by sequels in subsequent years. Though none matched the critical success of the handheld version, Capcom vs. SNK 2 EO was noted as the first game of the genre to successfully utilize internet competition. Other crossovers from 2008 included Tatsunoko vs. Capcom and Mortal Kombat vs. DC Universe. The most successful crossover, however, was Super Smash Bros. Brawl for the Wii. Featuring characters from Nintendo and third-party franchises, the game was a runaway commercial success in addition to being lavished with critical praise.
In the new millennium, fighting games became less popular and plentiful than in the mid-1990s, with multiplayer competition shifting towards other genres. However, SNK reappeared in 2003 as SNK Playmore and continued to release games. Arc System Works received critical acclaim for releasing Guilty Gear X in 2001, as well as its sequel Guilty Gear XX, as both were 2D fighting games featuring striking anime-inspired graphics. Fighting games also became a popular genre for amateur and doujin developers in Japan. The 2002 title Melty Blood was developed by then-amateur developer French-Bread and achieved cult success on the PC. It became highly popular in arcades following its 2005 release, and a version was released for the PlayStation 2 the following year. While the genre became generally far less popular than it once was, arcades and their attendant fighting games remained reasonably popular in Japan in this time period, and still remain so even today. Virtua Fighter 5 lacked an online mode but still achieved success both on home consoles and in arcades; players practiced at home and went to arcades to compete face-to-face with opponents. In addition to Virtua Fighter, the Tekken, Soul and Dead or Alive franchises continued to release installments. Classic Street Fighter and Mortal Kombat games were re-released on PlayStation Network and Xbox Live Arcade, allowing internet play, and in some cases, HD graphics.
The early part of the decade had seen the rise of major international fighting game tournaments such as Tougeki – Super Battle Opera and Evolution Championship Series, and famous players such as Daigo Umehara. An important fighting game at the time was Street Fighter III: 3rd Strike, originally released in 1999. The game gained significant attention with "Evo Moment 37", also known as the "Daigo Parry", which refers to a portion of a 3rd Strike semi-final match held at Evolution Championship Series 2004 (Evo 2004) between Daigo Umehara and Justin Wong. During this match, Umehara made an unexpected comeback by parrying 15 consecutive hits of Wong's "Super Art" move while having only one pixel of vitality. Umehara subsequently won the match. "Evo Moment #37" is frequently described as the most iconic and memorable moment in the history of competitive video gaming, compared to sports moments such as Babe Ruth's called shot and the Miracle on Ice. It inspired many to start playing 3rd Strike, bringing new life to the fighting game community at a time of stagnation.
Resurgence (late 2000s to present)

Street Fighter IV, the series' first mainline title since Street Fighter III: 3rd Strike in 1999, was released in early 2009 to critical acclaim, having garnered praise since its release at Japanese arcades in 2008. The console versions of the game as well as Super Street Fighter IV sold more than 6 million copies over the next few years. Street Fighter's successful revival sparked a renaissance for the genre, introducing new players and, with the increased audience, allowing other fighting game franchises to achieve successful revivals of their own, as well as increasing tournament participation. Tekken 6 was positively received, selling more than 3 million copies worldwide as of August 6, 2010. Other successful titles that followed include Mortal Kombat, Marvel vs. Capcom 3, The King of Fighters XIII, Dead or Alive 5, Tekken Tag Tournament 2, SoulCalibur V, and Guilty Gear Xrd. Although the critically acclaimed Virtua Fighter 5 released to very little fanfare in 2007, its update Virtua Fighter 5: Final Showdown received much more attention due to the renewed interest in the genre. Numerous indie fighting games have also been crowdfunded on websites such as Kickstarter and Indiegogo, the most notable success being Skullgirls in 2012. Later, in 2019, Ubisoft reported that the free-to-play platform fighting game Brawlhalla reached 20 million players. Super Smash Bros. Ultimate for the Nintendo Switch in 2018 is the best-selling fighting game of all time, topping its Wii predecessor Super Smash Bros. Brawl, having sold 27.4 million copies worldwide.
Financial performance
Highest-grossing franchises
The following are the highest-grossing fighting game franchises, in terms of total gross revenue generated by arcade games, console games and computer games.
Best-selling franchises
Arcade
The following are the best-selling fighting arcade video game franchises that have sold at least 10,000 arcade units. The prices of fighting game arcade units ranged from for Street Fighter II Dash (Champion Edition) in 1992, up to for Virtua Fighter (1993). In addition to unit sales, arcade games typically earned the majority of their gross revenue from coin drop earnings.
Home
The following are the best-selling fighting game franchises for home systems, having sold at least 10 million software units for game consoles and personal computers.
{| class="wikitable sortable" style="font-size:100%; text-align:center"
|-
! Rank !! Franchise !! Debut !! Creator(s) !! Owner(s) !! Software sales !! Subgenre !! As of !! class="unsortable" |
|-
!scope="row" style="text-align:center;"| 1
| Mortal Kombat
| 1992
| Ed BoonJohn Tobias
| Warner Bros. Interactive Entertainment
| 73 million
| 2D
| July 2021
|
|-
!scope="row" style="text-align:center;"| 2
| Super Smash Bros.
| 1999
| Masahiro Sakurai HAL Laboratory
| Nintendo
| 68.37 million
| Platform
| December 2021
| {{efn|Super Smash Bros. series sales:
Super Smash Bros.: 5.55 million worldwide
Super Smash Bros. Melee: 7.09 million
Super Smash Bros. Brawl: 13.32 million
Super Smash Bros. for Nintendo 3DS and Wii U: 15.01 million combined (9.63 million for 3DS, 5.38 million for Wii U)
Super Smash Bros. Ultimate: 27.4 million
|group=n|name=SmashBros}}
|-
!scope="row" style="text-align:center;"| 3
| Dragon Ball
| 1986
| Akira Toriyama (manga) Bandai (games)
| Bandai Namco Entertainment
| 66.5 million
| 2D
| January 2022
|
|-
!scope="row" style="text-align:center;"| 4
| Tekken
| 1994
| Seiichi Ishii Namco
| Bandai Namco Entertainment
|
| 3D
| November 2021
|
|-
!scope="row" style="text-align:center;"| 5
| Street Fighter
| 1987
| Takashi NishiyamaHiroshi Matsumoto
| Capcom
| 47 million
| 2D
| September 2021
|
|-
!scope="row" style="text-align:center;"| 6
| Naruto: Ultimate Ninja
| 2003
| Masashi Kishimoto (manga) CyberConnect2 (games)
| Bandai Namco Entertainment
| 20.8 million
| 3D
| March 2021
|
|-
!scope="row" style="text-align:center;"| 7
| Soulcalibur
| 1995
| Hiroaki Yotoriyama Namco
| Bandai Namco Entertainment
| 17 million
| 3D
| July 2021
|
|-
!scope="row" style="text-align:center;"| 8
| Marvel vs. Capcom
| 1996
| Akira YasudaRyota NiitsumaNoritaka FunamizuTsuyoshi Nagayama
| Capcom Marvel Games
| 10 million
| 2D
| September 2021
|
|}
Best-selling fighting games
Arcade
The following titles are the top ten best-selling fighting arcade video games, in terms of arcade units sold. The prices of fighting game arcade units ranged from for Street Fighter II Dash (Champion Edition) in 1992, up to for Virtua Fighter (1993). In addition to unit sales, arcade games typically earned the majority of their gross revenue from coin drop earnings, which are unknown for most games. Arcade revenue figures, from unit sales and/or coin drop earnings, are listed if known.
Home
The following titles are the top ten best-selling fighting games for home systems, in terms of software units sold for game consoles and personal computers.
See also
Fighting game community
List of fighting games
M.U.G.E.N.
Notes
References
Video game genres
Video game terminology
List of Unicode characters

As of Unicode version 14.0, there are 144,697 characters with code points, covering 159 modern and historical scripts, as well as multiple symbol sets. As it is not technically possible to list all of these characters in a single Wikipedia page, this list is limited to a subset of the most important characters for English-language readers, with links to other pages which list the supplementary characters. This article includes the 1062 characters in the Multilingual European Character Set 2 (MES-2) subset, and some additional related characters.
Character reference overview
HTML and XML provide ways to reference Unicode characters when the characters themselves either cannot or should not be used. A numeric character reference refers to a character by its Universal Character Set/Unicode code point, and a character entity reference refers to a character by a predefined name.
A numeric character reference uses the format
&#nnnn;
or
&#xhhhh;
where nnnn is the code point in decimal form, and hhhh is the code point in hexadecimal form. The x must be lowercase in XML documents. The nnnn or hhhh may be any number of digits and may include leading zeros. The hhhh may mix uppercase and lowercase, though uppercase is the usual style.
In contrast, a character entity reference refers to a character by the name of an entity which has the desired character as its replacement text. The entity must either be predefined (built into the markup language) or explicitly declared in a Document Type Definition (DTD). The format is the same as for any entity reference:
&name;
where name is the case-sensitive name of the entity. The semicolon is required.
Because numbers are harder for humans to remember than names, character entity references are most often written by humans, while numeric character references are most often produced by computer programs.
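Both reference styles described above can be exercised with Python's standard html module; the snippet below is an illustrative sketch, not part of the HTML or XML specifications themselves.

```python
import html

# Numeric character references: decimal &#nnnn; and hexadecimal &#xhhhh;
assert html.unescape("&#233;") == "\u00e9"   # é as U+00E9, decimal 233
assert html.unescape("&#xE9;") == "\u00e9"   # the same code point in hex

# Character entity reference: a predefined, case-sensitive name
assert html.unescape("&eacute;") == "\u00e9"

# Going the other way, html.escape produces references for the
# markup-significant characters &, < and >
assert html.escape("a < b & c") == "a &lt; b &amp; c"
```

Note that `&Eacute;` and `&eacute;` name different characters (É vs. é), illustrating why entity names are case-sensitive.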
Control codes
65 characters, including DEL. All belong to the common script.
Footnotes:
1 Control-C has typically been used as a "break" or "interrupt" key.
2 Control-D has been used to signal "end of file" for text typed in at the terminal on Unix / Linux systems. Windows, DOS, and older minicomputers used Control-Z for this purpose.
3 Control-G is an artifact of the days when teletypes were in use. Important messages could be signalled by striking the bell on the teletype. This was carried over on PCs by generating a buzz sound.
4 Line feed is used for "end of line" in text files on Unix / Linux systems.
5 Carriage Return (accompanied by line feed) is used as "end of line" character by Windows, DOS, and most minicomputers other than Unix- / Linux-based systems
6 Control-O has been the "discard output" key on minicomputers. Output is not sent to the terminal, but discarded, until another Control-o is typed.
7 Control-Q has been used to tell a host computer to resume sending output after it was stopped by Control-S.
8 Control-S has been used to tell a host computer to postpone sending output to the terminal. Output is suspended until restarted by the Control-Q key.
9 Control-U was originally used by Digital Equipment Corporation computers to cancel a line of typed-in text. Other manufacturers used Control-X for this purpose.
10 Control-X was commonly used to cancel a line of input typed in at the terminal.
11 Control-Z has commonly been used on minicomputers, Windows and DOS systems to indicate "end of file" either on a terminal or in a text file. Unix / Linux systems use Control-D to indicate end-of-file at a terminal.
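The Control-letter pairings in these footnotes reflect the layout of the C0 range: a Control keystroke clears bit 6 of the letter's code, so Control-<letter> maps to the letter's code point minus 0x40. A minimal sketch (the helper name control_code is invented for illustration):

```python
def control_code(letter: str) -> int:
    """Map a Control-<letter> keystroke to its C0 control code point."""
    return ord(letter.upper()) - 0x40

assert control_code("C") == 0x03  # ETX -- the classic "interrupt" key
assert control_code("G") == 0x07  # BEL -- rings the teletype bell
assert control_code("J") == 0x0A  # LF  -- Unix end-of-line
assert control_code("M") == 0x0D  # CR  -- part of Windows/DOS end-of-line
assert control_code("Z") == 0x1A  # SUB -- DOS/Windows end-of-file marker
```

The same relationship explains the caret notation often printed by terminals: ^C, ^G, ^Z and so on are just the control code's value plus 0x40.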
Latin script
The Unicode Standard (version 14.0) classifies 1,475 characters as belonging to the Latin script.
Basic Latin
95 characters; the 52 alphabet characters belong to the Latin script. The remaining 43 belong to the common script.
The 33 characters classified as ASCII Punctuation & Symbols are also sometimes referred to as ASCII special characters. See § Latin-1 Supplement and § Unicode symbols for additional "special characters". Certain special characters can be used in passwords; some organizations require their use. See the List of Special Characters for Passwords.
Latin-1 Supplement
96 characters; the 62 letters, and two ordinal indicators belong to the Latin script. The remaining 32 belong to the common script.
Latin Extended-A
128 characters; all belong to the Latin script.
Latin Extended-B
208 characters; all belong to the Latin script; 33 in the MES-2 subset.
Latin Extended Additional
256 characters; all belong to the Latin script; 23 in the MES-2 subset. For the rest, see Latin Extended Additional (Unicode block).
Additional Latin Extended
Latin Extended-C (Unicode block)
Latin Extended-D (Unicode block)
Latin Extended-E (Unicode block)
Latin Extended-F (Unicode block)
Latin Extended-G (Unicode block)
Phonetic scripts
IPA Extensions
96 characters; all belong to the Latin script; three in the MES-2 subset. For the rest, see IPA Extensions (Unicode block).
Spacing modifier letters
80 characters; 15 in the MES-2 subset.
Phonetic Extensions
Phonetic Extensions (Unicode block)
Phonetic Extensions Supplement (Unicode block)
Combining Marks
Combining Diacritical Marks (Unicode block)
Combining Diacritical Marks Extended (Unicode block)
Combining Half Marks (Unicode block)
Combining Diacritical Marks Supplement (Unicode block)
Combining Diacritical Marks for Symbols (Unicode block)
Greek and Coptic
144 code points; 135 assigned characters; 85 in the MES-2 subset.
Greek Extended
For polytonic orthography. 256 code points; 233 assigned characters, all in the MES-2 subset (#670 – 902).
Cyrillic
256 characters; 191 in the MES-2 subset.
Cyrillic supplements
Cyrillic Supplement (Unicode block)
Cyrillic Extended-A (Unicode block)
Cyrillic Extended-B (Unicode block)
Cyrillic Extended-C (Unicode block)
Armenian
Armenian (Unicode block)
Semitic languages
Arabic script in Unicode, including the Persian alphabet, Jawi alphabet and others
Unicode and HTML for the Hebrew alphabet
Mandaic (Unicode block)
Samaritan (Unicode block)
Syriac (Unicode block)
Syriac Supplement (Unicode block)
Thaana
Thaana (Unicode block)
Brahmic (Indic) scripts
The range from U+0900 to U+0DFF includes Devanagari, Bengali script, Gurmukhi, Gujarati script, Odia alphabet, Tamil script, Telugu script, Kannada script, Malayalam script, and Sinhala script.
Devanagari in Unicode
Bengali (Unicode block)
Gurmukhi (Unicode block)
Gujarati (Unicode block)
Oriya (Unicode block)
Tamil (Unicode block)
Tamil Supplement (Unicode block)
Telugu (Unicode block)
Kannada (Unicode block)
Malayalam (Unicode block)
Sinhala (Unicode block)
Sinhala Archaic Numbers (Unicode block)
Other Brahmic and Indic scripts in Unicode include:
Ahom (Unicode block)
Balinese (Unicode block)
Batak (Unicode block)
Bhaiksuki (Unicode block)
Brahmi (Unicode block)
Buhid (Unicode block)
Buginese (Unicode block)
Chakma (Unicode block)
Cham (Unicode block)
Common Indic Number Forms (Unicode block)
Dives Akuru (Unicode block)
Dogra (Unicode block)
Grantha (Unicode block)
Gunjala Gondi (Unicode block)
Hanunoo (Unicode block)
Javanese (Unicode block)
Kaithi (Unicode block)
Khmer (Unicode block)
Khmer Symbols (Unicode block)
Khojki (Unicode block)
Khudawadi (Unicode block)
Lao (Unicode block)
Lepcha (Unicode block)
Limbu (Unicode block)
Mahajani (Unicode block)
Makasar (Unicode block)
Marchen (Unicode block)
Meetei Mayek (Unicode block)
Meetei Mayek Extensions (Unicode block)
Modi (Unicode block)
Multani (Unicode block)
Myanmar (Unicode block)
New Tai Lue (Unicode block)
Newa (Unicode block)
Ol Chiki (Unicode block)
Phags-pa (Unicode block)
Rejang (Unicode block)
Saurashtra (Unicode block)
Sharada (Unicode block)
Siddham (Unicode block)
Sundanese (Unicode block)
Sundanese Supplement (Unicode block)
Syloti Nagri (Unicode block)
Tagalog (Unicode block)
Tagbanwa (Unicode block)
Tai Le (Unicode block)
Tai Tham (Unicode block)
Tai Viet (Unicode block)
Takri (Unicode block)
Thai (Unicode block)
Tibetan (Unicode block)
Tirhuta (Unicode block)
Other south and central Asian writing systems
Masaram Gondi (Unicode block)
Mro (Unicode block)
Sora Sompeng (Unicode block)
Tangsa (Unicode block)
Toto (Unicode block)
Warang Citi (Unicode block)
Georgian
Georgian (Unicode block)
Georgian Extended (Unicode block)
Georgian Supplement (Unicode block)
African scripts
Adlam (Unicode block)
Bamum (Unicode block)
Bamum Supplement (Unicode block)
Bassa Vah (Unicode block)
Ge'ez/Ethiopic script
Medefaidrin (Unicode block)
Mende Kikakui (Unicode block)
NKo (Unicode block)
Osmanya (Unicode block)
Ottoman Siyaq Numbers (Unicode block)
Tifinagh (Unicode block)
Vai (Unicode block)
American scripts
Cherokee (Unicode block)
Cherokee Supplement (Unicode block)
Deseret (Unicode block)
Osage (Unicode block)
Unified Canadian Aboriginal Syllabics (Unicode block)
Unified Canadian Aboriginal Syllabics Extended (Unicode block)
Unified Canadian Aboriginal Syllabics Extended-A (Unicode block)
Mongolian
Mongolian (Unicode block)
Mongolian Supplement (Unicode block)
Unicode symbols
General Punctuation
112 code points; 111 assigned characters; 24 in the MES-2 subset.
Superscripts and Subscripts
Currency Symbols
Letterlike Symbols
Number Forms
Arrows
Miscellaneous Symbols and Arrows (Unicode block)
Supplemental Arrows-A (Unicode block)
Supplemental Arrows-B (Unicode block)
Supplemental Arrows-C (Unicode block)
Mathematical symbols
Supplemental Mathematical Operators (Unicode block)
Miscellaneous Mathematical Symbols-A (Unicode block)
Miscellaneous Mathematical Symbols-B (Unicode block)
Mathematical Alphanumeric Symbols: Mathematical Alphanumeric Symbols (Unicode block)
Miscellaneous Technical
Optical Character Recognition
Optical Character Recognition (Unicode block)
Enclosed Alphanumerics
Box Drawing
Block Elements
Geometric Shapes
Miscellaneous Symbols
Symbols for Legacy Computing
Dingbats
Dingbat
East Asian writing systems
Bopomofo (Unicode block)
Bopomofo Extended (Unicode block)
CJK Unified Ideographs
CJK Radicals Supplement (Unicode block)
CJK Strokes (Unicode block)
CJK Symbols and Punctuation (Unicode block)
Counting Rod Numerals (Unicode block)
Enclosed Alphanumeric Supplement (Unicode block)
Enclosed CJK Letters and Months (Unicode block)
Enclosed Ideographic Supplement (Unicode block)
Halfwidth and Fullwidth Forms (Unicode block)
Hangul in Unicode
Hiragana (Unicode block)
Ideographic Description Characters (Unicode block)
Ideographic Symbols and Punctuation (Unicode block)
Kanbun (Unicode block)
Kangxi Radicals (Unicode block)
Katakana (Unicode block)
Kana Extended-A (Unicode block)
Kana Extended-B (Unicode block)
Kana Supplement (Unicode block)
Katakana Phonetic Extensions (Unicode block)
Khitan Small Script (Unicode block)
Lisu (Unicode block)
Lisu Supplement (Unicode block)
Miao (Unicode block)
Modifier Tone Letters (Unicode block)
Nushu (Unicode block)
Nyiakeng Puachue Hmong (Unicode block)
Small Form Variants (Unicode block)
Small Kana Extension (Unicode block)
Tai Xuan Jing Symbols (Unicode block)
Tangut (Unicode block)
Tangut Components (Unicode block)
Tangut Supplement (Unicode block)
Vertical Forms (Unicode block)
Wancho (Unicode block)
Yi Syllables (Unicode block)
Yi Radicals (Unicode block)
Yijing Hexagram Symbols (Unicode block)
Southeast Asian writing systems
Hanifi Rohingya (Unicode block)
Kayah Li (Unicode block)
Pahawh Hmong (Unicode block)
Pau Cin Hau (Unicode block)
Meetei Mayek (Unicode block)
Alphabetic Presentation Forms
Ancient and historic scripts
Aegean Numbers (Unicode block)
Anatolian Hieroglyphs (Unicode block)
Ancient Greek Numbers (Unicode block)
Ancient Symbols (Unicode block)
Avestan (Unicode block)
Carian (Unicode block)
Caucasian Albanian (Unicode block)
Chorasmian (Unicode block)
Cuneiform (Unicode block)
Cuneiform Numbers and Punctuation (Unicode block)
Cypriot Syllabary (Unicode block)
Cypro-Minoan (Unicode block)
Early Dynastic Cuneiform (Unicode block)
Egyptian Hieroglyph Format Controls (Unicode block)
Egyptian Hieroglyphs (Unicode block)
Elbasan (Unicode block)
Elymaic (Unicode block)
Glagolitic (Unicode block)
Glagolitic Supplement (Unicode block)
Gothic (Unicode block)
Hatran (Unicode block)
Imperial Aramaic (Unicode block)
Indic Siyaq Numbers (Unicode block)
Inscriptional Pahlavi (Unicode block)
Inscriptional Parthian (Unicode block)
Kharoshthi (Unicode block)
Linear A (Unicode block)
Linear B Ideograms (Unicode block)
Linear B Syllabary (Unicode block)
Lycian (Unicode block)
Lydian (Unicode block)
Manichaean (Unicode block)
Mayan Numerals (Unicode block)
Meroitic Cursive (Unicode block)
Meroitic Hieroglyphs (Unicode block)
Nabataean (Unicode block)
Nandinagari (Unicode block)
Ogham (Unicode block)
Old Hungarian (Unicode block)
Old Italic (Unicode block)
Old North Arabian (Unicode block)
Old Permic (Unicode block)
Old Persian (Unicode block)
Old Sogdian (Unicode block)
Old South Arabian (Unicode block)
Old Turkic (Unicode block)
Old Uyghur (Unicode block)
Palmyrene (Unicode block)
Phaistos Disc (Unicode block)
Phoenician (Unicode block)
Psalter Pahlavi (Unicode block)
Runic (Unicode block)
Sogdian (Unicode block)
Soyombo (Unicode block)
Ugaritic (Unicode block)
Vithkuqi (Unicode block)
Yezidi (Unicode block)
Zanabazar Square (Unicode block)
Shavian
Shavian (Unicode block)
Notational systems
Braille
Braille Patterns (Unicode block)
Music
Western Musical Symbols (Unicode block)
Byzantine Musical Symbols (Unicode block)
Ancient Greek Musical Notation (Unicode block)
Znamenny Musical Notation (Unicode block)
Shorthand
Duployan (Unicode block)
Shorthand Format Controls (Unicode block)
Sutton SignWriting
Sutton SignWriting (Unicode block)
Emoji
Emoji in Unicode
Alchemical symbols
Alchemical Symbols (Unicode block)
Game symbols
Chess Symbols (Unicode block)
Domino Tiles (Unicode block)
Mahjong Tiles (Unicode block)
Playing cards
Special areas and format characters
Control Pictures (Unicode block)
Private Use Areas
Private Use Area (Unicode block)
Supplementary Private Use Area-A (Unicode block)
Supplementary Private Use Area-B (Unicode block)
Specials (Unicode block)
Surrogates
Low Surrogates (Unicode block)
High Surrogates (Unicode block)
High Private Use Surrogates (Unicode block)
Tags (Unicode block)
Variation Selectors
Variation Selectors (Unicode block)
Variation Selectors Supplement (Unicode block)
See also
Comparison of Unicode encodings
Free software Unicode typefaces
GNU Unifont
List of Unicode radicals
List of Unicode fonts
List of typefaces
Typographic unit
Unicode Consortium
Unicode fallback font
Unicode typeface
Universal Character Set characters
References
Unicode 7.0 Character Code Charts, Unicode, Inc.
CWA 13873:2000 – Multilingual European Subsets in ISO/IEC 10646-1 CEN Workshop Agreement 13873
Multilingual European Character Set 2 (MES-2) Rationale, Markus Kuhn, 1998
External links
Official web site of the Unicode Consortium (English)
Unicode
Unicode |
14873058 | https://en.wikipedia.org/wiki/Carlos%20Morimoto | Carlos Morimoto | Carlos Eduardo Morimoto is the author of Kurumin, a Linux distribution based on Knoppix. It was the most popular Linux distribution in Brazil. Morimoto almost ended the project because of many user complaints in his forums and technical problems, largely caused by Kurumin being based on Debian unstable, which is not aimed at inexperienced users, even though such users made up the majority of Kurumin's user base. Instead he improved the distribution, and it became a success. In 2008, Morimoto abandoned the project for personal reasons.
A new group of developers attempted to continue the project, now called "Kurumin NG" and based on Kubuntu, with Leandro Santos, the developer of Kalango Linux (another distribution based on Kurumin), joining efforts with Morimoto to develop the new version of Kurumin. But this new effort was short-lived and was shut down some time later.
He is also the author of several books about hardware and software, especially Linux, all published in Brazil, his home country. He sells books and Linux CDs. Much (if not all) of his work is free content.
After abandoning Kurumin, Morimoto gradually left his other projects: the "Guia do Hardware" website, then the books, and finally all his material belongings, choosing to leave worldly affairs and devote himself full-time to his Hare Krishna faith. Morimoto changed his name to "Caitanya Chandra Dasa" and lives near Porto Alegre, Rio Grande do Sul.
Published books
Upgrade e Manutenção de Hardware (; paperback; 2001)
Entendendo e Dominando o Linux (; paperback; 2004)
References
External links
Guia do Hardware
Morimoto, Carlos Eduardo
Living people
Year of birth missing (living people) |
34614961 | https://en.wikipedia.org/wiki/Oric | Oric | Oric was the name used by UK-based Tangerine Computer Systems for a series of 6502-based home computers sold in the 1980s, primarily in Europe.
With the success of the ZX Spectrum from Sinclair Research, Tangerine's backers suggested a home computer and Tangerine formed Oric Products International Ltd to develop the Oric-1. The computer was introduced in 1982. During 1983, approximately 160,000 Oric-1 computers were sold in the UK, plus another 50,000 in France (where it was the year's top-selling machine). This resulted in Oric being acquired and given funding for a successor model, the 1984 Oric Atmos.
Oric was bought by Eureka, which produced the less successful Oric Telestrat (1986). Oric was dissolved the year the Telestrat was released. Eastern European clones of Oric machines were produced into the 1990s.
Models
Oric-1
Based on a 1 MHz MOS Technology 6502 CPU, the Oric-1 came in 16 KB or 48 KB RAM variants for £129 and £169 respectively, matching the models available for the popular ZX Spectrum and undercutting the price of the 48 KB version of the Spectrum by a few pounds. The circuit design requires 8 memory chips, one chip per data line of the CPU. Due to the sizing of readily available memory chips, the 48 KB model has 8 × 8 KB (64 Kbit) chips, making a total of 64 KB. As released, only 48 KB is available to the user, with the top 16 KB of memory overlaid by the BASIC ROM.
The optional disc drive unit contains some additional hardware that allows it to enable or disable the ROM, effectively adding 16 KB of RAM to the machine. This additional memory is used by the system to store the Oric DOS software. Both Oric-1 versions have a 16 KB ROM containing the operating system and a modified BASIC interpreter.
The Oric-1 has a sound chip, the programmable General Instrument AY-3-8910.
Two graphics modes are handled by a semi-custom ASIC (ULA) which also manages the interface between the processor and memory. The two modes are a "LORES" (low resolution) text mode (though the character set can be redefined to produce graphics) with 28 rows of 40 characters and a "HIRES" (high resolution) mode with 200 rows of 240 pixels above three lines of text. Like the Spectrum, the Oric-1 suffers from attribute clash, albeit to a much lesser degree in HIRES mode, since 2 different colours can be defined for each 6×1 block of pixels.
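One way to see why the clash is milder is to count how many independently colourable attribute cells each scheme offers. The comparison below is a toy Python illustration under assumed cell sizes (6×1 for the Oric's HIRES mode, 8×8 for the Spectrum) and ignores the fact that the Oric's serial attributes each consume a screen position:

```python
# Toy comparison of attribute-clash granularity (an illustration, not
# emulator code): a smaller attribute cell means colour changes can be
# made more often, so fewer pixels are forced to share the wrong colour.

def cells(width, height, cell_w, cell_h):
    """Number of attribute cells, each carrying its own ink/paper pair."""
    return (width // cell_w) * (height // cell_h)

oric_hires = cells(240, 200, 6, 1)    # 40 cells across, every pixel row
spectrum   = cells(256, 192, 8, 8)    # the classic 32x24 attribute grid

assert oric_hires == 8000
assert spectrum == 768
```

Because the Oric's cell is only one pixel row tall, colours can change on every scan line, which is why clash is far less visible than on the Spectrum's 8×8 grid.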
The system has a built-in television RF modulator as well as RGB output. A standard audio tape recorder can be used for external storage. There is a Centronics compatible printer interface.
Technical details
CPU: MOS 6502 @ 1 MHz
Operating system: Tangerine/Microsoft Extended Basic v1.0
ROM: 16 KB
RAM: 16 KB / 48 KB
Sound: AY-3-8912
Graphics: 40×28 text characters/ 240×200 pixels, 8 colours
Storage: tape recorder, 300 and 2400 baud
Input: integrated keyboard
Connectivity: Tape recorder I/O, Centronics compatible printer port, RGB video out, RF out, expansion port
Voltage: 9 V
Power consumption: Max 600 milliamps
Oric Atmos
In late 1983, the cost of continued development forced Oric to seek external funding, eventually leading to a sale to Edenspring Investments PLC. The Edenspring money enabled Oric International to release the Oric Atmos, which added an improved keyboard and an updated V1.1 ROM to the Oric-1. The faulty tape error-checking routine, however, remained.
Soon after the Atmos was released, the modem, printer and 3-inch floppy disk drive originally promised for the Oric-1 were announced and released by the end of 1984. A short time after the release of the Atmos machine, a modification for the Oric-1 was issued and advertised in magazines and bulletin boards. This modification enabled the Oric-1 user to add a second ROM (containing the Oric Atmos system) to a spare ROM-socket on the Oric-1 circuit board. Then, using a switch, the users could then switch between the new Oric Atmos ROM and the original Oric-1 ROM. This was desirable since the updated ROM of the Atmos contained breaking changes for some games which relied on certain behaviours or memory addresses within the ROM. This led to tape based software often containing a 1.1 ROM/Atmos version of the software on one side of the cassette, with a 1.0 ROM/Oric-1 version on the other. Earlier titles from publishers that no longer existed or had stopped producing software for the Oric were unlikely to be updated.
Oric Stratos and Oric Telestrat
Although the Oric Atmos had not turned around Oric International's fortunes, in February 1985, they announced several models including the Oric Stratos/IQ164. Despite their backers putting them into receivership the following day, Oric was bought by French company Eureka, which continued to produce the Stratos, followed by the Oric Telestrat in late 1986.
The Stratos and Telestrat increased the RAM to 64 KB and added more ports, but kept the same processor and graphics and sound hardware as the Oric-1 and Atmos.
The Telestrat is a telecommunications-oriented machine. It comes with a disk drive as standard, and only connects to an RGB monitor / TV. The machine is backward compatible with the Oric-1 and Oric Atmos by using a cartridge. Most of the software is in French, including Hyper-BASIC's error messages. Up to 6000 units were sold in France.
In December 1987, after announcing the Telestrat 2, Oric International went into receivership for the second and final time.
Technical specification
Keyboard
The keyboard has 57 moving keys with tactile feedback. It is capable of full upper and lower case with a correctly positioned space bar. It has a full typewriter pitch. The key layout is a standard QWERTY with ESC, CTRL, RETURN and additional cursor control keys. All keys have auto repeat.
Display
The display adapter will drive a PAL UHF colour or black and white television receiver on approximately Channel 36. RGB output is also provided on a 5 pin DIN 41524 socket.
Character mode
In character mode the Oric displays 28 lines of 40 characters, producing a display very similar to Teletext. The character set is standard ASCII, enhanced by the addition of 80 user-definable characters. ASCII characters may also be re-defined, as these are downloaded into RAM on power-up. Serial attributes are used to control display features, as in Teletext, and take up one character position. All remaining characters on that line are affected by the serial attribute until either the line ends or another serial attribute is encountered.
Display features are:
Select background colour (paper) from one of eight.
Select foreground colour (ink) from one of eight.
Flash characters on and off approximately twice a second.
Produce double height characters (even line top, odd line bottom).
Switch over to user-definable character set. This feature is used to produce Teletext-style colour graphics without the need for additional RAM.
Available colours are black, blue, red, magenta, green, cyan, yellow, and white.
Each character position also has a parallel attribute, which may be operated on a character by character basis, to produce video inversion. The display has a fixed black border.
Screen graphics mode
The graphics mode consists of 200 pixels vertically by 240 pixels horizontally plus 3 lines of 40 characters (the same as character mode) at the bottom of the screen to display system information and to act as a window on the user program while still viewing the graphics display. It can also be used to input direct commands for graphics and see the effect instantly without having to switch modes. The graphics display operates with serial attributes in the same way as characters, except that the display is now considered as 200 lines by 40 graphics cells. Each graphic cell is therefore very flexible by having 8 foreground and 8 background colours and flashing patterns. The video invert parallel attribute is also usable in this mode. ASCII characters may be painted over the graphics area, thus enabling the free mixing of graphics and text.
Sound
The Oric has an internal loudspeaker and amplifier and can also be connected to external amplifiers via the 7 Pin DIN 45329 shared with the cassette interface. A General Instruments AY-3-8912 provides 3 channel sound.
For BASIC programs, four keywords generate pre-made sounds: PING, SHOOT, EXPLODE, and ZAP. The commands SOUND, MUSIC, and PLAY produce a broader range of sounds.
Cassette interface
The cassette recorder connects via a 7 Pin DIN 45329 socket shared with the external sound output. The interface includes support for tape motor control. Recording speeds offered as standard are 300 baud or 2400 baud. A tone leader allows tape recorders' automatic level control to stabilise before the filename, followed by the actual data with parity; finally, checksums are recorded to allow overall verification of the recording.
The circuit was designed using a Schmitt trigger to remove noise and make input more reliable. The system allows for verification of stored information against the tape copy, to ensure integrity before the information is flushed from memory. There was, however, a bug in the error-checking of recorded programs, often causing user-created programs to fail when loaded back in; this bug persists in the updated ROMs for the Oric Atmos.
Available BASIC commands are CLOAD and CSAVE (for programs and memory dumps), plus STORE and RECALL (for arrays of string, integer or real, added with the Oric Atmos ROMs). Filenames up to 16 characters can be specified. Options on the commands exist for slow speed, verification, autorunning of programs, or specification of start and ending addresses for dumping memory.
Expansion port
The expansion port allows full access to the CPU's data, address, and control lines. This allows connection of add-ons specifically designed for the Oric, including user-designed hardware. The range of lines exposed allows external ROM and RAM expansion, thus allowing for ROM cartridges or for expansion devices to internally include the required operating software on ROM.
Printer port
The printer port is compatible with the then-standard Centronics parallel interface, allowing connection of many different types of printer, from low quality (e.g. low-resolution thermal printers) to high quality, such as fixed-font daisy wheel printers or laser printers, though the latter were uncommon and expensive during the period of commercial availability of the Oric range. Most contemporary computer printers could produce text output without requiring specific drivers, and often followed de facto standards for simple graphics. More advanced use of the printer would have required a specific driver which, given the proliferation of different home computers and standards of the time, may or may not have been available.
Peripherals
Colour plotter
Tangerine's MCP-40 is a plotter with mechanics by Alps Electric. The same mechanism was also used as the basis for similar low-cost plotters produced by various home computer manufacturers around that time. These included the Atari 1020, the Commodore 1520, the Tandy/Radio Shack CGP-115, the Texas Instruments HX-1000, the Mattel Aquarius 4615, and probably also the Sharp MZ-1P16 (for MZ-800 series).
Prestel adaptor
The Prestel adaptor produced by Eureka (Informatika) was the first adaptor produced for the Oric-1 and Oric Atmos computers. However this adaptor was only furnished with very limited software.
Clones
The Atmos was licensed in Yugoslavia and sold as the Nova 64. The clones were Atmos based, the only difference being the logo indicating ORIC NOVA 64 instead of Oric Atmos 48K. The name refers to the installed 64 KB of RAM, which the Atmos also had; in both machines, 16 KB of it is masked by the ROM at startup, leaving 48 KB available to the BASIC language.
In Bulgaria, the Atmos clone was named Pravetz 8D and produced between 1985 and 1991.
The Pravetz is entirely hardware and software compatible with the Oric Atmos. The biggest change on the hardware side is the larger white case, which hosts a comfortable mechanical keyboard and an integrated power supply. The BASIC ROM has been patched to host both a Western European and a Cyrillic alphabet: the upper case character set produces Western European characters, while lower case gives Cyrillic letters. In order to ease the use of the two alphabets, the Pravetz 8D is fitted with a Caps Lock key. A Disk II compatible interface and a custom DOS, called DOS-8D, were created in 1987–88 by Borislav Zahariev.
See also
:Category:Oric games
References
External links
Oric FAQ
Oric: The Story so Far
Oric Atmos review March 1984 Your Computer
Microtan 65 Oric-1 Oric Atmos at the Old Computers Museum
Oric.org community portal (French)
Early microcomputers
6502-based home computers
Home computers
Tangerine Computer Systems |
4232205 | https://en.wikipedia.org/wiki/Andrew%20Hacker | Andrew Hacker | Andrew Hacker (born 1929) is an American political scientist and public intellectual.
He is currently Professor Emeritus in the Department of Political Science at Queens College in New York. He did his undergraduate work at Amherst College, followed by graduate work at Oxford University, University of Michigan, and Princeton University, where he received his PhD degree. Hacker taught at Cornell before taking his current position at Queens. He is the son of Louis M. Hacker.
His most recent book, Higher Education? was written in collaboration with Claudia Dreifus, his wife, a New York Times science writer and Columbia University professor. Professor Hacker is a frequent contributor to the New York Review of Books. In his articles he has questioned whether mathematics is necessary, claiming "Making mathematics mandatory prevents us from discovering and developing young talent."
References
Bibliography
Hacker, A., (1961) Political Theory: Philosophy, Ideology, Science, The Macmillan Company
Hacker, A., (1992) Two Nations: Black and White, Separate, Hostile, Unequal, Scribner.
Hacker, A., (1998) Money: Who Has How Much and Why, Simon and Schuster.
Hacker, A., (2003) Mismatch: The Growing Gulf Between Women and Men. Scribner.
Hacker, A. and Claudia Dreifus, (2010) Higher Education?: How Colleges Are Wasting Our Money and Failing Our Kids - and What We Can Do About It Holt, Henry & Company, Inc.
Hacker, A., (2012) "Is Algebra Necessary?", New York Times, Published July 28, 2012. https://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html
Hacker, A., (2016) "The Math Myth: And Other STEM Delusions," The New Press.
External links
"The Math Myth And Other STEM Delusions"—an argument that requiring all students to master a full menu of mathematics is causing more harm than good. http://themathmyth.net/
Author page at The New York Review of Books
Blurb written by the author for "Mismatch" on the Simon and Schuster website
Article about Claudia Dreifus at the East Hampton Star website
1929 births
Living people
American political scientists
Cornell University faculty
Amherst College alumni
Horace Mann School alumni
Queens College, City University of New York faculty
University of Michigan alumni |
14873058 | https://en.wikipedia.org/wiki/Context%20MBA | Context MBA | Context MBA was the first integrated software application for personal computers, providing five functions in one program: spreadsheet, database, charting, word processing, and communication software. It was first released in 1981 by Context Management Systems for the Apple III computer, but was later ported to the Hewlett Packard 9000 / 200 series computers running Rocky Mountain BASIC and IBM PC platform as well.
Since the program was written in UCSD Pascal, it was easy to port to different platforms, but did so at the expense of performance, which was critical at the time of its release, given the limited amount of memory, processing power, and disk I/O available on a desktop computer. It was soon overtaken by Lotus 1-2-3, a more limited integrated software package, but one written in assembly language, yielding much better performance.
Reception
PC Magazine stated in June 1983 that Context MBA "still runs too slowly for a person accustomed to the speed of a microcomputer". It found the spreadsheet the best application of the suite, describing the database as "amazingly slow" and the text editor as "clumsy and confusing". The review concluded that Context MBA "fails in two areas ... UCSD p-System simply does not produce good code", and a confusing, heavily modal user interface.
References
DOS software
Spreadsheet software
History of software |
83603 | https://en.wikipedia.org/wiki/Capys | Capys | In Roman and Greek mythology, Capys (; Ancient Greek: Κάπυς) was a name attributed to three individuals:
Capys, king of Dardania.
Capys, the Trojan who warned not to bring the Trojan horse into the city.
Capys, mythological king of Alba Longa and descendant of Aeneas. Said to have reigned from 963 to 935 BC.
According to Roman sources, in the Etruscan language the word capys meant "hawk" or "falcon" (or possibly "eagle" or "vulture").
Notes
References
Dionysius of Halicarnassus, Roman Antiquities. English translation by Earnest Cary in the Loeb Classical Library, 7 volumes. Harvard University Press, 1937–1950. Online version at Bill Thayer's Web Site
Dionysius of Halicarnassus, Antiquitatum Romanarum quae supersunt, Vol I-IV. . Karl Jacoby. In Aedibus B.G. Teubneri. Leipzig. 1885. Greek text available at the Perseus Digital Library.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library.
Trojans
Characters in the Aeneid
Kings of Alba Longa
Kings in Greek mythology
Characters in Book VI of the Aeneid
Characters in Greek mythology
Capua (ancient city) |
4589124 | https://en.wikipedia.org/wiki/Dan%20Fylstra | Dan Fylstra | Dan Fylstra is a pioneer of the software products industry.
A graduate of the Massachusetts Institute of Technology, in 1975 he was a founding associate editor of Byte magazine. In 1978 he co-founded Personal Software, and that year reviewed the Commodore PET 2001 and TRS-80 Model I for Byte while studying for an MBA at the Harvard Business School, having ordered each almost immediately after release. Personal Software became the distributor of a new program called VisiCalc, the first-ever computer spreadsheet. In his marketing efforts Fylstra ran teaser ads in Byte that asked, considering electronic spreadsheets were an entirely new product category, "How did you ever do without it?"
The VisiCalc-Apple connection suggested the hypothesis of the "killer app"—or the "software tail that wags the hardware dog." Once VisiCalc caught on, people came into computer stores asking for VisiCalc and then also the computer (the Apple II) they would need to run the program. VisiCalc sales exceeded 700,000 units by 1983.
Fylstra's software products company, later called VisiCorp, was the #1 personal-computer software publisher in 1981 with $20 million in revenues as well as in 1982 with $35 million (exceeding Microsoft which became the largest such firm in 1983).
Fylstra is the former president of Sierra Sciences, and is currently president of software vendor Frontline Systems. He joined the Libertarian Party in 1998.
References
External links
Opening Pandora's Box: An Open Letter about the Politicization of the PC Industry by Dan Fylstra.
Photo of Dan Fylstra with VisiOn and VisiCalc – Computer History Museum
Year of birth missing
Living people
Members of the Libertarian Party (United States)
Massachusetts Institute of Technology alumni
Harvard Business School alumni |
1249599 | https://en.wikipedia.org/wiki/X11vnc | X11vnc | x11vnc is a Virtual Network Computing (VNC) server program. It allows remote access from a remote client to a computer hosting an X Window session and the x11vnc software, continuously polling the X server's frame buffer for changes. This allows the user to control their X11 desktop (KDE, GNOME, Xfce, etc.) from a remote computer either on the user's own network, or from over the Internet as if the user were sitting in front of it. x11vnc can also poll non-X11 frame buffer devices, such as webcams or TV tuner cards, iPAQ, Neuros OSD, the Linux console, and the Mac OS X graphics display.
x11vnc is part of the LibVNCServer project and is free software available under the GNU General Public License.
x11vnc was written by Karl Runge.
x11vnc does not create an extra display (or X desktop) for remote control. Instead, it uses the existing X11 display shown on the monitor of a Unix-like computer in real time, unlike other Linux alternatives such as TightVNC Server. However, it is possible to use Xvnc or Xvfb to create a 'virtual' extra display and have x11vnc connect to it, enabling X11 access to headless servers.
x11vnc has security features that allows the user to set an access password or to use Unix usernames and passwords. It also has options for connection via a secure SSL link. An SSL Java VNC viewer applet is provided that enables secure connections from a web browser. The VeNCrypt SSL/TLS VNC security type is also supported.
Many of the UltraVNC extensions to VNC are supported by x11vnc, including file transfer.
Polling algorithm
x11vnc keeps a copy of the X server's frame buffer in RAM. The X11 programming interface XShmGetImage is used to retrieve the frame buffer pixel data. x11vnc compares the X server's frame buffer against its copy to see which pixel regions have changed (and hence need to be sent to the VNC viewers.) Reading pixel data from the physical frame buffer can be much slower than writing to it (because graphics devices are not optimized for reading) and so a sequential pixel by pixel check would often be too slow.
To improve the situation, x11vnc reads in full rows of pixels separated by 32 pixels vertically. Once it gets to the bottom of the screen it starts again near the top with a slightly different offset. After 32 passes like this it has covered the entire screen. This method enables x11vnc to detect changes on the screen roughly 32 times more quickly than a sequential check would (unless the changes are very small, say only 1 pixel tall.) If the X11 DAMAGE extension is present, x11vnc uses it to provide hints where to focus its polling, thereby finding changes even more quickly and also lowering the system load.
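This scanning pattern is easy to model. The sketch below is a simplified Python illustration, not x11vnc's actual C code, and it assumes the per-pass offset simply advances by one each time (the real server visits offsets in a scattered order):

```python
# Toy model of x11vnc's interleaved polling: each pass compares only the
# rows that lie STEP pixels apart, starting at a per-pass offset, so a
# full-screen check is spread across STEP passes.

HEIGHT = 1024   # hypothetical screen height in pixels
STEP = 32       # vertical spacing between polled rows

def rows_polled(pass_number):
    """Rows compared against the cached frame buffer copy on one pass."""
    offset = pass_number % STEP
    return list(range(offset, HEIGHT, STEP))

# One pass touches only 1/STEP of the rows...
assert len(rows_polled(0)) == HEIGHT // STEP

# ...but STEP consecutive passes cover every row exactly once.
covered = set()
for p in range(STEP):
    covered.update(rows_polled(p))
assert covered == set(range(HEIGHT))
```

Any screen change taller than one pixel crosses one of the polled rows within a few passes, which is why this scheme detects typical updates roughly STEP times faster than a strict top-to-bottom scan while reading only a fraction of the frame buffer per pass.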
Input injection
When x11vnc receives user input events (keystrokes, pointer motion, and pointer button clicks) from a VNC viewer, it must inject them synthetically into the X server. The X11 programming interfaces XTestFakeKeyEvent, XTestFakeMotionEvent, and XTestFakeButtonEvent of the XTEST extension are used to achieve this.
For non-X11 managed devices (such as the Mac OS X graphics display) different programming interfaces must be used. x11vnc also provides an interface where the user can supply their own input injection program.
Interesting uses
Often special-purpose systems are built using the X Window System to manage the graphical display. x11vnc can be used to export the system's display for remote VNC access. This enables remote monitoring, control, and troubleshooting of the special-purpose system. Sometimes this saves sending a technician to a remote site or allows users to control equipment from their workstation or laptop. x11vnc is known to have been run on the following types of systems: Electron microscope, MRI and Radiology image analysis system, Power plant and Oil platform management consoles, Materials distribution control, Ship self-defense system testing, NMR systems, Silicon wafer analysis microscope, and Theater and concert lighting control. x11vnc is used to export the X11 displays in embedded systems such as Linux-based PDAs and Home theater PCs.
If x11vnc cannot be run on the special-purpose system, sometimes it can be run on a nearby computer and poll the X server frame buffer over the network. This is how proprietary X terminal devices can be accessed via x11vnc.
Xvnc emulation
Although x11vnc's primary use is for X servers associated with physical graphics hardware, it can also attach to virtual X servers (whose frame buffers exist in RAM only) such as Xvfb or a Sun Ray session. x11vnc has options (-create and -svc) to start Xvfb automatically, possibly as the Unix user that logged in. The interactive response of x11vnc and Xvfb may not be as fast as Xvnc, however this mode enables features that Xvnc does not have, such as SSL encryption and Unix usernames and passwords.
Client-side caching
The RFB (VNC) protocol is odd when compared to other network graphics protocols, such as X11 and RDP, in that there is no provision for viewer-side caching of pixel data. While this makes the client easier to implement, there is a price to pay in terms of interactive response. For example, every re-exposure of a window or background region needs to have its (compressed) pixel data resent over the network. This effect is particularly noticeable for windows with complex or photo regions (such as a web browser window) that gets iconified and deiconified or re-exposed often.
x11vnc has an experimental and somewhat brute-force implementation of client-side caching. It is enabled via the -ncache option. When creating the RFB frame buffer in this mode, x11vnc allocates a very large scratch region below the top portion used for the actual (on-screen) pixel data. x11vnc can then use the RFB CopyRect command to instruct the viewer to move rectangles of pixel data into and out of the scratch region. These moves are done locally on the viewer side. In this way x11vnc can manage the scratch region to store and retrieve pixel data without having to resend it over the network.
x11vnc's client-side caching mode can give noticeable interactive response improvements for many activities. Since it uses the existing RFB CopyRect command, the scheme will work with any (i.e. unmodified) VNC viewer. There are some disadvantages, however. The first is that it consumes a large amount of memory. For good performance a scratch region 10 to 20 times larger than the actual screen should be used. So instead of using 5 MB for a 1280x1024 truecolor frame buffer, closer to 100 MB will be used (on both the VNC client and server sides). This is not so much of an issue on modern computers, but would not be possible on a low-memory device. Second, the VNC viewer may treat the scratch region in ways that confuse the user, for example displaying it to the user or automatically panning down into it if the mouse reaches the bottom of the real screen. The Unix VNC viewer in SSVNC automatically hides the scratch region. Finally, x11vnc's heuristics for caching and reusing window pixel data are not perfect and can lead to unexpected flashing of a window's contents and other undesired effects.
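The memory figures above follow from simple arithmetic. The sketch below models the scheme in Python; the function name and the fixed 4-bytes-per-pixel (truecolor) depth are illustrative assumptions, not x11vnc's exact accounting:

```python
def ncache_memory(width, height, bytes_per_pixel=4, ncache_factor=10):
    # The -ncache scratch region is allocated below the visible screen in
    # the same RFB frame buffer, ncache_factor times the screen's size.
    screen = width * height * bytes_per_pixel   # visible pixel data
    scratch = screen * ncache_factor            # off-screen cache area
    return screen, screen + scratch

screen, total = ncache_memory(1280, 1024, ncache_factor=20)
print(f"visible: {screen / 2**20:.0f} MiB, with cache: {total / 2**20:.0f} MiB")
# prints "visible: 5 MiB, with cache: 105 MiB"
```

At a factor of 10 the total is 55 MiB, so factors of 10 to 20 land in the "closer to 100 MB" range quoted above, consumed on both the client and server sides.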
See also
KRDC
X11
References
External links
x11vnc: a VNC server for real X displays (old project home page)
Remote desktop software for Linux
Virtual Network Computing
X Window programs |
34532407 | https://en.wikipedia.org/wiki/Dasient | Dasient | Dasient was an internet security company headquartered in Sunnyvale, California. It was founded in 2008 and launched its first product in June 2009.
Dasient was acquired by Twitter in January 2012.
Products
Dasient provides cloud-based anti-malware services for protecting businesses against web-based malware and malvertising.
Dasient's Web Malware Analysis Platform uses a dynamic, behavior-based engine - based on sophisticated algorithms and anomaly detection technology - to defend against the latest attacks using up-to-date intelligence about malware. This platform includes a system of highly instrumented virtual machines to simulate what actual users would experience when visiting a particular web page or viewing a specific online ad.
History
The company was founded by former Google personnel Neil Daswani and Shariq Rizvi, and former McKinsey strategy consultant Ameet Ranadive.
Dasient was named by Network World as one of ten startups to watch in 2010.
The company received seed funding from Mike Maples, ex-Verisign CEO Stratton Sclavos, and ex-3Com/Palm chairman Eric Benhamou. In February 2011, it was announced that Google Ventures invested in Dasient.
Dasient was acquired by Twitter in January 2012.
References
External links
2008 establishments in California
Companies based in Sunnyvale, California
Software companies established in 2008
Computer security companies
GV companies
Twitter acquisitions
2012 mergers and acquisitions |
19921564 | https://en.wikipedia.org/wiki/Menon | Menon | Menon may refer to:
People
Menon (subcaste), an honorary title accorded to some members of the Nair community of Kerala, southern India; used as a surname by many holders of the title
Surnamed
Menon (surname), the surname of several people
Given named
Menon (cookbook author), pseudonym of an unidentified 18th-century French cookbook author
Menon (Phidias), a workman with Phidias
Menon (Trojan), a Trojan soldier in the Trojan War
Menon I of Pharsalus, assisted Cimon at Battle of Eion
Menon II of Pharsalus, led troops assisting Athens in the Peloponnesian War
Menon III of Pharsalus or Meno, a Thessalian general and character in Plato's Meno dialogue
Menon IV of Pharsalus (born ?), 4th century Greek general
Menon, 4th century BC Peripatetic writer on medicine: see Anonymus Londinensis
Múnón, also called Mennón, a Trojan chieftain or king mentioned by the twelfth-century Icelandic writer Snorri Sturluson; the name may refer to Menon, Memnon, or another person.
Other uses
Menon (gastropod), a genus of gastropods within the family Eulimidae
Meno, a dialogue by Plato, sometimes also referred to as Menon
Menon's caecilian
See also
Mennon
Menos (disambiguation)
Meno (disambiguation) |
56450389 | https://en.wikipedia.org/wiki/Snake%20%281808%20ship%29 | Snake (1808 ship) | Snake was a prize that came into British hands in 1808. Her first owner employed her as a privateer, but in 1810 sold her. Thereafter she sailed between London or Plymouth and the Cape of Good Hope (CGH), or from Falmouth in the packet trade. She may have spent her last years sailing between London and South America. She was last listed in 1824.
Origins
Between 1808 and 1814 both Lloyd's Register and the Register of Shipping give Snake's origin as a Spanish prize. However, in its issue for 1814, Lloyd's Register showed a change of origin from Spain to Île de France. The Register of Shipping followed suit in 1816. Neither register published a volume in 1817. In 1818 and 1819 the Register of Shipping showed two vessels named Snake, one a Spanish prize and with other data from its pre-1816 listings, and the other a vessel with origin Île de France, and data broadly consistent with that in Lloyd's Register. In its volume for 1820, the Register of Shipping showed only the vessel with origin Île de France.
Hackman, in his listing of vessels that either served the British East India Company (EIC), or after 1814 sailed to the East Indies under license from the EIC, used as a source a volume of Lloyd's Register from after 1814. He then jumped to the conclusion that the British had captured her during the 1810 British invasion of Isle de France. The information from the registers shows that this assumption is incorrect. Furthermore, on 15 February 1811, Lloyd's List reported the names and tons (bm) of the vessels taken at Port Louis after the invasion. Although some vessels are of roughly the correct tonnage, no vessel is a close fit.
Career
Snake first entered online British records in 1808 when Captain Thomas Cuzens acquired a letter of marque on 29 February 1808. The table below broadly outlines her subsequent career; it draws on both Lloyd's Register and the Register of Shipping, highlighting when either of the sources indicated a change from its previous information, or when the two sources differed. Snake also appears on occasion in Lloyd's List.
On 13 January 1809 Lloyd's List reported that the "Snake packet", which had sailed for America on 1 February, had put back into Fowey with three feet of water in her hold. It is not possible to say with a high degree of confidence that this news item refers to the Snake brig of this article.
On 3 November 1812, Lloyd's List reported that Snake, Burford, master, sailing from London to the Cape of Good Hope, had put into Plymouth. She was leaky and her cargo had had to be discharged.
On 23 April 1813, Lloyd's List reported that Snake, Burford, master had had to put into Bahia leaky and had had to discharge her cargo. She had been on a voyage to Île de France when she had become leaky.
On 20 May 1814 Lloyd's List reported that the "Snake Packet" had arrived from Surinam, and that on her way she had spoken several merchantmen.
On 3 September 1816 Lloyd's List reported that the "Snake Packet" had arrived at Falmouth having spoken to a number of merchantmen.
Citations and references
Citations
References
1808 ships
Captured ships
Privateer ships of the United Kingdom
Age of Sail merchant ships of England
Packet (sea transport)
Falmouth Packets |
49024 | https://en.wikipedia.org/wiki/Wolfram%20Mathematica | Wolfram Mathematica | Wolfram Mathematica is a software system with built-in libraries for several areas of technical computing that allow machine learning, statistics, symbolic computation, manipulating matrices, plotting functions and various types of data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other programming languages. It was conceived by Stephen Wolfram, and is developed by Wolfram Research of Champaign, Illinois. The Wolfram Language is the programming language used in Mathematica.
Notebook interface
Wolfram Mathematica (called Mathematica by some of its users) is split into two parts: the kernel and the front end. The kernel interprets expressions (Wolfram Language code) and returns result expressions, which can then be displayed by the front end.
The original front end, designed by Theodore Gray in 1988, consists of a notebook interface and allows the creation and editing of notebook documents that can contain code, plaintext, images, and graphics.
Alternatives to the Mathematica front end include Wolfram Workbench—an Eclipse-based integrated development environment (IDE) that was introduced in 2006. It provides project-based code development tools for Mathematica, including revision management, debugging, profiling, and testing.
There is also a plugin for IntelliJ IDEA-based IDEs to work with Wolfram Language code that in addition to syntax highlighting can analyze and auto-complete local variables and defined functions. The Mathematica Kernel also includes a command line front end.
Other interfaces include JMath, based on GNU Readline and WolframScript which runs self-contained Mathematica programs (with arguments) from the UNIX command line.
The file format of Mathematica files is .nb and .m for configuration files.
Mathematica is designed to be fully stable and backwards compatible with previous versions. Newer versions will have more concise and readable code but the goal is to have code from Mathematica 1 still run in Mathematica 13.
High-performance computing
Capabilities for high-performance computing were extended with the introduction of packed arrays in version 4 (1999) and sparse matrices (version 5, 2003), and by adopting the GNU Multi-Precision Library to evaluate high-precision arithmetic.
Version 5.2 (2005) added automatic multi-threading when computations are performed on multi-core computers. This release included CPU-specific optimized libraries. In addition Mathematica is supported by third party specialist acceleration hardware such as ClearSpeed.
In 2002, gridMathematica was introduced to allow user level parallel programming on heterogeneous clusters and multiprocessor systems and in 2008 parallel computing technology was included in all Mathematica licenses including support for grid technology such as Windows HPC Server 2008, Microsoft Compute Cluster Server and Sun Grid.
Support for CUDA and OpenCL GPU hardware was added in 2010.
Connections to other applications, programming languages, and services
Communication with other applications occurs through a protocol called Wolfram Symbolic Transfer Protocol (WSTP). It allows communication between the Wolfram Mathematica kernel and front end and provides a general interface between the kernel and other applications.
Wolfram Research freely distributes a developer kit for linking applications written in the programming language C to the Mathematica kernel through WSTP. Using J/Link, a Java program can ask Mathematica to perform computations. Similar functionality is achieved with .NET/Link, but with .NET programs instead of Java programs.
Other languages that connect to Mathematica include Haskell, AppleScript, Racket, Visual Basic, Python, and Clojure.
Mathematica supports the generation and execution of Modelica models for systems modeling and connects with Wolfram System Modeler.
Links are also available to many third-party software packages and APIs.
Mathematica can also capture real-time data from a variety of sources and can read and write to public blockchains (Bitcoin, Ethereum, and ARK).
It supports import and export of over 220 data, image, video, sound, computer-aided design (CAD), geographic information systems (GIS), document, and biomedical formats.
In 2019, support was added for compiling Wolfram Language code to LLVM.
Version 12.3 of the Wolfram Language added support for Arduino.
Computable data
Mathematica is also integrated with Wolfram Alpha, an online computational knowledge answer engine that provides additional data, some of which is kept updated in real time, for users who use Mathematica with an internet connection. Some of the data sets include astronomical, chemical, geopolitical, language, biomedical, airplane and weather data, in addition to mathematical data (such as knots and polyhedra).
Reception
BYTE in 1989 listed Mathematica as among the "Distinction" winners of the BYTE Awards, stating that it "is another breakthrough Macintosh application ... it could enable you to absorb the algebra and calculus that seemed impossible to comprehend from a textbook". Mathematica has been criticized for being closed source. Wolfram Research claims keeping Mathematica closed source is central to its business model and the continuity of the software.
See also
Comparison of multi-paradigm programming languages
Comparison of numerical-analysis software
Comparison of programming languages
Comparison of regular expression engines
Computational X
Dynamic programming language
Fourth-generation programming language
Functional programming
List of computer algebra systems
List of computer simulation software
List of graphing software
Literate programming
Mathematical markup language
Mathematical software
Wolfram Alpha, a web answer engine
Wolfram Language
Wolfram SystemModeler, a physical modeling and simulation tool which integrates with Mathematica
References
External links
Mathematica Documentation Center
A little bit of Mathematica history documenting the growth of code base and number of functions over time
1988 software
Astronomical databases
Computational notebook
Computer algebra system software for Linux
Computer algebra system software for MacOS
Computer algebra system software for Windows
Computer algebra systems
Cross-platform software
Data mining and machine learning software
Earth sciences graphics software
Econometrics software
Formula editors
Interactive geometry software
Mathematical optimization software
Mathematical software
Numerical analysis software for Linux
Numerical analysis software for MacOS
Numerical analysis software for Windows
Numerical programming languages
Numerical software
Physics software
Pi-related software
Plotting software
Proprietary commercial software for Linux
Proprietary cross-platform software
Proprietary software that uses Qt
Regression and curve fitting software
Simulation programming languages
Software that uses Qt
Statistical programming languages
Theorem proving software systems
Time series software
Wolfram Research
Graph drawing software |
149754 | https://en.wikipedia.org/wiki/Exidy%20Sorcerer | Exidy Sorcerer | The Sorcerer is a home computer system released in 1978 by the video game company Exidy. Based on the Zilog Z80 and the general layout of the emerging S-100 standard, the Sorcerer was comparatively advanced when released, especially when compared to the contemporary more commercially successful Commodore PET and TRS-80. The basic design was proposed by Paul Terrell, formerly of the Byte Shop, a pioneering computer store.
Lacking strong support from its parent company, which was focused on the successful arcade game market, the Sorcerer was sold primarily through international distributors and technology licensing agreements. Distribution agreements were signed with Dick Smith Electronics in Australia and Liveport in the UK, as well as with Compudata, whose agreement included a manufacturing license to build, market and distribute the Tulip line of computers in Europe. The system remains relatively unknown outside these markets.
The Exidy Data Systems division was sold to a Wall Street firm, Biotech, in 1983.
History
Origins
Paul Terrell entered the computer industry by starting the first personal computer store, the Byte Shop, in 1975. By 1977, the store had grown into a chain of 58 stores, and Terrell sold the chain to John Peers of Logical Machine Corporation.
With free time on his hands, Terrell started looking for new ventures. He wanted a consumer computer that was user-friendly beyond anything currently in the marketplace. At the time, the Commodore PET and Tandy TRS-80 offered the out-of-the-box experience he considered essential, yet required a costly computer monitor in spite of their inadequate graphics. The Apple II had superior graphics and color, but required some user assembly before being operational.
Terrell's objective was a machine offering the best of both worlds. Looking for a suitable name, he noted "Computers are like magic to people, so let's give them computer magic with the Sorcerer computer".
Exidy
Terrell was friends with H.R. "Pete" Kauffman and Howell Ivy of Exidy, a successful arcade game manufacturer. Terrell noted "Their graphic designs with a computer were so good they would take quarters out of my pocket." Howell, VP of Engineering, was a computer enthusiast and was interested in Terrell's concept. The wish list of design improvements over the existing designs went like this:
A keyboard computer that could plug into a television set like the Apple II and TRS-80 but also plug into a computer monitor to display high resolution graphics.
An easily programmable graphics character set like the Commodore PET, so aspiring programmers could write BASIC language programs that would impress their friends. The Sorcerer design was elegant, with the highest resolution in the marketplace, and innovative because the graphic characters could be reprogrammed to represent any kind of 8x8 character the programmer wanted and were not fixed like the graphic characters on the Commodore PET. Howell did such a good job in this area of the design that it went on to win a "Most Innovative" award at the Consumer Electronics Show after its introduction.
The fastest microcomputer chip with the most software compatibility in the marketplace. The Exidy Sorcerer used the Z80 Processor from Zilog Corp. (the same as the TRS-80 from Tandy, while the Apple II and Commodore PET used the 6502 processor from MOS Technology), which allowed it to run the same BASIC language software that was becoming one of the first standards in the personal computer industry, Microsoft BASIC. Exidy was one of the first companies to license software from Microsoft after Microsoft parted ways with MITS, Inc. and before it moved from New Mexico to Seattle.
Plug-In software cartridges so the computer user could immediately begin using the computer at power-on. The user would not have to load a program from tape or disk to start operating the computer. Exidy would provide three program cartridges under license: Microsoft 8K BASIC, Word Processor Cartridge (which was the "Killer App" for PCs at the time), and an Assembler Cartridge (for programmers to write their own custom software for proprietary applications). Blank cartridges were provided for custom applications and the most popular application was customer-generated foreign-language character sets, which made the Exidy Sorcerer the most popular international PC.
An expansion unit designed to the industry standard S-100 bus so that all of the low cost peripheral products then currently available could be attached to configure a computer system.
Launch in the US
The Sorcerer made its debut at the Long Beach Computer Show in April 1978. The standard plug-in attachments to the keyboard case (included in the base price of the unit) were a printer port for hard copy devices, cassette port for mass storage, and serial port for communications. Some of these were included with the competing products and some were add-on.
The Exidy Sorcerer was competitively priced at $895 and went to market in Long Beach, California, in April 1978, generating a 4,000-unit backlog on introduction. Shipments did not start until later that summer.
Exidy sold the rights to the design of the Exidy Sorcerer to Dynasty Computer Corp. of Dallas, Texas. They made minor updates and re-released it as the "Dynasty smart-ALEC".
Successes outside the US
Export of personal computers was complicated by the requirement of US Government State Department approval but this was more than offset by the financial advantage afforded by the customary export terms of sale under letter of credit, yielding immediate cash, as compared to chasing payments from domestic retailers on 30-day credit terms. Exidy was thus keen to concentrate on international sales though recognizing the importance of its US presence for development and marketing purposes.
Exidy took this to another level by licensing production both domestically and internationally, increasing total production and market penetration without calling on cash flow. With its unique programmable character set for foreign language characters, the Exidy Sorcerer was in a league of its own. Advance royalty payments and license fees made this business a priority for Exidy, Inc.
The first Sorcerers sold in the UK were imported direct from the US by a small company based in Cornwall called Liveport Ltd. Liveport also eventually designed and built extra plug-in ROM-PAC cartridges and an add-on floppy disk drive (based on Micropolis units) that did not require the expensive S-100 chassis. Sorcerer sales in continental Europe were fairly strong, via its distributor, Compudata Systems. The machine had its greatest success in 1979 when the Dutch broadcasting company TELEAC, in a move to be emulated later by the BBC with its BBC Micro, decided to introduce its own home computer. The Belgian company DAI was originally contracted to design the machine but failed to deliver, and Compudata delivered several thousand Sorcerers instead.
Sales in Europe were strong and, when the Dutch Government endorsed computers for small business, Compudata decided to license the Exidy design for local construction in the Netherlands with government support. After several years of Exidy production, Compudata developed their own 16-bit Intel 8088–based machine called the Tulip, replacing the Sorcerer in 1983. One of the largest computing user groups in the Netherlands was the ESGG (Exidy Sorcerer Gebruikers Groep) which published a monthly newsletter in two editions, Dutch and English. For some time, they were the largest group in the HCC (Hobby Computer Club) federation. The Dutch company De Broeders Montfort was a major firmware manufacturer.
The Sorcerer was successful in Australia as a result of strong promotion by its exclusive agent Dick Smith Electronics, though there was price resistance as it was considered beyond the means of most hobbyists. The Sorcerer Computer Users group of Australia (SCUA) actively supported the Sorcerer long after Exidy discontinued it, with RAM upgrades, speed boosts, the "80-column card", and even a replacement monitor program, SCUAMON.
The history of the Sorcerer has some parallels with Exidy's competitor Bally's attempts to build a home computer based on the Astrocade. In contrast to the Astrocade's (and Datamax UV-1's) limited text capabilities but excellent graphics, the Sorcerer had excellent text and only limited graphics.
Description
The Sorcerer was a combination of parts from a standard S-100 bus machine and its custom display circuitry. The machine included the Zilog Z80 and various bus features needed to run the CP/M operating system, but placed them inside a "closed" box with a built-in keyboard similar to machines like the Commodore PET, the Commodore 64, and the Atari 8-bit family. The Sorcerer's keyboard was a high quality unit with full "throw". The keyboard included a custom "Graphics" key, which allowed easy entry of the extended character set, without having to overwork the Control key, the more common solution on other machines. Leading its peers, the Sorcerer included lower-case characters as a standard feature.
Unlike most S-100 CP/M machines of its era, the Sorcerer did not have any internal expansion slots, and everything that was needed for basic computing was built-in. A standard video monitor was required for display, and optionally a standard audio cassette deck was needed for data storage. The Sorcerer included a small ROM containing a simple monitor program which allowed the machine to be controlled at the machine language level, as well as load programs from cassette tape or cartridges. The cartridges, known as "ROM PAC"s in Exidy-speak, were built by replacing the internal tape in an eight-track tape cartridge with a circuit board and edge connector to interface with the Sorcerer.
The machine was usable without any expansion, but if the user wished to use S-100 cards they could do so with an external expansion chassis. This was connected to the back of the machine through a 50-pin connector. Using the expansion chassis the user could directly support floppy disks, and boot from them into CP/M (without which the disks were not operable). Another expansion option was a large external cage which included a full set of S-100 slots, allowing the Sorcerer to be used like a "full" S-100 machine. Still another option combined the floppies, expansion chassis and a small monitor into a single box.
Graphics on the Sorcerer sound impressive, with a resolution of 512×240, when most machines of the era supported a maximum of 320×200. These lower resolutions were a side effect of the inability of the video hardware to read the screen data from RAM fast enough; given the slow speed of the machines they would end up spending all of their time driving the display. The key to building a usable system was to reduce the total amount of data, either by reducing the resolution, or by reducing the number of colors.
The Sorcerer instead chose another method entirely, which was to use definable character graphics. There were 256 characters possible for each screen location. The lower half was fixed in ROM, and contained the usual ASCII character set. The upper half was defined in RAM. This area would be loaded with a default set of graphics at reset, but could be re-defined and used in lieu of pixel-addressable graphics. In fact the machine was actually drawing a 64×30 display (8×8 characters) which was well within the capabilities of the hardware. However this meant that all graphics had to lie within a checkerboard pattern on the screen, and the system was generally less flexible than machines with "real" graphics. In addition, the high resolution was well beyond the capability of the average color TV, a problem they solved by not supporting color. In this respect the Sorcerer was similar to the PET and TRS-80 in that it had only "graphics characters" to draw with, but at least on the Sorcerer one could define a custom set. It was also possible to provide animation by character replacement or by redefining the character bitmap.
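The cell arithmetic above can be made concrete. This Python model is illustrative (the names are not from Sorcerer documentation): it maps a nominal 512×240 pixel coordinate onto the 64×30 grid of 8×8 cells, the way a program faking pixel graphics with redefined characters would have to:

```python
# 64 x 30 grid of 8x8 character cells = 512 x 240 nominal pixels.
# Characters 0-127 are fixed ASCII in ROM; the bitmaps for characters
# 128-255 live in RAM and can be rewritten by a program.
COLS, ROWS, CELL = 64, 30, 8

def pixel_to_cell(x, y):
    """Map pixel (x, y) to (cell index, byte row within cell, bit mask)."""
    cell_index = (y // CELL) * COLS + (x // CELL)   # which of the 1920 cells
    row_in_cell = y % CELL                          # which byte of the 8x8 bitmap
    bit_mask = 0x80 >> (x % CELL)                   # which bit within that byte
    return cell_index, row_in_cell, bit_mask

assert COLS * CELL == 512 and ROWS * CELL == 240    # matches the 512x240 screen
print(pixel_to_cell(511, 239))   # bottom-right pixel -> (1919, 7, 1)
```

Since only 128 cell bitmaps can be redefined at once, arbitrary pixel patterns are possible only while the screen uses at most 128 distinct graphic cells, which is one face of the checkerboard constraint described above.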
Given these limitations, the quality of the graphics on the Sorcerer was otherwise excellent. Clever use of several characters for each graphic allowed programmers to create smooth motion on the screen, regardless of the character-cell boundaries. A more surprising limitation, given the machine's genesis, is the lack of sound output. Enterprising developers then standardized on use of two pins of the parallel port, to which users were expected to attach a speaker.
A Standard BASIC cartridge was included with the machine. This cartridge was essentially the common Microsoft BASIC already widely used in the CP/M world. One modification was the addition of single-stroke replacements for common BASIC commands; pressing GRAPHICS-P would insert the word PRINT, for instance, allowing for higher-speed entry. The machine included sound in/out ports on the back that could be attached to a cassette tape recorder, so BASIC could load and save programs to tape without needing a disk drive. An Extended BASIC cartridge requiring 16 KB was also advertised, but it is unclear if this was actually available; Extended BASIC from Microsoft was available on cassette. Another popular cartridge was the Word Processor PAC, which contained a version of the early word processor program Spellbinder. A persistent ROM fault in the Word Processor PAC was a printer status switch setting, but most users learned about it and turned the setting off soon after power-on.
The Montfort Brothers made an EPROM PAC with a rechargeable battery inside and 16 kB RAM with an external write-protect switch. Thus bootable software could be uploaded to the pack and kept for a longer period.
Many CP/M machines were designed to allow the full 16-bit address space of 64 kB to be populated by memory. This was problematic on the Exidy Sorcerer. 32 kB could easily be populated. Another 16 kB was the ROM cartridge address space. This could be populated, but required disabling the ROM cartridge capability. The last 16 kB was required by the system for I/O, particularly for the video, and would have required extensive system modification.
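Numerically, the partition described above accounts for the whole 64 kB address space. In this Python sketch only the three region sizes come from the text; the ordering, and therefore the printed base addresses, are assumptions for illustration:

```python
KB = 1024
# (region, size) pairs; sizes from the description above, order assumed.
regions = [
    ("RAM (easily populated)",       32 * KB),
    ("ROM cartridge address space",  16 * KB),
    ("system I/O and video",         16 * KB),
]
assert sum(size for _, size in regions) == 64 * KB  # fills the Z80's address space

addr = 0
for name, size in regions:
    print(f"{addr:04X}-{addr + size - 1:04X}  {name}")
    addr += size
```

Under the assumed ordering this prints the ranges 0000-7FFF, 8000-BFFF and C000-FFFF.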
References
External links
Trailing Edge's Exidy Sorcerer Pages
Obsolete Technology website
Evil Exidy's Sorcerer Page
Exidy Sorcerer Emulator
Satellite tracking BASIC code for Sorcerer SATTRAK by Dcooke and WBarker 1979
Digibarn Systems: Exidy Sorcerer
Exidy Sorcerer at Vintage Computers
Exidy Sorcerer at computer-museum.nl
OLD-COMPUTERS.COM's Exidy Sorcerer pages
Exidy Sorcerer in Terry Stewart's collection
Z80-based home computers
Home computers
S-100 machines |
533047 | https://en.wikipedia.org/wiki/FreeSBIE | FreeSBIE | FreeSBIE is a live CD, an operating system that is able to load directly from a bootable CD with no installation process or hard disk. It is based on the FreeBSD operating system. Its name is a pun on frisbee. Currently, FreeSBIE uses Xfce and Fluxbox.
FreeSBIE 1.0 was based on FreeBSD 5.2.1 and released on February 27, 2004. The first version of FreeSBIE 2 was developed during the summer of 2005, thanks to the Google Summer of Code. FreeSBIE 2.0.1, which is a complete rewrite of the so-called toolkit, is based on FreeBSD 6.2 and was released on February 10, 2007. According to DistroWatch the FreeSBIE project is discontinued.
Goals
The goals of the FreeSBIE project are:
To develop a suite of programs to be used to create one's own CD, with all the personalizations desired
To make various ISO images available, each with its different goals and possible uses
See also
Comparison of BSD operating systems
References
External links
historic FreeSBIE project homepage (archive.org)
An interview with a FreeSBIE developer
FreeBSD
Live CD |
1408167 | https://en.wikipedia.org/wiki/Linux%20Format | Linux Format | Linux Format is the UK's first Linux-specific magazine, and as of 2013 was the best-selling Linux title in the UK. It is also exported to many countries worldwide. It is published by Future plc (which produces a number of other computer magazines). Linux Format is commonly abbreviated to LXF, and issues are referred to with LXF as a prefix followed by the issue number (for example, LXF102 refers to the 102nd issue).
It began as a one-issue pilot in 1999 called Linux Answers, and began full publication as Linux Format in May 2000 after being launched and produced by a small team consisting of Editor Nick Veitch, Art Editor Chris Crookes and staff writer Richard Drummond, who together created the magazine's core values and initial design appearance.
Currently Linux Format has translated editions available in Italy, Greece and Russia. Many magazines are exported around the world, principally to the USA where they are sold in Barnes & Noble stores, as well as other large book stores.
Articles within Linux Format regularly feature in-depth series and practical tutorials that teach users and allow them to expand their skills in using the Linux operating system and its associated software applications. Readers are encouraged to submit contributions.
Linux Format shares the UK market place with an English-language version of Linux Magazine, and formerly shared it with Linux User and Developer, which was discontinued in September 2018.
Contents
Linux Format includes similar content to that found in most computer magazines, but aimed specifically at users of the Linux operating system. There are reviews, round-ups, technology features and tutorials aimed at all levels of users.
The magazine comes with a DVD containing full Linux distributions, and other free software.
Staff
The magazine is currently edited by Neil Mohr with a team composed of Efraín Hernández-Mendoza as Art Editor, Jonni Bidwell as Technical Editor and Chris Thornett as Operations Editor. Previous staff members include Graham Morrison, Andrew Gregory, Mike Saunders and Ben Everard who went on to produce Linux Voice magazine (which later merged with Linux Magazine).
Frequency
The magazine is published 13 times a year.
Online presence
Linux Format has a dedicated magazine website which contains forums for readers to interact with the editorial staff and writers, as well as an extensive reference section for the articles in the magazine. In February 2009, the Linux Format editorial staff launched TuxRadar. TuxRadar has become the primary method of the editorial team getting Linux news on to the Internet, with the Linux Format webpage undergoing some modifications to become more community-focused.
See also
Linux Journal
Linux Voice
Linux User and Developer
Linux Magazine
References
External links
Linux Format
Computer magazines published in the United Kingdom
Linux magazines
Magazines established in 2000
2000 establishments in the United Kingdom |
348825 | https://en.wikipedia.org/wiki/Ben%20Dover | Ben Dover | Simon James Honey (born 23 May 1956), better known as Ben Dover, is an English pornographic actor, director and producer. He has also worked under several other pseudonyms including Steve Perry as producer and Lindsay Honey as an actor and musician.
Honey was included in Larry Flynt's Hustler's Top 50 Most Influential People in Porn list, printed in the January 1999 issue, and in 2011 he was inducted into the AVN Hall of Fame.
Honey has also won a host of awards for the Ben Dover series including the Breakthrough Award at the AVN Awards in 1997, AVN's Best Gonzo Award twice (in 2000 for Ben Dover's End Games and in 2002 for Ben Dover's The Booty Bandit). In 2006, he was awarded the Lifetime Achievement Award at the UK Adult Film Awards, which he co-presented with Kristyn Halborg, Kelly Stafford and co-star Pascal White.
In 2012, Honey was nominated by the Internet Service Providers Association as an Internet Villain for his involvement with his company Golden Eye (International) in speculative invoicing.
Biography
Early days in the music industry
Born in Sittingbourne, Kent to Frank Cyril Honey (1921–1993) and his wife Sylvia (née Foster) (1926–2017), Honey attended Borden Grammar School. Following expulsion from school in 1973, he moved to Newquay, Cornwall where he worked as a children's entertainer called "Uncle Simon" and drummed in a cabaret band for the summer season.
Using the pseudonym Lindsay Honey as he "didn't think Simon fitted the bill somehow", he began working as a session drummer for artists such as Edison Lighthouse and White Plains. Honey joined Artful Dodgers who released one single "Here We Go" in 1978 before changing their name to 20th Century Heroes.
Following a chance meeting between the band's guitarist Paul Jackson and one time Bay City Rollers bassist Ian Mitchell (who replaced original member Alan Longmuir in 1976, before quitting after seven months) in a taxi, the band (Jackson, Honey and bassist John Jay) agreed to be his backing band and The Ian Mitchell Band was launched in May 1979. Although the band did not take off in the UK, they released three studio albums within a single year and toured across Europe and Japan. Following the departure of Jackson, the band changed their name to La Rox and reinvented themselves as a glam rock band, with Honey becoming the band's keyboard player. The change made little impact on the band's fortunes in the UK, and they split up in 1982. Honey, along with the band's second drummer John Towe and guitarist Lea Hart (who had replaced Paul Jackson) scored a minor UK hit with the one-off single "Small Ads" released in April 1981 under the name the "Small Ads".
Honey also worked as a cabaret singer under the pseudonym "Steve Jackson" and briefly reunited with Mitchell to form "Bachelor Of Hearts" who released one album in 1983 to little fanfare (a version of the track "Girls in Jeans" was later used as the outro music on Ben Dover films).
Early adult film work
In 1978, broke from his dwindling career as a musician and working as a male stripper using the stage name "Hot Rod" (due to his reported likeness to Rod Stewart at the time), Honey responded to an advertisement for models in The Stage newspaper and met agent Kent William Boulton (1941–2002), a college lecturer from Bromley and former Labour Party candidate on the Isle of Wight who entered the porn business late in life and was renowned for organising "spanking parties".
Being paid £150 a shoot, Honey started working for Berth Milton Sr, publisher of Private Magazine and other European hardcore porn magazines such as Rodox/Color Climax, a Danish company. His first shoot was with a then 17-year-old Eileen Daly, who had gone to see the same agent above a strip club in Soho with her mother. Honey was introduced by Boulton to the photographer Lexington Steel and director Mike Freeman.
Working for Videx Ltd.
Honey began working as an actor for Mike Freeman's Videx Ltd. in 1980, a video production company based in Wimbledon. Freeman had produced softcore porn throughout the 1960s via his company Climax Films, and employed well-known Soho gangster Gerald Patrick Joseph Hawley as a bodyguard. Freeman was jailed in December 1969 for the murder of Hawley, reportedly after Hawley tried to take over Freeman's business.
Upon release from prison in 1979, Freeman set about exploiting the Obscene Publications Act, which did not yet cover the new video format, and started producing hardcore pornography. While working at Videx, Honey starred in his first film as an actor in "Truth or Dare" with actress Paula Meadows. However, the law was soon changed to bring video under the Obscene Publications Act and Freeman's home was raided in 1981 by the OPS (Obscene Publications Squad), with all the Videx video equipment confiscated in the process. The Videx offices were raided again in 1983 and Freeman was arrested in relation to the film "The Videx Video Show" under the Obscene Publications Act and for perverting the course of justice.
Freeman was sentenced to 15 months in the case relating to the first raid in 1981 involving the video "Sex Slave". Honey took over Videx and produced and directed his first film, "Rock 'n' Roll Ransom", for the company which featured his bandmate Ian Mitchell. After serving ten months Freeman was released from prison and set about preparing for the case of the second raid in relation to the video "The Videx Video Show". Freeman was charged under The Obscene Publications Act and Protection of Children Act, as images of children taken at a naturist park were featured (although in a separate scene to any pornographic content). Having called film critic Derek Malcolm as a witness who classed the film as "harmless erotica" and successfully demonstrating that the film was not obscene, Freeman, Honey, Freeman's girlfriend and company secretary Sara Bhaskaran, and silent business partner John Edward Currey were found not guilty under the Obscene Publications Act. However, Freeman was found guilty of perverting the course of justice, despite having already been acquitted of the offence relating to the video, and sentenced to a further 15 months. He was then recalled by the Parole Board for his previous life sentence for murder.
Going it alone and imprisonment
Honey worked as a photographer, under the pseudonym "Brian Wilson" (after The Beach Boys singer), for Escort magazine throughout the 1980s, shooting his partner Linzi Drew who regularly appeared in the magazine. The couple also took the Videx equipment (which Freeman had purchased to replace the original confiscated equipment) and mailing list to set up their own mail order business. Operating from their home, Corner Cottage in Stoke d'Abernon, Surrey, they sold hardcore pornographic films under various names such as Stephanie Perry and Glamour Pussy Video Club via adverts in magazines. The couple also used other addresses across Surrey, including that of a pet shop, where order forms would be collected.
The couple were arrested in February 1990 following a police raid on their home and convicted in 1992 at Guildford Crown Court under the Obscene Publications Act for "publishing obscene material for gain" and "being in possession of obscene material" following a sting operation in which an undercover police officer joined the companies' mailing list and over a period of more than a year bought several videos, which also proved that the operation was "an on-going tax-free profitable business".
Drew was sentenced to four months, which she served at Holloway Prison and Drake Hall, Staffordshire, before being released after serving two months, while Honey was sentenced to nine months, which he served at Brixton Prison before being transferred after five weeks to Send Open Prison, Surrey. Documents seized by the police indicated that they had 400 people on their mailing list who paid £60 per video.
In December 1993 Honey filed for bankruptcy. Following release from prison Honey and Bill Wright (a.k.a. Frank Thring) were commissioned to work on the first feature-length videos for Berth Milton, Sr.'s Private Media Group, which had only produced magazines before. Honey directed its first seven films with Wright and took on the new pseudonym Steve Perry (after the Journey frontman).
The Ben Dover series
Honey started production of a new gonzo-style series in summer 1994 using the stage name Ben Dover (a name which was influenced by a title from John Stagliano's "Buttman" pornography video series "Bendover Babes"). He directed, produced and starred (as both cameraman/narrator and occasionally as performer) in the series between 1995 and 2002 along with actors including Marino Franchi and Pascal White. In 1995, Drew became pregnant with their son Tyger Drew-Honey and left the pornography industry.
Upon release in the United States, the Ben Dover films were edited by VCA Pictures to censor the more graphic contents of the original productions. In 2002 Honey was dropped by VCA, which had been taken over by Hustler, and opted to concentrate on the market in the UK instead. It wasn't until 2004 that he again attempted to obtain US distribution for his films and signed a deal with Kick Ass Pictures and resurrected the Ben Dover brand. He began producing the Ben Dover's Kick Ass Anal Adventures series, of which there were five instalments.
In 2008 Honey attended the UK Adult Film Awards once more, representing Television X and Red Hot TV along with Linsey Dawn Mckenzie and other British adult film stars. Television X has shown some of his most recent projects such as St/ Teenycums, which won Best Script at the UK Adult Film Awards 2008.
Following the demise of physical DVD sales, Honey diversified the Ben Dover brand by launching branded clothing, sex toys (including The Ben Dover Signature line, which produces, among other products, The Ben Dover Realistic Penis and The Ben Dover Anal Kit), male enhancers, mugs and stickers as well as Ben Dover approved stag and hen parties. Honey also hosts "Ben Dover Porn Disco"s involving wet t-shirt competitions and "SwingalongaBen" swinging parties in Aston, Birmingham.
In 2012, Honey started production on a new online Ben Dover series, "Like Father Like Son", an incest-themed series in which he and his on-screen son have sex with the same girl. He confirmed the series did not feature his own son.
Outside the porn industry
In addition to his pornographic work, in 2000 Honey featured in the feature film Last Resort (2000), in which he played a low-budget porn producer.
In 2001, despite admitting to having failed to vote in the previous election, Honey revealed that he was considering standing as an independent candidate in the next general election as an "Independent Libertarian". He would advocate the dismantling of the Royal Family, branding them an "outdated, irrelevant and frankly dangerous institution", removing any power from the church and reforming the National Health Service and would "make patients at A&E departments who are there as a result of pub fights etc, be made to pay a serious levy for the expense of treating them". In 2009 Honey resurrected his plans to enter politics, with his own party "New Democrats".
In September 2009, Honey was featured on the BBC Four programme Rich Man Poor Man: Ben Dover Straightens Up, where he attempted to search for personal fulfilment in his life and be taken seriously by making a break in mainstream acting. He performed his one-man show Innocent 'Til Proven Filthy at the 2009 Edinburgh Fringe Festival.
Honey appeared as Arthur in the comedy film On the Ropes.
In 2012 Honey joined the Guns N' Roses tribute band Guns 2 Roses as a special guest drummer, and continues to tour with them on select dates. He also released the mockumentary "The Only Way is Dover" on YouTube, claiming "It's a bit like Curb Your Enthusiasm. It's not about porn, it's about what we do when we're not doing porn."
In 2016 Honey was cast by film producer Mark Noyce as Rubber John in the comedy film The Blazing Cannons.
In February 2017, Honey revealed that he was battling bladder cancer and is currently undergoing chemotherapy.
Golden Eye – speculative invoicing
In 2008, Honey set up the shell company Golden Eye (International) Ltd with Julian Fraser Becker (commercial director of Optime Strategies Ltd, who trade as Ben Dover Productions), which claims to be the "holder of numerous film copyrights", primarily those of Ben Dover Productions films. The company was named after the house Honey shared with his partner Linzi Drew and son, Tyger Drew-Honey, in Hersham, Surrey, into which they moved in 2005 and left in 2011 after the couple split up; the house was put up for sale and sold in November 2011.
In May 2009, Ben Dover Productions announced that it was retaining the services of the Anti Piracy Group to tackle the problem of DVD piracy. The company claimed their sales and profits had plummeted due to organised crime gangs flooding the streets with pirated DVDs of their titles.
The company then engaged in a campaign of "speculative invoicing", where they sent out letters, initially through lawyers, to alleged copyright infringers demanding a payment of £700 or face the threat of potential court action: a scheme described by the House of Lords as "straightforward legal blackmail" and a scam. The company paid for a list of alleged BitTorrent file-sharers identities, and retained the services of Tilly Bailey & Irvine to pursue the alleged copyright infringers for compensation, using what they claimed to be "bespoke technology which captures the irrefutable evidence of the perpetrators". It was revealed that this technology, investigated and checked by physicist and "checked computer expert" Clement Vogler of computer consultants Ad Litem Limited, a company which was dissolved in 2011, was targeting innocent individuals, and that the speculative invoicing relied on the embarrassment of those targeted agreeing to the fine to avoid the threatened court action, regardless of whether they were guilty or not. The data gatherer, Alireza Torabi of NG3 Systems, also gathered IP data for ACS:Law. Following adverse public and press reaction, Tilly Bailey & Irvine abandoned the practice and accepted a £2,800 fine from the Solicitors Regulation Authority. Without the aid of solicitors, Golden Eye (International) Limited continued the practice of speculatively invoicing those they claim had "infringed their copyrights".
In 2011, the company lost a court case against an internet user who they claimed had illegally downloaded the Ben Dover film "Fancy An Indian?", and were themselves accused of breaching the Copyright Designs and Patents Act 1988 and Civil Procedure Rules when issuing the claim. Timothy Dutton QC of the Solicitors Regulation Authority noted that as computer IP addresses can be shared, faked and hijacked, the evidence being used was of the "flimsiest" variety. Golden Eye attempted to get the case tried in a county court, but when it was confirmed it would need to be tried at the London Patents County Court the company tried to pull out of the case. Judge Birss QC highlighted similarities between this and a previous case involving Media CAT Limited and lawyers ACS:Law who unsuccessfully tried to sue 27 individuals for alleged copyright infringement, as Media CAT Limited were not the rights holders of the copyrighted material in question and led to ACS:Law ceasing to trade.
In order for the case to progress in the Patents County Court, the actual copyright holder, Optime Strategies Limited (who trade as Ben Dover Productions), would have had to join Golden Eye in the claim (Julian Becker is a director of both companies) and "potentially put themselves at risk of becoming liable for a wasted costs claim if the case were to fail like ACS:Law's did". To avoid this, in January 2012, the company instead started pursuing individuals through the HM Courts & Tribunals Money Claim Online service for claims which should only be made through the Patents County Court.
On 9 March 2012, Golden Eye went to court in an attempt to obtain a Norwich Pharmacal Order (NPO) for the details of over 9,000 IP addresses from internet service provider O2 (UK)/Telefónica Europe (an internet provider who previously had not contested NPOs) to service further "speculative invoicing" letters to alleged copyright infringers. Golden Eye were questioned by statutory consumer organisation Consumer Focus regarding the company's ability (or lack of) to connect an IP address with the account holder, the company's role in relation to the copyrights involved and the amount being demanded, which they stated was "far and above the likely actual damages".
On 26 March 2012, the High Court ordered O2 to hand over the details of 9,124 of its customers to Golden Eye. However, the judge deemed the proposed £700 fine "unsupported and unsupportable" and ruled that the bill payer could not automatically be assumed to be guilty of any alleged copyright violation; because the evidence used was unreliable, any claim made by Golden Eye/Ben Dover Productions/Optime Strategies Ltd could not move forward unless the recipient of the company's speculative invoicing letters admitted their own guilt. The wording of any such letters would also be severely restricted, and the "precise wording of the order and of the letter of claim" would be decided at a further hearing. Consumer Focus welcomed the ruling that bill payers could not automatically be assumed to be guilty of any alleged copyright violation on their internet connection, commenting that "Consumers should not be subject to the type of threatening letters Golden Eye intended to send to more than 9,000 O2 customers". Golden Eye's lawyer admitted that taking a test case to court would not be cost effective; the company therefore did not intend to take any cases to court and relied on those accused paying their "fine".
In July 2012, the High Court ruled that Golden Eye would only be granted access to data in relation to Ben Dover Production films, not the titles by twelve other production companies that Golden Eye were acting on behalf of (including Terry Stephen's One Eyed Jack Productions and Justin Ribeiro Dos Santos's Joybear Pictures) and would take 75% of any "damages" paid, which Mr Justice Arnold stated "would be tantamount to the court sanctioning the sale of the intended defendants’ privacy and data protection rights to the highest bidder". This meant that Golden Eye would only be able to target 2,845 of its original target of over 9,000 households. It was revealed in December 2012 that the IP data supplied to O2 (UK)/Telefónica Europe by Golden Eye, from 2,850 alleged copyright infringements of Ben Dover Production films could only be matched to less than 1,000 individuals.
In an interview with Vice, Honey claimed his income had dropped 90% in two years and admitted his reasons for his involvement in speculative invoicing was that "if I can't make money out of porn, the only way I can make money is to get to the people who are not buying it".
Personal life
Honey used to be married to adult actress and model Linzi Drew. Their son is English actor, musician and television presenter Tyger Drew-Honey, best known for starring in the BBC sit-com Outnumbered. Honey also appeared in the first episode of his son's TV series Tyger Takes On..., which examined the influence of pornography on Drew-Honey's generation.
Partial filmography
Ben Dover in London (1994)
Ben Dover's Ben Behaving Badly (1996)
Ben Dover's British Anal Invasion (1997)
Ben Dover's London Calling (1998)
Ben Dover's Spicy Girls (1999)
Ben Dover's Porno Groupies (1999)
Ben Dover's Cheek Mates (2000)
Ben Dover's Sex Kittens from Britain (2000)
Ben Dover's Posh Birds (2001)
Ben Dover's Essex Girls (2002)
Ben Dover's Soccer Sluts (2003)
Ben Dover Does The Boob Cruise (2004)
Ben Dover's The Girlie Show (2005)
Ben Dover's Pussy Galore (2006)
Ben Dover's Yummy Mummies (2009)
Killer Bitch (2010)
On the Ropes (2011)
Down on Abby (2014)
See also
List of British pornographic actors
Pornography in the United Kingdom
References
External links
Golden Eye (International) Ltd
Ben Dover as "Steve Perry" at Adult Film Database
1956 births
Living people
English pornographic film directors
British pornographic film producers
English male pornographic film actors
Male actors from Kent
People educated at Borden Grammar School
English republicans
People from Sittingbourne
People from Stoke d'Abernon |
21979 | https://en.wikipedia.org/wiki/Netscape | Netscape | Netscape Communications Corporation (originally Mosaic Communications Corporation) was an American independent computer services company with headquarters in Mountain View, California and then Dulles, Virginia. Its Netscape web browser was once dominant but lost to Internet Explorer and other competitors in the so-called first browser war, with its market share falling from more than 90 percent in the mid-1990s to less than 1 percent in 2006. Early Netscape employee Brendan Eich created the JavaScript programming language, the most widely used language for client-side scripting of web pages, and founding engineer Lou Montulli created HTTP cookies. The company also developed SSL, which was used for securing online communications before its successor, TLS, took over.
Netscape stock traded from 1995 until 1999 when the company was acquired by AOL in a pooling-of-interests transaction ultimately worth US$10 billion. In February 1998, approximately one year prior to its acquisition by AOL, Netscape released the source code for its browser and created the Mozilla Organization to coordinate future development of its product. The Mozilla Organization rewrote the entire browser's source code based on the Gecko rendering engine, and all future Netscape releases were based on this rewritten code. When AOL scaled back its involvement with Mozilla Organization in the early 2000s, the Organization proceeded to establish the Mozilla Foundation in July 2003 to ensure its continued independence with financial and other assistance from AOL. The Gecko engine is used to power the Mozilla Foundation's Firefox browser.
Netscape's browser development continued until December 2007, when AOL announced that the company would stop supporting it by early 2008. As of 2011, AOL continued to use the Netscape brand to market a discount Internet service provider.
History
Early years
Netscape was the first company to attempt to capitalize on the emerging World Wide Web. It was founded under the name Mosaic Communications Corporation on April 4, 1994, the brainchild of Jim Clark, who had recruited Marc Andreessen as co-founder and Kleiner Perkins as investors. The first meeting between Clark and Andreessen was never truly about software or a service like Netscape, but about a product akin to Nintendo's gaming offerings. Clark recruited other early team members from SGI and NCSA Mosaic. Jim Barksdale came on board as CEO in January 1995. Clark and Andreessen originally created a 20-page concept pitch for an online gaming network to Nintendo for the Nintendo 64 console, but a deal was never reached. Marc Andreessen explains, "If they had shipped a year earlier, we probably would have done that instead of Netscape."
The company's first product was the web browser, called Mosaic Netscape 0.9, released on October 13, 1994. Within four months of its release, it had already taken three-quarters of the browser market. It became the main browser for Internet users in such a short time due to its superiority over other competition, like Mosaic. This browser was subsequently renamed Netscape Navigator, and the company took the "Netscape" name (coined by employee Greg Sands, although it was also a trademark of Cisco Systems) on November 14, 1994, to avoid trademark ownership problems with NCSA, where the initial Netscape employees had previously created the NCSA Mosaic web browser. The Mosaic Netscape web browser did not use any NCSA Mosaic code. The internal codename for the company's browser was Mozilla, which stood for "Mosaic killer", as the company's goal was to displace NCSA Mosaic as the world's number one web browser. A cartoon Godzilla-like lizard mascot was drawn by artist-employee Dave Titus, which went well with the theme of crushing the competition. The Mozilla mascot featured prominently on Netscape's website in the company's early years. However, the need to project a more "professional" image (especially towards corporate clients) led to this being removed.
On August 9, 1995, Netscape made an extremely successful IPO, only sixteen months after the company was formed. The stock was set to be offered at US$14 per share, but a last-minute decision doubled the initial offering to US$28 per share. The stock's value soared to US$75 during the first day of trading, nearly a record for first-day gain. The stock closed at US$58.25, which gave Netscape a market value of US$2.9 billion. While it was somewhat unusual for a company to go public prior to becoming profitable, Netscape's revenues had, in fact, doubled every quarter in 1995. The success of this IPO subsequently inspired the use of the term "Netscape moment" to describe a high-visibility IPO that signals the dawn of a new industry. During this period, Netscape also pursued a publicity strategy (crafted by Rosanne Siino, then head of public relations) packaging Andreessen as the company's "rock star." The events of this period ultimately landed Andreessen, barefoot, on the cover of Time magazine. The IPO also helped kickstart widespread investment in internet companies that created the dot-com bubble.
It is alleged that several Microsoft executives visited the Netscape campus in June 1995 to propose dividing the market (an allegation that Microsoft denied and that, if true, would have breached antitrust laws), which would have allowed Microsoft to produce web browser software for Windows while leaving all other operating systems to Netscape. Netscape refused the proposition. Microsoft released version 1.0 of Internet Explorer as a part of the Windows 95 Plus Pack add-on. According to former Spyglass developer Eric Sink, Internet Explorer was based not on NCSA Mosaic as commonly believed, but on a version of Mosaic developed at Spyglass (which itself was based upon NCSA Mosaic).
This period of time would become known as the browser wars. Netscape Navigator was not free to the general public until January 1998, while Internet Explorer and Internet Information Server have always been free or came bundled with an operating system and/or other applications. Meanwhile, Netscape faced increasing criticism for "featuritis" – putting a higher priority on adding new features than on making their products work properly. Netscape experienced its first bad quarter at the end of 1997 and underwent a large round of layoffs in January 1998. Former Netscape executives Mike Homer and Peter Currie have described this period as "hectic and crazy" and that the company was undone by factors both internal and external. In January 1998, Netscape started the open source Mozilla project. Netscape publicly released the source code of Netscape Communicator 5.0 under the Netscape Public License, which was similar to the GNU General Public License but allowed Netscape to continue to publish proprietary work containing the publicly released code.
The United States Department of Justice filed an antitrust case against Microsoft in May 1998. Netscape was not a plaintiff in the case, though its executives were subpoenaed and it contributed much material to the case, including the entire contents of the 'Bad Attitude' internal discussion forum.
Acquisition by America Online
On November 24, 1998, America Online (AOL) announced it would acquire Netscape Communications in a tax-free stock-swap valued at US$4.2 billion. By the time the deal closed on March 17, 1999, it was valued at US$10 billion. This merger was ridiculed by many who believed that the two corporate cultures could not possibly mesh; one of its most prominent critics was longtime Netscape developer Jamie Zawinski.
Disbanding
As part of Netscape's acquisition by AOL, joint development and marketing of Netscape software products occurred through the Sun-Netscape Alliance. Under the newly created iPlanet brand, the software included "messaging and calendar, collaboration, web, application, directory, and certificate servers", as well as "production-ready applications for e-commerce, including commerce exchange, procurement, selling, and billing." In March 2002, when the alliance was ended, "iPlanet became a division of Sun... Sun retained the intellectual property rights for all products and the engineering."
On July 15, 2003, Time Warner (formerly AOL Time Warner) disbanded Netscape. Most of the programmers were laid off, and the Netscape logo was removed from the building. However, the Netscape 7.2 web browser (developed in-house rather than by Netscape staff, with some work outsourced to Sun's Beijing development center) was released by AOL on August 18, 2004.
After the Sun acquisition by Oracle in January 2010, Oracle continued to sell iPlanet branded applications, which originated from Netscape.
Final release of the browser
The Netscape brand name continued to be used extensively. The company once again had its own programming staff devoted to the development and support of its series of web browsers. Netscape also maintained the Propeller web portal, a popular social-news site similar to Digg, which was given a new look in June 2006. AOL marketed a discount ISP service under the Netscape brand name.
A new version of the Netscape browser, Netscape Navigator 9, based on Firefox 2, was released in October 2007. It featured a green and grey interface. In November 2007, IE had 77.4% of the browser market, Firefox 16.0%, and Netscape 0.6%, according to Net Applications, an Internet metrics firm. On December 28, 2007, AOL announced that it would drop support for the Netscape web browser and would no longer develop new releases on February 1, 2008. The date was later extended to March 1 to allow a major security update and to add a tool to assist users in migrating to other browsers. These additional features were included in the final version of Netscape Navigator 9 (version 9.0.0.6), released on February 20, 2008.
Software
Classic releases
Netscape Navigator (versions 0.9–4.08)
Netscape Navigator was Netscape's web browser from versions 1.0–4.8. The first beta versions were released in 1994 and were called Mosaic and later Mosaic Netscape. Then, a legal challenge from the National Center for Supercomputing Applications (makers of NCSA Mosaic, which many of Netscape's founders had helped develop) led to the name Netscape Navigator. The company's name also changed from Mosaic Communications Corporation to Netscape Communications Corporation.
The browser was easily the most advanced available and so was an instant success, becoming a market leader while still in beta. Netscape's feature-count and market share continued to grow rapidly after version 1.0 was released. Version 2.0 added a full email reader called Netscape Mail, thus transforming Netscape from a single-purpose web browser to an Internet suite. The email client's main distinguishing feature was its ability to display HTML email. During this period, the entire suite was called Netscape Navigator.
Version 3.0 of Netscape (the first beta was codenamed "Atlas") was the first to face any serious competition in the form of Microsoft Internet Explorer 3.0. But Netscape remained the most popular browser at that time.
Netscape also released a Gold version of Navigator 3.0 that incorporated WYSIWYG editing with drag and drop between web editor and email components.
Netscape Communicator (versions 4.0–4.8)
Netscape 4 addressed the problem of Netscape Navigator being used as both the name of the suite and the browser contained within it by renaming the suite to Netscape Communicator. After five preview releases in 1996–1997, Netscape released the final version of Netscape Communicator in June 1997. This version, more or less based on the Netscape Navigator 3 code, updated it and added new features. The new suite was successful, despite increasing competition from Internet Explorer (IE) 4.0 and problems with the outdated browser core. IE was slow and unstable on the Mac platform until version 4.5. Despite this, Apple entered into an agreement with Microsoft to make IE the default browser on new Mac OS installations, a further blow to Netscape's prestige. The Communicator suite was made up of Netscape Navigator, Netscape Mail & Newsgroups, Netscape Address Book and Netscape Composer (an HTML editor).
On January 22, 1998, Netscape Communications Corporation announced that all future versions of its software would be available free of charge and developed by an open source community, Mozilla. Netscape Communicator 5.0 was announced (codenamed "Gromit"). However, its release was greatly delayed, and meanwhile, there were newer versions of Internet Explorer, starting with version 4. These had more features than the old Netscape version, including better support of HTML 4, CSS, DOM, and ECMAScript; eventually, the more advanced Internet Explorer 5.0 became the market leader.
In October 1998, Netscape Communicator 4.5 was released. It featured various functionality improvements, especially in the Mail and Newsgroups component, but did not update the browser core, whose functionality was essentially identical to that of version 4.08. One month later, Netscape Communications Corporation was bought by AOL. In November, work on Netscape 5.0 was canceled in favor of developing a completely new program from scratch.
Mozilla-based releases
Netscape 6 (versions 6.0–6.2.3)
In 1998, an informal group called the Mozilla Organization was formed and largely funded by Netscape (the vast majority of programmers working on the code were paid by Netscape) to coordinate the development of Netscape 5 (codenamed "Gromit"), which would be based on the Communicator source code. However, the aging Communicator code proved difficult to work with and the decision was taken to scrap Netscape 5 and re-write the source code. The re-written source code was in the form of the Mozilla web browser, on which, with a few additions, Netscape 6 was based.
Netscape 7 (versions 7.0–7.2)
Netscape 7.0 (based on Mozilla 1.0.1) was released in August 2002 as a direct continuation of Netscape 6 with very similar components. It picked up a few users, but was still very much a minority browser. It did, however, come with the popular Radio@Netscape Internet radio client. AOL had decided to deactivate Mozilla's popup-blocker functionality in Netscape 7.0, which caused outrage in the community. AOL reversed the decision and allowed Netscape to reinstate the popup-blocker for Netscape 7.01. Netscape also introduced a new AOL-free version (without the usual AOL add-ons) of the browser suite. Netscape 7.1 (codenamed "Buffy" and based on Mozilla 1.4) was released in June 2003.
In 2003, AOL closed down its Netscape division and laid off or reassigned all of Netscape's employees. Mozilla.org continued, however, as the independent Mozilla Foundation, taking on many of Netscape's ex-employees. AOL continued to develop Netscape in-house (with help from Sun's Beijing development center), but, due to there being no staff committed to it, improvements were minimal. One year later, in August 2004, the last version based on Mozilla was released: Netscape 7.2, based on Mozilla 1.7.2.
After an official poll posted on Netscape's community support board in late 2006, speculation arose that the Netscape 7 series of suites would be fully supported and updated by Netscape's in-house development team.
Mozilla Firefox-based releases
Netscape Browser (version 8.0–8.1.3)
Between 2005 and 2007, Netscape's releases became known as Netscape Browser. AOL chose to base Netscape Browser on the relatively successful Mozilla Firefox, a re-written version of Mozilla produced by the Mozilla Foundation. This release is not a full Internet suite as before, but is solely a web browser.
Other controversial decisions include the browser only being released for Microsoft Windows and featuring both the Gecko rendering engine of previous releases and the Trident engine used in Internet Explorer, and switching between them based on a "compatibility list" that came with the browser. This effectively exposed users to the security vulnerabilities in both and resulted in a completely different user experience based on which site they were on. Examples are handling of right-to-left or bi-directional text, user interface widgets, bugs and web standards violations in Trident, etc. On top of this, Netscape Browser 8 even broke Internet Explorer's ability to open XML files by damaging a Windows Registry key, and would do so every time it was opened, even if the user fixed it manually.
AOL's acquisition of Netscape Communications in November 1998 made it less of a surprise when the company laid off the Netscape team and outsourced development to Mercurial Communications. Netscape Browser 8.1.3 was released on April 2, 2007, and included general bug fixes identified in versions 8.0–8.1.2
Netscape Navigator (version 9.0)
Netscape Navigator 9's features were said to include newsfeed support and deeper integration with the Propeller Internet portal, alongside more enhanced methods of discussion, submission and voting on web pages. It also saw the browser return to multi-platform support across Windows, Linux and Mac OS X. Like Netscape version 8.x, the new release was based upon the popular Mozilla Firefox (version 2.0), and reportedly had full support for all Firefox add-ons and plugins, some of which Netscape was already providing. Also for the first time since 2004, the browser was produced in-house with its own programming staff. A beta of the program was first released on June 5, 2007. The final version was released on October 15, 2007.
End of development and support
AOL officially announced that support for Netscape Navigator would end on March 1, 2008, and recommended that its users download either the Flock or Firefox browsers, both of which were based on the same technology.
The decision met with mixed reactions from the community, with many arguing that the termination of product support was significantly overdue. Internet security site Security Watch stated that a trend of infrequent security updates for AOL's Netscape caused the browser to become a "security liability", specifically the 2005–2007 versions, Netscape Browser 8. Asa Dotzler, one of Firefox's original bug testers, greeted the news with "good riddance" in his blog post, but praised the various members of the Netscape team over the years for enabling the creation of Mozilla in 1998. Others protested and petitioned AOL to continue providing vital security fixes to unknowing or loyal users of its software, as well as protection of a well-known brand.
Mozilla Thunderbird-based releases
Netscape Messenger 9
On June 11, 2007, Netscape announced Netscape Mercury, a standalone email and news client that was to accompany Navigator 9. Mercury was based on Mozilla Thunderbird. The product was later renamed Netscape Messenger 9, and an alpha version was released. In December 2007, AOL announced it was canceling Netscape's development of Messenger 9 as well as Navigator 9.
Product list
Initial product line
Netscape's initial product line consisted of:
Netscape Navigator web browser for Windows, Macintosh, OS/2, Unix, and Linux
Netsite Communications web server, with a web-based configuration interface
Netsite Commerce web server, simply the Communications server with SSL (https) added
Netscape Proxy Server
Later Netscape products
Netscape's later products included:
Netscape Personal Edition (the browser along with PPP software and an account creation wizard to sign up with an ISP)
Netscape Communicator (a suite which included Navigator along with tools for mail, news, calendar, VoIP, and composing web pages, and was bundled with AOL Instant Messenger and RealAudio)
Netscape FastTrack and Enterprise web servers
Netscape Collabra Server, a NNTP news server acquired in a purchase of Collabra Software, Inc.
Netscape Directory Server, an LDAP server
Netscape Messaging Server, an IMAP and POP mail server
Netscape Certificate Server, for issuing SSL certificates
Netscape Calendar Server, for group scheduling
Netscape Compass Server, a search engine and spider
Netscape Application Server, for designing web applications
Netscape Publishing System, for running a commercial site with news articles and charging users per access
Netscape Xpert Servers
ECxpert – a server for EDI message exchange
SellerXpert – B to B Commerce Engine
BuyerXpert – eProcurement Engine
BillerXpert – Online Bill Paying Engine
TradingXpert – HTML EDI transaction frontend
CommerceXpert – Online Retail Store engine
Radio@Netscape and Radio@Netscape Plus
Propeller
Between June 2006 and September 2007, AOL operated Netscape's website as a social news website similar to Digg. The format did not do well as traffic dropped 55.1 percent between November 2006 and August 2007. In September 2007, AOL reverted Netscape's website to a traditional news portal, and rebranded the social news portal as "Propeller", moving the site to the domain "propeller.com." AOL shut down the Propeller website on October 1, 2010.
Netscape Search
Netscape operated a search engine, Netscape Search, which now redirects to AOL Search (which itself now merely serves Bing (formerly Google) search results). Another version of Netscape Search was incorporated into Propeller.
Other sites
Netscape also operated a number of country-specific Netscape portals, including Netscape Canada. The portal of Netscape Germany was shut down in June 2008.
The Netscape Blog was written by Netscape employees discussing the latest on Netscape products and services. Netscape NewsQuake (formerly Netscape Reports) is Netscape's news and opinion blog, including video clips and discussions. As of January 2012, no new posts have been made on either of these blogs since August 2008.
Netscape technologies
Netscape created the JavaScript web page scripting language. It also pioneered the development of push technology, which effectively allowed websites to send regular updates of information (weather, stock updates, package tracking, etc.) directly to a user's desktop (aka "webtop"); Netscape's implementation of this was named Netcaster. However, businesses quickly recognized the use of push technology to deliver ads to users, which annoyed them, so Netcaster was short-lived.
Netscape was notable for its cross-platform efforts. Its client software continued to be made available for Windows (3.1, 95, 98, NT), Macintosh, Linux, OS/2, BeOS, and many versions of Unix including DEC, Sun Solaris, BSDI, IRIX, IBM AIX, and HP-UX. Its server software generally was only available for Unix and Windows NT, though some of its servers were made available on Linux, and a version of Netscape FastTrack Server was made available for Windows 95/98. Today, most of Netscape's server offerings live on as the Sun Java System, formerly under the Sun ONE branding. Although Netscape Browser 8 was Windows only, multi-platform support exists in the Netscape Navigator 9 series of browsers.
Current services
Netscape Internet Service
Netscape ISP was a dial-up Internet service once offered at US$9.95 per month. The company served web pages in a compressed format to increase effective speeds up to 1300 kbit/s (average 500 kbit/s). The Internet service provider was subsequently run by Verizon under the Netscape brand. The low-cost ISP was officially launched on January 8, 2004.
Netscape.com
Netscape drove much traffic from various links included in the browser menus to its web properties. Some say it was very late to leverage this traffic for what would become the start of the major online portal wars.
Netscape's exclusive features, such as the Netscape Blog, Netscape NewsQuake, Netscape Navigator, My Netscape and Netscape Community pages, are less accessible from the AOL Netscape designed portal and in some countries not accessible at all without providing a full URL or completing an Internet search. The new AOL Netscape site was originally previewed in August 2007 before moving the existing site in September 2007.
Netscape.com now redirects to AOL's website, with no Netscape branding at all. Meanwhile, Netscape.co.uk now redirects to AOL Search, with no Netscape branding at all.
DMOZ
DMOZ (from directory.mozilla.org, its original domain name, also known as the Open Directory Project or ODP), was a multilingual open content directory of World Wide Web links owned by Netscape that was constructed and maintained by a community of volunteer editors. It closed in 2017.
See also
Code Rush, a 2000 documentary about Netscape engineers
SeaMonkey
The Book of Mozilla
Lou Montulli, a founding engineer of Netscape Communications, creator of HTTP cookies
Brendan Eich, early Netscape employee, creator of JavaScript
References
Further reading
Jim Clark, Netscape Time: The Making of the Billion-Dollar Start-Up That Took On Microsoft, St. Martin's Press, 1999.
Michael E. Cusumano and David B. Yoffie, Competing On Internet Time: Lessons From Netscape And Its Battle With Microsoft, The Free Press, 1998, 2000.
Fortune Magazine, "Remembering Netscape: The Birth Of The Web", July 25, 2005.
External links
Archive of official site circa 1994
The Netscape Blog
The Netscape Unofficial FAQ
Netscape Browser Archive
A Netscape Timeline, Holger Metzger
Mosaic Communications Corporation
Mosaic Communications, early job ads
Netscape 1.0 emulator
Firefox browser Add-ons Netscape Forever website
1994 establishments in California
2008 disestablishments in Virginia
Yahoo!
Companies formerly listed on the Nasdaq
Computer companies disestablished in 2008
Computer companies established in 1994
Defunct computer companies of the United States
Defunct companies based in the San Francisco Bay Area
Technology companies based in the San Francisco Bay Area
Defunct companies based in Virginia
1999 mergers and acquisitions
1995 initial public offerings

Sensage

Sensage Inc. is a privately held data warehouse software provider headquartered in Redwood City, California. Sensage serves enterprises that use the software to capture and store event data so that it can be consolidated, searched and analyzed to generate reports that detect fraud, analyze performance trends, and comply with government regulations.
According to The451 Group, Sensage generates more than 70% of its revenue through global and local partners, which include EMC, HP, Cerner and McAfee.
The company is backed by venture capital firms Sierra Ventures, Canaan Partners, Mitsui & Co. Venture Partners, FTVentures and Sand Hill Capital.
Corporate history
Sensage Inc. was founded as Addamark Technologies Inc. in 2000. In October 2004, Addamark changed its name to Sensage and simultaneously announced version 3.0 of its flagship security information management product (SIM).
Sensage's financial backers include Sierra Ventures, Canaan Partners, Mitsui & Co. Venture Partners Inc., FTVentures and Sand Hill Capital.
In October 2012, Sensage was acquired by KEYW Corp. of Hanover, MD. On July 31, KEYW spun off a commercial products division, Hexis Cyber Solutions, Corp. Sensage's product, relabeled HawkEye AP (Analytics Platform), and the Active Defense Grid product formerly known as Project G, now relabeled HawkEye G, are the two primary products marketed by Hexis.
Technology
The company uses a columnar database architecture instead of the relational database architecture that is more common in the industry.
Sensage holds U.S. patent #7,024,414 for parsing table data into columns of values, formatting each column into a data stream, and transferring each data stream to a storage device in a continuous strip of data. In this architecture, the data is stored in columns instead of rows, which eliminates the need for indices when storing event data to increase data compression and retrieval speeds.
The event data warehouse software uses an extraction, transformation and loading tool to pull records into the data warehouse, where it is compressed and spread across server nodes. Data queries are distributed across data warehouse nodes as well.
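The row-versus-column trade-off behind this design can be sketched in a few lines of Python. This is a hypothetical illustration of the general columnar-storage technique, not Sensage's actual code; the field names and the use of zlib compression are assumptions for the example:

```python
# Illustrative sketch of column-oriented storage (assumed example, not
# Sensage's implementation). Each column becomes a homogeneous stream,
# which compresses better than interleaved rows, and a query needs to
# decompress only the columns it references.
import json
import zlib

rows = [
    {"ts": "2023-01-01T00:00:00", "host": "web01", "status": 200},
    {"ts": "2023-01-01T00:00:01", "host": "web01", "status": 200},
    {"ts": "2023-01-01T00:00:02", "host": "web02", "status": 500},
]

# Parse the table into columns of values, then compress each column stream.
columns = {key: [r[key] for r in rows] for key in rows[0]}
compressed = {k: zlib.compress(json.dumps(v).encode()) for k, v in columns.items()}

def column(name):
    """Decompress and return a single column, touching no other data."""
    return json.loads(zlib.decompress(compressed[name]).decode())

# A query over one field scans only that field's stream.
errors = sum(1 for s in column("status") if s >= 500)
```

Because values within a column share a type and often repeat, the per-column streams compress well, and no per-row index is needed to answer a single-field query, which is the property the patent description emphasizes.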
Sensage provides support for SAP, Oracle (PeopleSoft and Siebel), Lawson, Cerner and other packaged application providers, and its technology supports precise analytics needed for use cases, such as fraud detection.
As its customers have shifted resources into cloud computing platforms, Sensage announced software that supports clustering and configuration in a VMware environment with hypervisor for using CPU cores, memory and other virtualized hardware resources. The event data software includes support for storage virtualization, providing integration of SANs (storage area networks), NAS (network-attached storage) and CAS (content addressable storage) as online storage in a cloud-based or VMware environment.
Products & Services
Sensage's products are built on its event data warehouse software that consolidates a complex stream of business transactions and communications from any network source, analyzes and stores the data, and gives users an interface to search through events from a single console. In 2008, Sensage added new user interface features to its software to make it usable by non-technical staff.
Through much of its history, Sensage customers have used its event data warehouse software to analyze system logs for security information and event management (SIEM) to collect, store, manage and analyze log records for security and forensic purposes. In a survey, Sensage customers reported the need to retain logs and archive records for a minimum of one year as a hedge against future audits. Sensage customer Blue Cross Blue Shield of Florida mentions using Sensage tools for proactive monitoring that allows administrators to see and prevent security threats as they occur.
Sensage has OEM arrangements with HP (the HP Compliance Log Warehouse (CLW) appliance) and Cerner (healthcare applications). HP uses Sensage software as the log management engine in the HP CLW, which is used by customers to collect and analyze log data to trigger compliance reporting for Sarbanes-Oxley, PCI and other federal record retention rules. Customers such as Choice Hotels have deployed the HP CLW to automate network analysis and compliance reporting to meet Payment Card Industry Data Security Standard (PCI DSS) regulations. More than 5,800 Choice brand hotels worldwide rely on the HP CLW to identify internal violations and external threats in real time.
Sensage combines its event data warehouse software with EMC Centera long-term storage units to store, manage and analyze call detail records (CDRs) for telephone and wireless carriers and ISPs that must comply with the European Union Data Retention Directive. The regulation was established to ensure service providers could assist law enforcement officials investigating bombings, leading to a requirement to keep records for up to three years.
Telefónica O2 Ireland has deployed Sensage with EMC Centera storage hardware to meet the Data Retention Directive mandate, which requires storing and managing an average of 50 million CDRs per day.
Further reading
Sensage Website
The silent explosion of log management, CNET
Enterprises Rolling on Logs, Dark Reading
PCI forces companies to seek log management help, SearchSecurity
Growing Dependence on Log Data for Compliance and Threat Response, Sarbanes-Oxley Compliance Journal
Meeting The PCI DSS Requirements, Sarbanes-Oxley Compliance Journal
References
Software companies based in California
Software companies of the United States

Returns from Troy

The Returns from Troy are the stories of how the Greek leaders returned after their victory in the Trojan War. Many Achaean heroes did not return to their homes, but died or founded colonies outside the Greek mainland. The most famous returns are those of Odysseus, whose wanderings are narrated in the Odyssey, and Agamemnon, whose murder at the hands of his wife Clytemnestra was portrayed in Greek tragedy.
The sack of Troy
The Achaeans entered the city using the Trojan Horse and slew the slumbering population. Priam and his surviving sons and grandsons were killed. Glaucus, son of Antenor, who had earlier offered hospitality to the Achaean embassy that asked for the return of Helen of Troy and had advocated giving her back, was spared along with his family by Menelaus and Odysseus. Aeneas took his father on his back and fled; he was left unmolested because of his piety. The city was razed and the temples were destroyed.
Of the women of the royal family, Locrian Ajax violated Cassandra on Athena's altar while she was clinging to the goddess's statue, which has looked upward ever since. She was awarded to Agamemnon. Neoptolemus got Andromache, wife of Hector, and Odysseus took Priam's widow Hecuba (known in Greek as Hecabe). The ghost of Achilles appeared before the survivors of the war, demanding that the Trojan princess Polyxena be sacrificed before anybody could leave, either as part of his spoils or because she had betrayed him. Neoptolemus did so.
The Returns
News of Troy's fall quickly reached the Achaean kingdoms through phryctoria, a semaphore system used in ancient Greece. A fire signal lit at Troy was seen at Lemnos, relayed to Athos, then to the look-out towers of Macistus on Euboea, across the Euripus strait to Messapion, then to Mount Cithaeron, Mount Aegiplanctus and finally to Mount Arachneus, where it was seen by the people of Mycenae, including Clytemnestra.
But though the message was brought fast and with ease, the heroes were not to return this way. The gods were very angry over the destruction of their temples and other sacrilegious acts by the Achaeans and decided that most would not return. A storm fell on the returning fleet off the island of Tenos. In addition, Nauplius, in revenge for the murder of his son Palamedes by Odysseus, set up false lights at Cape Caphereus (also known today as Cavo D'Oro, on Euboea) and many ships were wrecked.
Agamemnon had made it back to his kingdom safely with Cassandra in his possession after some stormy weather. He and Cassandra were slain by Aegisthus (in the oldest versions of the story) or by Clytemnestra or by both of them. Electra and Orestes later avenged their father, but Orestes was the one who was chased by the Furies. See below for further details.
Nestor, who had the best conduct in Troy and did not take part in the looting, was the only hero who had a good, fast and safe return. Those of his army that survived the war also reached home with him safely.
Locrian Ajax, who had endured more than the others the wrath of the gods, never returned home. His ship was wrecked by a storm sent by Athena, who borrowed one of Zeus' thunderbolts and tore it to pieces. The crew managed to land on a rock, but Poseidon smote it, and the Lesser Ajax fell into the sea and drowned after boasting that even the gods could not kill him. He was buried by Thetis on Myconos or Delos.
The archer Teucer (son of Telamon and half-brother of the other Ajax) was put on trial by his father for his brother's death, standing his trial at sea near Phreattys in the Peiraeus. He was acquitted of responsibility but found guilty of negligence because he had returned neither his brother's body nor his arms, so he was disowned and was not allowed back on Salamis Island. He left with his army (who took their wives) and later founded Salamis on Cyprus. The Athenians later created a political myth that his son left his kingdom to Theseus' sons (and not to Megara).
Neoptolemus, following the advice of Helenus (who accompanied him), traveled over land, always accompanied by Andromache. He met Odysseus and they buried Achilles' teacher Phoenix in the land of the Ciconians. They then conquered the land of the Molossians (the Epirus), and Neoptolemus had a child by Andromache, Molossus, to whom he later gave the throne. Thus the kings of the Epirus claimed descent from Achilles, as did Alexander the Great, whose mother was of that royal house (Alexander and the kings of Macedon also claimed descent from Hercules). Helenus founded a city in Molossia and settled it, and Neoptolemus gave him his mother Deidamia as wife. After Peleus died, Neoptolemus succeeded to the throne of Phthia as well. He had a feud with Orestes, son of Agamemnon, over Menelaus' daughter Hermione, and he was killed at Delphi, where he was buried. In Roman myths the kingdom of Phthia was taken over by Helenus, who married Andromache. They offered hospitality to other Trojan refugees, including Aeneas, who paid a visit there during his wanderings.
Diomedes was first thrown by a storm on the coast of Lycia, where he was to be sacrificed to Ares by king Lycus. King Lycus' daughter Callirrhoe took pity upon him and assisted him in escaping. He then accidentally landed in Attica at Phalerum. The Athenians, unaware that they were allies, attacked his men; many were killed and the Palladium was taken by Demophon. He finally landed at Argos, where he found his wife Aegialia committing adultery, and in disgust left for Aetolia. According to Roman traditions, he had some adventures and founded a colony in Italy.
Philoctetes was driven from his city by a revolt and emigrated to Italy, where he founded the cities of Petilia, Old Crimissa, and Chone, between Croton and Thurii. After making war on the Leucanians, he founded there a sanctuary of Apollo the Wanderer, to whom he also dedicated his bow.
For Homer, Idomeneus reached his home safe and sound. A different tradition formed later: after the war, Idomeneus' ship was hit by a horrible storm. He promised Poseidon that he would sacrifice the first living thing he saw when he returned home if the god would save his ship and crew. The first living thing he saw was his son, whom Idomeneus duly sacrificed. The gods were angry at the sacrifice of his own son and unleashed a plague on Crete. His people sent him into exile to Calabria in Italy, and then to Colophon in Asia Minor, where he died.
Among the lesser Achaeans very few reached their homes.
Guneus, leader of the Aeneanians (the exact location is unknown but is believed to be in the Epirus), went to Libya and settled near the Cinyps river.
Antiphus, son of Thessalus from Cos, settled in Pelasgiotis and renamed it Thessaly after his father Thessalus.
Pheidippus, who had led an army from Cos, settled on Andros, Agapenor from Arcadia settled in Cyprus and founded Paphos.
Prothous from Magnesia settled in Crete
Menestheus, king of Athens, became king of Melos
Theseus' descendants ruled Athens for four more generations.
The army of Elephenor (who had died in front of Troy) settled in the Epirus and founded Apollonia.
Tlepolemus, king of Rhodes, was driven by the winds and settled in the Balearic islands.
Podalirius, following the instructions of the oracle at Delphi, settled in Caria.
House of Atreus
According to the Odyssey, Menelaus's fleet was blown by storms to Crete and Egypt, where they were unable to sail away because the winds were calm. Only five of his ships survived. Menelaus had to catch Proteus, a shape-shifting sea god, to find out what sacrifices to which gods he would have to make to guarantee safe passage. According to some stories the Helen who was taken by Paris was a fake, and the real Helen was in Egypt, where she was reunited with Menelaus at this point. Proteus also told Menelaus that he was destined for Elysium (the Fields of the Blessèd) after his death. Menelaus returned to Sparta with Helen eight years after he had left Troy.
Agamemnon returned home with Cassandra to Mycenae. His wife Clytemnestra (Helen's sister) was having an affair with Aegisthus, son of Thyestes, Agamemnon's cousin who had conquered Argos before Agamemnon himself retook it. Possibly out of vengeance for the death of Iphigenia, Clytemnestra plotted with her lover to kill Agamemnon. Cassandra foresaw this murder, and warned Agamemnon, but he disregarded her. He was killed, either at a feast or in his bath according to different versions. Cassandra was also killed. Agamemnon's son Orestes, who had been away, returned and conspired with his sister Electra to avenge their father. He killed Clytemnestra and Aegisthus and succeeded to his father's throne yet he was chased by the Furies until he was acquitted by Athena.
The Odyssey
Odysseus (or Ulysses), attempting to travel home, underwent a series of trials, tribulations and setbacks that stretched his journey to ten years' time. These are detailed in Homer's epic poem the Odyssey.
They first landed in the land of the Ciconians at Ismara. After looting the land they were driven back with many casualties. A storm off Cape Maleas then drove them to uncharted waters. They landed in the land of the Lotus-eaters, where a scouting party ate from the lotus tree and forgot all thoughts of home.
The rest then set sail and landed at the land of Polyphemus, son of Poseidon. After a few were killed by him Odysseus blinded him and managed to escape, but earned Poseidon's wrath.
They went next to the isle of Aeolus, god of winds. Odysseus was received hospitably by Aeolus, who gave him a favorable wind and a bag that contained the unfavorable winds. When Odysseus fell asleep in sight of Ithaca, his crew opened the bag, and the ships were driven away.
Next they neared the land of the Laestrygonians, whose cannibalistic inhabitants sank the fleet (except Odysseus' ship) and ate the crews.
Next they landed on Circe's island, who transformed most of the crew into pigs, but Odysseus managed to force her to transform them back and left.
Odysseus wished to speak to Tiresias, so he traveled to the river Acheron in Hades, where sacrifices were performed that allowed him to speak to the dead. The dead gave him advice on how to proceed, and he then returned to Circe's island.
From there he set sail through the pass of the Sirens, whose sweet singing lures sailors to their doom. He stopped up the ears of his crew with wax, and Odysseus alone listened while tied to the mast.
Next was the pass of Scylla and Charybdis, where he lost part of his ship's crew. The rest landed on the isle of Thrinacia, sacred to Helios (the Sun), where he kept sacred cattle. Though Odysseus warned his men not to (as Tiresias had told him), they killed and ate some of the cattle while Zeus had put Odysseus to sleep to test his crew. When Helios threatened to take the sun and shine it in the Underworld, Zeus shipwrecked the last ship and killed everyone except Odysseus.
Odysseus was washed ashore on Ogygia, where the nymph Calypso lived. She made him her lover for seven years and would not let him leave, promising him immortality if he stayed. On behalf of Athena, Zeus intervened and sent Hermes to tell Calypso to let Odysseus go.
Odysseus left on a small raft furnished by Calypso with provisions of water, wine and food, only to be hit by a storm and washed up on the island of Scheria, where he was found by Nausicaa, daughter of King Alcinous and Queen Arete of the Phaeacians, who entertained him well and escorted him to Ithaca. On the twentieth day of sailing he arrived at his home on Ithaca.
There Odysseus, disguised as an old beggar by Athena, was recognized by his dog Argus, who died in his lap. He then discovered that his wife Penelope had been faithful to him all these years, despite the countless suitors who had been eating and spending his property. With the help of his son Telemachus, Athena, and Eumaeus the swineherd, he killed all of them except Medon, who had been polite to Penelope, and Phemius, a local singer who had only been forced to help the suitors against Penelope. Penelope tested him and made sure it was really he, and he forgave her. The next day the suitors' relatives tried to take revenge on him, but they were stopped by Athena.
Years later Telegonus, Odysseus' son by Circe, came from the sea and plundered the island, thinking it was Corcyra. Odysseus and Telemachus defended their city, and Telegonus accidentally killed his father with the spine of a stingray. He brought the body back to Aeaea and took Penelope and Telemachus with him. Circe made them immortal and married Telemachus, while Telegonus made Penelope his wife.[183] This is where the tale of the Trojan War in Greek mythology ends. According to a Roman tradition, Odysseus did not die this way: when old he took a ship to sea and, crossing the Pillars of Hercules, discovered the estuary of the Tagus river and founded there the city of Lisbon.
See also
Aeneid
Founding of Rome
Nostoi
References
Trojan War |
48005845 | https://en.wikipedia.org/wiki/OMEMO | OMEMO | OMEMO is an extension to the Extensible Messaging and Presence Protocol (XMPP) for multi-client end-to-end encryption developed by Andreas Straub. According to Straub, OMEMO uses the Double Ratchet Algorithm "to provide multi-end to multi-end encryption, allowing messages to be synchronized securely across multiple clients, even if some of them are offline". The name "OMEMO" is a recursive acronym for "OMEMO Multi-End Message and Object Encryption".
It is an open standard based on the Double Ratchet Algorithm and the Personal Eventing Protocol (PEP, XEP-0163).
OMEMO offers future and forward secrecy and deniability with message synchronization and offline delivery.
Features
In comparison with OTR, the OMEMO protocol offers many-to-many encrypted chat, offline messages queuing, forward secrecy, file transfer, verifiability and deniability at the cost of slightly larger message size overhead.
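The forward secrecy in this comparison comes from the Double Ratchet's symmetric-key chain: each message key is derived from a chain key that is immediately advanced, so compromising current state does not reveal keys for earlier messages. The following is a minimal illustrative sketch of that chain step; the `0x01`/`0x02` HMAC inputs follow the published Signal Double Ratchet convention, but this is not OMEMO's exact KDF or wire format.

```python
import hashlib
import hmac

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive one message key and the next chain key from the current
    chain key, as in a Double Ratchet symmetric-key chain.
    Illustrative sketch only -- not OMEMO's exact KDF."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

# Hypothetical initial chain key (in practice derived from a key agreement).
ck = hashlib.sha256(b"shared secret").digest()
mk1, ck = ratchet_step(ck)  # key for message 1; old chain key is discarded
mk2, ck = ratchet_step(ck)  # key for message 2 differs from message 1's
```

Because each step is a one-way function of the previous chain key, an attacker who captures the current chain key cannot run the chain backwards to recover `mk1` or `mk2` for previously sent messages.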
History
The protocol was developed and first implemented by Andreas Straub as a Google Summer of Code project in 2015. The project's goal was to implement a double-ratchet-based multi-end to multi-end encryption scheme into an Android XMPP-based instant messaging client called Conversations.
It was introduced in Conversations and submitted to the XMPP Standards Foundation (XSF) as a proposed XMPP Extension Protocol (XEP) in the autumn of 2015, and was accepted as XEP-0384 in December 2016.
In July 2016, the ChatSecure project announced that they would implement OMEMO in the next releases. ChatSecure v4.0 supports OMEMO and was released on January 17, 2017.
A first experimental release of an OMEMO plugin for the cross-platform XMPP client Gajim was made available on December 26, 2015.
In June 2016, the non-profit computer security consultancy firm Radically Open Security published an analysis of the OMEMO protocol.
Client support
Selected clients supporting OMEMO include:
BeagleIM (macOS)
ChatSecure (iOS)
Conversations (Android)
Converse.js (Browser-based)
Dino (Linux, macOS)
Gajim via official plugin (Linux, Windows, BSD)
Monal (iOS)
Movim (Browser-based)
Psi via official plugin (Linux, Windows, macOS)
Psi+ via official plugin (Linux, Windows, macOS, Haiku, FreeBSD)
libpurple clients such as Pidgin or Finch via experimental plugin
Adium via an Xtra based on the libpurple plugin
Profanity via experimental plugin (BSD, Linux, macOS, Windows)
SiskinIM (iOS)
Library support
Smack supports OMEMO using the two modules smack-omemo and smack-omemo-signal
XMPPFramework (macOS, iOS, tvOS) supports OMEMO via the OMEMOModule extension when used in conjunction with the SignalProtocol-ObjC library.
References
External links
Homepage
XEP-0384: OMEMO Encryption (Experimental)
Python library for implementing OMEMO in other clients
OMEMO protocol implementation in C
OMEMO Top - OMEMO support toplist in instant message clients
Free security software
Cryptographic protocols
Internet privacy software
Instant messaging
XMPP |
620159 | https://en.wikipedia.org/wiki/List%20of%20fighting%20games | List of fighting games | Fighting games are characterized by close combat between two fighters or groups of fighters of comparable strength, with matches often broken into rounds or stocks. If multiple players are involved, players generally fight against each other.
Note: Games are listed in a "common English title/alternate title - developer" format, where applicable.
General
2D
Fighting games that use 2D sprites. Games tend to emphasize the height of attacks (high, medium, or low), and jumping.
Aggressors of Dark Kombat - ADK
Tōkidenshō Angel Eyes - Tecmo
Akatsuki Blitzkampf series - Subtle Style
Akatsuki Blitzkampf
Akatsuki Blitzkampf Ausf. Achse
En-Eins Perfektewelt
Aquapazza: Aquaplus Dream Match / Examu
Arcana Heart series - Examu
Arcana Heart
Arcana Heart Full!
Arcana Heart 2
Suggoi! Arcana Heart 2
Arcana Heart 3
Arcana Heart 3:Love MAX!!!!!
Arcana Heart 3 Love Max:Six Stars!!!!!!
Art of Fighting series - SNK
Art of Fighting
Art of Fighting 2
Art of Fighting 3: The Path of the Warrior
Astra Superstars - Sunsoft
Asura series - Fuuki
Asura Blade: Sword of Dynasty
Asura Buster: Eternal Warriors
Asuka 120% series
Asuka 120% Burning Festival - Fill-in-Cafe
Asuka 120% Excellent Burning Festival - Fill-in-Cafe / FamilySoft
Asuka 120% Maxima Burning Festival - Fill-in-Cafe
Asuka 120% Special Burning Festival - FamilySoft
Asuka 120% Limited Burning Festival - Kodansha
Asuka 120% Final Burning Festival - FamilySoft / SUCCESS
Asuka 120% Return Burning Festival - FamilySoft
Avengers in Galactic Storm - Data East
Aazohm Krypht - Logitron
Bangkok Knights - System 3
Battle Blaze - American Sammy
Battle Beast - 7th Level
Battle K-Road - Psikyo
Battle Master: Kyuukyoku no Senshitachi - System Vision
Battle Monsters - Naxat Soft
BattleCry - Home Data
Barbarian series
Barbarian: The Ultimate Warrior / Death Sword - Palace Software
Barbarian II: The Dungeon of Drax / Axe of Rage - Palace Software
Beastlord - Grandslam
Best of Best - SunA
Bible Fight - This is Pop
Big Bang Beat - NRF Software
Bikini Karate Babes - Creative Edge Studios
Black Belt - Earthware Computer Services
Black Hole Assault - Micronet
Blandia - Allumer
BlazBlue series - Arc System Works
BlazBlue: Calamity Trigger
BlazBlue: Calamity Trigger Portable
BlazBlue: Continuum Shift
BlazBlue: Continuum Shift II
BlayzBloo: Super Melee Brawlers Battle Royale
BlazBlue: Continuum Shift Extend
BlazBlue: Chrono Phantasma
BlayzBloo: Clone Phantasma
BlazBlue: Chrono Phantasma Extend
BlazBlue: Central Fiction
Blood Warrior / Ooedo Fight - Kaneko
BloodStorm - Incredible Technologies
Body Blows series
Body Blows - Team17
Body Blows Galactic - Team17
Ultimate Body Blows - Team17
Bounces - Denton Designs
Breakers/Breakers Revenge - Visco
Brutal: Paws of Fury series
Brutal: Paws of Fury - GameTek
Brutal Unleashed: Above the Claw - GameTek
Budokan: The Martial Spirit - Electronic Arts
Burning Rival - Sega
Capcom Fighting Evolution - Capcom
Capital Punishment - ClickBOOM
Capoeira Fighter series - Spiritonin
Capoeira Fighter
Capoeira Fighter 2
Capoeira Fighter 3
Catfight - Atlantean Interactive Games
Chinese Hero - Taiyo System
Chaos Code – FK Digital pty ltd.
Cho Aniki: Bakuretsu Ranto Hen - NCS
Chop Suey - English Software
Choy Lee Fut: Kung-Fu Warrior - Positive (company)
ClayFighter series - Interplay
ClayFighter
ClayFighter: Tournament Edition
ClayFighter 2 / ClayFighter 2: Judgement Clay / C2: Judgement Clay
ClayFighter 63⅓
Crimson Alive - Keropyon
Crimson Alive: Genesis of The Heretic - Keropyon
Crimson Alive: Burst Again - Keropyon
Crimson Alive: Extreme Encounter - Keropyon
Cross Theater - ABC Maru
Cosmic Carnage - Sega
Cyberbots: Full Metal Madness - Capcom
Daemon Bride series - Examu
Daemon Bride - Examu
Daemon Bride: Additional Gain - Examu
Dangerous Streets - Micromania (video game developer)
Darkstalkers series - Capcom
Darkstalkers: The Night Warriors
Night Warriors: Darkstalkers' Revenge
Darkstalkers 3
Vampire Hunter 2: Darkstalkers' Revenge
Vampire Savior 2: The Lord of Vampire
Dino Rex - Taito
Divekick - Iron Galaxy
Doomsday Warrior / Taiketsu!! Brass Numbers (Japanese name) - Renovation Productions Inc.
Double Dragon: The Movie - Technos
Dr. Doom's Revenge - Empire Interactive
Draglade series - Dimps
Custom Beat Battle: Draglade
Draglade 2
Dragon: The Bruce Lee Story - Virgin
Dragon Ball Z (Arcade) - Banpresto
Dragon Master - Unico (video game company)
Dragoon Might - Konami
Duel 2000 - Coktel Vision
Eternal Champions series - Sega
Eternal Champions
Eternal Champions: Challenge from the Dark Side
Eternal Fighter Zero - Tasogare Frontier
Expect No Mercy
The Fallen Angels / Daraku Tenshi - Psikyo
Fatal Fury/Garou Densetsu series - SNK
Fatal Fury: King of Fighters
Fatal Fury 2
Fatal Fury Special
Fatal Fury 3: Road to the Final Victory
Real Bout Fatal Fury
Real Bout Fatal Fury Special
Real Bout Fatal Fury 2: The Newcomers
Real Bout Garou Densetsu Special: Dominated Mind
Fatal Fury: 1st Contact
Garou: Mark of the Wolves
Fighter's History series - Data East
Fighter's History
Fighter's History Dynamite
Fighter's History: Mizoguchi Kiki Ippatsu!!
Fightin' Spirit - Lightshock Software
Fighting Masters - Treco
Fighting Road - Toei Animation
Fist Fighter - Zeppelin
Fist of the North Star - Arc System Works
Fight Fever / Wang Jung Wang - Unotechnology
Flash Hiders series - Right Stuff
Flash Hiders
Battle Tycoon: Flash Hiders SFX
FOOTSIES series - HiFight
FOOTSIES
FOOTSIES: Rollback Edition
Fu'un series - SNK
Savage Reign
Kizuna Encounter: Super Tag Battle
Fuuka Taisen - Rei no Mono
Galactic Warriors - Konami
Galaxy Fight: Universal Warriors - Sunsoft
Gladiator - Domark
Glove on Fight - French-Bread
Golden Axe: The Duel - Sega
Guilty Gear series - Arc System Works
Gundam: Battle Assault series - Bandai
Head to Head Karate - Softdisk Publishing
Hercules: Slayer of the Damned - Gremlin Graphics
Hiryu no Ken series - Culture Brain
Hitman Reborn! DS: Flame Rumble - Tomy
Holosseum - Sega
Human Killing Machine / HKM - Tiertex
Immaterial and Missing Power - Twilight Frontier / Team Shanghai Alice
InuYasha: A Feudal Fairy Tale - Bandai
International Karate series
International Karate / World Karate Championship - System 3
IK+ / International Karate + / Chop N' Drop - System 3
Istanbul Beyleri series - AKEMRE
Istanbul Beyleri
Istanbul Beyleri 2
JoJo's Bizarre Adventure - Capcom
Joy Mech Fight - Nintendo
Jump Stars series - Ganbarion
Jump Super Stars
Jump Ultimate Stars
Justice League Task Force - Blizzard Entertainment
Kabuki Klash: Far East Of Eden - Hudson Soft
Kaiser Knuckle / Global Champion / Dan-Ku-Ga - Taito
Karate Master Knock Down Blow - Crian Soft
Karate - Ultravision
Karate Champ - Technos Japan Corporation
Karate Combat - Superior Software
Karateka - Jordan Mechner
Kart Fighter - Unlicensed game featuring Mario
Kasumi Ninja - Hand Made Software
Kick Box Vigilante - Zeppelin
Killer Instinct series
Killer Instinct - Rare
Killer Instinct 2 - Rare
Killer Instinct Gold - Rare
The Killing Blade - IGS
The King of Fighters series - SNK
The King of Fighters '94
The King of Fighters '95
The King of Fighters '96
The King of Fighters '97
The King of Fighters '98
The King of Fighters '99
The King of Fighters 2000
The King of Fighters 2001
The King of Fighters 2002
The King of Fighters 2003
The King of Fighters Neowave
The King of Fighters XI
The King of Fighters XII
The King of Fighters XIII
Knight Games - English Software
Konjiki no Gash Bell Yuujou no Zakeru 2 - Banpresto
Konjiki no Gash Bell Yuujou no Zakeru Dream Tag Tournament - Banpresto
The Kung-Fu Master Jackie Chan / Jackie Chan in Fists of Fire: Jackie Chan Densetsu - Kaneko
The Last Blade series - SNK
Last Fight - Andromeda Software
Makeruna! Makendō 2: Kimero Youkai Souri - Success / Fill-in-Cafe
Maribato! - DK Soft
Martial Masters - IGS
Martial Champion - Konami
Marvel Super Heroes series - Capcom
X-Men: Children of the Atom
Marvel Super Heroes
Marvel vs. Capcom series - Capcom
X-Men vs. Street Fighter
Marvel Super Heroes vs. Street Fighter
Marvel vs. Capcom
Marvel vs. Capcom 2: New Age of Heroes
Master Axe: The Genesis of MysterX - Axe to Grind
Master Ninja: Shadow Warrior of Death - Paragon Software
Matsumura Kunihiro Den: Saikyō no Rekishi o Nurikaero! - Shouei
Melty Blood series - Type-Moon / French-Bread / Ecole Software
Metal & Lace: Battle of the Robo Babes - Forest
Mighty Morphin Power Rangers series
Mighty Morphin Power Rangers (Sega Genesis)
Mighty Morphin Power Rangers (Sega Game Gear)
Mighty Morphin Power Rangers: The Movie (Sega Game Gear)
Mighty Morphin Power Rangers: The Fighting Edition
Mighty Warriors - Electronic Devices/Electtronica Video-Games SRL
Million Knights Vermilion - NRF Software
Monster - 8105 Graphics
Mortal Kombat series - Midway
Mortal Kombat
Mortal Kombat II
Mortal Kombat 3
Ultimate Mortal Kombat 3
Mortal Kombat Trilogy
M.U.G.E.N
Neo Geo Battle Coliseum - SNK
New Mobile Report Gundam Wing: Endless Duel - Natsume
NinjaTrick - CyberAgent America
Ninja / Ninja Mission - Entertainment USA / Mastertronic
Ninja Hamster - CRL
Ninja Master's - ADK
Nitro Royale: Heroines Duel - Nitroplus
No Exit - Titus / Tomahawk
Ōgon Musōkyoku - 07th Expansion
One Must Fall: 2097 - Epic Games
Ragnagard/Shin-Oh-Ken - Saurus / System Vision
Osu!! Karate Bu - Culture Brain
Queen of Heart - Watanabe Seisakujo
Panza Kick Boxing / Best of the Best: Championship Karate - Loriciels
Persona 4 Arena - Arc System Works/Atlus
Persona 4 Arena Ultimax - Arc System Works/Atlus
Phantom Breaker - 5pb.
Photo Dojo - Nintendo
Pit-Fighter: The Ultimate Competition - Atari Games
Power Instinct/Goketsuji Ichizoku series - Atlus
Power Instinct
Power Instinct 2
Gogetsuji Legends
Groove on Fight
Matrimelee
Shin Goketsuji Ichizoku: Bonnou no Kaihou
Gōketsuji Ichizoku Matsuri Senzo Kuyou
Power Moves - Kaneko
Pray For Death - Virgin Interactive
Primal Rage series - Atari Games
Project Cerberus – Hobibox / Milestone
RABBIT - Aorn / Electronic Arts
Red Earth / Warzard - Capcom
Rage of the Dragons - Evoga / Noise Factory
Ranma ½ series
Ranma ½: Chōnai Gekitōhen - NCS
Ranma ½: Datou, Ganso Musabetsu Kakutou-ryuu! - NCS
Ranma ½: Hard Battle - Atelier Double
Ranma ½: Chougi Rambuhen - Rumic Soft
Ranma ½: Battle Renaissance - Rumic Soft
Rise of the Robots series - Mirage Media
Rise 2: Resurrection - Mirage Media
Rock, Paper, Scissors: Extreme Deathmatch - This is Pop
The Rumble Fish series - Dimps
The Rumble Fish
The Rumble Fish 2
Sango Fighter series - Panda Entertainment
Sango Fighter
Sango Fighter 2
Sai Combat - Mirrorsoft
Sailor Moon series - Angel
Samurai Deeper Kyo - Bandai
Samurai Shodown / Samurai Spirits series - SNK
Samurai Trilogy - Gremlin Graphics
Savage Warriors - Mindscape
Scarlet Weather Rhapsody - Twilight Frontier / Team Shanghai Alice
Schmeiser Robo – Hot B Co. Ltd.
Seifuku Densetsu Pretty Fighter - Genki / Sol
Sengoku Basara X - Capcom / Arc System Works
Shadow Fighter - Gremlin Graphics
Shanghai Karate - Players
Shaq-Fu - Delphine
Shin Koihime Musō: Otome Taisen Sangokushi Engi - BaseSon
Shogun Warriors / Fujiyama Buster - Kaneko
Skullgirls - Lab Zero Games
Sokko Seitokai Sonic Council - Banpresto
SNK Gals' Fighters - SNK
SNK vs. Capcom series - Capcom / SNK
SnapDragon / Karate Chop - Bubble Bus
Sokko Seitokai: Sonic Council - Banpresto
Spectral vs. Generation - Idea Factory / IGS
Spitting Image - Domark
Street Combat - NCS
Street Fighter series - Capcom
Street Fighter
Street Fighter II: The World Warrior
Street Fighter II: Champion Edition
Street Fighter II: Hyper Fighting
Super Street Fighter II: The New Challengers
Super Street Fighter II Turbo
Hyper Street Fighter II: The Anniversary Edition
Super Street Fighter II Turbo HD Remix
Ultra Street Fighter II: The Final Challengers
Street Fighter Alpha: Warriors' Dreams
Street Fighter Alpha 2
Street Fighter Alpha 2 Gold
Street Fighter Zero 2 Alpha
Street Fighter Alpha 3
Street Fighter Alpha Anthology
Street Fighter III
Street Fighter III: 2nd Impact
Street Fighter III: 3rd Strike
Street Fighter: The Movie - IT / Capcom
Street Smart - SNK
Strip Fighter series - Games Express
Sumo Wrestlers - HES
Super Black Belt Karate - Computer Applications
Super Cosplay War Ultra - Team FK
Super Fighter - C&E
Super Gem Fighter Mini Mix / Pocket Fighter - Capcom
Superior Soldiers / Perfect Soldiers - Irem
Survival Arts - Sammy
Swashbuckler - Paul Stephenson
Sword Slayer / Spartacus the Swordslayer - Players
Taekwon-Do - Human Entertainment
Tao Taido - Video System
Tattoo Assassins - Data East
Tatsunoko Fight - Takara
Teenage Mutant Ninja Turtles: Tournament Fighters - Konami
Thai Boxing - Anco Software
Thea Realm Fighters - High Voltage Software
Them's Fightin' Herds - Mane6
Time Killers - Incredible Technologies
Timeslaughter - Bloodlust Software
Tongue of the Fatman / Mondu's Fight Palace / Slaughter Sport / Fatman - Activision
Tuff E Nuff / Dead Dance - Jaleco
Twinkle Queen - Milestone
Uchi Mata / Brian Jack's Uchi Mata - Martech
Ultra Vortek - Atari
Under Night In-Birth - Type-Moon / French-Bread / Ecole Software
Untouchable - Creative Edge Studios
Urban Champion - Nintendo
Vanguard Princess - Sugeno
Vanguard Princess Prime - Sugeno
Variable Geo series - TGL / Giga
Violence Fight series
Violence Fight - Taito
Solitary Fighter / Violence Fight II - Taito
Voltage Fighter Gowcaizer - Technos
VR Troopers - Syrox Developments
Waku Waku 7 - Sunsoft
Warriors of Elysia - Creative Edge Studios
The Way of the Exploding Fist series
The Way of the Exploding Fist / Kung-Fu: The Way of the Exploding Fist - Beam Software
Fist II: The Legend Continues / Fist: The Legend Continues / Exploding Fist II: The Legend Continues - Beam Software
Fist+ / Exploding Fist + - Beam Software
Way of the Tiger - Gremlin Graphics
Way of the Warrior - Naughty Dog
Weaponlord - Visual Concepts
Windy X Windam – Success / Ninja Studio
World Heroes series - ADK / SNK
Xuan Dou Zhi Wang / King of Combat - Jade Studio and Tencent Games
Yie Ar Kung-Fu series
Yie Ar Kung-Fu - Konami / Imagine
Yie Ar Kung-Fu II: The Emperor Yie-Gah - Imagine / Konami
Yu Yu Hakusho Final - Namcot
Zatch Bell! Electric Arena / Konjiki no Gash Bell! Yuujou no Zakeru - Banpresto
2.5D
2.5D fighting games are displayed in full 3D graphics, but the gameplay is based on traditional 2D style games.
All Star Fighters - Essential Games
Battle Fantasia - Arc System Works
Battle Stadium D.O.N. - Namco Bandai Games / Eighting / Q Entertainment
Blade Arcus from Shining - Sega
Blade Strangers - Studio Saizensen / Nicalis
Canimals Fighters - Voozclub Co. Ltd / Playplus
Cartoon Network: Punch Time Explosion
Digimon Rumble Arena series - Namco Bandai Games
Digimon Rumble Arena
Digimon Rumble Arena 2
Dragon Ball FighterZ - Arc System Works
Dragon Blast - Dragon Tea
Dream Mix TV World Fighters - Hudson Soft
Fantasy Strike - Sirlin Games
Fight of Gods - Digital Crafter
Fighter Uncaged Series
Fighters Uncaged - Ubisoft
Fighter Within - Ubisoft
Fighting EX Layer - Arika
Fullmetal Alchemist: Dream Carnival - Bandai / Eighting
Genei Tougi series - Racdym
Critical Blow
Guilty Gear series - Arc System Works
Guilty Gear Xrd
Guilty Gear Strive
Hinokakera - Reddish Region
Injustice series - Warner Bros. Interactive Entertainment / NetherRealm Studios
Injustice: Gods Among Us
Injustice 2
Killer Instinct (2013) - Double Helix Games / Iron Galaxy Studios
The King of Fighters series - SNK
The King of Fighters XIV
The King of Fighters XV
Konjiki no Gash Bell!! Go! Go! Mamono Fight!! - Eighting
Marvel vs. Capcom (series) - Capcom
Marvel vs Capcom 3: Fate of Two Worlds
Ultimate Marvel vs. Capcom 3
Marvel vs. Capcom: Infinite
Mashbox - Microsoft Studios
Mortal Kombat (series)
Mortal Kombat - Warner Bros. Interactive Entertainment / NetherRealm Studios
Mortal Kombat X - Warner Bros. Interactive Entertainment / NetherRealm Studios
Mortal Kombat 11 - Warner Bros. Interactive Entertainment / NetherRealm Studios
Moshi Fighters - Mind Candy / Activision / Sumo Digital
Mythic Blades - Vermillion Entertainment
Naruto: Ultimate Ninja series - Namco Bandai Games
- AOne Games
One Piece: Gear Spirit - Bandai
PlayStation All-Stars Battle Royale
Power Rangers: Battle for the Grid - Animoca Brands
Rakugakids - Konami
Rise of the Robots series - Mirage Media
Rise 2: Resurrection - Mirage Media
Rising Thunder - Radiant Entertainment
Samurai Shodown (2019) - SNK
Slap Happy Rhythm Busters - Polygon Magic
Sonic Battle - Sega / Sonic Team
Street Fighter EX series - Arika/Capcom
Street Fighter EX
Street Fighter EX Plus
Street Fighter EX Plus α
Street Fighter EX2
Street Fighter EX2 Plus
Street Fighter EX3
Street Fighter IV (2008) - Capcom
Super Street Fighter IV (2010)
Super Street Fighter IV: 3D Edition
Super Street Fighter IV: Arcade Edition
Ultra Street Fighter IV
Street Fighter V - Capcom
Street Fighter V: Arcade Edition
Street Fighter V: Champion Edition
Street Fighter X Tekken - Capcom
Super Smash Bros. series - Nintendo / HAL Laboratory / Sora / Namco Bandai Games
Super Smash Bros.
Super Smash Bros. Melee
Super Smash Bros. Brawl
Super Smash Bros. for Nintendo 3DS / Wii U
Super Smash Bros. Ultimate
Tamagotchi Battle - Bandai Namco Games / Bandai
Tatsunoko vs. Capcom: Cross Generation of Heroes - Capcom
Tatsunoko vs. Capcom: Ultimate All-Stars - Capcom
Teenage Mutant Ninja Turtles: Smash-Up - Ubisoft / Game Arts
Viewtiful Joe: Red Hot Rumble - Capcom
Ultraman Nexus - Bandai
Punch Planet - Sector-K Games
3D
3D fighting games add three-dimensional movement. These often emphasize sidestepping.
.hack//Versus - Cyber Connect 2
ARMS - Nintendo
Ballz - Accolade
Battle Arena Toshinden series - Tamsoft
Battle Tryst - Konami
Bio F.R.E.A.K.S. - Saffire
Bloody Roar / Beastorizer series - Hudson / Eighting / Raizing
Buriki One - SNK
Capcom Fighting All-Stars - Capcom
Cardinal Syn - Kronos
Castlevania Judgment - Konami / Eighting
Celebrity Deathmatch - Big Ape
Criticom - Kronos
Custom Robo series - Noise
Dark Edge - Sega
Dark Rift - Kronos
Dead or Alive series - Team Ninja
Dead or Alive
Dead or Alive 2
Dead or Alive 3
Dead or Alive Ultimate
Dead or Alive 4
Dead or Alive: Dimensions
Dead or Alive 5
Dead or Alive 6
Deadly Arts - Konami
Def Jam series - Aki / EA Canada / EA Chicago
Destrega - Koei
Dragon Ball Z: Budokai Tenkaichi series - Spike
Dual Heroes - Hudson Soft
Ehrgeiz - Dream Factory
Evil Zone - Yuke's
Fate/tiger colosseum - Capcom / Cavia / Type-Moon
Fate/unlimited codes - Capcom / Cavia / Eighting / Type-Moon
Fight for Life - Atari
Fighters Megamix - Sega-AM2
Fighter's Destiny series - Imagineer / Genki
Fighter's Impact series - Taito
Fighter's Impact
Fighter's Impact A
Fighting Bujutsu - Konami
Fighting Bujutsu 2nd!
Fighting Layer - Arika
Fighting Vipers - Sega-AM2
Fighting Vipers 2 - Sega-AM2
Final Fight Revenge - Capcom
FIST - Genki
FX Fighter series - Argonaut Games
FX Fighter
FX Fighter Turbo
Genei Tougi series - Racdym
Genei Tougi: Shadow Struggle
Girl Fight - Kung Fu Factory
Groove Adventure Rave: Fighting Live - Konami
Heaven's Gate - Racdym
Hiryu no Ken series - Culture Brain
Flying Dragon - Culture Brain
Iron and Blood - Take-Two Interactive
Jojo's Bizarre Adventure: All Star Battle - Cyber Connect 2
Kabuki Warriors - Genki (company) / Lightweight
Kakuto Chojin: Back Alley Brutal - Dream Publishing
Kensei: Sacred Fist - Konami
Killing Zone - Naxat Soft
Kinnikuman Muscle Grand Prix series - AKI Corporation / Banpresto
KOF: Maximum Impact series - SNK Playmore
Kung Fu Chaos - Just Add Monsters / Microsoft Game Studios
Legend of the Dragon - The Game Factory
Mace: The Dark Age - Midway
Magical Battle Arena - Fly-System / AreaZERO
- Twelve Interactive
Marvel Nemesis: Rise of the Imperfects - Nihilistic / EA Canada / Team Fusion
Mortal Kombat series - Midway
Mortal Kombat 4
Mortal Kombat: Deadly Alliance
Mortal Kombat: Deception
Mortal Kombat: Armageddon
Mortal Kombat vs. DC Universe
Naruto: Clash of Ninja (series) - Eighting / Takara Tomy
Naruto: Ninja Destiny series (Nintendo DS)
Naruto: Ultimate Ninja Storm - Namco Bandai Games
Naruto Shippuden: Ultimate Ninja Storm 2
Naruto Shippuden: Ultimate Ninja Storm Generations
Naruto Shippuden: Ultimate Ninja Storm 3
Naruto Shippuden: Ultimate Ninja Storm Revolution
Naruto Shippuden: Ultimate Ninja Storm 4
One Must Fall: Battlegrounds
One Piece series - Ganbarion
Pokkén Tournament - Bandai Namco Entertainment
Power Stone series - Capcom
Power Stone
Power Stone 2
Poy Poy series - Konami
Poy Poy
Poy Poy 2
Psychic Force series - Taito
Psychic Force
Psychic Force 2012
Rival Schools series - Capcom
Rival Schools: United By Fate
Project Justice
Robo Pit - Kokopeli
Rumble Roses series - Yuke's / Konami
Shaolin - THQ
Shijō Saikyō no Deshi Kenichi: Gekitō! Ragnarok Hachikengō - Capcom
Sonic The Fighters - Sega-AM2
Star Wars: Masters of Teras Kasi - LucasArts
Star Wars: The Clone Wars – Lightsaber Duels
Soulcalibur series - Namco
Soul Edge
Soulcalibur
Soulcalibur II
Soulcalibur III
Soulcalibur IV
Soulcalibur: Broken Destiny
Soulcalibur V
Soulcalibur: Lost Swords
Soulcalibur VI
Stake: Fortune Fighters - Gameness
Star Gladiator series - Capcom
Super Dragon Ball Z - Namco Bandai Games
Tao Feng: Fist of the Lotus - Studio Gigante
Tech Romancer - Capcom
Tekken series - Namco
Tekken
Tekken 2
Tekken 3
Tekken Tag Tournament
Tekken 4
Tekken Advance
Tekken 5
Tekken 5: Dark Resurrection
Tekken 6
Tekken 6: Bloodline Rebellion
Tekken 7
Tekken Tag Tournament 2
Tekken 3D: Prime Edition
Tekken Revolution
Tenth Degree - Atari
Theatre Of Pain - Mirage Media
The Fight: Lights Out - SCE
The Grim Adventures of Billy & Mandy - Midway
Thrill Kill - Paradox Development
Time Warriors - Silmarils
Tobal series - Dream Factory
Tom and Jerry: War of the Whiskers - VIS Entertainment
Tournament of Legends - High Voltage Software
Transformers: Beast Wars Transmetals - Takara
Vs. - THQ
Virtua Fighter series - Sega-AM2
War Gods - Midway
Warpath: Jurassic Park - Black Ops Entertainment / DreamWorks Interactive
Wu-Tang: Shaolin Style - Paradox Development
X: Unmei no Sentaku - Bandai
X-Men fighting games - Paradox Development / Activision
X-Men: Mutant Academy
X-Men: Mutant Academy 2
X-Men: Next Dimension
Xena: Warrior Princess: The Talisman of Fate - Saffire
Yu Yu Hakusho: Dark Tournament - Digital Fiction
Zatch Bell! Mamodo Battles / Konjiki no Gash Bell! Yuujou no Tag Battle 2 - Eighting
Zatch Bell! Mamodo Fury / Konjiki no Gash Bell! Gekitou! Saikyou no Mamonotachi - Mechanic Arms
Zeno Clash - ACE Team
Zero Divide series - ZOOM Inc.
Zero Divide
Zero Divide 2: The Secret Wish
Zero Divide: The Final Conflict
Weapon-based
Adding melee weapons to a fighting game often makes attack range more of a factor, as opponents may wield swords, knives, katanas, or other weapons of drastically different sizes.
2D
Barbarian - Palace Software
Battle Blaze - American Sammy
Blade Arcus from Shining - Sega
Blandia - Allumer
BlazBlue - Arc System Works
Chaos Breaker / Dark Awake - Eolith / Taito
Dragoon Might - Konami
Dual Blades - Vivid Image
Fu'un series - SNK
Savage Reign
Kizuna Encounter
Gladiator - Domark
Guilty Gear - Arc System Works
Hana no Keiji: Kumo no Kanata ni - Yojigen
Highlander - Ocean
Knight Games - English Software
Knuckle Heads - Namco
Samurai Deeper Kyo - Bandai
The Last Blade series - SNK
Martial Champion - Konami
Ninja Master's -Haoh-Ninpo-Cho- - Alpha Denshi
Red Earth / War-Zard - Capcom
Revengers of Vengeance - Extreme Entertainment Group
Sai Combat - Mirrorsoft
Samurai Shodown / Samurai Spirits series - SNK
Sarayin Esrari - Akemre
Suiko Embu series
Outlaws of the Lost Dynasty / Suiko Enbu/Dark Legend - Data East
Suiko Enbu-Fuun Saiki - Data East
Shadow Fight series
Shadow Fight - Nekki
Shadow Fight 2 - Nekki
Sword Slayer / Spartacus the Swordslayer - Players
Time Killers - Strata
Touhou Project series
Touhou Project 7.5 - Immaterial and Missing Power
Touhou Project 10.5 - Scarlet Weather Rhapsody
Touhou Project 12.3 - Touhou Hisōtensoku
Touhou Project 13.5 - Hopeless Masquerade
Touhou Project 14.5 - Urban Legend in Limbo
Touhou Project 15.5 - Antinomy of Common Flowers
WeaponLord - Namco
2.5D
Battle Fantasia - Arc System Works
Granblue Fantasy Versus - Arc System Works
Samurai Shodown (2019) - SNK
3D
Battle Arena Toshinden series - Tamsoft
Bleach (video game series)
Bushido Blade series - Square-Enix / Lightweight
Bushido Blade
Bushido Blade 2
Cardinal Syn - Kronos
Criticom - Kronos Digital Entertainment / Vic Tokai
Dark Rift - Kronos Digital Entertainment / Vic Tokai
Deadliest Warrior: The Game - Pipeworks Software
Dynasty Warriors - Koei
Kengo - Genki
Last Bronx - Sega AM3
Mace: The Dark Age - Midway
Mortal Kombat series
Mortal Kombat 4 - Midway
Mortal Kombat: Deadly Alliance - Midway
Mortal Kombat: Deception - Midway
Mortal Kombat: Armageddon - Midway
Mortal Kombat X - NetherRealm Studios
Mortal Kombat 11 - NetherRealm Studios
Samurai Shodown series
Samurai Shodown 64 - SNK
Samurai Shodown 64: Warriors Rage - SNK
Samurai Shodown: Warriors Rage - SNK
Samurai Shodown: Sen - SNK Playmore
Sarayin Esrari - Akemre
Shadow Fight series
Shadow Fight 3 - Nekki
Shadow Fight Arena - Nekki
Soul series - Namco
Star Gladiator series
Star Gladiator - Capcom
Plasma Sword: Nightmare of Bilstein - Capcom
Star Wars: Masters of Teräs Käsi - LucasArts
Tag team-based
Fighting games that feature tag teams as the core gameplay element. Teams of players may each control a different character, or a single player may control multiple characters but play one at a time. Other fighters feature tag-teaming as an alternate game mode.
2D
Blade Arcus from Shining - Sega
BlazBlue: Cross Tag Battle - Arc System Works
The Eye of Typhoon / Kyoku Cho Gou Ken - Viccom
Kizuna Encounter: Super Tag Battle - SNK
Marvel vs. Capcom series - Capcom
NeoGeo Battle Coliseum - SNK Playmore
Umineko: Golden Fantasia - 07th Expansion
The King of Fighters series - SNK/SNK Playmore
The King of Fighters 2003
The King of Fighters XI
Power Instinct series - Atlus
Gogetsuji Legends / Power Instinct Legends
Groove on Fight
Rage of the Dragons - Evoga / Noise Factory
Skullgirls - Lab Zero Games/Marvelous
SNK vs. Capcom: The Match of the Millennium
The Killing Blade - International Games System
Twinkle Queen - Milestone
Konjiki no Gash Bell! Yuujou no Zakeru Dream Tag Tournament - Banpresto
2.5D
Capcom's Versus series
Tatsunoko vs. Capcom: Ultimate All-Stars - Capcom / Eighting
Marvel vs. Capcom 3: Fate of Two Worlds / Ultimate Marvel vs. Capcom 3 - Capcom
Dragon Ball FighterZ - Arc System Works
Mortal Kombat - NetherRealm Studios
Power Rangers: Battle for the Grid - Animoca Brands
SNK Heroines: Tag Team Frenzy - SNK / Abstraction Games
Street Fighter X Tekken - Capcom
3D
Dead or Alive series - Tecmo
Dead or Alive 2
Dead or Alive 3
Dead or Alive Ultimate
Dead or Alive 4
Dead or Alive: Dimensions
Dead or Alive 5
Dead or Alive 5 Ultimate
Dead or Alive 5 Last Round
Naruto: Gekitou Ninja Taisen 3 - Eighting / Takara Tomy
Naruto: Gekitou Ninja Taisen 4 - Eighting / Takara Tomy
Naruto: Clash of Ninja Revolution 2 - Eighting / D3 Publisher / Takara Tomy
Naruto Shippuden: Clash of Ninja Revolution 3 - Eighting / D3 Publisher / Takara Tomy
Naruto Shippūden: Gekitō Ninja Taisen! Special - Eighting / Takara Tomy
One Piece: Burning Blood
Street Fighter EX3 - Arika/Capcom
Tekken Tag Tournament series - Namco
Tekken Tag Tournament
Tekken Tag Tournament 2
Platform fighters
While traditional 2D/3D fighting game mechanics are more or less descendants of Street Fighter II, platform fighters tend to blend fighting with elements taken from platform games. A typical match is arranged as a battle royal. Compared to traditional fighting games, attack inputs are simpler and emphasis is put on dynamic maneuvering around the arena, using the level design to gain an advantage. Another major gameplay element is the use of items, which may spawn randomly anywhere in the arena. Other terms used for this subgenre include "Smash clone", "party brawler", "party fighter", and "arena fighter" (a term also used for a different style of 3D fighting game).
2D
Armor Mayhem - Louissi
Blue Mischief - Team WING
Brawlhalla - Blue Mammoth Games
Guilty Gear Dust Strikers - Arc System Works
Jump Stars series - Ganbarion
Jump Super Stars
Jump Ultimate Stars
Kanon and AIR Smash - micro dream studio++
Lethal League - Team Reptile
Paperbound - Dissident Logic
Rivals of Aether - Dan Fornace
Shovel Knight Showdown - Yacht Club Games
Shrek: Fairy Tale Freakdown - Prolific
Sugoi Hebereke - Sunsoft
The Outfoxies - Namco
2.5D
Antistatic - Blue Hexagons
Armajet - Super Bit Machine
Brawlout - Angry Mob Games
Cartoon Network: Punch Time Explosion - Papaya Studio
DreamMix TV World Fighters - Bitstep
Konjiki no Gash Bell!! Go! Go! Mamono Fight!! - Eighting
Kung Fu Panda: Showdown of Legendary Legends - Vicious Cycle Software
Lethal League Blaze - Team Reptile
Neon Genesis Evangelion: Battle Orchestra - Headlock
Nickelodeon All-Star Brawl - Ludosity / Fair Play Labs
One Piece: Gear Spirit - Bandai
Onimusha Blade Warriors - Capcom
PlayStation All-Stars Battle Royale - SuperBot Entertainment
Rumble Arena - Rekall Games
Slap City - Ludosity
Super Smash Bros. series - Nintendo / HAL Laboratory / Sora / Bandai Namco Studios
Super Smash Bros.
Super Smash Bros. Melee
Super Smash Bros. Brawl
Super Smash Bros. for Nintendo 3DS / Wii U
Super Smash Bros. Ultimate
Tales of VS. - Matrix Software
Teenage Mutant Ninja Turtles: Smash Up - Game Arts
Viewtiful Joe: Red Hot Rumble - Capcom
3D
Barbarian - Saffire
The Grim Adventures of Billy & Mandy - Midway Games
Groove Adventure Rave: Fighting Live - Konami
Keriotosse! - Taya
Kung Fu Chaos - Just Add Monsters / Microsoft Game Studios
Pocket Kanon & Air - Studio SiestA
Poy Poy series - Konami
Poy Poy
Poy Poy 2
Rakugaki Showtime - Treasure
Shrek SuperSlam - Activision
Stake: Fortune Fighters - Gameness Art Software Inc.
Suzumiya Haruhi no Chourantou - Souvenir Circ.
Suzumiya Haruhi no Gekitou - Souvenir Circ.
Teenage Mutant Ninja Turtles: Mutant Melee - Konami
Tom and Jerry: War of the Whiskers - VIS Entertainment
Arena fighting games
Arena fighters usually focus on free-roaming 3D movement with a camera that follows the character, unlike traditional 3D fighting games such as the Tekken series, which maintain a side view and a side-scrolling orientation to attacks. They also usually emphasize offense over defense. Games in this subgenre are often based on popular anime series or other licensed properties.
3D
ARMS - Nintendo
Castlevania Judgment - Konami
Demon Slayer: Kimetsu no Yaiba – The Hinokami Chronicles - CyberConnect2
Dissidia: Final Fantasy
Dissidia 012 Final Fantasy
Dragon Ball Z: Budokai Tenkaichi (series) - Spike
Dragon Ball: Raging Blast - Spike
Dragon Ball: Raging Blast 2 - Spike
Groove Adventure Rave: Fighting Live - Konami
Godzilla video games - Toho / Atari
King of the Monsters series - SNK
War of the Monsters - Incognito Entertainment / Sony
J-Stars Victory VS - Spike Chunsoft
Naruto: Ultimate Ninja Storm
Naruto Shippuden: Ultimate Ninja Storm 2
Naruto Shippuden: Ultimate Ninja Storm Generations
Naruto Shippuden: Ultimate Ninja Storm 3
Naruto Shippuden: Ultimate Ninja Storm Revolution
Power Stone series
Power Stone - Capcom
Power Stone 2 - Capcom
Override 2: Super Mech League
Pokkén Tournament - Bandai Namco Entertainment
Saint Seiya: Brave Soldiers - Dimps
Saint Seiya: Soldiers' Soul - Dimps
Shijō Saikyō no Deshi Kenichi: Gekitō! Ragnarok Hachikengō - Capcom
Spawn: In the Demon's Hand
Yu Yu Hakusho: Dark Tournament - Digital Fiction
Zatch Bell! Mamodo Battles / Konjiki no Gash Bell! Yuujou no Tag Battle 2 - Eighting
Zatch Bell! Mamodo Fury / Konjiki no Gash Bell! Gekitou! Saikyou no Mamonotachi - Mechanic Arms
4-way simultaneous fighting
Games in which four players face off at once. Other games may feature 4-way fighting as an alternate game mode, but for the games listed here it is central to how the game is usually played.
2D
Bleach Nintendo DS games
Bleach: The Blade of Fate
Bleach: Dark Souls
Guilty Gear series - Arc System Works
Guilty Gear Isuka - Sammy
Guilty Gear: Dust Strikers
Jump Stars series - Ganbarion
Jump Super Stars
Jump Ultimate Stars
Lethal League - Team Reptile
Naruto Shippūden: Ninjutsu Zenkai! Cha-Crash!!
Sonic Battle - Sonic Team
Yū Yū Hakusho Makyō Tōitsusen
2.5D
Battle Stadium D.O.N - Eighting
Cartoon Network: Punch Time Explosion - Crave
DreamMix TV World Fighters - Hudson Soft
Konjiki no Gash Bell!! Go! Go! Mamono Fight!! - Eighting
Lethal League Blaze - Team Reptile
Neon Genesis Evangelion: Battle Orchestra - Headlock
Nickelodeon All-Star Brawl - Ludosity / Fair Play Labs
One Piece: Gear Spirit - Bandai
PlayStation All-Stars Battle Royale - Sony Computer Entertainment
Sonic Battle - Sega / Sonic Team
Street Fighter X Tekken - Capcom
Super Smash Bros. series - Nintendo / HAL Laboratory / Sora / Bandai Namco Studios
Super Smash Bros.
Super Smash Bros. Melee
Super Smash Bros. Brawl
Super Smash Bros. for Nintendo 3DS / Wii U
Super Smash Bros. Ultimate
Teenage Mutant Ninja Turtles: Smash-Up - Ubisoft/Game Arts
Viewtiful Joe: Red Hot Rumble - Capcom
3D
ARMS - Nintendo
Destrega - Koei
The Grim Adventures of Billy & Mandy - Midway Games
Groove Adventure Rave: Fighting Live - Konami
Naruto: Clash of Ninja (series) - Eighting / Takara Tomy
Naruto: Clash of Ninja 2
Naruto: Clash of Ninja Revolution
Naruto: Clash of Ninja Revolution 2
Naruto: Gekitō Ninja Taisen! 3
Naruto: Gekitō Ninja Taisen! 4
Naruto Shippuden: Clash of Ninja Revolution 3
Naruto Shippūden: Gekitō Ninja Taisen! EX
Naruto Shippūden: Gekitō Ninja Taisen! EX 2
Naruto Shippūden: Gekitō Ninja Taisen! EX 3
Naruto Shippūden: Gekitō Ninja Taisen! Special
Power Stone series
Power Stone - Capcom
Power Stone 2 - Capcom
Shrek SuperSlam - Activision
Teenage Mutant Ninja Turtles: Mutant Melee - Konami
Thrill Kill - Virgin Interactive
Viewtiful Joe: Red Hot Rumble - Capcom
Wu-Tang: Shaolin Style - Activision
Sports (combat) subgenres
Sports-based combat games (also known as sport-fighters or combat sports games) fall firmly within both the fighting and sports game genres. Such games are usually based on boxing, mixed martial arts, or wrestling, with each sport regarded as its own subgenre. The combat is often far more realistic than in fighting games (though the amount of realism can vary greatly), and many feature real-world athletes and franchises, making them very distinct from traditional fighting games.
Boxing
Boxing games go back farther than any other kind of fighting game, starting with Sega's Heavyweight Champ in 1976, often called the first video game to feature hand-to-hand fighting. Fighters wear boxing gloves and fight in rings, and can range from actual professional boxers to aliens to Michael Jackson.
10... Knock Out! - Amersoft
3D Boxing - Amsoft
3D World Boxing Champion - Simulmondo
4-D Boxing - Distinctive Software
4D Sports Boxing - Mindscape
ABC Wide World of Sports Boxing - Cinemaware
Animal Boxing - Destineer
ARMS - Nintendo
Barry McGuigan World Championship Boxing / Star Rank Boxing - Gamestar / Activision
Best Bout Boxing - Jaleco
Boxing - Activision
Boxing - Mattel Electronics
Boxing Angel
Boxing Fever
Boxing Legends of the Ring - Sculptured Software
Black & Bruised - Majesco Entertainment
By Fair Means or Foul / Pro Boxing Simulator - Superior Software / Alligata / Codemasters
Canimals Boxing Championship - Voozclub Co. Ltd / Playplus
Devastating Blow - Beyond Belief
Evander "Real Deal" Holyfield's Boxing - Sega
FaceBreaker - EA Canada
Final Blow - Taito / STORM (The Sales Curve)
Fight Night (1985) - Accolade / U.S. Gold
Fight Night series
Fight Night 2004 - EA Sports
Fight Night Round 2 - EA Sports
Fight Night: Round 3 - EA Chicago
Fight Night Round 4 - EA Canada
Fight Night Champion - EA Canada
Foes of Ali - Gray Matter Studios
Frank Bruno's Boxing - Elite
George Foreman's KO Boxing - Beam Software
Greatest Heavyweights of the Ring - Sega
Hajime no Ippo: Road to Glory
Heavyweight Champ series
Heavyweight Champ (1976)
Heavyweight Champ (1987)
Knockout - Alligata
Knockout Kings series - EA Sports
Knockout Kings 99
Knockout Kings
Knockout Kings 2000
Knockout Kings 2001
Knockout Kings 2002
Knockout Kings 2003
Legend of Success Joe - Wave Corp. / SNK
Muhammad Ali Heavyweight Boxing - Park Place
Neutral Corner / USA Boxing - KAB Software
Online Boxing series
Online Boxing 2D(2001) - onlineboxing.net
Online Boxing 3D(2009) - 3dboxing.com
Poli Diaz - OperaSoft
Power Punch II - ASC
Pro Boxing - Artworx
Punch-Out!! series - Nintendo
Punch-Out!!
Super Punch-Out!!
Mike Tyson's Punch-Out!!
Super Punch-Out!! (SNES)
Punch-Out!! (Wii)
Prize Fighter
Ready 2 Rumble series
Ready 2 Rumble Boxing - Midway
Ready 2 Rumble Boxing: Round 2 - Midway / Point of View
Ready 2 Rumble Boxing: Round 3 - 10TACLE Studios / Stereo Mode
Ready 2 Rumble Revolution - AKI Corporation / Atari
Real Steel - Yuke's
Riddick Bowe Boxing - Malibu Interactive
Ring King - Data East
Ringside - EAS / Mentrox / Goldline
Rocky series
Creed: Rise to Glory - Survios
Rocky Super Action Boxing - Coleco
Rocky - Sega
Rocky (6th gen consoles) - Ubisoft
Rocky Legends - Ubisoft
Rocky Balboa - Ubisoft
Rocky / Rocco - Dinamic / Gremlin
Sierra Championship Boxing - Sierra On-Line
Star Rank Boxing II - Gamestar / Activision
Street Cred Boxing - Players
TKO - Accolade
Teleroboxer - Nintendo R&D3
The Big KO - Tynesoft
The Champ - Linel
The Final Round (1988) - Konami
Title Fight - Sega
Victorious Boxers series
Victorious Boxers: Ippo's Road to Glory - New Corporation
Victorious Boxers 2: Fighting Spirit - New Corporation
Victorious Boxers: Revolution - AQ Interactive / GrandPrix
Wade Hixton's Counter Punch - Inferno Games
Wii Sports: Wii Boxing - Nintendo
Boxing management
Boxing games where combat is not directly human-controlled in the ring. Instead, a boxer is trained via a resource management game scheme, and bouts are directed via instructions given prior to each round.
Boxing Manager - Cult
Online Boxing Manager - OBM
Ringside Seat - SSI
TKO Professional Boxing - Lance Haffner Games
The Boxer - Cult
World Championship Boxing Manager - Goliath Games / Krisalis Software
World Championship Boxing Manager Online - WCBM Online
Mixed martial arts
While most versus fighting games could be considered mixed martial arts games, listed here are games that are based on actual MMA franchises or tournaments.
TDT-Online - TDT
Ultimate Fighting Championship - Anchor Inc.
UFC: Sudden Impact - Opus
UFC: Tapout - Dream Factory
UFC: Throwdown - Opus
UFC 2009 Undisputed - Yuke's
UFC Undisputed 2010 - Yuke's
UFC Undisputed 3 - Yuke's
UFC Personal Trainer (video game) - Yuke's
EA Sports MMA - EA Sports
EA Sports UFC - EA Sports
EA Sports UFC 2 - EA Sports
EA Sports UFC 3 - EA Sports
EA Sports UFC 4 - EA Sports
Supremacy MMA - Kung Fu Factory
Astral Bout / Sougou Kakutougi Astral Bout - King Records
Astral Bout 2 / Sougou Kakutougi Astral Bout 2 The Total Fighters - King Records
Astral Bout 3 / Fighting Network Rings: Astral Bout 3 / Sougou Kakutougi Astral Bout 3 - King Records
Saikyō: Takada Nobuhiko - Super Famicom (1995)
Fighting Network Rings - PS one (1997)
Buriki One: World Grappler Tournament - Arcade (1999)
Grappler Baki: Baki Saidai no Tournament / Fighting Fury - PS2 (2000)
PRIDE FC: Fighting Championships - PS2 (2003)
The Wild Rings - Xbox (2003)
PrideGP Grand Prix (2003)
The Ishu Kakutougi / World Fighting - PS2 (2003)
K-1 Premium 2004 Dynamite - PS2 (2004)
Garouden Breakblow - PS2 (2005)
K-1 Premium 2005 Dynamite - PS2 (2005)
Garouden Breakblow: Fist or Twist - PS2 (2007)
MMA Tycoon - Browser (2009)
TheFlyingKnee - Browser/Animated (2011)
Kickboxing
K-1 World GP
K-1 World GP 2006
K-1 Premium 2005 Dynamite!!
K-1 World GP 2005
K-1 World Max 2005
K-1 Premium 2004 Dynamite!!
K-1 World Grand Prix 2003
K-1 World Grand Prix: The Beast Attack!
K-1 World Grand Prix
K-1 Pocket Grand Prix 2
K-1 Pocket Grand Prix
K-1 World Grand Prix 2001
K-1 World Grand Prix 2001 Kaimakuden
K-1 Oujya ni Narou!
K-1 Grand Prix
K-1 Revenge
Legend of K-1 Grand Prix '96
K-1 The Arena Fighters
Fighting Illusion K-1 Grand Prix Sho
Legend of K-1 The Best Collection
Wrestling
Wrestling games are based on, or feature elements of, wrestling, such as professional wrestling, grappling, or the wrestling ring itself.
Super Pro Wrestling - INTV Corporation
3 Count Bout / Fire Suplex - SNK
All Japan Pro-Wrestling: Soul of Champion - Human Entertainment
All Star Pro-Wrestling series - Square / Square Enix
American Tag Team Wrestling / Tag Team Wrestling - Zeppelin
Backyard Wrestling series - Paradox Development
Backyard Wrestling: Don't Try This at Home
Backyard Wrestling 2: There Goes the Neighborhood
Backyard Wrestling 2K8
The Big Pro Wrestling! / Tag Team Wrestling - Technos / Quicksilver Software
Blazing Tornado - Human Entertainment
Body Slam / Dump Matsumoto - Sega / Firebird
Championship Wrestling (video game) - Epyx
Def Jam Vendetta - Aki / EA Canada
Fire Pro Wrestling series
Fire Pro Wrestling Combination Tag - Human Entertainment
Fire Pro Wrestling 2nd Bout - Human Entertainment
Super Fire Pro Wrestling - Human Entertainment
Thunder Pro Wrestling Biographies / Fire Pro Wrestling Gaiden - Human Entertainment
Fire Pro Wrestling 3 Legend Bout - Human Entertainment
Super Fire Pro Wrestling 2 - Human Entertainment
Super Fire Pro Wrestling 3 Final Bout - Human Entertainment
Super Fire Pro Wrestling 3 Easy Type - Human Entertainment
Fire Pro Women: All Star Dream Slam / Fire Pro Joshi: All Star Dream Slam - Human Entertainment
Super Fire Pro Wrestling Special - Human Entertainment
Wrestling Universe: Fire Pro Women: Dome Super Female Big Battle: All Japan Women VS J.W.P. - Human Entertainment
Super Fire Pro Wrestling: Queen's Special - Human Entertainment
Fire Pro Another Story: Blazing Tornado - Human Entertainment
Super Fire Pro Wrestling X - Human Entertainment
Fire Pro Wrestling: Iron Slam '96 - Human Entertainment
Super Fire Pro Wrestling X Premium - Human Entertainment
Fire Pro Wrestling S: 6 Men Scramble - Human Entertainment
Fire Pro Wrestling G - Human Entertainment
Fire Pro Wrestling for WonderSwan - Spike
Fire Pro Wrestling i - Spike
Fire Pro Wrestling D - Spike
Fire Pro Wrestling - Spike
Fire Pro Wrestling J - Spike
Fire Pro Wrestling 2 - Spike
Fire Pro Wrestling Z - Spike
Fire Pro Wrestling Returns - Spike
Fire Pro Wrestling World - Spike
Gekitou Burning Pro Wrestling - BPS
HAL Wrestling / Pro Wrestling - Human Entertainment
HammerLock Wrestling - Jaleco
Funaki Masakatsu no Hybrid Wrestler: Tōgi Denshō - Technos
Intergalactic Cage Match / Cage Match - Mastertronic / Entertainment USA
King of Colosseum (Green) NOAH x Zero-One Disc - Spike
King of Colosseum II - Spike
Galactic Wrestling / Kinnikuman Generations - Bandai
Legends of Wrestling series
MUSCLE / Tag Team Match M.U.S.C.L.E. / Kinnikuman Muscle Tag Match - Bandai
Mat Mania Challenge - Technos / Atari
Natsume Championship Wrestling - Natsume
New Japan x All Japan x Pancrase Disc - Spike
Popeye 3 - Alternative
Power Move Pro Wrestling - Future Amusement
Power Pro Wrestling: Max Voltage / Jikkyou Power Pro Wrestling '96: Max Voltage - Konami
Pro Wrestling (Sega Master System) - Sega
Pro Wrestling (NES) - Nintendo
Pure Wrestle Queens / JWP Joshi Pro Wrestling: Pure Wrestle Queens - Jaleco
Cutie Suzuki no Ringside Angel - Asmik
Rock'n Wrestle / Bop'n Wrestle - Beam Software
Rumble Roses series - Konami / Yuke's
Saturday Night Slam Masters series
Saturday Night Slam Masters / Muscle Bomber: The Body Explosion - Capcom
Muscle Bomber Duo: Ultimate Team Battle / Muscle Bomber Duo: Heat Up Warriors - Capcom
Ring of Destruction: Slam Masters II / Super Muscle Bomber: The International Blowout - Capcom
Sgt Slaughter's Mat Wars - Mindscape
Shin Nihon Pro Wrestling Tokon Hono series - Yuke's
Shin Nihon Pro Wrestling Tokon Hono: Brave Spirits
Shin Nihon Pro Wrestling Tokon Hono 2: The Next Generation
The Simpsons Wrestling - Big Ape Productions
Stardust Suplex - Varie
Super Star Pro Wrestling - Nihon Bussan
Take Down - Gamestar
Tecmo World Wrestling - Tecmo
Title Match Pro Wrestling - Absolute Entertainment
TNA Impact! - Midway
TNA Impact!: Cross The Line - Midway
Wrestling - KAB Software
Wrestle Kingdom - Yuke's
Wrestle Kingdom 2 - Yuke's
Wrestle War - Sega
Wrestling Superstars - Codemasters
Wrestling video games based on WWE/WWF properties.
Extreme Championship Wrestling (ECW) series
ECW Anarchy Rulz - Acclaim
ECW Hardcore Revolution - Acclaim
SmackDown! series - THQ / Yuke's
WWF SmackDown!
WWF SmackDown! 2: Know Your Role
WWF SmackDown! Just Bring It
WWE SmackDown! Shut Your Mouth
WWE SmackDown! Here Comes The Pain
WWE SmackDown! vs. RAW
WWE SmackDown! vs. RAW 2006
WWE SmackDown vs. Raw 2007
WWE SmackDown vs. Raw 2008
WWE SmackDown vs. Raw 2009
WWE SmackDown vs. Raw 2010
WWE SmackDown vs. Raw 2011
World Championship Wrestling (WCW) series
WCW Backstage Assault
WCW Mayhem
WCW Nitro
WCW/nWo Revenge
WCW/nWo Thunder
WCW SuperBrawl Wrestling
WCW: The Main Event
WCW vs. nWo: World Tour
WCW vs. the World
WCW Wrestling
WrestleMania series
WWF WrestleMania - Acclaim
WWF WrestleMania Challenge - LJN
WWF WrestleMania (Microcomputer) - Ocean
WWF Super WrestleMania - LJN
WWF WrestleMania: Steel Cage Challenge - LJN
WWF WrestleMania: The Arcade Game - Midway
WWF Road to WrestleMania - Natsume
WWE Road to WrestleMania X8 - Natsume
WWF WrestleMania 2000 - AKI
WWE WrestleMania X8 - Yuke's
WWE WrestleMania XIX - Yuke's
WWE WrestleMania 21 - Studio Gigante
WWE Legends of WrestleMania - Yuke's
WWF/WWE series
MicroLeague Wrestling - MicroLeague
WWE '12 - Yuke's
WWE 13 - Yuke's
WWE 2K14 - Yuke's
WWE 2K15 - Yuke's
WWE 2K16 - Yuke's
WWE 2K17 - Yuke's
WWE 2K18 - Yuke's
WWE 2K19 - Yuke's
WWE 2K20 - Yuke's
WWE All-Stars - THQ
WWF Superstars - Technos
WWF WrestleFest - Technos
WWF Superstars (Game Boy) - LJN
WWF Superstars 2 - LJN
WWF European Rampage Tour - Ocean
WWF Royal Rumble - LJN
WWF RAW - LJN
WWF King of the Ring - LJN
WWF Rage in the Cage - Acclaim
WWF In Your House - Acclaim
WWF War Zone - Acclaim
WWF Attitude - Acclaim
WWF No Mercy - AKI
WWF Royal Rumble (Dreamcast) - Yuke's
WWE RAW - Anchor
WWE RAW 2 - Anchor
WWE Survivor Series - Natsume
WWE Day of Reckoning - Yuke's
WWE Day of Reckoning 2 - Yuke's
Ball/Disc sports
Games built around a flying object, such as a ball or disc, in which players can interact with each other only through that object; arenas may or may not include goalposts.
Lethal League series - Team Reptile
Lethal League
Lethal League Blaze
Windjammers series - Data East / Dotemu
Windjammers - Data East
Windjammers 2 - Dotemu
By theme
Anime-based fighting games
Games based on popular anime series; 3D variants often feature cel shading. "Anime fighters" also usually have very fast-paced action and put emphasis on offense over defense. Another common feature is a fighting system built around long combos of dozens of attacks. Overall, however, they appear in a variety of fighting game subgenres.
2D
Dragon Ball Z: Supersonic Warriors - Bandai
Dragon Ball Z: Budokai (series) - Dimps
Fist of the North Star - Arc System Works
JoJo's Bizarre Adventure - Capcom
Phantom Breaker: Omnia - Mages
2.5D
Battle Stadium D.O.N. - Namco Bandai Games / Eighting / Q Entertainment
DNF Duel - Arc System Works
Dragon Ball Z: Budokai 3 - Dimps
Dragon Ball Z: Burst Limit - Dimps
Dragon Ball FighterZ - Arc System Works
Dragon Ball Z: Infinite World - Dimps
Dream Mix TV World Fighters - Hudson Soft
Guilty Gear Xrd - Arc System Works
Guilty Gear -Strive- - Arc System Works
One Piece: Gear Spirit - Bandai
3D
Demon Slayer: Kimetsu no Yaiba – The Hinokami Chronicles - CyberConnect2
Dragon Ball Z: Budokai Tenkaichi (series) - Spike
Dragon Ball: Raging Blast - Spike
Dragon Ball: Raging Blast 2 - Spike
Groove Adventure Rave: Fighting Live - Konami
Naruto: Clash of Ninja (series) - Eighting / Takara Tomy
Naruto: Clash of Ninja 2
Naruto: Clash of Ninja Revolution
Naruto: Clash of Ninja Revolution 2
Naruto: Gekitō Ninja Taisen! 3
Naruto: Gekitō Ninja Taisen! 4
Naruto Shippuden: Clash of Ninja Revolution 3
Naruto Shippuden: Clash of Ninja for Wii U
Naruto Shippūden: Gekitō Ninja Taisen! EX
Naruto Shippūden: Gekitō Ninja Taisen! EX 2
Naruto Shippūden: Gekitō Ninja Taisen! EX 3
Naruto Shippūden: Gekitō Ninja Taisen! Special
Naruto: Ultimate Ninja Storm
Naruto Shippuden: Ultimate Ninja Storm 2
Naruto Shippuden: Ultimate Ninja Storm Generations
Naruto Shippuden: Ultimate Ninja Storm 3
Naruto Shippuden: Ultimate Ninja Storm Revolution
Saint Seiya: Soldiers' Soul - Dimps
Shijō Saikyō no Deshi Kenichi: Gekitō! Ragnarok Hachikengō - Capcom
Super Dragon Ball Z - Bandai
Yu Yu Hakusho: Dark Tournament - Digital Fiction
Zatch Bell! Mamodo Battles / Konjiki no Gash Bell! Yuujou no Tag Battle 2 - Eighting
Zatch Bell! Mamodo Fury / Konjiki no Gash Bell! Gekitou! Saikyou no Mamonotachi - Mechanic Arms
Crossover
Fighting games featuring characters from more than one franchise. Typically, these draw characters from multiple game and/or comic franchises. Others are single-franchise games that feature guest characters, often via DLC.
Aquapazza: Aquaplus Dream Match - Examu
Battle Stadium D.O.N. - Namco Bandai Games / Eighting / Q Entertainment
Blade Strangers - Nicalis
BlazBlue: Cross Tag Battle - Arc System Works
Bounty Battle - Dark Screen Games
Brawlhalla - Blue Mammoth Games
Brawlout - Angry Mob Games
Cartoon Network: Punch Time Explosion / Cartoon Network: Punch Time Explosion XL - Papaya Studio
Dead or Alive series - Tecmo
Dead or Alive 4
Dead or Alive: Dimensions
Dead or Alive 5
Dead or Alive 6
Dengeki Bunko: Fighting Climax - Sega
DreamMix TV World Fighters - Bitstep
Fight of Gods - Digital Crafter
Fighters Megamix - Sega / Tiger Electronics
Fighting EX Layer / Fighting EX Layer: Another Dash - Arika
Injustice series - NetherRealm Studios
Injustice: Gods Among Us
Injustice 2
J-Stars Victory VS - Spike Chunsoft
Jump Force - Spike Chunsoft
Jump Super Stars - Ganbarion
Marvel Nemesis: Rise of the Imperfects - Nihilistic / EA Canada / Team Fusion
Mashbox
Melty Blood: Type Lumina - French Bread
Mortal Kombat series - Midway Games / Avalanche Software / Eurocom / Just Games Interactive / Midway Studios Los Angeles / Other Ocean Interactive / Point of View, Inc. / NetherRealm Studios
Mortal Kombat vs. DC Universe
Mortal Kombat
Mortal Kombat X
Mortal Kombat 11
NeoGeo Battle Coliseum - SNK Playmore
Nickelodeon All-Star Brawl - Ludosity / Fair Play Labs
Nitroplus Blasterz: Heroines Infinite Duel - Examu
PlayStation All-Stars Battle Royale - SuperBot Entertainment
Samurai Shodown - SNK
SNK Gals' Fighters - Yumekobo
SNK Heroines: Tag Team Frenzy - SNK / Abstraction Games
SNK vs. Capcom series - Capcom / SNK
Soulcalibur series - Project Soul / Bandai Namco Studios
Soulcalibur II
Soulcalibur Legends
Soulcalibur IV / Soulcalibur II HD Online
Soulcalibur: Broken Destiny
Soulcalibur V
Soulcalibur VI
Street Fighter X Tekken - Capcom
Sunday vs Magazine: Shūketsu! Chōjō Daikessen - Konami
Super Smash Bros. series - Nintendo / HAL Laboratory / Sora / Bandai Namco Studios
Super Smash Bros.
Super Smash Bros. Melee
Super Smash Bros. Brawl
Super Smash Bros. for Nintendo 3DS / Wii U
Super Smash Bros. Ultimate
Teenage Mutant Ninja Turtles: Smash-Up (Wii version) - Game Arts
Tekken 7 - Bandai Namco Studios
The King of Fighters series - SNK / Eolith / BrezzaSoft / Noise Factory
The King of Fighters '94 / The King of Fighters '94 Re-Bout
The King of Fighters '95
The King of Fighters '96
The King of Fighters '97
The King of Fighters '98 / The King of Fighters '98 Ultimate Match
King of Fighters R-1
The King of Fighters '99
King of Fighters R-2
The King of Fighters 2000
The King of Fighters 2001
The King of Fighters 2002 / The King of Fighters 2002 Unlimited Match
The King of Fighters 2003
The King of Fighters: Maximum Impact
The King of Fighters Neowave
The King of Fighters XI
The King of Fighters: Maximum Impact 2 / The King of Fighters: Maximum Impact Regulation-A
The King of Fighters XII
The King of Fighters XIII
The King of Fighters XIV
The King of Fighters XV
Twinkle Queen - Milestone
Under Night In-Birth / Under Night In-Birth Exe:Late / Under Night In-Birth Exe:Late[st] / Under Night In-Birth Exe:Late[cl-r] - Arc System Works / French Bread / Ecole Software
VS. Series - Capcom
Capcom Fighting Evolution
Marvel vs. Capcom sub-series
Super Gem Fighter: Mini Mix
Tatsunoko vs. Capcom: Ultimate All-Stars - Capcom / Eighting
Eroge
Fighting eroge (erotic games): fighting games with pornographic elements.
Battle Raper series - Illusion
Battle Raper
Battle Raper 2
Strip Fighter series - Studio S
Strip Fighter II - Games Express
Strip Fighter IV - Studio S
Super Strip Fighter IV - Studio S
Ultra Strip Fighter IV Omeco Edition - Studio S
Strip Fighter 5 - Studio S
Strip Fighter 5 Abnormal Edition - Studio S
Strip Fighter IV Rainbow - Studio S
Strip Fighter 3 Naked Street King - Studio S
Strip Fighter 5 Chimpocon Edition - Studio S
Variable Geo - TGL / Giga
Mech
Fighters with a mecha or robot theme.
Armored Warriors series - Capcom
Armored Warriors
Cyberbots: Full Metal Madness
Gundam: Battle Assault (Series)
Joy Mech Fight - Nintendo
Mighty Morphin Power Rangers: The Fighting Edition - Bandai
Neon Genesis Evangelion: Battle Orchestra - Headlock
One Must Fall: 2097 - Epic Games
Power Quest - Sunsoft
Real Steel - Yuke's
Rise of the Robots - Mirage Media / Time Warner Interactive
Rise 2: Resurrection - Mirage Media / Acclaim Entertainment
Rising Thunder - Radiant Entertainment
Robopit - Kokopeli Digital Studios / Altron
Shin Kidō Senki Gundam Wing: Endless Duel - Natsume Co. Ltd. / Bandai
Super Robot Spirits - Banpresto
Tech Romancer - Capcom
Teleroboxer - Nintendo R&D3
WarTech: Senko No Ronde - G.rev
Virtual On series - Sega
Monster/Kaiju
These games feature monsters as playable characters, usually set in destructible city environments.
Godzilla video games - Toho / Atari
King of the Monsters series - SNK
War of the Monsters - Incognito Entertainment / Sony
RPG
Fighting games with RPG elements, like character building or variable storylines.
Flying Dragon
Draglade
Granblue Fantasy Versus - Arc System Works
Legaia series
Legend of Legaia
Legaia 2: Duel Saga
Revengers of Vengeance
Crash 'n the Boys: Street Challenge
Red Earth
River City Ransom
Shadow Fight series - Nekki
Tenkaichi Bushi Keru Nagūru
Tobal series
Tobal No.1
Tobal 2
Virtua Quest
Dissidia Series
Dissidia: Final Fantasy
Dissidia 012 Final Fantasy
Shaolin - THQ
The King of Fighters All Star - Netmarble / SNK
Super deformed
Super deformed refers to a popular style of Japanese caricature in which the subject is drawn with exaggerated, toddler-like features, such as an oversized head and short, chubby limbs. Movements and expressions in super deformed games also tend to be exaggerated.
Battle Arena Toshinden series – Tamsoft
Battle Arena Nitoshinden
Exteel – NCsoft (only when "SD Mode" is selected)
Fate/tiger colosseum – Capcom / Cavia / Type-Moon
Flying Dragon series – Culture Brain
SD Hiryū no Ken
SD Hiryū no Ken Gaiden
SD Hiryū no Ken Gaiden 2
Flying Dragon (only when "SD Mode" is selected)
SD Hiryū no Ken Densetsu
SD Hiryū no Ken EX
Glove On Fight – Watanabe Seisakujo
Guilty Gear series – Arc System Works
Guilty Gear Petit
Guilty Gear Petit 2
King of Fighters series – SNK
King of Fighters R-1
King of Fighters R-2
Marvel Super Hero Squad – THQ
SNK Gals' Fighters – Yumekobo
SNK vs. Capcom: The Match of the Millennium – SNK
Super Gem Fighter Mini Mix / Pocket Fighter – Capcom
Virtua Fighter series – Sega-AM2
Virtua Fighter Kids
See also
List of beat 'em ups
M.U.G.E.N.
References
External links
Wiktionary - Appendix:Fighting game terms
Fighting games
33965883 | https://en.wikipedia.org/wiki/Outline%20of%20Wikipedia | Outline of Wikipedia | Wikipedia is a free, web-based, collaborative and multilingual encyclopedia website & project supported by the non-profit Wikimedia Foundation. It has more than 48 million articles ( in English) written collaboratively by volunteers around the world. Almost all of its articles can be edited by anyone with access to the site, and it has about 100,000 regularly active contributors.
What type of thing is Wikipedia?
Reference work – compendium of information, usually of a specific type, compiled in a book for ease of reference. That is, the information is intended to be quickly found when needed. Reference works are usually referred to for particular pieces of information, rather than read beginning to end. The writing style used in these works is informative; the authors avoid use of the first person, and emphasize facts.
Encyclopedia – type of reference work or compendium holding a comprehensive summary of information from either all branches of knowledge or a particular branch of knowledge. Encyclopedias are divided into articles or entries, which are usually accessed alphabetically by article name. Encyclopedia entries are longer and more detailed than those in most dictionaries.
Internet encyclopedia project (online encyclopedia) – large database of useful information, accessible via the World Wide Web.
Database – organized collection of data. The data is typically organized to model aspects of reality in a way that supports processes requiring information. For example, modelling the availability of rooms in hotels in a way that supports finding a hotel with vacancies.
Online database – database accessible from a network, including from the Internet (such as on a web page).
Website – collection of related web pages containing images, videos, or other digital assets. A website is hosted on at least one web server, accessible via a network such as the Internet or a private local area network through an Internet address known as a Uniform Resource Locator. All publicly accessible websites collectively constitute the World Wide Web.
Wiki – website that allows the creation and editing of any number of interlinked web pages via a web browser using a simplified markup language or a WYSIWYG text editor. Wikis are typically powered by wiki software and are often developed and used collaboratively by multiple users. Examples include community websites, corporate intranets, knowledge management systems, and note services. The software can also be used for personal notetaking.
Community – group of interacting people with social cohesion, who may share common values.
Community of action – community in which participants endeavor collaboratively to bring about change.
Community of interest – community of people who share a common interest or passion. These people exchange ideas and thoughts about the given passion, but may know (or care) little about each other outside of this area. The common interest on Wikipedia is knowledge.
Community of purpose – community that serves a functional need, smoothing the path of the member for a limited period surrounding a given activity. For example, researching a topic on Wikipedia.org, buying a car on autobytel.com, or collecting antiques on icollector.com.
Virtual community – social network of individuals who interact through specific media, potentially crossing geographical and political boundaries in order to pursue mutual interests or goals.
Online community – virtual community that exists online and whose members enable its existence through taking part in membership ritual. An online community can take the form of an information system where anyone can post content, such as a Bulletin board system or one where only a restricted number of people can initiate posts, such as Weblogs.
Wiki community – users, especially the editors, of a particular wiki.
Collective memory – shared pool of information held in the memories of two or more members of a group.
Implementation of Wikipedia
Structure of Wikipedia
List of Wikipedias – Wikipedia is implemented in many languages. As of April 2018, there were 304 Wikipedias, of which 294 are active.
Logo of Wikipedia – unfinished globe constructed from jigsaw pieces—some pieces are still missing at the top—inscribed with glyphs from many different writing systems.
Articles – written works published in a print or electronic medium. Each Wikipedia is divided into many articles, with each article focusing on a particular topic.
Types of articles on Wikipedia
Prose articles –
Lists –
Item lists –
Article indexes (on the English Wikipedia) –
Outlines (on the English Wikipedia) –
Content management on Wikipedia – processes for the collection, managing, and publishing of information on Wikipedia
Deletionism and inclusionism in Wikipedia – opposing philosophies of editors of Wikipedia concerning the appropriate scope of the encyclopedia, and the appropriate point for a topic to be included as an encyclopedia article or be "deleted".
Notability in English Wikipedia – metric used to determine topics meriting a dedicated encyclopedia article. It attempts to assess whether a topic has "gained sufficiently significant attention by the world at large and over a period of time" as evidenced by significant coverage in reliable secondary sources that are independent of the topic.
Reliability of Wikipedia – Wikipedia is open to anonymous and collaborative editing, so assessments of its reliability usually include examinations of how quickly false or misleading information is removed. An early study conducted by IBM researchers in 2003—two years following Wikipedia's establishment—found that "vandalism is usually repaired extremely quickly—so quickly that most users will never see its effects" and concluded that Wikipedia had "surprisingly effective self-healing capabilities".
Vandalism on Wikipedia – the act of editing the project in a malicious manner that is intentionally disruptive. Vandalism includes the addition, removal, or other modification of text or other material that is either humorous, nonsensical, a hoax, spam or promotion of a subject, or that is offensive, humiliating, or otherwise degrading in nature. Wikipedia takes various measures to prevent or reduce the amount of vandalism.
Wiki magic – described by Jimmy Wales as a phenomenon whereby an author may write the beginnings of an article at the end of the day, only to wake up in the morning and find the stub converted into a much more substantial article.
Computer technology that makes Wikipedia work:
Hardware
Computers – general-purpose devices that can be programmed to carry out sets of arithmetic or logical operations automatically. A computer that is used to host server software is called a "server". It takes many servers to make Wikipedia available to the world. These servers are run by the Wikimedia Foundation.
Software – Wikipedia is powered by the following software on the Wikimedia Foundation's computers (servers). It takes all of these to make Wikipedia pages available on the World Wide Web:
Operating systems used on the Wikimedia Foundation's servers:
Ubuntu Server – used on all Wikipedia servers except those used for image file storage
Solaris – used on Wikipedia's image file storage servers
MediaWiki – main web application that makes Wikipedia work. It is a free web-based wiki software application developed by the Wikimedia Foundation (WMF), written in PHP, that is used to run all of WMF's projects, including Wikipedia. Numerous other wikis around the world also use it.
Content storage – Wikipedia's content (its articles and other pages) is stored in MariaDB databases. The Wikimedia Foundation's wikis are grouped into clusters, and each cluster is served by several MariaDB servers in a single-master configuration.
Distributed object storage – distributed objects are software modules that are designed to work together but reside either in multiple computers connected via a network or in different processes on the same computer. One object sends a message to another object in a remote machine to perform some task.
Ceph –
Swift –
Proxy servers – act as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server and the proxy server evaluates the request as a way to simplify and control its complexity. Proxies were invented to add structure and encapsulation to distributed systems. Today, most proxies are web proxies, facilitating access to content on the World Wide Web. The proxy servers used for Wikipedia are:
For serving up HTML pages – Squid and Varnish caching proxy servers in front of Apache HTTP Server. Apache processes requests via HTTP, the basic network protocol used to distribute information on the World Wide Web.
For serving up image files – Squid and Varnish caching proxy servers in front of Sun Java System Web Server
DNS proxies – the Wikimedia Foundation's DNS proxy servers run PowerDNS, a DNS server program that runs under Unix (including Ubuntu). DNS stands for "domain name system".
Load balancing –
Linux Virtual Server (LVS) – Wikipedia uses LVS on commodity servers to load-balance incoming requests. LVS is also used as an internal load balancer to distribute MediaWiki and Lucene back-end requests.
PyBal – Wikimedia Foundation's own system for back-end monitoring and failover.
Caching
Memcached – Wikipedia uses Memcached for caching of database query and computation results.
For full-text search – Wikipedia uses Lucene, with extensive customization contributed by Robert Stojnic.
Wikimedia configuration files
Setting up Wikipedia on a home computer
Downloading Wikipedia's database (all article text)
Installing MediaWiki (the software that runs Wikipedia)
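The Memcached caching layer listed above can be illustrated with a cache-aside lookup. This is a hedged sketch, not Wikipedia's actual code: a plain Python dict stands in for Memcached, and a stub function stands in for a MariaDB query.

```python
cache = {}  # stand-in for Memcached (a fast in-memory key-value store)

def query_database(title):
    # Stand-in for an expensive MariaDB query fetching an article's wikitext.
    return f"wikitext of [[{title}]]"

def get_article(title):
    if title in cache:                # cache hit: no database round-trip
        return cache[title]
    text = query_database(title)      # cache miss: query the database...
    cache[title] = text               # ...and store the result for next time
    return text

print(get_article("Wiki"))  # miss: fetched from the "database"
print(get_article("Wiki"))  # hit: served from the cache
```

The design point is that repeated reads of popular pages are served from memory, so the database only sees each query once until the cached entry expires or is invalidated.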
Wikipedia community
Community of Wikipedia – loosely knit network of volunteers, sometimes known as "Wikipedians", who make contributions to the online encyclopedia, Wikipedia. A hierarchy exists whereby certain editors are elected to be given greater editorial control by other community members.
Volunteering – altruistic activity, intended to promote good or improve human quality of life, though people also volunteer for their own skill development, to meet others, to make contacts for possible employment, to have fun, and a variety of other reasons that could be considered self-serving. Volunteerism is the act of selflessly giving one's time to something one believes in, free of pay. Wikipedia is written entirely by volunteers.
Virtual volunteering – working on a task on-line, off-site from the organization being assisted, without the requirement or expectation of being paid, using a computer or other Internet-connected device. Wikipedia is developed on-line by contributors using their web browsers.
Micro-volunteering – tasks done by a volunteer, or a team of volunteers, without payment, either online or offline in small increments of time.
Motivations of Wikipedia contributors – article includes various studies about the motivations of Wikipedia contributors.
Arbitration Committee (ArbCom) – panel of editors elected by the Wikipedia community that imposes binding rulings with regard to disputes between editors of the online encyclopedia. It acts as the court of last resort for disputes among editors.
The Signpost – on-line community-written and community-edited newspaper, covering stories, events and reports related to Wikipedia and the Wikimedia Foundation sister projects.
Viewing Wikipedia off-line
Kiwix – free and open-source offline web browser created by Emmanuel Engelhart and Renaud Gaudin in 2007. It was first launched to allow offline access to Wikipedia, but has since expanded to include other projects from the Wikimedia foundation as well as public domain texts from the Project Gutenberg.
XOWA – open-source application written primarily in Java by anonymous developers, intended for users who wish to run their own copy of Wikipedia, or any other compatible Wiki offline without an internet connection. XOWA is compatible with Microsoft Windows, OSX, Linux and Android.
Diffusion of Wikipedia
Diffusion – process by which a new idea or new product is accepted by the market. The rate of diffusion is the speed that the new idea spreads from one consumer to the next. In economics it is more often named "technological change".
Diffusion of innovations – process by which an innovation is communicated through certain channels over time among the members of a social system.
List of Wikipedias – Wikipedia has spread around the world, being made available to people in their native tongues. As of April 2018, there were 304 Wikipedias, of which 294 are active.
Websites that use Wikipedia
Books LLC – publishes print-on-demand paperback and downloadable compilations of English texts and documents from open knowledge sources such as Wikipedia.
DBpedia –
Koru search engine –
Wikipediavision –
Websites that mirror Wikipedia
Answers.com –
Bing –
Facebook –
Reference.com –
TheFreeDictionary.com –
Wapedia –
Wikipedia derived encyclopedias
Books LLC –
VDM Publishing –
Veropedia –
WikiPilipinas –
WikiPock –
WikiReader –
Parodies of Wikipedia
Bigipedia – a comedy series broadcast by BBC Radio 4 in July 2009, which was set on a website which was a parody of Wikipedia. Some of the sketches were directly inspired by Wikipedia and its articles.
Encyclopedia Dramatica –
La Frikipedia –
Stupidedia –
Uncyclopedia – satirical website that parodies Wikipedia. Founded in 2005 as an originally English-language wiki, the project currently spans over 75 languages. The English version has over 30,000 pages of content, second only to the Portuguese.
Wikipedia-related media
Wikipedia Signpost – on-line community-written and community-edited newspaper, covering stories, events and reports related to Wikipedia and the Wikimedia Foundation sister projects.
Books about Wikipedia
Common Knowledge?: An Ethnography of Wikipedia –
The Cult of the Amateur –
Good Faith Collaboration –
How Wikipedia Works –
La révolution Wikipédia –
Wikipedia: A New Community of Practice? –
The Wikipedia Revolution: How a Bunch of Nobodies Created the World's Greatest Encyclopedia –
Wikipedia – The Missing Manual –
The World and Wikipedia: How We are Editing Reality –
Films about Wikipedia
List of films about Wikipedia
Third-party software related to Wikipedia
DBpedia (from "DB" for "database") – database built from the structured content of Wikipedia, including infoboxes, etc. It is made available for free on the World Wide Web. DBpedia allows users to semantically query relationships and properties associated with Wikipedia resources, including links to other related datasets.
Kiwix – free program used to view Wikipedia offline (no Internet connection). This is done by reading the content of the project stored in a file of the ZIM format, which contains the compressed contents of Wikipedia. Kiwix is designed for computers without Internet access, and in particular, computers in schools in the Third World, where Internet service is scant.
WikiTaxonomy – hierarchy of classes and instances (an ontology) automatically generated from Wikipedia's category system
YAGO (Yet Another Great Ontology) – knowledge base developed at the Max Planck Institute for Computer Science in Saarbrücken. It is automatically extracted from Wikipedia and other sources. It includes knowledge about more than 10 million entities and contains more than 120 million facts about these entities.
Mobile apps
QRpedia – mobile Web-based system which uses QR codes to deliver Wikipedia articles to users, in their preferred language. The QRpedia server uses Wikipedia's API to determine whether there is a version of the specified Wikipedia article in the language used by the device, and if so, returns it in a mobile-friendly format. If there is no version of the article available in the preferred language, then the QRpedia server performs a search for the article title on the relevant language's Wikipedia, and returns the results.
WikiNodes – app for the Apple iPad for browsing Wikipedia using a radial tree approach to visualize how articles and subsections of articles are interrelated. It is a visual array of related items (articles or sections of an article), which spread on the screen, as a spiderweb of icons.
Reliability analysis programs
Wiki-Watch – free page analysis tool that automatically assesses the reliability of Wikipedia articles in English and German. It produces a five-level evaluation score corresponding to its assessment of reliability.
Wikibu – assesses the reliability of German Wikipedia articles. It was originally designed for use in schools to improve information literacy.
WikiTrust – assesses the credibility of content and author reputation of wiki articles using an automated algorithm. WikiTrust is a plug-in for servers using the MediaWiki platform, such as Wikipedia.
General Wikipedia concepts
Wikipedia iOS apps –
Henryk Batuta hoax – hoax perpetrated on the Polish Wikipedia in the form of an article about Henryk Batuta (born Izaak Apfelbaum), a fictional socialist revolutionary and Polish Communist. The fake biography said Batuta was born in Odessa in 1898 and participated in the Russian Civil War. The article was created on November 8, 2004, and exposed as a hoax 15 months later when on February 1, 2006, it was listed for deletion.
Bomis – former dot-com company founded in 1996 by Jimmy Wales and Tim Shell. Its primary business was the sale of advertising on the Bomis.com search portal, and it provided support for the free encyclopedia projects Nupedia and Wikipedia.
Conflict of interest editing on Wikipedia –
Crnogorska Enciklopedija –
Deletionpedia –
Democratization of knowledge –
Enciclopedia Libre Universal en Español –
Essjay controversy –
Gene Wiki –
Péter Gervai –
Good Faith Collaboration –
Internet Watch Foundation and Wikipedia –
Interpedia – an early proposal for a collaborative Internet encyclopedia
Rick Jelliffe –
Kidnapping of David Rohde –
Alan Mcilwraith –
National Portrait Gallery and Wikimedia Foundation copyright dispute –
Network effect –
Nupedia –
Wikipedia:Nupedia and Wikipedia –
Edward Owens (hoax) –
Simon Pulsifer –
QRpedia – multilingual, mobile interface to Wikipedia
La révolution Wikipédia –
WikiScanner –
Speakapedia –
The Truth According to Wikipedia –
Truth in Numbers? –
Universal Edit Button –
US Congressional staff edits to Wikipedia –
User-generated content –
Wolfgang Werlé and Manfred Lauber –
Wiki –
Wikidumper.org –
Wikipedia biography controversy –
Wikipedia CD Selection –
Wikipedia Review –
Wikipedia in culture
Politics of Wikipedia
Censorship of Wikipedia –
Church of Scientology editing on Wikipedia –
Corporate Representatives for Ethical Wikipedia Engagement –
Wikipedia for World Heritage – effort underway to get Wikipedia listed as a UNESCO World Heritage Site.
History of Wikipedia
History of Wikipedia – Wikipedia was formally launched on 15 January 2001 by Jimmy Wales and Larry Sanger, using the concept and technology of a wiki pioneered by Ward Cunningham. Initially, Wikipedia was created to complement Nupedia, an online encyclopedia project edited solely by experts, by providing additional draft articles and ideas for it. Wikipedia quickly overtook Nupedia, becoming a global project in multiple languages and inspiring a wide range of additional reference projects.
Nupedia – the predecessor of Wikipedia. Nupedia was an English-language Web-based encyclopedia that lasted from March 2000 until September 2003. Its articles were written by experts and licensed as free content. It was founded by Jimmy Wales and underwritten by Bomis, with Larry Sanger as editor-in-chief.
Wayback Machine – digital time capsule created by the Internet Archive non-profit organization, based in San Francisco, California. The service enables users to see archived versions of web pages (including Wikipedia) across time, which the Archive calls a "three dimensional index". Internet Archive bought the domain waybackmachine.org for their own site. It is currently in its beta test.
Wikipedia on the Wayback Machine
Founders of Wikipedia
Larry Sanger – chief organizer (2001–2002) of Wikipedia. He moved on and founded Citizendium.
Jimmy Wales – historically cited as a co-founder of Wikipedia, though he has disputed the "co-" designation, declaring himself the sole founder. Wales serves on the Board of Trustees of the Wikimedia Foundation, the non-profit charitable organization he helped establish to operate Wikipedia, holding its board-appointed "community founder seat".
Academic studies about Wikipedia – In recent years there have been numerous academic studies about Wikipedia in peer-reviewed publications. This research can be grouped into two categories. The first analyzed the production and reliability of the encyclopedia content, while the second investigated social aspects, such as usage and administration. Such studies are greatly facilitated by the fact that Wikipedia's database can be downloaded without needing to ask the assistance of the site owner.
Flagged Revisions – software extension to the MediaWiki wiki software that allows moderation of edits to Wiki pages. It was developed by the Wikimedia Foundation for use on Wikipedia and similar wikis hosted on its servers. On June 14, 2010, English Wikipedia began a 2-month trial of a similar feature known as pending changes. In May 2011, this feature was removed indefinitely from all articles, after a discussion among English Wikipedia editors.
Wikipedia-inspired projects
Citizendium – wiki for providing free knowledge where authors use their real, verified names.
Conservapedia – English-language wiki encyclopedia project written from an American conservative point of view.
Infogalactic – wiki intended to have less alleged politically progressive, left-wing, or "politically correct" bias than Wikipedia, and to allow articles or statements that would not be allowed on Wikipedia because of problems with Wikipedia's policies on reliable sources, or due to alleged biases held by Wikipedia editors.
Knol – former Google project that aimed to include user-written articles on a range of topics.
Scholarpedia – English-language online wiki-based encyclopedia with features commonly associated with open-access online academic journals, which aims to have quality content.
Uncyclopedia – satirical website that parodies Wikipedia. Its logo, a hollow "puzzle potato", parodies Wikipedia's globe puzzle logo, and it styles itself "the content-free encyclopedia", a parody of Wikipedia's slogan, "the free encyclopedia". The project spans over 75 languages. The English version has approximately 30,000 pages of content, second only to the Portuguese.
Wikipedia in culture
Wikipedia in culture –
Wikiracing – game using the online encyclopedia Wikipedia which focuses on traversing links from one page to another. The average number of links separating any two Wikipedia pages is 3.67.
People in relation to Wikipedia
Larry Sanger – chief organizer (2001–2002) of Wikipedia. He moved on and founded Citizendium.
Jimmy Wales – historically cited as a co-founder of Wikipedia, though he has disputed the "co-" designation, declaring himself the sole founder. Wales serves on the Board of Trustees of the Wikimedia Foundation, the non-profit charitable organization he helped establish to operate Wikipedia, holding its board-appointed "community founder" seat.
Andrew Lih – veteran Wikipedia contributor, and in 2009 published the book The Wikipedia Revolution: How a Bunch of Nobodies Created the World's Greatest Encyclopedia. Lih has been interviewed in a variety of publications, including Salon.com and The New York Times Freakonomics blog, as an expert on Wikipedia.
Critics of Wikipedia
Murat Bardakçı – on Turkish television, he declared that Wikipedia should be banned.
Nicholas G. Carr – in his 2005 blog essay titled "The Amorality of Web 2.0," he criticized the quality of volunteer Web 2.0 information projects such as Wikipedia and the blogosphere and argued that they may have a net negative effect on society by displacing more expensive professional alternatives.
Jorge Cauz – president of Encyclopædia Britannica, Inc. In July 2006, in an interview in The New Yorker, he stated that Wikipedia would "decline into a hulking, mediocre mass of uneven, unreliable, and, many times, unreadable articles" and that "Wikipedia is to Britannica as American Idol is to the Juilliard School."
Conservapedia – English-language wiki project started in 2006 by homeschool teacher and attorney Andy Schlafly, son of conservative activist Phyllis Schlafly, to counter what he called the liberal bias of Wikipedia.
Gay Nigger Association of America – anti-blogging Internet trolling organization. On Wikipedia, members of the group created a page about themselves, while adhering to every rule of Wikipedia in order to use the system against itself.
Aaron Klein –
Jaron Lanier –
Robert McHenry –
Patrick Nielsen Hayden –
Andrew Orlowski –
Robert L. Park –
Jason Scott Sadofsky –
Larry Sanger –
Andrew Schlafly –
John Seigenthaler –
Lawrence Solomon –
Sam Vaknin –
Wikipedia Review –
Tom Wolfe –
Wikipedia Foundations and Organizations
Wikimedia Foundation – non-profit organization based in San Francisco, California, USA, established to own and manage the trademarks and the servers for Wikipedia and its sister projects.
Wikipedia-related projects
Wikipedia's sister projects
Wikimedia projects
Commons – online repository of free-use images, sound and other media files, hosted by the Wikimedia Foundation.
MediaWiki website – home of MediaWiki (the software that runs Wikipedia), and where it gets developed.
Meta-Wiki – central site to coordinate all Wikimedia projects.
Wikibooks – Wiki hosted by the Wikimedia Foundation for the creation of free content textbooks and annotated texts that anyone can edit.
Wikidata – free and open knowledge base that can be read and edited by both humans and machines.
Wikinews – free-content news source wiki and a project of the Wikimedia Foundation that works through collaborative journalism.
Wikiquote – freely available collection of quotations from prominent people, books, films and proverbs, with appropriate attributions.
Wikisource – online digital library of free content textual sources on a wiki, operated by the Wikimedia Foundation.
Wikispecies – wiki-based online project supported by the Wikimedia Foundation. Its aim is to create a comprehensive free content catalogue of all species and is directed at scientists, rather than at the general public.
Wikiversity – Wikimedia Foundation project which supports learning communities, their learning materials, and resulting activities.
Wikivoyage – free web-based travel guide for travel destinations and travel topics written by volunteer authors.
Wiktionary – multilingual, web-based project to create a free content dictionary, available in 158 languages, run by the Wikimedia Foundation.
Wikipedias by language
Afrikaans (af)
Albanian (sq)
Alemannic (als)
Arabic (ar)
Aragonese (an)
Armenian (hy)
Azeri (az)
Bambara (bm)
Basque (eu)
Belarusian (be-x-old)
Belarusian (be)
Bengali (bn)
Bosnian (bs)
Bulgarian (bg)
Cantonese (zh-yue)
Catalan (ca)
Cebuano (ceb)
Chechen (ce)
Chinese (zh)
Chuvash (cv)
Croatian (hr)
Czech (cs)
Danish (da)
Dutch Low Saxon (nds-nl)
Dutch (nl)
Egyptian Arabic (arz)
English (en)
Esperanto (eo)
Estonian (et)
Finnish (fi)
French (fr)
Galician (gl)
Georgian (ka)
German (de)
Greek (el)
Haitian Creole (ht)
Hebrew (he)
Hindi (hi)
Hungarian (hu)
Indonesian (id)
Irish (ga)
Italian (it)
Japanese (ja)
Javanese (jv)
Kannada (kn)
Kazakh (kk)
Korean (ko)
Latin (la)
Latvian (lv)
Lithuanian (lt)
Macedonian (mk)
Malayalam (ml)
Malay (ms)
Marathi (mr)
Minangkabau (min)
Min Nan (zh-min-nan)
Mongolian (mn)
Neapolitan (nap)
Nepal Bhasa (new)
Nepalese (ne)
Northern Sami (se)
Norwegian (Bokmål) (no)
Norwegian (Nynorsk) (nn)
Occitan (oc)
Oriya (or)
Punjabi (Eastern) (pa)
Persian (fa)
Polish (pl)
Portuguese (pt)
Ripuarian (ksh)
Romanian (ro)
Russian (ru)
Sanskrit (sa)
Scots (sco)
Serbian (sr)
Serbo-Croatian (sh)
Silesian (szl)
Simple English (simple)
Slovak (sk)
Slovene (sl)
Spanish (es)
Swahili (sw)
Swedish (sv)
Tagalog (tl)
Tamil (ta)
Telugu (te)
Thai (th)
Turkish (tr)
Ukrainian (uk)
Urdu (ur)
Uzbek (uz)
Vietnamese (vi)
Võro (fiu-vro)
Waray-Waray (war)
Welsh (cy)
Volapük (vo)
Wolof (wo)
Yiddish (yi)
Zulu (zu)
More...
See also
Wikipedia:Contents – network of outlines of Wikipedia's content
Outline of knowledge – outline about knowledge, and of the body of all human knowledge
The Signpost – on-line community-written and community-edited newspaper, covering stories, events and reports related to Wikipedia and the Wikimedia Foundation sister projects.
Wikipedia:Help
List of wikis
List of online encyclopedias
Wikipedia:Semapedia –
References
External links
Wikipedia – multilingual portal (contains links to all language editions of the project)
Wikipedia mobile phone portal
Wikipedia topic page at The New York Times
Wikipedia
Wikipedia |
5418830 | https://en.wikipedia.org/wiki/GNUmed | GNUmed | GNUmed is a Free/Libre electronic medical record (EMR) for Unix-like systems (BSD, Linux, and UNIX systems), Microsoft Windows, macOS and other platforms. GNUmed aims to provide medical software that respects the privacy of patients and that is based on open standards.
GNUmed is based on third-party projects such as the free and open-source DBMS PostgreSQL and is written mostly in Python. Its graphical user interface (GUI) is based on wxPython.
History
The first version of GNUmed was created by Horst Herb. When Herb ceased active development, GNUmed was picked up by Karsten Hilbert, who took over as project leader and partly overhauled the project.
Karsten Hilbert was not alone in his efforts. Several other developers joined the team and helped at one time or another: Syan Tan, Ian Haywood, Hilmar Berger, Sebastian Hilbert, Carlos Moro, Michael Bonert, Richard Terry, Tony Lembke and many more. While some concentrated on coding, many others, like Jim Busser and Rogerio Luz Coelho, helped by creating documentation or providing feedback.
The name was initially chosen to give credit to the GNU project and GNUmed's connection to the medical profession. The logo depicts a Gnu as a reference to the GNU project accompanied by a Python as a reference to the programming language as well as to the medical profession.
At the time, GNUmed was just another free software project aiming to become an alternative to the established EMRs. It has since evolved to rival other EMRs in terms of functionality and performance.
Usage
GNUmed is primarily used to manage electronic medical records. It provides means of archiving paper records as well as collecting metadata on these records. Some uses include administrative tasks such as adding and activating patients, and recording tasks such as data on patients' allergies or immunization records.
Features
GNUmed supports a variety of features, many implemented as plugins which extend the core functionality. These range from a medical paper record archiving system to vaccination status handling. A list of features is provided in GNUmed's documentation system.
Third-party software can interact with GNUmed through its interface to make use of these features.
See also
List of GNU packages
GNU Project
GNU Health
References
General references
Further reading
External links
GNU Project software
Free health care software
Electronic health record software
Healthcare software for MacOS
Healthcare software for Windows
Healthcare software for Linux
Software that uses wxPython |
169633 | https://en.wikipedia.org/wiki/Outline%20of%20computer%20science | Outline of computer science | Computer science (also called computing science) is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well-known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.
What is computer science?
Computer science can be described as all of the following:
Academic discipline
Science
Applied science
Subfields
Mathematical foundations
Coding theory – Useful in networking, programming and other areas where computers communicate with each other.
Game theory – Useful in artificial intelligence and cybernetics.
Discrete Mathematics
Graph theory – Foundations for data structures and searching algorithms.
Mathematical logic – Boolean logic and other ways of modeling logical queries; the uses and limitations of formal proof methods
Number theory – Theory of the integers. Used in cryptography as well as a test domain in artificial intelligence.
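Graph theory's role as a foundation for searching algorithms can be shown with a minimal breadth-first search over an adjacency list. This is an illustrative sketch (the graph and function name are invented for the example), computing the number of edges on a shortest path between two nodes:

```python
from collections import deque

def shortest_link_distance(graph, start, goal):
    """Breadth-first search: edges on a shortest path, or -1 if unreachable."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == goal:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return -1

# A tiny directed graph: each node maps to the nodes it links to.
links = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}
print(shortest_link_distance(links, "A", "D"))  # 2
```

Because BFS explores nodes in order of distance from the start, the first time it reaches the goal is guaranteed to be along a shortest path.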
Algorithms and data structures
Algorithms – Sequential and parallel computational procedures for solving a wide range of problems.
Data structures – The organization and manipulation of data.
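The interplay between the two items above can be sketched with binary search: the algorithm's O(log n) running time depends entirely on the data structure (a sorted array) it operates on. A minimal illustrative implementation:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Halving the search range at each step is only valid because the
    underlying data structure keeps its elements in sorted order.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
```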
Artificial intelligence
Outline of artificial intelligence
Artificial intelligence – The implementation and study of systems that exhibit an autonomous intelligence or behavior of their own.
Automated reasoning – Solving engines, such as used in Prolog, which produce steps to a result given a query on a fact and rule database, and automated theorem provers that aim to prove mathematical theorems with some assistance from a programmer.
Computer vision – Algorithms for identifying three-dimensional objects from a two-dimensional picture.
Soft computing, the use of inexact solutions for otherwise extremely difficult problems:
Machine learning – Development of models that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data.
Evolutionary computing – Biologically inspired algorithms.
Natural language processing – Building systems and algorithms that analyze, understand, and generate natural (human) languages.
Robotics – Algorithms for controlling the behaviour of robots.
Communication and security
Networking – Algorithms and protocols for reliably communicating data across different shared or dedicated media, often including error correction.
Computer security – Practical aspects of securing computer systems and computer networks.
Cryptography – Applies results from complexity, probability, algebra and number theory to invent and break codes, and analyze the security of cryptographic protocols.
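As a toy illustration of the error-correction idea mentioned under networking, a single even-parity bit lets a receiver detect any one flipped bit in a transmitted word. This sketch is deliberately minimal (real protocols use stronger codes such as CRCs or Hamming codes):

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits):
    """True if the word (including its parity bit) has even parity."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
assert check_parity(word)            # intact word passes the check
word[2] ^= 1                         # flip one bit "in transit"
assert not check_parity(word)        # the single-bit error is detected
```

A lone parity bit can detect an odd number of flipped bits but cannot locate or correct them; that distinction is what separates error detection from error correction.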
Computer architecture
Computer architecture – The design, organization, optimization and verification of a computer system, mostly about CPUs and Memory subsystem (and the bus connecting them).
Operating systems – Systems for managing computer programs and providing the basis of a usable system.
Computer graphics
Computer graphics – Algorithms both for generating visual images synthetically, and for integrating or altering visual and spatial information sampled from the real world.
Image processing – Determining information from an image through computation.
Information visualization – Methods for representing and displaying abstract data to facilitate human interaction for exploration and understanding.
Concurrent, parallel, and distributed systems
Parallel computing – The theory and practice of simultaneous computation; data safety in any multitasking or multithreaded environment.
Concurrency (computer science) – Computing using multiple concurrent threads of execution, devising algorithms for solving problems on multiple processors to achieve maximal speed-up compared to sequential execution.
Distributed computing – Computing using multiple computing devices over a network to accomplish a common objective or task and thereby reducing the latency involved in single processor contributions for any task.
Databases
Outline of databases
Relational databases – The set-theoretic and algorithmic foundation of databases.
Structured Storage – Non-relational databases such as NoSQL databases.
Data mining – Study of algorithms for searching and processing information in documents and databases; closely related to information retrieval.
Programming languages and compilers
Compiler theory – Theory of compiler design, based on Automata theory.
Programming language pragmatics – Taxonomy of programming languages, their strengths and weaknesses. Various programming paradigms, such as object-oriented programming.
Programming language theory
Formal semantics – rigorous mathematical study of the meaning of programs.
Type theory – Formal analysis of the types of data, and the use of these types to understand properties of programs — especially program safety.
Scientific computing
Computational science – constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems.
Numerical analysis – Approximate numerical solution of mathematical problems such as root-finding, integration, the solution of ordinary differential equations; the approximation of special functions.
Symbolic computation – Manipulation and solution of expressions in symbolic form, also known as Computer algebra.
Computational physics – Numerical simulations of large non-analytic systems
Computational chemistry – Computational modelling of theoretical chemistry in order to determine chemical structures and properties
Bioinformatics and Computational biology – The use of computer science to maintain, analyse, store biological data and to assist in solving biological problems such as Protein folding, function prediction and Phylogeny.
Computational neuroscience – Computational modelling of neurophysiology.
Software engineering
Outline of software engineering
Formal methods – Mathematical approaches for describing and reasoning about software design.
Software engineering – The principles and practice of designing, developing, and testing programs, as well as proper engineering practices.
Algorithm design – Using ideas from algorithm theory to creatively design solutions to real tasks.
Computer programming – The practice of using a programming language to implement algorithms.
Human–computer interaction – The study and design of computer interfaces that people use.
Reverse engineering – The application of the scientific method to the understanding of arbitrary existing software.
Theory of computation
Automata theory – Different logical structures for solving problems.
Computability theory – What is calculable with the current models of computers. Proofs developed by Alan Turing and others provide insight into the possibilities of what may be computed and what may not.
List of unsolved problems in computer science
Computational complexity theory – Fundamental bounds (especially time and storage space) on classes of computations.
Quantum computing theory – Explores computational models involving quantum superposition of bits.
History
History of computer science
List of pioneers in computer science
Professions
Programmer (Software developer)
Teacher/Professor
Software engineer
Software architect
Software tester
Hardware engineer
Data analyst
Interaction designer
Network administrator
Data scientist
Data and data structures
Data structure
Data type
Associative array and Hash table
Array
List
Tree
String
Matrix (computer science)
Database
Programming paradigms
Imperative programming/Procedural programming
Functional programming
Logic programming
Object oriented programming
Class
Inheritance
Object
See also
Abstraction
Big O notation
Closure
Compiler
Cognitive science
External links
ACM report on a recommended computer science curriculum (2008)
Directory of free university lectures in Computer Science
Collection of Computer Science Bibliographies
Photographs of computer scientists (Bertrand Meyer's gallery)
Computer science
Outline
Computer science topics |
30947 | https://en.wikipedia.org/wiki/Titan | Titan | Titan most often refers to:
Titan (moon), the largest moon of Saturn
Titans, a race of deities in Greek mythology
Titan or Titans may also refer to:
Arts and entertainment
Fictional entities
Fictional locations
Titan in fiction, fictionalized depictions of the moon of Saturn
Titan (Marvel Comics location), a moon
Titan (Marvel Cinematic Universe), its Marvel Cinematic Universe counterpart
Titan, a moon in the list of locations of the DC Universe
Titan, a Fighting Fantasy gamebooks world
Fictional characters
Titan (Dark Horse Comics), a superhero
Titan (Imperial Guard), a Marvel Comics superhero
Titan (New Gods), from DC Comics' Darkseid's Elite
Titan, in the Infershia Pantheon
Titan, in Megamind
Titan, in Sym-Bionic Titan
King Titan, on Stingray (1964 TV series)
Fictional species and groups
Titan (Dune)
Titan (Dungeons & Dragons)
Teen Titans, a DC superhero team
Titan Legions, units in the tabletop game Epic
Titans, in All Tomorrows
Titans, in Attack on Titan (manga series)
Titans, in Brütal Legend
Titans, in Destiny (video game)
Titans, in Godzilla: King of the Monsters (2019 film)
Titans, in the Marvel Universe, the fictional race of supervillain Thanos
Titans, in Mobile Suit Zeta Gundam
Titans, in Titanfall
Other fictional entities
Titan, a chemical in Batman: Arkham Asylum
Titan, a class of ship in Eve Online
Titan, a ship in The Wreck of the Titan: Or, Futility
Film and television
The Titan (film), a 2018 science fiction film directed by Lennart Ruff
The Titan: Story of Michelangelo, a 1950 German documentary film
Titan A.E., a 2000 animated film
Titans (2000 TV series), a 2000 American soap opera
Titans (2018 TV series), a 2018 live-action superhero series
Titans (Canadian TV series), a 1981–1982 docudrama series
Games
Titan (1988 video game), a puzzle game by Titus
Titan (Battlefield 2142)
Titan (Blizzard Entertainment project), a cancelled massive multiplayer game
Titan (board game), a board game
Titan (eSports), an electronic sports team
Titan (game engine)
Age of Mythology: The Titans, an expansion pack for the Age of Mythology computer game
Planetary Annihilation: Titans, an expansion pack RTS
Literature
Titan (Baxter novel), a 1997 science fiction novel by Stephen Baxter
Titan (Bova novel), a novel by Ben Bova in the Grand Tour series
Titan (Jean Paul novel), a novel by the German writer Jean Paul
Titan (John Varley novel), a 1979 novel in the Gaea Trilogy
Titan (Fighting Fantasy book), a 1986 fantasy encyclopedia edited by Marc Gascoigne
Star Trek: Titan, a novel series
The Game-Players of Titan, a 1963 science fiction novel by Philip K. Dick
The Sirens of Titan, a 1959 science fiction novel by Kurt Vonnegut, Jr.
The Titan (collection), a collection of short stories by P. Schuyler Miller
The Titan (novel), a 1914 novel by Theodore Dreiser
The Titans (comic book) (1999–2003), published by DC Comics, featuring the Teen Titans superhero team
The Titans (novel), a novel in the Kent Family Chronicles series by John Jakes
Music
Titán (band), a Mexican band
Tytan (band), a British rock band
Titan (album), a 2014 album by Septicflesh
Symphony No. 1 (Mahler), given the working title Titan
"Titan", by HammerFall from the album Threshold
"Titan", by the American band Bright from the album The Albatross Guest House
The Titan (EP), by Oh, Sleeper
"Titans", by Major Lazer featuring Sia and Labrinth from the album Music Is the Weapon (Reloaded)
Roller coasters
Titan (Six Flags Over Texas), a steel hyper coaster at Six Flags Over Texas, Arlington, Texas, US
Titan (Space World), a steel roller coaster at Space World, Kitakyushu, Japan
Brands and enterprises
Entertainment and media companies
Titan (transit advertising company), an American advertising company
Titan Corporation, a United States-based information technology company
Titan Entertainment Group, a British media company that includes Titan Books and Titan Comics
Titan Media, a pornographic film company
Titan Studios, a video game company
Manufacturers
Titan Aircraft, an aircraft kit manufacturer
Titan Cement, a Greek building materials company
Titan Chemical Corp, a Malaysian chemical company
Titan Company, an Indian watchmaking and luxury goods company
Titan Formula Cars, a race car manufacturer from 1967 to 1976
Titan Tire Corporation
Other brands and enterprises
Titan, a hockey equipment brand by The Hockey Company
Titan, a line of locks by Kwikset
Titan Airways, an airline
Titan Advisors, an American asset management firm
TITAN Salvage, a marine salvage and wreck removal company
People
Titán (wrestler) (born 1990), Mexican masked wrestler
Titan, gladiator from the 2008 TV series American Gladiators
Oliver Kahn (born 1969), German footballer known as Der Titan
Places
Titan (cave), Derbyshire, England
Titan, Saghar District, Afghanistan
Titan, Bucharest, a neighborhood of Bucharest, Romania
Titan metro station
Titan, Russia, a rural locality in Murmansk Oblast, Russia
Titan Tower (Fisher Towers), a natural tower in Utah, US
Science and technology
Computing
Smartphones
HTC Titan (Windows Mobile phone), a smartphone running the Windows Mobile operating system
HTC Titan, a smartphone running the Windows Phone operating system
HTC TyTN, a smartphone
Moto G (2nd generation), a Motorola smartphone with the codename Titan running the Android operating system
Other uses in computing
Titan (1963 computer), a 1960s British computer
Titan (game engine)
Titan (microprocessor), a scrapped family of 32-bit PowerPC-based microprocessor cores
Titan (supercomputer), an American supercomputer
Titan, a Facebook messaging platform
GTX Titan, a GPU by NVIDIA
TITAN2D, a geoflow simulation software application
Titan (security token), a security chip and key from Google
Cranes
Herman the German (crane vessel), former nickname for the floating crane Titan in the Panama Canal Zone
Australian floating crane Titan
Titan crane, a type of block setting crane
Titan Clydebank, a cantilever crane in Scotland
Natural sciences
Titan (moon), the largest moon of Saturn
"-titan", a commonly used taxonomic suffix to describe large animals
Titan beetle
Titan test, an intelligence test
Sports
Sports teams
Acadie–Bathurst Titan, a Canadian ice hockey team
Dresden Titans, a German basketball team
Gold Coast Titans, an Australian rugby league team
New York Titans (lacrosse), a 2006–2009 American lacrosse team
Orlando Titans, a 2010 American lacrosse team
Taunton Titans, first XV team of Taunton Rugby Football Club
Tennessee Titans, an American football team
Titanes F.C., a Venezuelan football team
Titanes de Barranquilla, a Colombian basketball team
Titans (cricket team), a South African cricket team
Titans of New York, an American football team
Titans RLFC, a Welsh rugby league team
Ulster Titans, a Northern Irish rugby team
Victoria Titans, an Australian basketball team
Championships
Titan Cup, a triangular cricket series between India, South Africa and Australia in 1996
Vehicles
Air- and spacecraft
Titan (rocket family)
Titan I
Titan II
Airfer Titan, a Spanish paramotor design
Cessna 404 Titan, a light aircraft
Ellipse Titan, a hang glider
Pro-Design Titan, an Austrian paraglider design
Titan Tornado, a family of cantilever high-wing, pusher configuration, tricycle gear-equipped kit aircraft manufactured by Titan Aircraft
Land vehicles
Apple electric car project, codenamed Titan
Chevrolet Titan, a cabover truck made 1968–1988
Leyland Titan (B15), a bus made 1977–1984
Leyland Titan (front-engined double-decker), a bus chassis made 1927–1969
Mazda Titan, a cabover truck sold in Japan
Nissan Titan, a pickup truck made 2003–present
Terex 33-19 "Titan", a haul truck
Volkswagen Titan, a truck in the Volkswagen Constellation line sold in Brazil
Maritime vessels
Titan (steam tug 1894), a Dutch steam tug
Titan (yacht), a 2010 Abeking & Rasmussen built yacht
Australian floating crane Titan
Empire Titan, a tugboat
USNS Titan (T-AGOS-15), a 1988 U.S. Navy ship
Titan, a floating crane in the Panama Canal Zone long known as Herman the German
Rail
Titan, a South Devon Railway Gorgon class locomotive
Other uses
Titan test, an intelligence test
Titan (dog), the world's tallest dog
Titan (prison), a proposed new classification of prison in England and Wales
Titan, a type of banknote of the pound sterling
Titan language, a language of Manus Island, Papua New Guinea
Titan the Robot, a costume
HMH-769, a helicopter squadron, nicknamed Titan
See also
The Titan Games, an American television series
Game Titan, a former American Video Game development studio
Remember the Titans, a 2000 American sports drama film
Project Titan (disambiguation)
Teen Titans (disambiguation)
Titanic (disambiguation)
Titanium (disambiguation)
Titian (disambiguation)
Titin, a protein |
16443510 | https://en.wikipedia.org/wiki/1372%20Haremari | 1372 Haremari | 1372 Haremari, provisional designation , is a rare-type Watsonian asteroid and a suspected trojan of Ceres from the central regions of the asteroid belt, approximately 26 kilometers in diameter. It was discovered on 31 August 1935, by astronomer Karl Reinmuth at the Heidelberg-Königstuhl State Observatory in southwest Germany. The asteroid was named for all female staff members of the Astronomical Calculation Institute.
Orbit and classification
Haremari is a member of the very small Watsonia family, named after its parent body, namesake and largest member, 729 Watsonia.
It orbits the Sun in the central main-belt at a distance of 2.4–3.2 AU once every 4 years and 7 months (1,680 days). Its orbit has an eccentricity of 0.15 and an inclination of 16° with respect to the ecliptic. The body's observation arc begins with its first observation at Heidelberg in February 1928, more than seven years prior to its official discovery observation.
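As a rough consistency check on these figures, Kepler's third law (P² = a³, with P in years and a in AU) can be applied to a semi-major axis estimated from the rounded 2.4–3.2 AU range; the derived a ≈ 2.8 AU and e ≈ 0.14 are approximations for illustration, not the published orbital elements:

```python
import math

# Semi-major axis estimated from the rounded perihelion/aphelion
# distances quoted above (AU).
a = (2.4 + 3.2) / 2            # 2.8 AU

# Kepler's third law with P in years and a in AU: P**2 = a**3.
period_years = math.sqrt(a ** 3)
period_days = period_years * 365.25

# Eccentricity from the same rounded distances: e = (Q - q) / (Q + q).
e = (3.2 - 2.4) / (3.2 + 2.4)

print(round(period_years, 2), round(period_days), round(e, 2))
```

The rounded inputs give about 4.7 years (≈ 1,711 days) and e ≈ 0.14, consistent with the quoted 4 years and 7 months (1,680 days) and e = 0.15 once the rounding of the orbital distances is taken into account.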
Trojan of Ceres
Long-term numerical integrations suggest that Haremari is a trojan of Ceres, staying in a 1:1 orbital resonance with the only dwarf planet of the asteroid belt. It is thought that Haremari is currently transitioning from a tadpole to a horseshoe orbit. Other suspected co-orbitals are the asteroids 855 Newcombia, 4608 Wodehouse and 8877 Rentaro.
Physical characteristics
In the SMASS classification, Haremari is a rare L-type asteroid with a moderate albedo. This type corresponds with the overall spectral type of the Watsonia family.
Rotation period
In November 2009, a rotational lightcurve of Haremari was obtained from photometric observations by Richard Durkee at the Shed of Science Observatory. Lightcurve analysis gave a rotation period of 15.25 hours with a brightness amplitude of 0.12 magnitude.
Diameter and albedo
According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Haremari measures between 21.96 and 31.17 kilometers in diameter and its surface has an albedo between 0.0303 and 0.146.
The Collaborative Asteroid Lightcurve Link derives an albedo of 0.1097 and a diameter of 24.18 kilometers based on an absolute magnitude of 11.1.
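The CALL diameter and albedo are tied together by the standard conversion D = 1329 km / √p_V · 10^(−H/5), where p_V is the geometric albedo and H the absolute magnitude. The formula is standard practice rather than stated in the article, and plugging in the quoted values reproduces the quoted diameter:

```python
import math

def diameter_km(albedo: float, abs_magnitude: float) -> float:
    """Standard conversion from geometric albedo p_V and absolute
    magnitude H to an asteroid's effective diameter in kilometres:
    D = 1329 / sqrt(p_V) * 10**(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5)

# CALL's quoted values: albedo 0.1097, absolute magnitude 11.1.
print(round(diameter_km(0.1097, 11.1), 2))   # -> 24.18, matching the quoted diameter
```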
Naming
This minor planet jointly honors all the female staff members of the Astronomical Calculation Institute (Heidelberg University), commonly known as ARI. According to the commonly published interpretation, "Haremari" is a composite name meaning "the harem of A.R.I.".
Alternative version
According to Ingrid van Houten-Groeneveld, who worked as a young astronomer at Heidelberg, Reinmuth had often been asked by his colleagues at ARI to name some of his discoveries after their female friends, as well as after popular actresses (and not just the female staff at ARI). He then combined all these proposals into the name "Haremari". However, as Groeneveld recorded, "Reinmuth did not want to publish the original meaning and he, therefore, devised the interpretation of the first sentence in 1948".
References
External links
Nueva órbita del asteroide 1.372 Haremari, José María González Aboin, Universidad de Madrid (1954)
Les noms des astéroïdes, M.-A. Combes, L'Astronomie, Vol. 87, (1974)
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
Discoveries by Karl Wilhelm Reinmuth
Minor planets named for people
Named minor planets
19350831 |
33970218 | https://en.wikipedia.org/wiki/DECbit | DECbit | DECbit is a TCP congestion control technique implemented in routers to avoid congestion. Its utility is to predict possible congestion and prevent it.
When a router wants to signal congestion to the sender, it sets a bit in the header of the packets it forwards. When a packet arrives at the router, the router calculates the average queue length for the last (busy + idle) period plus the current busy period. (The router is busy when it is transmitting packets, and idle otherwise.) When this average queue length exceeds 1, the router sets the congestion-indication bit in the header of arriving packets.
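The router-side test can be sketched as follows (a simplified model written for this article: the time-weighted average over the busy/idle cycle is reduced to a plain mean over queue-length samples):

```python
def should_mark(queue_samples):
    """DECbit router-side test (simplified): set the congestion bit on
    arriving packets when the average queue length over the averaging
    interval is at least 1.  A real router computes a time-weighted
    average over the last (busy + idle) cycle plus the current busy
    period; here that is reduced to a plain mean over samples."""
    avg = sum(queue_samples) / len(queue_samples)
    return avg >= 1.0

print(should_mark([0, 1, 0, 2]))   # average 0.75 -> False, no marking
print(should_mark([1, 2, 3, 2]))   # average 2.0  -> True, mark packets
```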
When the destination replies, the corresponding ACK carries the congestion bit back to the sender. The sender receives the ACK and counts how many packets in the last window arrived with the congestion-indication bit set. If fewer than half of the packets in the last window had the congestion-indication bit set, the window is increased linearly. Otherwise, the window is decreased exponentially.
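The sender-side policy can be sketched as follows; the +1 increment and 0.875 decrease factor are the constants of the classic DECbit scheme, assumed here since the text does not give them:

```python
def update_window(cwnd, marked_acks, total_acks):
    """DECbit sender-side policy: additive increase when fewer than
    half of the packets in the last window came back with the
    congestion bit set, multiplicative decrease otherwise.  The +1
    and 0.875 constants are the classic ones, assumed here rather
    than stated in the text above."""
    if marked_acks < total_acks / 2:
        return cwnd + 1          # linear (additive) increase
    return cwnd * 0.875          # exponential (multiplicative) decrease

w = 10
w = update_window(w, 2, 10)      # 2/10 marked -> grow to 11
w = update_window(w, 8, 10)      # 8/10 marked -> shrink to 9.625
print(w)                         # -> 9.625
```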
This technique dynamically manages the window to avoid congestion, reducing the load when congestion is detected, and tries to balance bandwidth against delay.
Note that this technique does not always use the line effectively, because it fails to take full advantage of the available bandwidth. Besides, the fact that the queue has grown from one cycle to the next does not always mean there is congestion.
References
K. K. Ramakrishnan and Raj Jain, A binary feedback scheme for congestion avoidance in computer networks with a connectionless network layer, Proceedings of ACM SIGCOMM '88 Symposium proceedings on Communications architectures and protocols, Pages 303-313, Stanford, California, USA — August 16 - 18, 1988
Computer networking |
2443683 | https://en.wikipedia.org/wiki/B-Method | B-Method | The B method is a method of software development based on B, a tool-supported formal method based on an abstract machine notation, used in the development of computer software. It was originally developed in the 1980s by Jean-Raymond Abrial in France and the UK. B is related to the Z notation (also originated by Abrial) and supports development of programming language code from specifications. B has been used in major safety-critical system applications in Europe (such as the automatic Paris Métro lines 14 and 1 and the Ariane 5 rocket). It has robust, commercially available tool support for specification, design, proof and code generation.
Compared to Z, B is slightly more low-level and more focused on refinement to code rather than just formal specification — hence it is easier to correctly implement a specification written in B than one in Z. In particular, there is good tool support for this.
The same language is used in specification, design and programming.
Mechanisms include encapsulation and data locality.
Subsequently, another formal method called Event-B has been developed. Event-B is considered an evolution of B (also known as classical B). It is a simpler notation, which is easier to learn and use. It comes with tool support in the form of the Rodin tool.
The main components
B notation depends on set theory and first order logic in order to specify different versions of software that covers the complete cycle of project development.
Abstract machine
In the first and the most abstract version, which is called Abstract Machine, the designer should specify the goal of the design.
Refinement
Then, during a refinement step, they may enrich the specification in order to clarify the goal, or make the abstract machine more concrete by adding details about data structures and algorithms that define how the goal is achieved.
The new version, which is called a Refinement, should be proven to be coherent and to preserve all the properties of the abstract machine.
The designer may make use of B libraries in order to model data structures or to include or import existing components.
Implementation
The refinement continues until a deterministic version is achieved: the Implementation.
During all of the development steps the same notation is used and the last version may be translated to a programming language for compilation.
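The three levels can be illustrated with a minimal abstract machine in classical B notation (a sketch written for this article, not drawn from the references): the invariant states the goal, and preconditioned operations may only be invoked in states where they preserve it.

```b
MACHINE Counter
VARIABLES count
INVARIANT count : NAT & count <= 10
INITIALISATION count := 0
OPERATIONS
  increment = PRE count < 10 THEN count := count + 1 END;
  reset = BEGIN count := 0 END
END
```

A refinement would replace this machine by a more concrete one (for instance, fixing a machine-word representation of count), with each refinement step generating proof obligations that the invariant and the abstract machine's behaviour are preserved, down to a deterministic Implementation that can be translated to a programming language.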
Software
B-Toolkit
The B-Toolkit, developed by Ib Holm Sørensen et al., is a collection of programming tools designed to support the use of the B-Tool, a set theory based mathematical interpreter, for the purposes of a formal software engineering methodology known as the B method.
The toolkit uses a custom X Window Motif Interface for GUI management and runs primarily on the Linux, Mac OS X and Solaris operating systems. It has been developed by the UK based company B-Core (UK) Limited.
The B-Toolkit source code is now available.
Atelier B
Developed by ClearSy, Atelier B is an industrial tool that allows for the operational use of the B Method to develop defect-free proven software (formal software). Two versions are available: the Community Edition, available to anyone without restriction, and the Maintenance Edition, for maintenance contract holders only.
It is used to develop safety automation for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics.
Books
The B-Book: Assigning Programs to Meanings, Jean-Raymond Abrial, Cambridge University Press, 1996. .
The B-Method: An Introduction, Steve Schneider, Palgrave Macmillan, Cornerstones of Computing series, 2001. .
Software Engineering with B, John Wordsworth, Addison Wesley Longman, 1996. .
The B Language and Method: A Guide to Practical Formal Development, Kevin Lano, Springer-Verlag, FACIT series, 1996. .
Specification in B: An Introduction using the B Toolkit, Kevin Lano, World Scientific Publishing Company, Imperial College Press, 1996. .
Modeling in Event-B: System and Software Engineering, Jean-Raymond Abrial, Cambridge University Press, 2010. .
Conferences
Conference Z2B, Nantes, France, Oct. 10–12, 1995
First B Conference, Nantes, France, Nov. 25–27, 1996
Second B Conference, Montpellier, France, Apr. 22–24, 1998
ZB'2000, York, U.K., Aug. 28 – Sept. 2, 2000
ZB'2002, Grenoble, France, Jan. 23–25, 2002
ZB'2003, Turku, Finland, Jun. 4–6, 2003
ZB'05, Guildford, U.K., 2005
B'2007, Besançon, France, 2007
B, from research to teaching, Nantes, France, 16 June 2008
B, from research to teaching, Nantes, France, 8 June 2009
B, from research to teaching, Nantes, France, 7 June 2010
ABZ conference: ABZ 2008, British Computer Society, London, UK, 16–18 September 2008
ABZ conference: ABZ 2010, Orford, Québec, Canada, 23–25 February 2010
ABZ conference: ABZ 2012, Pisa, Italy, 18–22 June 2012
ABZ conference: ABZ 2014, Toulouse, France, 2–6 June 2014
ABZ conference: ABZ 2016, Linz, Austria, 23–27 May 2016
See also
APCB (Association de Pilotage des Conférences B)
BHDL
References
External links
B Method.com: this site is designed to present different work and subjects concerning the B method, a formal method with proof
Atelier B.eu : Atelier B is a systems engineering workshop, which enables software to be developed that is guaranteed to be flawless
Site B Grenoble
Formal methods
Formal methods tools
Formal specification languages |
21481 | https://en.wikipedia.org/wiki/Notary%20public | Notary public | A notary public ( notary or public notary; notaries public) of the common law is a public officer constituted by law to serve the public in non-contentious matters usually concerned with general financial transactions, estates, deeds, powers-of-attorney, and foreign and international business. A notary's main functions are to validate the signature of a person (for purposes of signing a document); administer oaths and affirmations; take affidavits and statutory declarations, including from witnesses; authenticate the execution of certain classes of documents; take acknowledgments (e.g., of deeds and other conveyances); protest notes and bills of exchange; provide notice of foreign drafts; prepare marine or ship's protests in cases of damage; provide exemplifications and notarial copies; and, to perform certain other official acts depending on the jurisdiction. Such transactions are known as notarial acts, or more commonly, a notarizations. The term notary public only refers to common-law notaries and should not be confused with civil-law notaries.
With the exceptions of Louisiana, Puerto Rico, Quebec (whose private law is based on civil law), and British Columbia (whose notarial tradition stems from scrivener notary practice), a notary public in the rest of the United States and most of Canada has powers that are far more limited than those of civil-law or other common-law notaries, both of whom are qualified lawyers admitted to the bar: such notaries may be referred to as notaries-at-law or lawyer notaries. Therefore, at common law, notarial service is distinctly different from the practice of law, and giving legal advice and preparing legal instruments is forbidden to lay notaries such as those appointed throughout most of the United States. Despite these distinctions, lawyers in the United States may apply to become notaries, and this class of notary is allowed to provide legal advice, such as determining the type of act required (affidavit, acknowledgment, etc.).
Overview
Notaries are appointed by a government authority, such as a court, governor or lieutenant governor, or by a regulating body often known as a society or faculty of notaries public. For lawyer notaries, an appointment may be for life, while lay notaries are usually commissioned for a briefer term (often 3 to 5 years in the U.S.), with the possibility of renewal.
In most common law countries, appointments and their number for a given notarial district are highly regulated. However, since the majority of American notaries are lay persons who provide officially required services, commission numbers are not regulated, which is part of the reason why there are far more notaries in the United States than in other countries (4.5 million vs. approx. 740 in England and Wales and approx. 1,250 in Australia and New Zealand). Furthermore, all U.S. and some Canadian notarial functions are applied to domestic affairs and documents, where fully systematized attestations of signatures and acknowledgment of deeds are a universal requirement for document authentication. In the U.S., notaries public do not authenticate documents in a traditional sense: instead, they authenticate that the signature(s) on a document belongs to the person(s) claiming to be the signer(s), thus ensuring trust among interested parties. By contrast, outside North American common law jurisdictions, notarial practice is restricted to international legal matters or where a foreign jurisdiction is involved, and almost all notaries are also qualified lawyers.
For the purposes of authentication, most countries require commercial or personal documents which originate from or are signed in another country to be notarized before they can be used or officially recorded or before they can have any legal effect. To these documents a notary affixes a notarial certificate–a separate document stating the notarial act performed and upon which the party(ies) and notary sign–which attests to the execution of the document, usually by the person who appears before the notary, known as an appearer or constituent (U.S.). In the U.S., many documents include the notarial wording within the document, thus eliminating the need for an additional page for the certificate only (i.e., the document is signed and notarized, including application of the notary's seal). In cases where notaries are also lawyers, such a notary may also draft legal instruments known as notarial acts or deeds which have probative value and executory force, as they do in civil law jurisdictions. Originals or secondary originals are then filed and stored in the notary's archives, or protocol. As noted, lay notaries public in the U.S. are forbidden to advise signers as to which type of act suits the signer's situation: instead, the signer must provide the certificate/wording that is appropriate.
Notaries are generally required to undergo special training in the performance of their duties, often culminating in an examination and ongoing education/re-examination upon commission renewal. Some must also first serve as an apprentice before being commissioned or licensed to practice their profession. In some countries, even licensed lawyers, e.g., barristers or solicitors, must follow a prescribed specialized course of study and be mentored for two years before being allowed to practice as a notary (e.g., British Columbia, England). However, notaries public in the U.S., of which the vast majority are lay people, require only a brief training seminar and are expressly forbidden to engage in any activities that could be construed as the unlicensed practice of law unless they are also qualified attorneys. That said, even lay notaries public must know all applicable laws in their jurisdiction (e.g., state) to practice, and a commission could be revoked for a single deviation from such laws. Notarial practice is universally considered to be distinct and separate from that of an attorney (solicitor/barrister). In England and Wales, there is a course of study for notaries which is conducted under the auspices of the University of Cambridge and the Society of Notaries of England and Wales. In the State of Victoria, Australia, applicants for appointment must first complete a Graduate Diploma of Notarial Practice which is administered by the Sir Zelman Cowen Centre in Victoria University, Melbourne. The United States is a notable exception to these practices: lawyer-notaries need only be approved by their jurisdiction and possibly by a local court or bar association.
In bi-juridical jurisdictions, such as South Africa or Louisiana, the office of notary public is a legal profession with educational requirements similar to those for attorneys. Many even have institutes of higher learning that offer degrees in notarial law. Therefore, despite their name, "notaries public" in these jurisdictions are in effect civil law notaries.
History
Notaries public (also called "notaries", "notarial officers", or "public notaries") hold an office that can trace its origins back to the ancient Roman Republic, when they were called scribae ("scribes"), tabelliones forenses, or personae publicae.
The history of notaries is set out in detail in Chapter 1 of Brooke's Notary (13th edition).
A collection of articles on notary history, including Ancient Egypt, Phoenicia, Babylonia, Rome, Greece, medieval Europe, the Renaissance, Columbus, Spanish Conquistadors, French Louisiana, New England colonial notaries, Republic of Texas notaries and Colorado Old West notaries, is available in the notary history section of the Colorado Notary Blog.
Common law jurisdictions
The duties and functions of notaries public are described in Brooke's Notary on page 19.
A notary, in almost all common law jurisdictions other than most of North America, is a practitioner trained in the drafting and execution of legal documents. Historically, notaries recorded matters of judicial importance in addition to private transactions or events where an officially authenticated record or a document drawn up with professional skill or knowledge was required. The functions of notaries specifically include the preparation of certain types of documents (including international contracts, deeds, wills, and powers of attorney) and certification of their due execution, administering of oaths, witnessing affidavits and statutory declarations, certification of copy documents, noting and protesting of bills of exchange, and the preparation of ships' protests.
Documents certified by notaries are sealed with the notary's seal (which may be a traditional embossed marking or a modern stamp) and are often, as a matter of best practice or else jurisdictional law, recorded by the notary in a register (also called a "protocol") maintained and permanently kept by him or her. The use of a seal by definition means a "notarial act" was performed.
In countries subscribing to the Hague Convention Abolishing the Requirement of Legalization for Foreign Public Documents or Apostille Convention, additional steps are required for use of documents across international borders. Some documents must be notarized locally and then sealed by the regulating authority (e.g., in the U.S., the Secretary of State of the state in which the notary is commissioned)–sometimes, documents may skip directly to this level–and then a final act of certification is required, known as an apostille. The apostille is issued by a government department (usually the Foreign Affairs Department; the Department of State in the U.S.; or similar). For countries which are not subscribers to that convention, an "authentication" or "legalization" must be provided by one of a number of methods, including by the Foreign Affairs Ministry of the country from which the document is being sent or the embassy, Consulate-General, consulate or High Commission of the country to which it is being sent.
Information on individual countries
Australia
In all Australian states and territories (except Queensland) notaries public are appointed by the Supreme Court of the relevant state or territory. Very few have been appointed as a notary for more than one state or territory.
Queensland, like New Zealand, continues the practice of appointment by the Archbishop of Canterbury acting through the Master of the Faculties.
Australian notaries are lawyers and are members of the Australian and New Zealand College of Notaries, the Society of Notaries of New South Wales Inc., the Public Notaries Society of Western Australia Inc, and other state-based societies. The overall number of lawyers who choose to become a notary is relatively low. For example, in South Australia (a state with a population of 1.5 million), of the over 2,500 lawyers in that state only about 100 are also notaries and most of those do not actively practice as such. In Melbourne, Victoria, in 2002 there were only 66 notaries for a city with a population of 3.5 million and only 90 for the entire state. In Western Australia, there are approximately 58 notaries as at 2017 for a city with a population of 2.07 million people. Compare this with the United States where it has been estimated that there are nearly 5 million notaries for a nation with a population of 296 million.
In In The Matter of an Application by Marilyn Reys Bos to be a Public Notary [2003] SASC 320, delivered 12 September 2003, Justice Debelle of the Supreme Court of South Australia refused an application by a non-lawyer for appointment as a notary.
Historically there have been some very rare examples of patent attorneys or accountants being appointed, but that now seems to have ceased.
However, there are three significant differences between notaries and other lawyers.
the duty of a notary is to the transaction as a whole, and not just to one of the parties. In certain circumstances a notary may act for both parties to a transaction as long as there is no conflict between them, and in such cases it is their duty to ensure that the transaction that they conclude is fair to both sides.
a notary will often need to place and complete a special clause onto, or attach a special page (known as an eschatocol) to, a document in order to make it valid for use overseas. In the case of some documents which are to be used in some foreign countries, it may also be necessary to obtain another certificate known either as an "authentication" or an "apostille" (see above), depending on the relevant foreign country, from the Department of Foreign Affairs and Trade.
a notary identifies themselves on documents by the use of their individual seal. Such seals have historical origins and are regarded by most other countries as of great importance for establishing the authenticity of a document.
Their principal duties include:
attestation of documents and certification of their due execution for use internationally
preparation and certification of powers of attorney, wills, deeds, contracts and other legal documents for use internationally
administering of oaths for use internationally
witnessing affidavits, statutory declarations and other documents for use internationally
certification of copy documents for use internationally
exemplification of official documents for use internationally
noting and protesting of bills of exchange (which is rarely performed)
preparation of ships' protests
providing certificates as to Australian law and legal practice for use internationally
It is usual for Australian notaries to use an embossed seal with a red wafer, and now some notaries also use an inked stamp replicating the seal. It is also common for the seal or stamp to include the notary's chosen logo or symbol.
In South Australia and Scotland, it is acceptable for a notary to use the letters "NP" after their name. Thus a South Australian notary may have "John Smith LLB NP" or similar on his business card or letterhead.
Australian notaries do not hold "commissions" which can expire. Generally, once appointed they are authorized to act as a notary for life and can only be "struck off" the Roll of Notaries for proven misconduct. In certain states, for example, New South Wales and Victoria, they cease to be qualified to continue as a notary once they cease to hold a practicing certificate as a legal practitioner. Even judges, who do not hold practicing certificates, are not eligible to continue to practice as notaries.
Notaries in some states of Australia are regulated by legislation. In New South Wales the Public Notaries Act 1997 applies and in Victoria the Public Notaries Act 2001 applies.
There are also notary societies throughout Australia, and the societies keep a searchable list of their members: in New South Wales, The Society of Notaries of New South Wales Inc.; in Queensland, The Society of Notaries Queensland Inc.; in South Australia, the Notaries' Society of South Australia Inc.; and in Victoria, The Society of Notaries of Victoria Inc.
Notaries collecting information for the purposes of verification of the signature of the deponent might retain the details of documents which identify the deponent, and this information is subject to the Privacy Act 1988. A notary must protect the personal information the notary holds from misuse and loss and from unauthorised access, modification or disclosure.
All Australian jurisdictions also have justices of the peace (JP) or commissioners for affidavits and other unqualified persons who are qualified to take affidavits or statutory declarations and to certify documents. However they can only do so if the relevant affidavit, statutory declaration or copy document is to be used only in Australia and not in a foreign country, with the possible exception of a few Commonwealth countries not including the United Kingdom or New Zealand except for very limited purposes. Justices of the peace (JPs) are (usually) laypersons who have minimal, if any, training (depending on the jurisdiction) but are of proven good character. Therefore, a US notary resembles an Australian JP rather than an Australian notary.
Canada
Canadian notaries public (except in the Province of British Columbia and Quebec) are very much like their American counterparts, generally restricted to administering oaths, witnessing signatures on affidavits and statutory declarations, providing acknowledgements, certifying true copies, and so forth.
British Columbia
In British Columbia, a notary public is more like a British or Australian notary. Notaries are appointed for life by the Supreme Court of British Columbia and, as it is a self-regulating profession, the Society of Notaries Public of British Columbia is the regulatory body that oversees the profession and sets standards to maintain public confidence. A BC notary is also, by reason of office, a commissioner for taking affidavits for British Columbia. Furthermore, BC notaries exercise far greater powers, being able to dispense legal advice and draft public instruments, including:
Notarization – notarizations/attestations of signatures, affidavits, statutory declarations, certified true copies, letters of invitation for foreign travel, authorization of minor child travel, execution/authentications of international documents, passport application documentation, proof of identity for travel purposes
Real estate law – home purchase/sale; business purchase/sale; mortgages and refinancing; residential, commercial, & manufactured home transfer of title; restrictive covenants & builder's liens
Wills & estate planning – preparation and searches of last wills and testaments, advance directives, representation agreements & power of attorney
Contract law – preparation of contracts and agreements, commercial lease and assignments
easements and right of way
insurance loss declarations
marine bills of sale & mortgages
marine protestations
personal property security agreements
purchaser's side for foreclosures
subdivisions & statutory building schemes
zoning applications
Nova Scotia
In Nova Scotia a person may be a notary public, a commissioner of oaths, or both. A notary public and a commissioner of oaths are regulated by the provincial Notaries and Commissioners Act. Individuals hold a commission granted to them by the Minister of Justice.
Under the Act a notary public has the "power of drawing, passing, keeping and issuing all deeds and contracts, charter-parties and other mercantile transactions in this Province, and also of attesting all commercial instruments brought before him for public protestation, and otherwise of acting as is usual in the office of notary, and may demand, receive and have all the rights, profits and emoluments rightfully appertaining and belonging to the said calling of notary during pleasure."
Under the Act a commissioner of oaths is "authorized to administer oaths and take and receive affidavits, declarations and affirmations within the Province in and concerning any cause, matter or thing, depending or to be had in the Supreme Court, or any other court in the Province."
Every barrister of the Supreme Court of Nova Scotia is a commissioner of oaths but must receive an additional commission to act as a notary public.
"A Commissioner of Oaths is deemed to be an officer of the Supreme Court of Nova Scotia. Commissioners take declarations concerning any matter to come before a court in the Province." Additionally, individuals with other specific qualifications, such as current Members of the Legislative Assembly and commissioned officers of the Royal Canadian Mounted Police or Canadian Forces, may act as if explicitly commissioned as commissioners of oaths.
Quebec
In Quebec, civil-law notaries (notaires) are full lawyers licensed to practice notarial law and regulated by the Chamber of Notaries of Quebec. Quebec notaries draft and prepare major legal instruments (notarial acts), provide complex legal advice, represent clients (out of court) and make appearances on their behalf, act as arbitrator, mediator, or conciliator, and even act as a court commissioner in non-contentious matters. To become a notary in Quebec, a candidate must hold a bachelor's degree in civil law and a one-year Master's in notarial law and serve a traineeship (stage) before being admitted to practice.
The office of notary public does not exist in Quebec. Instead, the province has Commissioners of Oaths (Commissaires à l'assermentation), who may administer oaths in Quebec (and outside Quebec, if authorized) for a procedure or a document intended for Quebec (or for federal matters). A Quebec commissioner for oaths cannot certify documents or attest that a copy of a document conforms to the original; only a notaire can do so.
India
The central government appoints notaries for the whole or any part of the country, and state governments likewise appoint notaries for the whole or any part of their states. On application, any person who has practised as a lawyer for at least ten years is eligible to be appointed a notary. An applicant who is not a legal practitioner should be a member of the Indian Legal Service, or should, after enrolment as an advocate, have held an office under the central or state government requiring special knowledge of law, or have held an office in the department of the Judge Advocate-General or in the armed forces.
Iran
In Iran, a notary public is a trained lawyer who must pass special examinations before being permitted to open an office and practise. Both the head of the office and their assistant must hold a bachelor's degree in law or a master's degree in civil law.
Ireland
There is archival evidence showing that public notaries, acting pursuant to papal and imperial authority, practised in Ireland in the 13th century, and it is reasonable to assume that notaries functioned here before that time. In Ireland, public notaries were at various times appointed by the Archbishop of Canterbury and the Archbishop of Armagh. The position remained so until the Reformation.
After the Reformation, persons appointed to the office of public notary either in Great Britain or Ireland received the faculty by royal authority, and appointments under faculty from the Pope and the emperor ceased.
In 1871, under the Matrimonial Causes and Marriage Law (Ireland) Amendment Act 1870, the jurisdiction previously exercised by the Archbishop of Armagh in the appointment of notaries was vested in and became exercisable by the Lord Chancellor of Ireland.
In 1920, the power to appoint notaries public was transferred to the Lord Lieutenant of Ireland. The position in Ireland changed once again in 1924 following the establishment of the Irish Free State. Under the Courts of Justice Act, 1924 the jurisdiction over notaries public was transferred to the Chief Justice of the Irish Free State.
In 1961, under the Courts (Supplemental Provisions) Act of that year, the power to appoint notaries public became exercisable by the Chief Justice. This remains the position in Ireland, where notaries are appointed on petition to the Supreme Court after passing prescribed examinations. The governing body is the Faculty of Notaries Public in Ireland. The vast majority of notaries in Ireland are also solicitors. One non-solicitor, who was successful in the examinations set by the governing body, applied in the standard way to the Chief Justice to be appointed a notary; the Chief Justice heard the adjourned application on 3 March 2009 and appointed the non-solicitor as a notary on 18 July 2011.
In Ireland notaries public cannot agree on a standard fee due to competition law. In practice the price per signature appears to be €100. A cheaper alternative is to visit a commissioner for oaths who will charge less per signature, but that is only possible where whoever is to receive a document will recognize the signature of a commissioner for oaths.
Malaysia
A notary public is a lawyer authorized by the Attorney General. The fees are regulated by the Notary Public (Fees) Rules 1954.
A commissioner for oaths is a person appointed by the Chief Justice under section 11 of Court of Judicature Act 1964, and Commissioners for Oaths Rules 1993.
New Zealand
A notary public in New Zealand is a lawyer authorised by the Archbishop of Canterbury in England to officially witness signatures on legal documents, collect sworn statements, administer oaths and certify the authenticity of legal documents usually for use overseas.
The Master of the Faculties appoints notaries in the exercise of the general authorities granted by s 3 of the Ecclesiastical Licences Act 1533 and the Public Notaries Act 1833. Recommendations are made by the New Zealand Society of Notaries, which normally requires an applicant to have 10 years' experience post admission as a lawyer and 5 years as a law firm partner or equivalent.
Sri Lanka
Notaries in Sri Lanka are more akin to civil law notaries; their main functions are conveyancing, drafting of legal instruments, and the like. They are appointed under the Notaries Ordinance No. 1 of 1907. They must pass an examination held by the Ministry of Justice and serve an apprenticeship under a senior notary for a period of two years. Alternatively, attorneys-at-law who pass the conveyancing examination are also admitted as notaries public under warrant of the Minister. The Minister of Justice may appoint any attorney-at-law as a commissioner for oaths, authorized to certify and authenticate affidavits, documents and other such certificates submitted by the general public for certification by a commissioner for oaths.
United Kingdom
England and Wales
After the passage of the Ecclesiastical Licences Act 1533, which was a direct result of the Reformation in England, all notary appointments were issued directly through the Court of Faculties. The Court of Faculties is attached to the office of the Archbishop of Canterbury.
In England and Wales there are two main classes of notaries – general notaries and scrivener notaries. Their functions are almost identical. All notaries, like solicitors, barristers, legal executives, costs lawyers and licensed conveyancers, are also commissioners for oaths. Once commissioned, notaries also acquire the same powers as solicitors and other law practitioners, with the exception of the right to represent others before the courts (unless also members of the bar or admitted as solicitors). In practice almost all English notaries, and all Scottish ones, are also solicitors, and usually practise as solicitors.
Commissioners of oaths are able to undertake the bulk of routine domestic attestation work within the UK. Many documents, including signatures for normal property transactions, do not need professional attestation of signature at all, a lay witness being sufficient.
In practice the need for notaries in purely English legal matters is very small; for example they are not involved in normal property transactions. Since a great many solicitors also perform the function of commissioners for oaths and can witness routine declarations etc. (all are qualified to do so, but not all offer the service), most work performed by notaries relates to international matters in some way. They witness or authenticate documents to be used abroad. Many English notaries have strong foreign language skills and often a foreign legal qualification. The work of notaries and solicitors in England is separate although most notaries are solicitors. The Notaries Society gives the number of notaries in England and Wales as "about 1,000," all but seventy of whom are solicitors.
Scrivener notaries take their name from the Scriveners' Company. Until 1999, when they lost this monopoly, they were the only notaries permitted to practise in the City of London. They were not required to first qualify as solicitors, but they had to have knowledge of foreign laws and languages.
Currently to qualify as a notary public in England and Wales it is necessary to have earned a law degree or qualified as a solicitor or barrister in the past five years, and then to take a two-year distance-learning course styled the Postgraduate Diploma in Notarial Practice. At the same time, any applicant must also gain practical experience. The few who go on to become scrivener notaries require further study of two foreign languages and foreign law and a two-year mentorship under an active Scrivener notary.
The other notaries in England are either ecclesiastical notaries whose functions are limited to the affairs of the Church of England or other qualified persons who are not trained as solicitors or barristers but satisfy the Master of the Faculties of the Archbishop of Canterbury that they possess an adequate understanding of the law. Both the latter two categories are required to pass examinations set by the Master of Faculties.
The regulation of notaries was modernised by section 57 of the Courts and Legal Services Act 1990.
Notarial services generally include:
attesting the signature and execution of documents
authenticating the execution of documents
authenticating the contents of documents
administration of oaths and declarations
drawing up or noting (and extending) protests of happenings to ships, crews and cargoes
presenting bills of exchange for acceptance and payment, noting and protesting bills in cases of dishonour and preparing acts of honour
attending upon the drawing up of bonds
drawing mercantile documents, deeds, sales or purchases of property, and wills in English and (via translation), in foreign languages for use in Britain, the Commonwealth and other foreign countries
providing documents to deal with the administration of the estate of people who are abroad, or own property abroad
authenticating personal documents and information for immigration or emigration purposes, or to apply to marry, divorce, adopt children or to work abroad
verification of translations from foreign languages to English and vice versa
taking evidence in England and Wales as a commissioner for oaths for foreign courts
provision of notarial copies
preparing and witnessing powers of attorney, corporate records, contracts for use in Britain or overseas
authenticating company and business documents and transactions
international Internet domain name transfers
Scotland
Notaries public have existed in Scotland since the 13th century and developed as a distinct element of the Scottish legal profession. Those who wish to practice as a notary must petition the Court of Session. This petition is usually presented at the same time as a petition to practice as a solicitor, but can sometimes be earlier or later. However, to qualify, a notary must hold a current Practising Certificate from the Law Society of Scotland, a new requirement from 2007, before which all Scottish solicitors were automatically notaries.
Whilst notaries in Scotland are always solicitors, the profession remains separate in that there are additional rules and regulations governing notaries and it is possible to be a solicitor, but not a notary. Since 2007 an additional Practising Certificate is required, so now most, but not all, solicitors in Scotland are notaries – a significant difference from the English profession. They are also separate from notaries in other jurisdictions of the United Kingdom.
The profession is administered by the Council of the Law Society of Scotland under the Law Reform (Miscellaneous Provisions) (Scotland) Act 1990.
In Scotland, the duties and services provided by the notary are similar to those in England and Wales, although notaries are needed for some declarations in divorce matters for which they are not required in England. Their role declined following the Law Agents (Scotland) Amendment Act 1896, which stipulated that only enrolled law agents could become notaries, and the Conveyancing (Scotland) Act 1924, which extended notarial execution to law agents. The primary functions of a Scottish notary are:
oaths, affidavits, and affirmations
affidavits in undefended divorces and for matrimonial homes
maritime protests
execution or certification for foreign jurisdictions, e.g., estates, court actions, powers of attorney, etc.
notarial execution for the blind or illiterate
entry of a person to overseas territories
completion of the documentation required for the registration of a company in certain foreign jurisdictions; and
drawing for repayment of Bonds of Debenture
United States
In the United States, a notary public is a person appointed by a state government (e.g., the governor, lieutenant governor, a court of common pleas, or in some cases the state legislature); most states then issue the commission, after successful appointment, through the Secretary of State's office. The commissioned notary's primary role is to serve the public as an impartial witness when important documents are signed. Because the notary is a state officer, a notary's duties may vary widely from state to state, and in most cases a notary is barred from acting outside their home state unless they also hold a commission there. Likewise, as public officials, notaries generally must perform notarial acts for any requesting party: in other words, notaries, being public officers, cannot turn down a request except in limited circumstances, such as failure to pay (if charging a fee) or suspected fraud or coercion.
In 32 states, the main requirements to earn a commission are to fill out a form and pay a fee; further, many states have restrictions concerning notaries with criminal histories and thus require a comprehensive criminal background check (at the proposed notary’s own cost), including at renewal. However, the requirements vary from state to state. Notaries in 18 states and the District of Columbia are required to take a course, pass an exam, or both. The education or exam requirements in Delaware and Kansas only apply to notaries who will perform electronic notarizations.
A notary is almost always permitted to notarize a document anywhere in the state where their commission is issued even though commissions are typically issued to the county where the notary resides (not works). However, notaries are typically prohibited via their home state’s (and often a “foreign” state’s) laws from performing acts outside of the state(s) where commissioned. That said, notaries can typically perform acts for out-of-state visitors, as long as the notary is within their own state’s boundaries. Additionally, some states simply issue a commission "at large" meaning no indication is made as to from what county the person's commission was issued, but some states do require the notary include the county of issue of their commission as part of the jurat/notarial acts, or where seals are required, to indicate the county of issue of their commission on the seal. Merely because a state requires indicating the county where the commission was issued does not necessarily mean that the notary is restricted to notarizing documents in that county, although some states may impose this as a requirement.
Some states (Montana, Wyoming, North Dakota, among others) allow a notary who is commissioned in a state bordering that state to also act as a notary in the state if the other allows the same. Thus, someone who was commissioned in Montana could notarize documents in Wyoming and North Dakota, and a notary commissioned in Wyoming could notarize documents in Montana. A notary from Wyoming could not notarize documents while in North Dakota (or the inverse) unless they had a commission from North Dakota or a state bordering North Dakota that also allowed North Dakota notaries to practice in that state as well.
Notaries in the United States are much less closely regulated than notaries in most other common-law countries, typically because U.S. notaries have little legal authority. In the United States, a lay notary may not offer legal advice or prepare documents–except in Louisiana and Puerto Rico–and in most cases cannot recommend how a person should sign a document or what type of notarial act is necessary. There are some exceptions; for example, Florida notaries may take affidavits, draft inventories of safe deposit boxes, draft protests for payment of dishonored checks and promissory notes, and solemnize marriages. In most states, a notary can also certify or attest a copy or facsimile. Otherwise, such certification must be provided by the appropriate regulatory body: for instance, a birth certificate must often be certified by the state (or local) department of vital statistics or health. Best practices suggest that notaries should not perform acts related to certified copies of official documents.
The most common notarial acts in the United States are the taking of acknowledgements and jurats (which include either an oath or attestation). Many professions may require a person to double as a notary public. For example, clerks of court and U.S. court reporters are often notaries because this enables them to swear in witnesses (deponents) when they are taking depositions. Furthermore, many secretaries, bankers, and some lawyers are commonly notaries public. Despite their limited role, some American notaries may also perform a number of far-ranging acts not generally found anywhere else. Depending on the jurisdiction, they may: take depositions (in OH, a notary can issue a legal warrant to appear for an individual who refuses to be deposed), certify any and all petitions (ME), witness third-party absentee ballots (ME), provide no-impediment marriage licenses, solemnize civil marriages (ME, FL, SC), witness the opening of a safe deposit box or safe and take an official inventory of its contents, take a renunciation of dower or inheritance (SC), and related tasks.
Acknowledgment
"An acknowledgment is a formal [oral] declaration before an authorized public officer, such as a judge or notary. It is made by a person executing [signing, marking] an instrument who states that it was their free act and deed." That is, the person signed it without undue influence and for the purposes detailed in it. It does not testify to the truth of the matter(s) asserted within the document. A certificate of acknowledgment is a written statement signed (and in some jurisdictions, sealed) by the notary or other authorized official. This certificate serves to prove that the acknowledgment occurred. The form of the certificate varies from jurisdiction to jurisdiction, but will be similar to the following:
Before me, the undersigned authority, on this ___ day of ____, 20__, personally appeared ___ [signer(s)], to me well known to be the person who executed the foregoing instrument or provided satisfactory identification, and he/she acknowledged before me that he/she executed the same as his/her voluntary act and deed.
Oath, affirmation, and jurat
A jurat is the official written statement by a notary public. It indicates that the notary both administered and witnessed an oath or affirmation for an oath of office or on an affidavit. That is, the signer(s) has [verbally] sworn to or affirmed the truth of information contained in a document, under penalty of perjury, whether that document is a lengthy deposition or a simple statement on an application form. The simplest form of jurat and the oath or affirmation administered by a notary are:
Jurat: "Sworn before me this ___ day of ____, 20__ by ___ [oath-taker(s) and signer(s)]"
Oath [also the typical portion read aloud by the notary for a jurat, prior to signing]: "Do you solemnly swear that the contents of this affidavit subscribed by you are correct and true?" [Verbal response; not written for oaths of office; signed after swearing “yes” for a jurat]
Affirmation (to those who oppose swearing to God [i.e., opposed to an oath, as in a jurat]): "Do you solemnly, sincerely, and truly declare and affirm that the statements made by you are true and correct?" The notarial certificate would thus read: “Affirmed before me this ___ day of , 20__ by _ [affirming party(ies) and signer(s)]"
Venue
In the U.S., notarial acts normally include what is called a venue or caption, that is, an official listing of the place where a notarization occurred, usually in the form of the state and county and with the abbreviation "ss." (for Latin scilicet, "to wit"), normally referred to as a "subscript", often in a form such as: "State of _______, County of _______, ss."
The venue is usually set forth at the beginning of the instrument or at the top of the notary's certificate. If at the head of the document, it is usually referred to as a caption. In times gone by, the notary would indicate the street address at which the ceremony was performed, and this practice, though unusual today, is occasionally encountered. Venue is used contemporarily because it limits fraud by identifying where the act took place, and it further facilitates finding the notary to examine his/her journal.
Records
The laws throughout the United States vary on the requirement for a notary to keep and maintain records. Some states require records, others suggest or encourage them, and some neither require nor recommend them.
States
California
The California Secretary of State, Notary Public & Special Filings Section, is responsible for appointing and commissioning qualified persons as notaries public for four-year terms.
Prior to sitting for the notary exam, one must complete a mandatory six-hour course of study. This required course of study is conducted either in an online, home-study, or in-person format via an approved notary education vendor. Both prospective notaries and current notaries seeking reappointment must undergo an "expanded" FBI and California Department of Justice background check.
Various statutes, rules, and regulations govern notaries public. California law sets maximum, but not minimum, fees for services related to notarial acts (e.g., per signature: acknowledgment $15, jurat $15, certified power of attorney $15, et cetera). A fingerprint (typically the right thumb) may be required in the notary journal based on the transaction in question (e.g., deed, quitclaim deed, deed of trust affecting real property, power of attorney document, et cetera). Documents with blank spaces cannot be notarized (a further anti-fraud measure). California explicitly prohibits notaries public from using a literal foreign-language translation of their title.
The use of a notary seal is required.
Colorado
Notarial acts performed in Colorado are governed under the Notaries Public Act, 12-55-101, et seq. Pursuant to the Act, notaries are appointed by the Secretary of State for a term not to exceed four years. Notaries may apply for appointment or reappointment online at the Secretary of State's website. A notary may apply for reappointment to the notary office 90 days before their commission expires. Since May 2010, all new notaries and expired notaries are required to take an approved training course and pass an examination to ensure minimal competence with the Notaries Public Act. A course of instruction approved by the Secretary of State may be administered by approved vendors and shall bear an emblem with a certification number assigned by the Secretary of State's office. An approved course of instruction covers relevant provisions of the Colorado Notaries Public Act, the Model Notary Act, and widely accepted best practices. In addition to courses offered by approved vendors, the Secretary of State offers free certification courses at the Secretary of State's office; sign-up information is available on the Secretary of State's notary public training page. A third party seeking to verify the status of a Colorado notary may do so through the Secretary of State's website. Constituents seeking an apostille or certificate of magistracy are requested to complete the appropriate request form before sending in their documents or presenting them at the Secretary of State's office.
Florida
Florida notaries public are appointed by the governor to serve a four-year term. New applicants and commissioned notaries public must be bona fide residents of the State of Florida, and first time applicants must complete a mandatory three-hour education course administered by an approved educator. Florida state law also requires that a notary public post bond in the amount of $7,500.00. A bond is required in order to compensate an individual harmed as a result of a breach of duty by the notary. Applications are submitted and processed through an authorized bonding agency. Florida is one of three states (Maine and South Carolina are the others) where a notary public can solemnize the rites of matrimony (perform a marriage ceremony).
The Florida Department of State appoints civil law notaries, also called "Florida International Notaries", who must be Florida attorneys who have practiced law for five or more years. Applicants must attend a seminar and pass an exam administered by the Florida Department of State or any private vendor approved by the department. Such civil law notaries are appointed for life and may perform all of the acts of a notary public in addition to preparing authentic acts.
Illinois
Notaries public in Illinois are appointed by the Secretary of State for a four-year term. Also, residents of a state bordering Illinois (Iowa, Kentucky, Missouri, Wisconsin) who work or have a place of business in Illinois can be appointed for a one-year term. Notaries must be United States citizens (though the requirement that a notary public must be a United States citizen is unconstitutional; see Bernal v. Fainter), or aliens lawfully admitted for permanent residence; be able to read and write the English language; be residents of (or employed within) the State of Illinois for at least 30 days; be at least 18 years old; not be convicted of a felony; and not had a notary commission revoked or suspended during the past 10 years.
An applicant for the notary public commission must also post a $5,000 bond, usually with an insurance company, and pay an application fee of $10. The application is usually accompanied by an oath of office. If the Secretary of State's office approves the application, the Secretary of State sends the commission to the clerk of the county where the applicant resides. Once the applicant records the commission with the county clerk, they receive the commission. Illinois law prohibits notaries from using the literal Spanish translation of their title and requires them to use a rubber stamp seal for their notarizations. The notary public can then perform their duties anywhere in the state, as long as the notary resides (or works or does business) in the county where they were appointed.
Kentucky
A notary public in Kentucky is appointed by either the secretary of state or the governor to administer oaths and take proof of execution and acknowledgements of instruments. Notaries public fulfill their duties to deter fraud and ensure proper execution. There are two separate types of notaries public that are commissioned in Kentucky. They are notary public: state at large and notary public: special commission. They have two distinct sets of duties and two different routes of commissioning. For both types of commissions, applicants must be eighteen (18) years of age, of good moral character (not a convicted felon) and capable of discharging the duties imposed upon him/her by law. In addition, the application must be approved by one of the following officials in the county of application: a circuit judge, the circuit court clerk, the county judge/executive, the county clerk, a county magistrate or member of the Kentucky General Assembly. The term of office for both types of notary public is four years.
A notary public: state at large is either a resident or non-resident of Kentucky who is commissioned to perform notarial acts anywhere within the physical borders of the Commonwealth of Kentucky that may be recorded either in-state or in another state. In order to become a notary public: state at large, the applicant must be a resident of, or principally employed in, the county where the application is made. A completed application is sent to the Secretary of State's office with the required fee. Once the application is approved by the Secretary of State, the commission is sent to the county clerk in the county of application and a notice of appointment is sent to the applicant. The applicant then has thirty days to go to the county clerk's office, where they are required to (1) post either a surety or property bond (bonding requirements and amounts vary by county), (2) take the oath/affirmation of office, and (3) file and record the commission with the county clerk.
A notary public: special commission is either a resident or non-resident of Kentucky who is commissioned to perform notarial acts either inside or outside the borders of the Commonwealth on documents that must be recorded in Kentucky. The main difference in the appointment process is that, unlike a notary public: state at large, a notary public: special commission is not required to post bond before taking the oath/affirmation, nor are they required to be a resident of or employed in Kentucky. In addition, where a notary public: state at large is commissioned directly by the secretary of state, a notary public: special commission is appointed by the governor on the recommendation of the secretary of state. It is permitted to hold a commission as both a notary public: state at large and a notary public: special commission; however, separate applications and filing fees are required.
A Kentucky notary public is not required to use a seal or stamp and a notarization with just the signature of the notary is considered to be valid. It is, however, recommended that a seal or stamp be used as they may be required on documents recorded or used in another state. If a seal or stamp is used, it is required to have the name of the notary as listed on their commission as well as their full title of office (notary public: state at large or notary public: special commission). A notary journal is also recommended but not required (except in the case of recording protests, which must be recorded in a well-bound and indexed journal).
Louisiana
Louisiana notaries public are commissioned by the governor with the advice and consent of the state Senate. They are the only U.S. notaries to be appointed for life. The Louisiana notary public is a civil law notary with broad powers, as authorized by law, usually reserved for the American-style combination "barrister/solicitor" lawyers and other legally authorized practitioners in other states. A commissioned notary in Louisiana is a civil law notary that can perform/prepare many civil law notarial acts usually associated with attorneys and other legally authorized practitioners in other states, except represent another person or entity before a court of law for a fee (unless they are also admitted to the bar). Notaries are not allowed to give "legal" advice, but they are allowed to give "notarial" advice – i.e., explain or recommend what documents are needed or required to perform a certain act – and do all things necessary or incidental to the performance of their civil law notarial duties. They can prepare any document a civil law notary can prepare (to include inventories, appraisements, partitions, wills, protests, matrimonial contracts, conveyances, and, generally, all contracts and instruments in writing) and, if ordered or requested to by a judge, prepare certain notarial legal documents, in accordance with law, to be returned and filed with that court of law.
Maine
Maine notaries public are appointed by the secretary of state to serve a seven-year term. In 1981, the process to merge the office of justice of the peace into that of notary public began, with all the duties of a justice of the peace fully transferred to a notary public in 1988. Because of this, Maine is one of three states (Florida and South Carolina are the others) where a notary public has the authority to solemnize the rites of matrimony (perform a marriage ceremony). (Maine Department of the Secretary of State, Notary Public Handbook, p. 8; viewed 3 December 2006.)
Maryland
Maryland notaries public are appointed by the governor on the recommendation of the secretary of state to serve a four-year term. New applicants and commissioned notaries public must be bona fide residents of the State of Maryland or work in the state. An application must be approved by a state senator before it is submitted to the secretary of state. The official document of appointment is imprinted with the signatures of the governor and the secretary of state as well as the Great Seal of Maryland. Before exercising the duties of a notary public, an appointee must appear before the clerk of one of Maryland's 24 circuit courts to take an oath of office.
A bond is not required. Seals are required, and a notary is required to keep a log of all notarial acts, indicating the name of the person, their address, what type of document is being notarized, the type of ID used to authenticate them (or that they are known personally) by the notary, and the person's signature. The notary's log is the only document for which a notary may write their own certificate.
When having a person make an affidavit, state law requires the person to state the phrase "under penalty of perjury."
Minnesota
Minnesota notaries public are commissioned by the governor with the advice and consent of the Senate for a five-year term. All commissions expire on 31 January of the fifth year following the year of issue. Citizens and resident aliens over the age of 18 years apply to the Secretary of State for appointment and reappointment. Residents of adjoining counties in adjoining states may also apply for a notary commission in Minnesota. Notaries public have the power to administer all oaths required or authorized to be administered in the state; take and certify all depositions to be used in any of the courts of the state; take and certify all acknowledgments of deeds, mortgages, liens, powers of attorney and other instruments in writing or electronic records; and receive, make out and record notarial protests. The Secretary of State's website provides more information about the duties, requirements and appointments of notaries public.
Montana
Montana notaries public are appointed by the Secretary of State and serve a four-year term. A Montana notary public has jurisdiction throughout the states of Montana, North Dakota, and Wyoming. These states permit notaries from neighboring states to act within their borders under reciprocity, i.e., as long as the neighboring state grants their notaries the same privilege. [Montana Code 1-5-605]
Nevada
The Secretary of State is charged with the responsibility of appointing notaries by the provisions of Chapter 240 of the Nevada Revised Statutes. Nevada notaries public who are not also practicing attorneys are prohibited by law from using "notario", "notario publico" or any non-English term to describe their services. (2005 Changes to NRS 240)
New Jersey
Notaries are commissioned by the State Treasurer for a period of five years. Notaries must also be sworn in by the clerk of the county in which they reside. A person can become a notary in the state of New Jersey if they: (1) are over the age of 18; (2) are a resident of New Jersey or are regularly employed in New Jersey and live in an adjoining state; (3) have never been convicted of a crime under the laws of any state or the United States, for an offense involving dishonesty, or a crime of the first or second degree, unless the person has met the requirements of the Rehabilitated Convicted Offenders Act. Notary applications must be endorsed by a state legislator.
Notaries in the state of New Jersey serve as impartial witnesses to the signing of documents, attest to the signatures on those documents, and may also administer oaths and affirmations. Seals are not required, but many people prefer them; as a result, most notaries have seals in addition to stamps. Notaries may administer oaths and affirmations to public officials and officers of various organizations. They may also administer oaths and affirmations in order to execute jurats for affidavits/verifications, and to swear in witnesses.
Notaries are prohibited from predating actions; lending notary equipment to someone else (stamps, seals, journals, etc.); preparing legal documents or giving legal advice; appearing as a representative of another person in a legal proceeding. Notaries should also refrain from notarizing documents in which they have a personal interest.
Pursuant to state law, attorneys licensed in New Jersey may administer oaths and affirmations.
New York
New York notaries are empowered to administer oaths and affirmations (including oaths of office), to take affidavits and depositions, to receive and certify acknowledgments or proofs (of execution) of deeds, mortgages and powers of attorney and other instruments in writing; to demand acceptance or payment of foreign and inland bills of exchange, promissory notes and obligations in writing, and to protest these (that is, certify them) for non-acceptance or non-payment. Additional powers include required presence at a forced opening of an abandoned safe deposit box and certain election law privileges regarding petitioning. They are not authorized to perform a civil marriage ceremony, nor certify "true copies" of certain publicly recorded documents. Every county clerk's office in New York State (including within the City of New York) must have a notary public available to serve the public free of charge, during business hours with no limit on quantity or type of document.
Attorneys admitted to the New York Bar are eligible to apply for and receive an appointment as a notary public in the State of New York. Nota bene: they are not "automatically" appointed as a notary public because they are a member of the New York Bar. An interested attorney is required to follow the same appointment process as a non-attorney; however, the proctored, written state examination requirement is waived by statute for members of the bar in good standing.
New York notaries initially must pass a test and then renew their status every 4 years.
Ohio
Notaries public in the State of Ohio are authorized and governed by Ohio Revised Code, Chapter 147. Until early 2019, new applicants were appointed by the judge of the Court of Common Pleas for their county of residence typically based on “good character” (though often via recommendation of the county bar association, after completing initial paperwork and paying the first round of fees); then, an Ohio Bureau of Criminal Investigation and FBI combination background check was required before proceeding: if an applicant had an FBI-BCI check from the preceding six months, it was accepted. Importantly, up until the laws changed in 2019, counties enjoyed wide-ranging jurisdiction over the process, though many required successful completion of an exam covering ORC 147, particularly focused on the distinctions between the types of notarial acts and prohibited practices. Then, commissions were issued by the Secretary of State (after mailing a passing grade to the SOS), mailed to the newly-admitted notary, and finally personally recorded at the county Recorder’s office. Commissions were valid for five years and had no re-examination requirements for renewal—only a new background check was required. Since the law changed in 2019, notaries are bound by the below-noted requirements, including for re-commissioning (i.e., they were not grandfathered).
Since the change in 2019, the process is almost entirely governed at the state level, with little leeway for the counties. For new applicants, a three-hour examination is required, regardless of the place of residence; these examinations are still provided at the county level. The commission is issued electronically via a secure web portal after the passing grade has been mailed to the Secretary of State. Commissions are no longer recorded in the newly-commissioned officer's home county. Commissions remain valid for five years; however, a one-hour refresher, web-based examination is required for renewal, in addition to the FBI-BCI background check. With the changes, notaries may now charge $5.00 per act; this fee is capped by state law. There are a few important exclusions to this cap, such as for signing agents acting on real estate transactions and for reasonable expenses incurred traveling to a signer's location (if necessary).
Ohio allows both electronic notarization (e.g., affixing a seal and digital signature to a PDF) and remote online notarization ("RON"), though additional qualifications are required for RON. For instance, a two-hour course via one of four approved vendors is required, and the results must be manually submitted to the Secretary of State's office; otherwise, no additional steps are required. A benefit to notaries is that they may charge five times the price of an in-person notarization ($25 vs. $5). Notably, Ohio's RON laws require only the notary to be physically present in the State of Ohio at the time of the act; the signer may be located anywhere in the world, provided there is a video connection through which the signer's identification may be verified and the oath/affirmation administered and answered.
Notably, an Ohio notary is solely and personally responsible for his/her commission and seal. While classes may be paid for by an employer, a notary is free to perform acts outside of work (so long as the primary job performance is not impacted). An employer-sponsored notary must still abide by all Ohio laws and has the right to refuse acts deemed illegal. Importantly, a notary does not have a conflict of interest when signing for his/her company’s own documents, so long as she/he does not directly benefit materially. Also, since commissions are issued to individuals, the notary does not have to make his/her seal available to the company; and at separation from employment, the notary is entitled by law to take the seal(s) and journal(s) upon departure.
Pennsylvania
A notary in the Commonwealth of Pennsylvania is empowered to perform seven distinct official acts: take affidavits, verifications, acknowledgments and depositions, certify copies of documents, administer oaths and affirmations, and protest dishonored negotiable instruments. A notary is strictly prohibited from giving legal advice or drafting legal documents such as contracts, mortgages, leases, wills, powers of attorney, liens or bonds. Pennsylvania is one of the few states with a successful Electronic Notarization Initiative.
South Carolina
South Carolina notaries public are appointed by the governor to serve a ten-year term. All applicants must first have their application endorsed by a state legislator before submitting it to the Secretary of State. South Carolina is one of three states (Florida and Maine are the others) where a notary public can solemnize the rites of matrimony (perform a marriage ceremony) (2005). South Carolina residents who work in North Carolina, Georgia, or Washington, D.C., may become notaries public in those jurisdictions; South Carolina does not offer this provision to out-of-state residents who work in South Carolina (2012).
Utah
Utah notaries public are appointed by the lieutenant governor to serve a four-year term. Utah formerly required the use of impression seals, but they are now optional. The seal must be in purple ink.
Virginia
A Virginia notary must either be a resident of Virginia or work in Virginia, and is authorized to acknowledge signatures, take oaths, and certify copies of non-government documents which are not otherwise available; e.g., a notary cannot certify a copy of a birth or death certificate since a certified copy of the document can be obtained from the issuing agency. Changes to the law effective 1 July 2008 impose certain new requirements: while seals are still not required, if they are used they must be photographically reproducible, and the notary's registration number must appear on any document notarized. The changes also permit notarization of electronic signatures.
On 1 July 2012, Virginia became the first state to authorize a signer to be in a remote location and have a document notarized electronically by an approved Virginia electronic notary using audio-visual conference technology by passing the bills SB 827 and HB 2318.
Washington
In Washington any adult resident of the state, or resident of Oregon or Idaho who is employed in Washington or member of the United States military or their spouse, may apply to become a notary public. Applicants for commissioning as a Notary Public must: (a) be literate in the English language, (b) be endorsed by three adult residents of Washington who are not related to the applicant, (c) pay $30, (d) possess a surety bond in the amount of $10,000, (e) swear under oath to act in accordance with the state's laws governing the practice of notaries. In addition, the director of licensing is authorized to deny a commission to any applicant who has had a professional license revoked, has been convicted of a serious crime, or who has been found culpable of misconduct during a previous term as a notary public.
A notary public is appointed for a term of 4 years.
West Virginia
Notaries public in this state are also referred to under law as Conservators of the Peace, per an Attorney General decision of 4 June 1921.
Wyoming
Wyoming notaries public are appointed by the Secretary of State and serve a four-year term. A Wyoming notary public has jurisdiction throughout the states of Wyoming and Montana. These states permit notaries from neighboring states to act within their borders under reciprocity, i.e., as long as the neighboring state grants their notaries the same privilege.
Controversies
A Maryland requirement that, to obtain a commission, a notary declare their belief in God, as required by the Maryland Constitution, was found unconstitutional by the United States Supreme Court in Torcaso v. Watkins. Historically, some states required that a notary be a citizen of the United States. However, the U.S. Supreme Court, in the case of Bernal v. Fainter, declared that to be impermissible.
In the U.S., there are reports of notaries (or people claiming to be notaries) having taken advantage of the differing roles of notaries in common law and civil law jurisdictions to engage in the unauthorized practice of law. The victims of such scams are typically illegal immigrants from civil law countries who need assistance with, for example, their immigration papers and want to avoid hiring an attorney. Confusion often results from the mistaken premise that a notary public in the United States serves the same function as a Notario Publico in Spanish-speaking countries (which are civil law countries; see below). For this reason, some states, like Texas, require that notaries specify that they are not a Notario Publico when advertising services in languages other than English. Prosecutions in such cases are difficult, as the victims are often deported and thus unavailable to testify.
Military
Certain members of the United States Armed Forces are given the powers of a notary under federal law. Some military members have authority to certify documents or administer oaths, without being given all notarial powers. In addition to the powers granted by the federal government, some states have enacted laws granting notarial powers to commissioned officers.
Embassies and consulates
Certain personnel at U.S. embassies and consulates may be given the powers of a notary under federal law.
Civil law jurisdictions
The role of notaries in civil law countries is much greater than in common law countries. Civilian notaries are full-time lawyers and holders of a public office who routinely undertake non-contentious transactional work done in common law countries by attorneys/solicitors, as well as, in some countries, the work of government registries, title offices, and public recorders. The qualifications imposed by civil law countries are much greater, generally requiring an undergraduate law degree, a graduate degree in notarial law and practice, three or more years of practical training ("articles") under an established notary, and passage of a national examination to be admitted to practice. Typically, notaries work in private practice and earn fees, but a small minority of countries have salaried public service (or "government"/"state") notaries (e.g., Ukraine, Russia, Baden-Württemberg in Germany (until 2017), certain cantons of Switzerland, and Portugal).
Notaries in civil law countries have had a critical historical role in providing archives. A considerable amount of historical data of tremendous value is available in France, Spain and Italy thanks to notarial minutes, contracts and conveyances, some of great antiquity which have survived in spite of losses, deterioration and willful destruction.
Civil law notaries have jurisdiction over strictly non-contentious domestic civil-private law in the areas of property law, family law, agency, wills and succession, and company formation. The extent to which a country's notarial profession monopolizes these areas can vary greatly. On one extreme is France (and French-derived systems), which statutorily gives notaries a monopoly over their reserved areas of practice, as opposed to Austria, where there is no discernible monopoly whatsoever and notaries are in direct competition with attorneys/solicitors.
In the few United States jurisdictions where trained notaries are allowed (such as Louisiana and Puerto Rico), the practice of these legal practitioners is limited to legal advice on purely non-contentious matters that fall within the purview of a notary's reserved areas of practice.
Thailand is a mixed law country with a strong civil law tradition. Public notaries in Thailand are Thai lawyers who have a special license.
Notable notaries
Upon the death of President Warren G. Harding in 1923, Calvin Coolidge was sworn in as president by his father, John Calvin Coolidge, Sr., a Vermont notary public. However, as there was some controversy as to whether a state notary public had the authority to administer the presidential oath of office, Coolidge took the oath again upon returning to Washington.
See also
Articles about common notarial certificates (varies by jurisdiction):
Acknowledgment (law)
Commissioner of deeds
Copy certification
Jurat
Barrister
eNotary
Lawyer
Legalization
Peace Commissioner
Solicitor
Justice of the Peace
Medallion signature guarantee
References
External links
The Society of Notaries of New South Wales Inc. (AUS)
The Society of Notaries of Victoria Inc. (AUS)
The Society of Notaries of Queensland Inc. (AUS)
The Notaries Society (UK)
The Society of Scrivener Notaries (UK)
The Faculty of Notaries Public in Ireland
The Society of Notaries Public of BC (Canada)
Robert Slade

Robert Michael Slade, also known as Robert M. Slade and Rob Slade, is a Canadian information security consultant, researcher and instructor. He is the author of Robert Slade's Guide to Computer Viruses, Software Forensics, Dictionary of Information Security and co-author of Viruses Revealed. Slade is the author of thousands of technical book reviews, today published on the techbooks mailing list and in the RISKS Digest, and archived in his Internet Review Project. An expert on computer viruses and malware, he is also the Mr. Slade of "Mr. Slade's lists".
Family and education
Slade married Gloria J. Slade, who edited much of his work and served as the editor of his book reviews. Gloria died in December 2021, and without her support Slade is unlikely to publish more books, although he remains an active contributor to various online forums. Their grandchildren appear in the From field of every review; in 2008 it read "Rob, grandpa of Ryan, Trevor, Devon & Hannah". He holds a bachelor's degree from the University of British Columbia, a master's in computer and information science education from the University of Oregon and a diploma in Christian studies from Regent College.
Malware and forensics
Slade became one of a small number of researchers who can be called the world's experts on malware. Fred Cohen named Slade's early work organizing computer viruses, software, BBSes and book reviews Mr. Slade's lists. Slade is one of fewer than thirty people worldwide who are credited for contributions in the final version of the VIRUS-L FAQ, which, with the Usenet group comp.virus and the VIRUS-L mailing list, was the public group of record for computer virus issues from 1988 to 1995. Until 1996 he maintained the Antiviral Software Evaluation FAQ, a quick reference for users seeking antivirus software and a vendor contacts list. He was a contributor as well to at least three other group computer virus FAQs before the Web came to prominence. He has written two books about viruses: he was sole author of Robert Slade's Guide to Computer Viruses, first published in 1994 (2nd edition 1996) and co-wrote Viruses Revealed with David Harley and Urs Gattiker in 2001.
Slade advanced the field of computer forensics when through his antivirus research he found that the intentions and identity of virus authors can be discovered in their program code. He created the first course ever offered in forensic programming. His book Software Forensics was published in 2004 and his chapter on the subject is in print in the Information Security Management Handbook as of the fifth edition.
Information security
Today Slade is a consultant to businesses and government—among his client list are Fortune 500 companies and the government of Canada—as well as to educational institutions. He created curricula and taught courses for Simon Fraser University, MacDonald Dettwiler, and the University of Phoenix. Slade creates seminars for local, federal and international training groups. He is a senior instructor for (ISC)² where he develops courses in information security and quality assurance (QA) for those who seek certification. Slade himself is one of the world's approximately 60,000 CISSPs, a certification used in private industry as well as, at least in the United States, in government and defense.
Slade moved his online security glossary in 2006 to the book Dictionary of Information Security. Virus Bulletin remarked about the unusual collection of five forewords, "that so many acknowledged experts are willing to contribute says something about the author's standing in the field"—the forewords were written by Fred Cohen, Jack Holleran, Peter G. Neumann, Harold Tipton and Gene Spafford. The dictionary is considered to be "dependable baseline definitions" and a "citable, common source".
Internet Review Project
Slade has "surveyed most of the literature" in his field and shared his knowledge in the Internet Review Project, the published book reviews for which he is perhaps most widely known. He reviews other works but gave first priority to information security. His reviews are often critical—to the project FAQ question, "Don't you like any books?", Slade replied that he may be cruel but is fair.
Bibliography
Notes
External links
Living people
Year of birth missing (living people)
People from North Vancouver
Canadian educators
Canadian technology writers
People associated with computer security
University of Phoenix faculty
University of British Columbia alumni
University of Oregon alumni |
68224052 | https://en.wikipedia.org/wiki/842%20%28compression%20algorithm%29 | 842 (compression algorithm) | 842, 8-4-2, or EFT is a data compression algorithm. It is a variation on the Lempel–Ziv compression algorithm with a limited dictionary length. With typical data, 842 gives 80 to 90 percent of the compression of LZ77 with much faster throughput and less memory use. Hardware implementations also provide minimal use of energy and minimal chip area.
842 compression can be used for virtual memory compression, for databases (especially column-oriented stores), and for streaming input/output, for example when writing backups or log files.
Algorithm
The algorithm operates on blocks of 8 bytes with sub-phrases of 8, 4 and 2 bytes. A hash of each phrase is used to look up a hash table with offsets to a sliding window buffer of past encoded data. Matches can be replaced by the offset, so the result for each block can be some mixture of matched data and new literal data.
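This split-and-match scheme can be illustrated with a toy sketch in Python. It is illustrative only: the real 842 format packs match/literal choices into compact per-block templates and a bitstream, and bounds the size of the sliding window, neither of which is modeled here.

```python
def compress(data):
    """Toy sketch of the 842 matching scheme.

    Each 8-byte block is matched against earlier data at granularities
    of 8, 4, and 2 bytes via hash tables mapping phrase -> offset into
    the already-seen data. Unmatched phrases are emitted as literals.
    """
    tables = {8: {}, 4: {}, 2: {}}   # phrase -> offset of earlier occurrence
    out = []

    def emit(phrase):
        size = len(phrase)
        ref = tables[size].get(phrase)
        if ref is not None:
            out.append(('match', size, ref))   # phrase replaced by an offset
        elif size > 2:
            half = size // 2                   # split 8 -> 4+4, 4 -> 2+2
            emit(phrase[:half])
            emit(phrase[half:])
        else:
            out.append(('lit', phrase))        # 2-byte literal

    data = data + b'\0' * (-len(data) % 8)     # pad to whole 8-byte blocks
    for pos in range(0, len(data), 8):
        block = data[pos:pos + 8]
        emit(block)
        for size in (8, 4, 2):                 # index every sub-phrase
            for off in range(0, 8, size):
                tables[size][block[off:off + size]] = pos + off
    return out
```

On repetitive input most blocks collapse to a single 8-byte match token, which is where the compression comes from; mixed blocks become a mixture of matches and literals, as described above.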
Implementations
IBM has added hardware accelerators and instructions for 842 compression to its Power processors from POWER7+ onward.
A device driver for hardware-assisted 842 compression on a POWER processor was added to the Linux kernel in 2011. Linux can also fall back to a software implementation, which is much slower. zram, a Linux kernel module for compressed RAM drives, can be configured to use 842.
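As a hypothetical configuration sketch, on a kernel built with 842 support, zram can be switched to it through its sysfs interface; the device name and sizes below are illustrative, and root privileges are required:

```shell
# Hypothetical zram setup using 842 (requires a kernel with 842 support)
modprobe zram
echo 842 > /sys/block/zram0/comp_algorithm   # must be set before disksize
echo 4G  > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon /dev/zram0
```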
Researchers have implemented 842 using graphics processing units and found about 30x faster decompression using dedicated GPUs. An open source library provides 842 for CUDA and OpenCL.
References
Lossless compression algorithms |
22602354 | https://en.wikipedia.org/wiki/Replicon%20%28company%29 | Replicon (company) | Replicon is one of the leading company in time tracking applications.
Products
Replicon's product suite consists of applications that track hours. Promax was designed to help managers set productivity goals for employees, contract workers, project teams, and departments.
History
Replicon was co-founded in 1996 by Raj Narayanaswamy and Lakshmi Raj in Calgary, Canada, and has since expanded globally, setting up major offices in other countries. The founders reckoned that simple everyday processes like timesheet management and expense reporting cause major problems for businesses, and so set about building web-based applications to relieve that burden and optimize workforce productivity.
Selected awards and recognition
Replicon ranked in Top 250 Canadian Tech Companies for 2008
Replicon ranked as one of the fastest growing Canadian tech companies in 2004 and 2005 in Deloitte Technology Fast 50
Replicon ranked in Software Magazine's Annual 500 Ranking in 2004, 2005, 2006 and 2007
See also
Comparison of time tracking software
Project management software
Time tracking software
References
External links
Official Site
Time-tracking software
Web applications
Business software
Business software companies
Software companies established in 1996 |
2650751 | https://en.wikipedia.org/wiki/Specialized%20System%20Consultants | Specialized System Consultants | Specialized System Consultants (SSC), is a private media company that publishes magazines and reference manuals. SSC properties include LinuxGazette.com, ITgarage.com, the monthly international print magazine Linux Journal, and the webzine Tux Magazine.
Controversy
In 1996, the Linux magazine, Linux Gazette, which was at the time managed by creator John M. Fisk, was transferred to SSC (under Phil Hughes) on the understanding that the publication would continue to be open, free and non-commercial.
Due to conflicts between SSC and Linux Gazette staff members, the magazine split into two competing groups, which remain separate.
Mass media companies of the United States |
234751 | https://en.wikipedia.org/wiki/Recording%20studio | Recording studio | A recording studio is a specialized facility for sound recording, mixing, and audio production of instrumental or vocal musical performances, spoken words, and other sounds. They range in size from a small in-home project studio large enough to record a single singer-guitarist, to a large building with space for a full orchestra of 100 or more musicians. Ideally, both the recording and monitoring (listening and mixing) spaces are specially designed by an acoustician or audio engineer to achieve optimum acoustic properties (acoustic isolation or diffusion or absorption of reflected sound echoes that could otherwise interfere with the sound heard by the listener).
Recording studios may be used to record singers, instrumental musicians (e.g., electric guitar, piano, saxophone, or ensembles such as orchestras), voice-over artists for advertisements or dialogue replacement in film, television, or animation, foley, or to record their accompanying musical soundtracks. The typical recording studio consists of a room called the "studio" or "live room" equipped with microphones and mic stands, where instrumentalists and vocalists perform; and the "control room", where audio engineers, sometimes with record producers, as well, operate professional audio mixing consoles, effects units, or computers with specialized software suites to mix, manipulate (e.g., by adjusting the equalization and adding effects) and route the sound for analog or digital recording. The engineers and producers listen to the live music and the recorded "tracks" on high-quality monitor speakers or headphones.
Often, there will be smaller rooms called isolation booths to accommodate loud instruments such as drums or electric guitar amplifiers and speakers, to keep these sounds from being audible to the microphones that are capturing the sounds from other instruments or voices, or to provide "drier" rooms for recording vocals or quieter acoustic instruments such as an acoustic guitar or a fiddle. Major recording studios typically have a range of large, heavy, and hard-to-transport instruments and music equipment in the studio, such as a grand piano, Hammond organ, electric piano, harp, and drums.
Design and equipment
Layout
Recording studios generally consist of three or more rooms:
The live room of the studio where instrumentalists play their instruments, with their playing picked up by microphones and, for electric and electronic instruments, by connecting the instruments' outputs or DI unit outputs to the mixing board (or by miking the speaker cabinets for bass and electric guitar);
Isolation booths are small sound-insulated rooms with doors, designed for instrumentalists (or their loud speaker stacks). Vocal booths are similarly designed rooms for singers. In both types of rooms, there are typically windows so the performers can see other band members and other studio staff, as singers, bandleaders and musicians often give or receive visual cues;
The control room, where the audio engineers and record producers mix the mic and instrument signals with a mixing console, record the singing and playing onto tape (until the 1980s and early 1990s) or hard disc (1990s and following decades) and listen to the recordings and tracks with monitor speakers or headphones and manipulate the tracks by adjusting the mixing console settings and by using effects units; and
The machine room, where noisier equipment, such as racks of fan-cooled computers and power amplifiers, is kept to prevent the noise from interfering with the recording process.
Even though sound isolation is a key goal, the musicians, singers, audio engineers and record producers still need to be able to see each other, to see cue gestures and conducting by a bandleader. As such, the live room, isolation booths, vocal booths and control room typically have windows.
Recording studios are carefully designed around the principles of room acoustics to create a set of spaces with the acoustical properties required for recording sound with accuracy. Architectural acoustics includes acoustical treatment and soundproofing, as well as consideration of the physical dimensions of the room itself to make the room respond to sound in the desired way. Acoustical treatment includes the use of absorption and diffusion materials on the surfaces inside the room. To control the amount of reverberation, rooms in a recording studio may have a reconfigurable combination of reflective and non-reflective surfaces. Soundproofing provides sonic isolation between rooms and prevents sound from entering or leaving the property. A recording studio in an urban environment must be soundproofed on its outer shell to prevent noises from the surrounding streets and roads from being picked up by microphones inside.
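The effect of such absorptive treatment on reverberation can be estimated with Sabine's classic formula, RT60 = 0.161 * V / A, where V is the room volume in cubic metres and A is the total absorption in metric sabins. A small sketch follows; the room dimensions and absorption coefficients are illustrative assumptions, not measured values:

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverberation time in seconds with Sabine's formula:
    RT60 = 0.161 * V / A, where A is the sum of each surface's area
    times its absorption coefficient (metric sabins)."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# A 6 m x 5 m x 3 m live room: walls with absorptive panels
# (alpha ~ 0.6), bare floor (~0.1), diffusive ceiling (~0.3).
room = [
    (2 * (6 * 3 + 5 * 3), 0.6),  # four walls
    (6 * 5, 0.1),                # floor
    (6 * 5, 0.3),                # ceiling
]
rt = sabine_rt60(6 * 5 * 3, room)
```

Swapping the wall coefficient between absorptive and reflective values shows how a reconfigurable surface treatment moves the room between "dead" and "live" reverberation times.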
Equipment
Equipment found in a recording studio commonly includes:
A professional-grade mixing console
Additional small mixing consoles for adding more channels (e.g., if a drum kit needs to be miked and all of the channels of the large console are in use, an additional 16 channel mixer would enable the engineers to mix the mics for the kit)
Microphone preamplifiers
Multitrack recorder or digital audio workstation
Computers
A wide selection of microphones typical for different types of instruments
DI unit boxes
Microphone stands to enable engineers to place microphones at the desired locations in front of singers, instrumentalists or ensembles
Studio monitors designed for listening to recorded mixes or tracks
Studio monitoring headphones (typically closed-shell, to prevent sound from "leaking" out into the microphones)
"On Air" or "Recording" lighted signs to remind other studio users to be quiet
Outboard effect units, such as compressors, reverbs, or equalizers
Music stands
Instruments
Not all music studios are equipped with musical instruments. Some smaller studios do not have instruments, and bands and artists are expected to bring their own instruments, amplifiers and speakers. However, major recording studios often have a selection of instruments in their live room, typically instruments, amplifiers and speaker cabinets that are large and heavy and would be difficult to transport (e.g., a Hammond organ) or infeasible to hire (as in the case of a grand piano) for a single recording session. Having musical instruments and equipment in the studio creates additional costs for a studio, as pianos have to be tuned and instruments and associated equipment need to be maintained.
The types and brands of music equipment owned by a studio depend on the styles of music for the bands and artists that typically record there. Instruments that may be present in a studio include:
Keyboard instruments and related keyboard gear
Grand piano (e.g., Steinway)
Hammond organ and rotating Leslie speaker
Electric pianos
MIDI keyboard or MIDI-equipped stage piano
Vintage synthesizers (e.g., Moog synthesizers)
Keyboard amplifier
Acoustic drum kit: this may only include the wood-shelled drums and the stands. Studios typically own major brands such as Premier, Ludwig and Gretsch. Some studios have a selection of classic snares. Drummers typically prefer to use their own snare drum and cymbals
Bass amplifier and bass speaker cabinet (e.g., a tube Ampeg SVT amp and an 8x10" cabinet)
Guitar amplifier and guitar speaker cabinets (e.g., a Fender Twin and a Marshall tube amp and speaker stack). Tube amps made by Vox, Ampeg, and Gibson may also be available.
Vintage guitars and basses made by Fender, Gibson, and Rickenbacker
In rare cases, studios may have a mellotron, ethnic drums, sitars, a double bass, or other unusual instruments that bands might wish to try for a particular sound.
Guitarists and bassists are often expected to bring their own guitars, basses and effects pedals. Drummers often bring their own snare drum, cymbals and sticks or brushes. Musicians that play other easily transported instruments, such as instruments from the violin family, accordion, ukulele, banjo, brass horns, and woodwinds will also be expected to bring their own instruments.
Digital audio workstations
General-purpose computers rapidly assumed a large role in the recording process. With software, a powerful, good quality computer with a fast processor can replace the mixing consoles, multitrack recording equipment, synthesizers, samplers and effects unit (reverb, echo, compression, etc.) that a recording studio required in the 1980s and 1990s. A computer thus outfitted is called a digital audio workstation, or DAW.
While Apple Macintosh is used for most studio work, there is a breadth of software available for Microsoft Windows and Linux.
If no mixing console is used and all mixing is done with only a keyboard and mouse, this is referred to as mixing in the box (ITB); mixing out of the box (OTB) describes mixing with other hardware, not just the PC software.
Project studios
A small, personal recording studio is sometimes called a project studio or home studio. Such studios often cater to the specific needs of an individual artist or are used as a non-commercial hobby. The first modern project studios came into being during the mid-1980s, with the advent of affordable multitrack recording devices, synthesizers and microphones. The phenomenon has flourished with falling prices of MIDI equipment and accessories, as well as inexpensive direct to disk recording products.
Recording drums and amplified electric guitar in a home studio is challenging because they are usually the loudest instruments. Acoustic drums require sound isolation in this scenario, unlike electronic or sampled drums. Getting an authentic electric guitar amp sound including power-tube distortion requires a power attenuator or an isolation cabinet, or booth. A convenient compromise is amplifier modeling, whether a modeling amp, preamp/processor, or software-based guitar amp simulator. Sometimes, musicians replace loud, inconvenient instruments such as drums, with keyboards, which today often provide somewhat realistic sampling.
The capability of digital recording introduced by ADAT and its comparatively low cost, originally introduced at $3995, were largely responsible for the rise of project studios in the 1990s. Today's project studios are built around software-based DAWs running on standard PC hardware.
Isolation booth
An isolation booth is a small room in a recording studio that is soundproofed both to keep out external sounds and to keep in the internal sounds and, like all the other recording rooms in the sound industry, is designed to reduce diffused reflections from the walls so that the room sounds good. A drummer, vocalist, or guitar speaker cabinet, along with microphones, is acoustically isolated in the room. A typical professional recording studio has a control room, a large live room, and one or more small isolation booths.
All rooms are soundproofed by a variety of methods, including double-layer 5/8" sheetrock, with the seams offset from layer to layer, on both sides of a foam-filled wall; batten insulation; a double wall (an insulated wall built next to another insulated wall with an air gap in between); adding foam to the interior walls and corners; and using two panes of thick glass with an air gap between them. The surface densities of common building materials determine the transmission loss of various frequencies through those materials.
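The dependence of transmission loss on surface density is often approximated with the empirical mass law, TL ≈ 20 log10(f * m) − 47 dB, with frequency f in hertz and surface density m in kg/m². A sketch using typical textbook densities (assumed values, not measurements of any particular wall):

```python
import math

def mass_law_tl(freq_hz, surface_density_kg_m2):
    """Approximate airborne transmission loss (dB) of a single-leaf
    partition via the empirical mass law: TL = 20*log10(f*m) - 47."""
    return 20 * math.log10(freq_hz * surface_density_kg_m2) - 47

# Doubling the surface density (e.g., double-layer sheetrock instead of
# a single layer) buys about 6 dB of transmission loss at any frequency:
single = mass_law_tl(500, 10)   # ~10 kg/m^2, single layer, at 500 Hz
double = mass_law_tl(500, 20)   # same wall with a second layer
```

This is why studio walls favor mass (double drywall, concrete) over light materials; the same 6 dB per doubling also explains the diminishing returns of simply piling on more layers.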
Thomas A. Watson invented, but did not patent, the soundproof booth for use in demonstrating the telephone with Alexander Graham Bell in 1877. There are variations of the same concept, including a portable standalone isolation booth and a guitar speaker isolation cabinet. A gobo panel achieves the same effect to a much more moderate extent; for example, a drum kit that is too loud in the live room or on stage can have acrylic glass see-through gobo panels placed around it to deflect the sound and keep it from bleeding into the other microphones, allowing better independent control of each instrument channel at the mixing console.
In animation, vocal performances are normally recorded in individual sessions, and the actors have to imagine (with the help of the director or a reader) they are involved in dialogue. Animated films often evolve rapidly during both development and production, so keeping vocal tracks from bleeding into each other is essential to preserving the ability to fine-tune lines up to the last minute. Sometimes, if the rapport between the lead actors is strong enough and the animation studio can afford it, the producers may use a recording studio configured with multiple isolation booths in which the actors can see one another and the director. This enables the actors to react to one another in real time as if they were on a regular stage or film set.
History
1890s to 1930s
In the era of acoustical recordings (prior to the introduction of microphones, electrical recording and amplification), the earliest recording studios were very basic facilities, being essentially soundproof rooms that isolated the performers from outside noise. During this era it was not uncommon for recordings to be made in any available location, such as a local ballroom, using portable acoustic recording equipment. In this period, master recordings were made by cutting a rotating cylinder (later disc) made from wax. Performers were typically grouped around a large acoustic horn (an enlarged version of the familiar gramophone horn). The acoustic energy from the voices or instruments was channeled through the horn to a diaphragm to a mechanical cutting lathe, which inscribed the signal as a modulated groove directly onto the surface of the master.
1930s to 1970s
Electrical recording was common by the early 1930s, and mastering lathes were electrically powered, but master recordings still had to be cut into a disc, by now a lacquer, also known as an Acetate disc. In line with the prevailing musical trends, studios in this period were primarily designed for the live recording of symphony orchestras and other large instrumental ensembles. Engineers soon found that large, reverberant spaces like concert halls created a vibrant acoustic signature as the natural reverb enhanced the sound of the recording. In this period large, acoustically "live" halls were favored, rather than the acoustically "dead" booths and studio rooms that became common after the 1960s. Because of the limits of the recording technology, which did not allow for multitrack recording techniques, studios of the mid-20th century were designed around the concept of grouping musicians (e.g., the rhythm section or a horn section) and singers (e.g., a group of backup singers), rather than separating them, and placing the performers and the microphones strategically to capture the complex acoustic and harmonic interplay that emerged during the performance. In the 2000s, modern sound stages still sometimes use this approach for large film scoring projects that use large orchestras.
Halls and churches
Because of their superb acoustics, many of the larger studios were converted churches. Examples include George Martin's AIR Studios in London, the famed Columbia Records 30th Street Studio in New York City (a converted Armenian church, with a ceiling over 100 feet high), and the Decca Records Pythian Temple studio in New York (where artists like Louis Jordan, Bill Haley and Buddy Holly were recorded) which was also a large converted church that featured a high, domed ceiling in the center of the hall.
Facilities like the Columbia Records 30th Street Studio in New York and Abbey Road Studios in London were renowned for their 'trademark' sound—which was (and still is) easily identifiable by audio professionals—and for the skill of their staff engineers. As the need to transfer audio material between different studios grew, there was an increasing demand for standardization in studio design across the recording industry, and Westlake Recording Studios in West Hollywood was highly influential in the 1970s in the development of standardized acoustic design.
In New York City, Columbia Records had some of the most highly respected sound recording studios, including the Columbia 30th Street Studio at 207 East 30th Street, the CBS Studio Building at 49 East 52nd Street, Liederkranz Hall at 111 East 58th Street between Park and Lexington Avenues (a building built by and formerly belonging to a German cultural and musical society, The Liederkranz Club and Society), and one of their earliest recording studios, "Studio A" at 799 Seventh Avenue.
Technologies and techniques
Electric recording studios in the mid-20th century often lacked isolation booths, sound baffles, and sometimes even speakers, and it was not until the 1960s, with the introduction of high-fidelity headphones, that it became common practice for performers to use headsets to monitor their performance during recording and listen to playbacks. Isolating all the performers from one another was difficult, but the main reason the practice was not used is simply that recordings were usually made as live ensemble 'takes', and all the performers needed to be able to see each other and the ensemble leader while playing. The recording engineers who trained in this period learned to take advantage of the complex acoustic effects that could be created through "leakage" between different microphones and groups of instruments, and these technicians became extremely skilled at capturing the unique acoustic properties of their studios and the musicians in performance.
The use of different kinds of microphones and their placement around the studio was a crucial part of the recording process, and particular brands of microphones were used by engineers for their specific audio characteristics. The smooth-toned ribbon microphones developed by the RCA company in the 1930s were crucial to the "crooning" style perfected by Bing Crosby, and the famous Neumann U47 condenser microphone was one of the most widely used from the 1950s. This model is still widely regarded by audio professionals as one of the best microphones of its type ever made. Learning the correct placement of microphones was a major part of the training of young engineers, and many became extremely skilled in this craft. Well into the 1960s, in the classical field it was not uncommon for engineers to make high-quality orchestral recordings using only one or two microphones suspended above the orchestra. In the 1960s, engineers began experimenting with placing microphones much closer to instruments than had previously been the norm. The distinctive rasping tone of the horn sections on the Beatles recordings "Good Morning Good Morning" and "Lady Madonna" were achieved by having the saxophone players position their instruments so that microphones were virtually inside the mouth of the horn.
The unique sonic characteristics of the major studios imparted a special character to many of the most famous popular recordings of the 1950s and 1960s, and the recording companies jealously guarded these facilities. According to sound historian David Simons, after Columbia took over the 30th Street Studios in the late 1940s and A&R manager Mitch Miller had tweaked it to perfection, Miller issued a standing order that the drapes and other fittings were not to be touched, and the cleaners had specific orders never to mop the bare wooden floor for fear it might alter the acoustic properties of the hall. There were several other features of studios in this period that contributed to their unique "sonic signatures". As well as the inherent sound of the large recording rooms, many of the best studios incorporated specially-designed echo chambers, purpose-built rooms which were often built beneath the main studio.
These were typically long, low rectangular spaces constructed from hard, sound-reflective materials like concrete, fitted with a loudspeaker at one end and one or more microphones at the other. During a recording session, a signal from one or more of the microphones in the studio could be routed to the loudspeaker in the echo chamber; the sound from the speaker reverberated through the chamber and the enhanced signal was picked up by the microphone at the other end. This echo-enhanced signal—which was often used to 'sweeten' the sound of vocals—could then be blended in with the primary signal from the microphone in the studio and mixed into the track as the master recording was being made. Special equipment was another notable feature of the "classic" recording studio. The biggest studios were owned and operated by large media companies like RCA, Columbia and EMI, who typically had their own electronics research and development divisions that designed and built custom-made recording equipment and mixing consoles for their studios. Likewise, the smaller independent studios were often owned by skilled electronics engineers who designed and built their own desks and other equipment. A good example of this is the famous Gold Star Studios in Los Angeles, the site of many famous American pop recordings of the 1960s. Co-owner David S. Gold built the studio's main mixing desk and many additional pieces of equipment and he also designed the studio's unique trapezoidal echo chambers.
During the 1950s and 1960s, the sound of pop recordings was further defined by the introduction of proprietary sound processing devices such as equalizers and compressors, which were manufactured by specialist electronics companies. One of the best known of these was the famous Pultec equalizer, which was used by almost all the major commercial studios of the time.
Multi-track recording
With the introduction of multi-track recording, it became possible to record instruments and singers separately and at different times on different tracks on tape, although it was not until the 1970s that the large recording companies began to adopt this practice widely, and throughout the 1960s many "pop" classics were still recorded live in a single take. After the 1960s, the emphasis shifted to isolation and sound-proofing, with treatments like echo and reverberation added separately during the mixing process, rather than being blended in during the recording. One regrettable outcome of this trend, which coincided with rising inner-city property values, was that many of the largest studios were either demolished or redeveloped for other uses. In the mid-20th century, recordings were analog, made on ¼-inch or ½-inch magnetic tape, or, more rarely, on 35 mm magnetic film, with multitrack recording reaching 8 tracks in the 1950s, 16 in 1968, and 32 in the 1970s. The commonest such tape is the 2-inch analog, capable of containing up to 24 individual tracks. Generally, after an audio mix is set up on a 24-track tape machine, the signal is played back and sent to a different machine, which records the combined signals (called printing) to a ½-inch two-track stereo tape, called a master.
Before digital recording, the total number of available tracks onto which one could record was measured in multiples of 24, based on the number of 24-track tape machines being used. Most recording studios now use digital recording equipment, which limits the number of available tracks only on the basis of the mixing console's or computer hardware interface's capacity and the ability of the hardware to cope with processing demands. Analog tape machines are still used by some audiophiles and sound engineers for their unique sonic characteristics. Scarcity and age have led to certain models of analog tape machines rising significantly in value.
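The processing demands that bound track counts in digital systems are easy to estimate, since the uncompressed data rate is simply tracks × sample rate × bit depth. A quick illustrative calculation (session parameters are examples, not a standard):

```python
def session_data_rate(tracks, sample_rate_hz, bit_depth):
    """Uncompressed audio data rate in bytes per second."""
    return tracks * sample_rate_hz * bit_depth // 8

# A 24-track session at 48 kHz / 24-bit:
rate = session_data_rate(24, 48_000, 24)   # bytes per second
per_hour_gb = rate * 3600 / 1e9            # gigabytes per hour
```

At roughly 3.5 MB/s (about 12 GB per hour), such a session is trivial for modern hardware, which is why track counts in a DAW are limited by mixing and plug-in processing capacity rather than raw recording bandwidth.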
Radio studios
Radio studios are very similar to recording studios, particularly in the case of production studios which are not normally used on-air, such as studios where interviews are taped for later broadcast. This type of studio would normally have all of the same equipment that any other audio recording studio would have, particularly if it is at a large station, or at a combined facility that houses a station group, but is also designed for groups of people to work collaboratively in a live-to-air situation (see Ahern, S, Making Radio).
Broadcast studios also use many of the same principles such as sound isolation, with adaptations suited to the live on-air nature of their use. Such equipment would commonly include a telephone hybrid for putting telephone calls on the air, a POTS codec for receiving remote broadcasts, a dead air alarm for detecting unexpected silence, and a broadcast delay for dropping anything from coughs to profanity. In the U.S., stations licensed by the Federal Communications Commission (FCC) also must have an Emergency Alert System decoder (typically in the studio), and in the case of full-power stations, an encoder that can interrupt programming on all channels which a station transmits to broadcast urgent warnings.
Computers are also used for playing ads, jingles, bumpers, soundbites, phone calls, sound effects, traffic and weather reports, and now full broadcast automation when no staff are present. For talk shows, a producer or assistant in a control room runs the show, including screening calls and entering the callers' names and subject into a queue, which the show's host can see and make a proper introduction with. Radio contest winner interviews can also be edited "on the fly" and put on the air within a minute or two after they have been recorded accepting their prize.
Additionally, digital mixing consoles can be interconnected via audio over Ethernet, or split into two parts, with inputs and outputs wired to a rackmount audio engine, and one or more control surfaces (mixing boards) or computers connected via serial port, allowing the producer or the talent to control the show from either point. With Ethernet and audio over IP (live) or FTP (recorded), this also allows remote access, so that DJs can do shows from a home studio via ISDN or the Internet. Additional outside audio connections are required for the studio/transmitter link for over-the-air stations, satellite dishes for sending and receiving shows, and for webcasting or podcasting.
See also
Film studio
List of music software
Re-amp
Recording studio as an instrument
Talkback (recording)
Television studio
References
Further reading
Cogan, Jim; Clark, William. Temples of Sound: Inside the Great Recording Studios. San Francisco: Chronicle Books, 2003.
Horning, Susan Schmidt. Chasing Sound: Technology, Culture, and the Art of Studio Recording from Edison to the LP. Baltimore: Johns Hopkins University Press, 2013.
Ramone, Phil; Granata, Charles L. Making Records: The Scenes Behind the Music. New York: Hyperion, 2007.
External links
The History of Sound Recording Technology
The Complete Directory of Recording Studios
Television terminology |
38309019 | https://en.wikipedia.org/wiki/RainStor | RainStor | RainStor was a software company that developed a database management system of the same name designed to manage and analyze big data for large enterprises. It uses de-duplication techniques to organize the process of storing large amounts of data for reference. The company's origin traces back to a special project conducted by the United Kingdom's Ministry of Defence with the purpose of storing volumes of data from years of field operations for ongoing analysis and training purposes.
RainStor was headquartered in San Francisco, California, United States with R&D in Gloucester, United Kingdom.
The company was acquired by Teradata in 2014.
History
RainStor was founded in 2002 in the United Kingdom, originally under the name Clearpace. The company was created to exploit technology developed by the United Kingdom's Ministry of Defence to store big data under the brand name DeX. In 2008, the company rebranded DeX as NParchive, a product that deduplicated and archived rarely used data.
The company and product were renamed to RainStor (a portmanteau of relational archiving infrastructure storage) in December 2009, coinciding with a move of the management office from the United Kingdom to San Francisco. The release of version 3.5 of RainStor software, announced in May 2009, coincided with the company's rebranding. RainStor received $7.5 million in venture funding from Storm Ventures, Doughty Hanson Technology Ventures, Informatica, and the Dow Chemical Company in March 2010.
In 2011, it received some marketing awards.
In October 2012, RainStor received $12 million in venture funding from Credit Suisse, Doughty Hanson Technology Ventures, Storm Ventures, the Dow Chemical Company, and Rogers Venture Partners.
In October 2012, the company reported over 100 clients. RainStor worked with companies in the telecommunications and finance industries, as well as with government agencies.
Teradata acquired RainStor in December 2014.
Teradata dropped the RainStor product from its portfolio in January 2016 and it is no longer developed or marketed.
Product
RainStor provided software for query and analysis against large volumes of machine generated data and an online data archive for regulatory compliance data retention.
In October 2012, RainStor held two patents and was pursuing five additional patents.
The database uses a row/columnar hybrid repository. The archived data is accessed using Structured Query Language (SQL). RainStor software uses partition filtering, which excludes certain records from processing.
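RainStor's internal implementation is not documented here, but the general idea of partition filtering can be sketched in Python: records are grouped into partitions by a key, and a query skips every partition whose key cannot match before applying the row-level predicate. The partition keys, sample records, and `query` helper below are all hypothetical.

```python
# Hypothetical partitions keyed by month; a query for March never reads the others.
partitions = {
    "2012-01": [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}],
    "2012-02": [{"id": 3, "amount": 30}],
    "2012-03": [{"id": 4, "amount": 40}, {"id": 5, "amount": 50}],
}

def query(month, predicate):
    """Partition filtering: exclude every partition whose key cannot match,
    then apply the row-level predicate only to the surviving records."""
    scanned = [part for key, part in partitions.items() if key == month]
    return [row for part in scanned for row in part if predicate(row)]

rows = query("2012-03", lambda r: r["amount"] > 40)
print(rows)  # [{'id': 5, 'amount': 50}] -- only the 2012-03 partition was read
```

With a billion-row archive split into monthly partitions, pruning in this way avoids touching most of the data before any per-record work begins.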
RainStor runs on Apache Hadoop. In June 2013, RainStor released version 5.5 of its software. The release added user authentication protocols, access controls and policies, data encryption and user activity logs.
In May 2014, the company announced protection for data from manipulation, malicious attacks, breaches, or deletion.
In December 2016, Teradata removed RainStor from its portfolio and it is no longer developed or marketed.
References
Software companies established in 2004
Software companies of the United States
Companies based in San Francisco
Teradata |
164494 | https://en.wikipedia.org/wiki/Computer%20magazine | Computer magazine | Computer magazines are about computers and related subjects, such as networking and the Internet. Most computer magazines offer (or offered) advice, some offer programming tutorials, reviews of the latest technologies, and advertisements.
History
1940s–1950s
Mathematics of Computation, established in 1943; articles about computers appeared from 1946 (Volume 2, Number 15) to the end of 1954. Scientific journal.
Digital Computer Newsletter, (1949–1968), founded by Albert Eugene Smith.
Computers and Automation, (1951–1978), was arguably the first computer magazine. It began as Roster of Organizations in the Computing Machinery Field (1951–1952), and then The Computing Machinery Field (1952–1953). It was published by Edmund Berkeley. Computers and Automation held the first Computer Art Contest circa 1963 and maintained a bibliography on computer art starting in 1966. It also included a monthly estimated census of all installed computer systems starting in 1962.
IEEE Transactions on Computers from 1952, scientific journal.
Journal of the ACM from 1954, scientific journal.
Datamation from 1957, was another early computer and data processing magazine. It is still being published as an ePublication on the Internet. Futurist Donald Prell was its founder.
Information and Computation from 1957, scientific journal.
IBM Journal of Research and Development from 1957, scientific journal.
Communications of the ACM from 1958, mix of science magazine, trade magazine, and a scientific journal
The Computer Journal from 1958, scientific journal.
1960s–1970s
ACS Newsletter (1966–1976), Amateur Computer Society newsletter.
Computerworld (1967)
People's Computer Company Newsletter (1972–1981)
Amateur Computer Club Newsletter (ACCN; 1973–)
Dr. Dobb's Journal (1976–2014) was the first microcomputer magazine to focus on software, rather than hardware.
1980s
1980s computer magazines skewed their content towards the hobbyist end of the then-microcomputer market, and used to contain type-in programs, but these have gone out of fashion. The first magazine devoted to this class of computers was Creative Computing. Byte was an influential technical journal that published until the 1990s.
In 1983 an average of one new computer magazine appeared each week. By late that year more than 200 existed. Their numbers and size grew rapidly with the industry they covered, and BYTE and 80 Micro were among the three thickest magazines of any kind per issue. Compute!'s editor in chief reported in the December 1983 issue that "all of our previous records are being broken: largest number of pages, largest number of four-color advertising pages, largest number of printing pages, and the largest number of editorial pages".
Computers were the only industry with product-specific magazines, like 80 Micro, PC Magazine, and Macworld; their editors vowed to impartially cover their computers whether or not doing so hurt their readers' and advertisers' market, while claiming that their rivals pandered to advertisers by only publishing positive news. BYTE in March 1984 apologized for publishing articles by authors with promotional material for companies without describing them as such, and in April suggested that other magazines adopt its rules of conduct for writers, such as prohibiting employees from accepting gifts or discounts. InfoWorld stated in June that many of the "150 or so" industry magazines published articles without clearly identifying authors' affiliations and conflicts of interest.
Many magazines ended that year, however, as their number exceeded the amount of available advertising revenue despite revenue in the first half of the year five times that of the same period in 1982. Consumers typically bought computer magazines more for advertising than articles, which benefited already leading journals like BYTE and PC Magazine and hurt weaker ones. Also affecting magazines were the computer industry's economic difficulties, including the video game crash of 1983, which badly hurt the home-computer market. Dan Gutman, the founder of Computer Games, recalled in 1987 that "the computer games industry crashed and burned like a bad night of Flight Simulator—with my magazine on the runway". Antic's advertising sales declined by 50% in 90 days, Compute!'s page count declined from 392 in December 1983 to 160 ten months later, and the publisher of Compute! and Compute!'s Gazette assured readers in an editorial that his company "is and continues to be quite successful ... even during these particularly difficult times in the industry". Computer Gaming World stated in 1988 that it was the only one of the 18 color magazines that covered computer games in 1983 to survive the crash. Compute! similarly stated that year that it was the only general-interest survivor of about 150 consumer-computing magazines published in 1983.
Some computer magazines in the 1980s and 1990s were issued only on disk (or cassette tape, or CD-ROM) with no printed counterpart; such publications are collectively (though somewhat inaccurately) known as disk magazines and are listed separately.
1990s
In some ways the heyday of printed computer magazines was a period during the 1990s, in which a large number of computer manufacturers took out advertisements in computer magazines, so they became quite thick and could afford to carry quite a number of articles in each issue, (Computer Shopper was a good example of this trend). Some printed computer magazines used to include covermount floppy disks, CDs, or other media as inserts; they typically contained software, demos, and electronic versions of the print issue.
2000s–2010s
However, with the rise in popularity of the Internet, many computer magazines went bankrupt or transitioned to an online-only existence. Exceptions include Wired, which is more of a technology magazine than a computer magazine.
List of computer magazines
Notable regular contributors to print computer magazines
See also
Online magazine
Magazine
Online newspaper
References
Magazine
Magazine genres |
3341783 | https://en.wikipedia.org/wiki/Information%20processing%20theory | Information processing theory | Information processing theory is an approach to the study of cognitive development that evolved out of the American experimental tradition in psychology. Developmental psychologists who adopt the information processing perspective account for mental development in terms of maturational changes in basic components of a child's mind. The theory is based on the idea that humans process the information they receive, rather than merely responding to stimuli. This perspective uses an analogy to consider how the mind works like a computer: the mind functions as a biological computer responsible for analyzing information from the environment. According to the standard information-processing model for mental development, the mind's machinery includes attention mechanisms for bringing information in, working memory for actively manipulating information, and long-term memory for passively holding information so that it can be used in the future. The theory addresses how, as children grow, their brains likewise mature, leading to advances in their ability to process and respond to the information they receive through their senses. It emphasizes a continuous pattern of development, in contrast with cognitive-developmental theories such as Jean Piaget's theory of cognitive development, which holds that development proceeds through discrete stages.
Humans as Information Processing Systems
The information processing theory simplified is comparing the human brain to a computer or basic processor. It is theorized that the brain works in a set sequence, as does a computer. The sequence goes as follows, "receives input, processes the information, and delivers an output".
This theory suggests that we as humans will process information in a similar way. Like a computer receives input the mind will receive information through the senses. If the information is focused on, it will move to the short-term memory. While in the short-term memory or working memory, the mind is able to use the information to address its surroundings. The information is then encoded to the long-term memory, where the information is then stored. The information can be retrieved when necessary using the central executive. The central executive can be understood as the conscious mind. The central executive can pull information from the long-term memory back to the working memory for its use. As a computer processes information, this is how it is thought our minds are processing information. The output that a computer would deliver can be likened to the mind's output of information through behavior or action.
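As a toy illustration of the input-process-output sequence described above, the sketch below models the stores in Python. The class name, capacity value, and method names are invented purely for illustration; this is not a validated cognitive model.

```python
from collections import deque

class InformationProcessor:
    """Toy model of the receive-input / process / deliver-output sequence.

    Hypothetical names and capacities chosen for illustration only.
    """

    def __init__(self, short_term_capacity=7):
        # The limited short-term store discards old items when full.
        self.short_term = deque(maxlen=short_term_capacity)
        self.long_term = {}  # effectively unlimited store

    def sense(self, stimulus, attended=True):
        """Input: only attended stimuli reach the short-term store."""
        if attended:
            self.short_term.append(stimulus)

    def encode(self, key):
        """Move the most recent short-term item into long-term memory."""
        if self.short_term:
            self.long_term[key] = self.short_term.pop()

    def retrieve(self, key):
        """Central executive pulls stored information back into working use."""
        item = self.long_term.get(key)
        if item is not None:
            self.short_term.append(item)
        return item

processor = InformationProcessor()
processor.sense("a bird is chirping")
processor.encode("bird sound")
print(processor.retrieve("bird sound"))  # a bird is chirping
```

The "output" of the model is whatever behavior uses the retrieved item, mirroring how a computer's processed input becomes a delivered result.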
Components of the Information Processing Theory
Though information processing can be compared to a computer, there is much more that needs to be explained. Information Processing has several components. The major components are information stores, cognitive processes, and executive cognition.
Information stores are the different places that information can be stored in the mind. Information is stored briefly in the sensory memory, just long enough for us to move it to the short-term memory. Short-term memory can hold only a small amount of information at a time; George Miller, a modern psychologist, discovered that it can hold only 7 (plus or minus 2) items at once, and the information is retained for only 15–20 seconds. Information in the short-term memory can be committed to the long-term memory store, which has no limit on capacity; information stored there can stay for many years. Long-term memory can be divided into semantic, episodic, and procedural memories. Semantic memory is made up of facts or information learned or obtained throughout life. Episodic memory is made up of personal experiences or real events that have happened in a person's life. Lastly, procedural memory is made up of learned procedures or processes, such as riding a bike. Each of these is a subcategory of long-term memory.
Cognitive processes are the way humans transfer information among the different memory stores. Some prominent processes used in transferring information are coding, retrieval, and perception. Coding is the process of transferring information from the short to long-term memory by relating the information of the long-term memory to the item in the short-term memory. This can be done through memorization techniques. Retrieval is used to bring information from the long-term memory back to the short-term memory. This can be achieved through many different recall techniques. Perception is the use of the information processed to interpret the environment. Another useful technique advised by George Miller is recoding. Recoding is the process of regrouping or organizing the information the mind is working with. A successful method of recoding is chunking. Chunking is used to group together pieces of information. Each unit of information is considered a chunk, this could be one or several words. This is commonly used when trying to memorize a phone number.
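The phone-number example can be made concrete with a short Python sketch of recoding: a ten-digit string, which would strain short-term memory as individual digits, becomes three chunks, comfortably within the 7-plus-or-minus-2 limit. The `chunk` helper is a hypothetical illustration, not something from Miller's work.

```python
def chunk(digits, sizes):
    """Regroup a flat string of digits into chunks of the given sizes."""
    chunks, pos = [], 0
    for size in sizes:
        chunks.append(digits[pos:pos + size])
        pos += size
    return chunks

number = "8005551234"
print(chunk(number, [3, 3, 4]))  # ['800', '555', '1234']
print(len(number), "items ->", len(chunk(number, [3, 3, 4])), "chunks")  # 10 items -> 3 chunks
```

Each chunk now occupies one "slot" of short-term memory instead of one slot per digit, which is why phone numbers are conventionally written in grouped form.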
Executive cognition is the idea that someone is aware of the way they process information. They know their strengths and weaknesses. This concept is similar to metacognition. The conscious mind has control over the processes of the information processing theory.
Emergence
Information processing as a model for human thinking and learning is part of the resurgence of cognitive perspectives of learning. The cognitive perspective asserts that complex mental states affect human learning and behavior, and that such mental states can be scientifically investigated. Computers, which process information, include internal states that affect processing. Computers, therefore, provided a model for possible human mental states that gave researchers clues and direction for understanding human thinking and learning as information processing. Overall, information-processing models helped reestablish mental processes (processes that cannot be directly observed) as a legitimate area of scientific research.
Major Theorists
George Armitage Miller was one of the founders of the field of psychology known as cognition. He played a large role when it came to the Information Processing theory. He researched the capacity of the working memory discovering that people can only hold up to 7 plus or minus 2 items. He also created the term chunking when explaining how to make the most of our short-term memory.
Two other theorists associated with the Cognitive Information Processing Theory are Richard C. Atkinson and Richard Shiffrin. In 1968 these two proposed a multi-stage theory of memory. They explained that from the time information is received by the processing system, it goes through different stages to be fully stored. They broke this down to sensory memory, short-term memory, and long-term memory (Atkinson).
Later in 1974 Alan Baddeley and Graham Hitch would contribute more to the information processing theory through their own discoveries. They deepened the understanding of memory through the central executive, phonological loop, and visuospatial sketch pad. Baddeley later updated his model with the episodic buffer.
Atkinson and Shiffrin Model
The Atkinson and Shiffrin Model was proposed in 1968 by Richard C. Atkinson and Richard Shiffrin. This model illustrates their theory of the human memory. These two theorists used this model to show that the human memory can be broken in to three sub-sections: Sensory Memory, short-term memory and long-term memory.
Sensory Memory
The sensory memory is responsible for holding onto information that the mind receives through the senses such as auditory and visual information. For example, if someone were to hear a bird chirp, they know that it is a bird because that information is held in the brief sensory memory.
Short-Term Memory
Short-term memory lasts for about 30 seconds. Short term memory retains information that is needed for only a short period of time such as remembering a phone number that needs to be dialed.
Long-Term Memory
The long-term memory has an unlimited amount of space; memories can be stored there from the beginning of an individual's lifetime. The long-term memory is tapped into when there is a need to recall an event from an individual's previous experiences.
Baddeley and Hitch Model of Working Memory
Baddeley and Hitch introduced the model of working memory in 1974. Through their research, they contributed more to help understand how the mind may process information. They added three elements that explain further cognitive processes. These elements are the central executive, phonological loop, and the visuo-spatial working memory. Later Alan Baddeley added a fourth element to the working memory model called the episodic buffer. Together these ideas support the information processing theory and possibly explain how the mind processes information.
Central Executive
The central executive is a flexible system responsible for the control and regulation of cognitive processes. It directs focus and targets information, making working memory and long-term memory work together. It can be thought of as a supervisory system that controls cognitive processes, keeps the short-term store actively working, intervenes when processes go astray, and prevents distractions.
It has the following functions:
updating and coding incoming information and replacing old information
binding information from a number of sources into coherent episodes
coordination of the slave systems
shifting between tasks or retrieval strategies
inhibition, suppressing dominant or automatic responses
selective attention
The central executive has two main systems: the visuo-spatial sketchpad, for visual information, and the phonological loop, for verbal information.
Using the dual-task paradigm, Baddeley and Erses found, for instance, that patients with Alzheimer's dementia are impaired when performing multiple tasks simultaneously, even when the difficulty of the individual tasks is adapted to their abilities. The two tasks were a memory task and a tracking task. Individual actions are completed well, but as the Alzheimer's becomes more prominent in a patient, performing two or more actions becomes more and more difficult. This research has shown the deterioration of the central executive in individuals with Alzheimer's.
Recent research on executive functions suggests that the 'central' executive is not as central as conceived in the Baddeley & Hitch model. Rather, there seem to be separate executive functions that can vary largely independently between individuals and can be selectively impaired or spared by brain damage.
Phonological Loop
Working in connection with the central executive is the phonological loop. The phonological loop is used to hold auditory information. There are two sub components of the phonological loop; the phonological store and the articulatory rehearsal process. The phonological store holds auditory information for a short period. The articulatory rehearsal process keeps the information in the store for a longer period of time through rehearsal.
Visuospatial Sketch Pad
The visuospatial sketch pad is the other portion of the central executive. This is used to hold visual and spatial information. The visuospatial sketch pad is used to help the conscious imagine objects as well as maneuver through the physical environment.
Episodic Buffer
Baddeley later added a fourth aspect to the model called the episodic buffer. It is proposed that the episodic buffer is able to hold information thereby increasing the amount stored. Due to the ability to hold information the episodic buffer is said to also transfer information between perception, short-term memory and long-term memory. The episodic buffer is a relatively new idea and is still being researched.
Other Cognitive processes
Cognitive processes include perception, recognition, imagining, remembering, thinking, judging, reasoning, problem solving, conceptualizing, and planning. These cognitive processes can emerge from human language, thought, imagery, and symbols.
In addition to these specific cognitive processes, many cognitive psychologists study language-acquisition, altered states of mind and consciousness, visual perception, auditory perception, short-term memory, long-term memory, storage, retrieval, perceptions of thought and much more.
Cognitive processes emerge through senses, thoughts, and experiences. The first step is paying attention, which allows processing of the information given. Cognitive processing cannot occur without learning; the two work hand in hand to fully grasp information.
Nature versus nurture
Nature versus nurture refers to the debate over how people are shaped. The nature view centers on the idea that we are influenced by our genetics, which covers all of our physical characteristics and our personality. The nurture view, on the other hand, revolves around the idea that we are influenced by the environment and our experiences: that we are the way we are because of how we were raised, the type of environment we were raised in, and our early childhood experiences. Information processing theory views humans as actively inputting, retrieving, processing, and storing information; context, social content, and social influences on processing are simply viewed as information. Nature provides the hardware of cognitive processing, and information processing theory explains cognitive functioning based on that hardware. Individuals innately vary in some cognitive abilities, such as memory span, but human cognitive systems function similarly, based on a set of memory stores that hold information and control processes that determine how information is processed. The "nurture" component provides information input (stimuli) that is processed, resulting in behavior and learning. Changes in the contents of the long-term memory store (knowledge) are learning. Prior knowledge affects future processing and thus affects future behavior and learning.
Quantitative versus qualitative
Information processing theory combines elements of both quantitative and qualitative development. Qualitative development occurs through the emergence of new strategies for information storage and retrieval, developing representational abilities (such as the utilization of language to represent concepts), or obtaining problem-solving rules (Miller, 2011). Increases in the knowledge base or the ability to remember more items in working memory are examples of quantitative changes, as well as increases in the strength of connected cognitive associations (Miller, 2011). The qualitative and quantitative components often interact together to develop new and more efficient strategies within the processing system.
Current areas of research
Information Processing Theory is currently being utilized in the study of computer or artificial intelligence. This theory has also been applied to systems beyond the individual, including families and business organizations. For example, Ariel (1987) applied Information Processing Theory to family systems, with sensing, attending, and encoding of stimuli occurring either within individuals or within the family system itself. Unlike traditional systems theory, where the family system tends to maintain stasis and resists incoming stimuli which would violate the system's rules, the Information Processing family develops individual and mutual schemes which influence what and how information is attended to and processed. Dysfunctions can occur both at the individual level as well as within the family system itself, creating more targets for therapeutic change. Rogers, P. R. et al. (1999) utilized Information Processing Theory to describe business organizational behavior, as well as to present a model describing how effective and ineffective business strategies are developed. In their study, components of organizations that "sense" market information are identified as well as how organizations attend to this information; which gatekeepers determine what information is relevant/important for the organization, how this is organized into the existing culture (organizational schemas), and whether or not the organization has effective or ineffective processes for their long-term strategy.
Cognitive psychologists Kahneman and Grabe noted that learners have some control over this process. Selective attention is the ability of humans to select and process certain information while simultaneously ignoring other information. This is influenced by many things, including:
What the information being processed means to the individual
The complexity of the stimuli (based partially on background knowledge)
Ability to control attention (varies based on age, hyperactivity, etc.)
Some research has shown that individuals with a high working memory capacity are better able to filter out irrelevant information. In particular, in one study focusing on dichotic listening, participants were played two audio tracks, one in each ear, and were asked to pay attention to only one. The study showed a significant positive relationship between working memory capacity and the participants' ability to filter out the information from the other audio track.
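The "significant positive relationship" reported by such studies is typically a correlation. The sketch below computes a Pearson correlation coefficient on made-up numbers; the scores are invented solely to illustrate the statistic and are not data from the study.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: working-memory span vs. % of distractor track filtered out.
span     = [4, 5, 5, 6, 7, 8, 9]
filtered = [55, 60, 58, 70, 75, 82, 90]
r = pearson_r(span, filtered)
print(round(r, 2))  # a strongly positive value, close to 1
```

A value of r near +1 on real data would correspond to the positive relationship the study describes; r near 0 would indicate no linear relationship.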
Implications inside the Classroom
Information Processing Theory outlines a way of learning that can be used by teachers inside the classroom. Some examples of classroom implications of the Information Processing Theory include:
Use mnemonics to aid students in retaining information for later use, as well as strengthening the students’ remembering skills.
Example: While teaching the order of operations in mathematics, use the mnemonic “Please excuse my dear Aunt Sally” to symbolize the six steps.
When teaching a specific lesson, use many different teaching styles and tools.
Example: In social studies, if the lesson is on the Rwandan Genocide, lecture on the topic using many pictures, watch the movie Hotel Rwanda, and have a class discussion about the topic and the film.
Pair students together to review the material covered.
Example: When teaching a more abstract lesson, place students into pairs and have each student teach their partner the material covered to further embed the information into the long-term memory.
Break down lessons into smaller more manageable parts.
Example: When teaching an intricate math equation, walk the students through an example step-by-step. After each step, pause for questions to ensure everyone understands.
Assess the extent of the prior knowledge students have about the upcoming material.
Example: After each test, have a Pre-Test about the next chapter to get an understanding of how much prior knowledge the students have.
Give students feedback on each assignment as a reinforcement.
Example: When returning a graded paper ensure there are both positive and negative comments on each paper. This will assist the students in bettering their future work, as well as keep them motivated in their studies.
Connect new lessons back to old lessons and real-life scenarios.
Example: When teaching a lesson about the Industrial Revolution, tie it back to your own town and buildings or areas that exist because of that time period.
Allow for over-learning
Play games like trivial pursuit and jeopardy to encourage extra learning, especially as a review, within the classroom.
References
Further reading
Information science |
17121343 | https://en.wikipedia.org/wiki/MS%20Lady%20of%20Mann | MS Lady of Mann | [Image: Lady of Mann arrives in Douglas, 2004]
MS Lady of Mann (II) was a side-loading car ferry built in 1976 for the Isle of Man Steam Packet Company and operated on the Douglas–Liverpool crossing. She served the company for 29 years. In 2005, she was converted to a Roll-on/roll-off ferry and was operated by SAOS Ferries in Greece under the name MS Panagia Soumela until she was scrapped in August 2011.
Isle of Man Steam Packet Company
Lady of Mann was the final vessel in a quartet of car ferries, following sisters delivered in 1962, 1966 and 1972. She was the second vessel in the line's history to be so named, the first being the company's "centenary steamer", which entered service in 1930. The fourth car ferry was ordered from Ailsa Shipbuilding in Troon, Scotland as demand for car space continued. Lady of Mann arrived in service in 1976. Known as the "Lady", she was the flagship of the fleet until 1984.
Based on the earlier Mona's Queen, she was smaller and had 12-cylinder diesel engines, compared to her elder sister's 10.
Her maiden voyage was the morning sailing to Liverpool on 30 June 1976. Already several weeks late, she had missed the peak TT traffic, something which caused much stress for the Steam Packet Company. Lady of Mann remained the fleet's flagship until 1984, when she was replaced by Mona's Isle. Her owners were in financial difficulties, forcing a merger with Manx Line.
In 1989, Lady of Mann was withdrawn from service for a £2.6 million refit at Wright and Beyer. She received a complete modernisation of the interior layout, increased vehicle capacity, passenger capacity for 1000, and a new livery; she returned to service on 26 May 1989. This made her sister Mona's Queen redundant, and she was withdrawn from service in 1990.
Disaster struck on 2 June 1993. Instead of going astern onto the Victoria Pier in Douglas, she surged straight ahead and crashed into the Battery Pier, crumpling her bow. She was temporarily repaired, and issued with a lower passenger certificate to operate during TT, after which she was withdrawn for permanent repairs. The first fastcraft to operate from the island was a result of this, with SeaCat Scotland making a return sailing from Stranraer.
1994 saw the arrival of the first SeaCat to operate for the Steam Packet Company, and HSC SeaCat Isle of Man operated for the company in place of Lady of Mann. A period of calm weather allowed successful operation for the wave-restricted SeaCat. Problems at the end of August led to Lady of Mann returning to service, and to a drop in passenger numbers among travellers who had been tempted by the SeaCat's high speed. 1995 saw Lady of Mann operate for Porto Santo Line in Madeira.
In 1996, the Steam Packet Company was bought by Sea Containers and all vessels in the fleet were painted in the new owner's blue livery.
After the 1996 TT, Lady of Mann was employed on the new Liverpool–Dublin service, taking around 6 hours to complete a passage. She was replaced on this service in 1997 by SuperSeaCat Two, and often came to the rescue of the craft due to recurrent technical problems and weather cancellations. She was joined in 1997 by the new flagship, Ben-my-Chree, the largest vessel the company has ever operated.
Following the 1999 TT period, she was not sent to her usual Madeira charter, but instead retained as a back-up vessel for Ben-my-Chree. The winter of 1999 led to a long programme of sailings for Lady of Mann, with the fastcraft and Ben-my-Chree unable to operate. The following year after TT, she was sent to the Azores for a three-month charter. During the winter of 2000–01, Lady of Mann maintained the company's Douglas–Liverpool services, before entering Cammell Laird yard for a refit to comply with the latest SOLAS regulations, which included a new fast response craft on her starboard boat deck.
Due to the foot and mouth outbreak, Lady of Mann's 25th anniversary cruise to her birthplace in Troon was cancelled, to prevent the disease reaching the island. This also led to cancellation of the annual TT races. The following year's TT saw "the highest recorded traffic for about 20 years". Lady of Mann covered for Ben-my-Chree on 28/29 May 2002 when the latter had to be withdrawn for emergency repairs.
Sea Containers put the Steam Packet up for sale and it was bought by Montagu Private Equity.
During Ben-my-Chree's major refit in 2004, sailings were covered by Lady of Mann and the chartered freight ship Hoburgen. The island had a record TT season in 2004 after which, Lady of Mann went out once again on charter to the Azores, returning for the winter schedule. During the winter of 2004, the Steam Packet Company operated only Ben-my-Chree and Lady of Mann, with the fastcraft being laid up.
In 2005, the company sold the "Lady" to SAOS Ferries of Greece.
SAOS Ferries
Renamed Panagia Soumela, she left Liverpool for the final time in October 2005, arriving in Greece the following month. Over the winter of 2005-6, she was converted to a stern-loading vessel in Piraeus, increasing in size considerably. This left her ex-fleetmate Mary the Queen as the sole example of a Steam Packet side-loader still in service.
In August 2006, Panagia Soumela commenced operations for SAOS Ferries on the Lavrion–Limnos route. In December 2008, she was laid up in Lavrion but returned to service the same month. As of 2009, the Panagia Soumela continued to operate between Lavrion and Limnos. Panagia Soumela was sold in 2011 to be scrapped at Aliağa, Turkey.
In popular culture
In 1991, Lady of Mann appeared in Alan Parker's film The Commitments. She features in the Gerry and the Pacemakers video for their re-release of "Ferry Across the Mersey". She also appears prominently in the background in episode 5 of the first series (1983) of the Granada UK sketch-comedy series Alfresco, in a sketch in which two characters walk along the docks.
References
Notes
Bibliography
Ships of the Isle of Man Steam Packet Company
Ferries of the Isle of Man
Ships built in Scotland
1975 ships |
42212127 | https://en.wikipedia.org/wiki/MS-DOS%204.0%20%28multitasking%29 | MS-DOS 4.0 (multitasking) | MS-DOS 4.0 was a multitasking release of MS-DOS developed by Microsoft based on MS-DOS 2.0. Lack of interest from OEMs, particularly IBM (who previously gave Microsoft multitasking code on IBM PC DOS included with TopView), led to it being released only in a scaled-back form. It is sometimes referred to as European MS-DOS 4.0, as it was primarily used there. It should not be confused with PC DOS 4.00 or MS-DOS 4.01 and later, which did not contain the multi-tasking features.
History
Apricot Computers pre-announced "MS-DOS 4.0" in early 1986, and Microsoft demonstrated it in September of that year at a Paris trade show. However, only a few European OEMs, such as and International Computers Limited (ICL), actually licensed releases of the software. In particular, IBM declined the product, concentrating instead on improvements to MS-DOS 3.x and their new joint development with Microsoft to produce OS/2.
As a result, the project was scaled back, and only those features promised to particular OEMs were delivered. In September 1987, a version of multi-tasking MS-DOS 4.1 was reported to be developed for the ICL DRS Professional Workstation (PWS). No further releases were made once the contracts had been fulfilled.
In July 1988, IBM announced "IBM DOS 4.0", an unrelated product continuing from DOS 3.3 and 3.4, leading to initial conjecture that Microsoft might release it under a different version number. However, Microsoft eventually released it as "MS-DOS 4.0", with MS-DOS 4.01 following quickly to fix widely reported issues.
Features
As well as minor improvements such as support for the New Executable file format, the key feature of the release was its support for preemptive multitasking. This did not use the protected mode available on 80386 processors, but allowed specially-written programs to continue executing in a "background mode", where they had no access to user input and output until returned to the foreground. The OS was reported to include a time-sliced scheduler and interprocess communication via pipes and shared memory. This limited form of multitasking was considered to be more useful in a server rather than workstation environment, particularly coupled with MS-Net 2.0, which was released simultaneously.
Other limitations of MS-DOS 3.0 remained, including the inability to use memory above 640 KB, and this contributed to the product's lack of adoption, particularly in light of the need to write programs specifically targeted at the new environment.
INT 21h/AH=87h can be used to distinguish between the multitasking MS-DOS 4.x and the later MS-DOS/PC DOS 4.x issues.
Microsoft president Jon Shirley described it as a "specialized version" and went as far as saying "maybe we shouldn't have called it DOS 4.0", although it's not clear whether this was always the intention, or if a more enthusiastic response from OEMs would have resulted in it being the true successor to DOS 3.x. The marketing positioned it as an additional option between DOS 3.x for workstations, and Xenix for higher-end servers and multiuser systems.
External commands
MS-DOS Version 4.10.20 supports the following external commands:
APPEND
ASSIGN
ATTRIB
BACKUP
CHKDSK
COMMAND
DEBUG
DETACH
DISKCOMP
DISKCOPY
EDLIN
EXE2BIN
FC
FDISK
FIND
FORMAT
GRAFTABL
GRAPHICS
GWBASIC
HEADPARK
INSTALLX
JOIN
LABEL
LINK4
MODE
MORE
MOUS
PERM0
PRINT
QUEUER
RECOVER
REPLACE
RESTORE
SETUP
SHARE
SORT
SUBST
SYS
TREE
XCOPY
See also
Concurrent DOS, Concurrent DOS 286, Concurrent DOS 386 - Concurrent CP/M-based multiuser multitasking OS with DOS emulator since 1983
DOS Plus - Concurrent PC DOS-based multitasking OS with DOS emulator since 1985
Novell DOS, OpenDOS, DR-DOS - successors of DOS Plus with preemptive multitasking in VDMs since 1993
FlexOS - successor of Concurrent DOS 286 since 1986
4680 OS, 4690 OS - successors of FlexOS 286 and FlexOS 386 since 1986
Multiuser DOS - successor of Concurrent DOS 386 since 1991
REAL/32 - successor of Multiuser DOS since 1995
PC-MOS/386 - multiuser multitasking DOS clone since 1987
VM/386 - multiuser multitasking DOS environment since 1987
TopView - DOS-based multitasking environment since 1985
DESQview, DESQview/X - DOS-based multitasking environment since 1985
Virtual DOS machine
Datapac Australasia
References
Further reading
1986 software
Discontinued Microsoft operating systems
Disk operating systems
DOS variants
Proprietary operating systems
Assembly language software |
51382092 | https://en.wikipedia.org/wiki/Luay%20Nakhleh | Luay Nakhleh | Luay K. Nakhleh (Arabic: لؤي نخله; born May 8, 1974) is a Palestinian-Israeli-American computer scientist and computational biologist who is the William and Stephanie Sick Dean of the George R. Brown School of Engineering, a Professor of Computer Science and a Professor of BioSciences at Rice University in Houston, Texas.
Biography
Nakhleh was born on May 8, 1974 to a Christian, Palestinian family in Israel. He currently lives with his Japanese wife and two children in Texas, and holds both U.S. and Israeli citizenship.
Nakhleh did his undergraduate studies in the Department of Computer Science at the Technion, Israel Institute of Technology, earning a bachelor's degree in 1996. He earned a master's degree in Computer Science from Texas A&M University in 1998, and a PhD degree in Computer Science from the University of Texas at Austin, under the supervision of Prof. Tandy Warnow, in 2004. Nakhleh started his academic position at Rice University in July 2004, and became a Full Professor in 2016. He served as the J.S. Abercrombie Professor of Computer Science from July 2018 until December 2020. Nakhleh served as Chair of the Computer Science Department at Rice University from 2017 to 2020.
In addition to his duties as the Dean of Engineering at Rice University, Nakhleh currently teaches courses in discrete mathematics and computational biology. Nakhleh has received high acclaim at Rice University for his skills in teaching, and he is the recipient of many awards in this area.
Research
Nakhleh's research has been focused mainly on computational and statistical approaches to phylogenomics and comparative genomics under scenarios where the evolutionary history of the genomes is not treelike. His earlier work in this area focused on parsimonious phylogenetic networks: networks that embed a given set of trees with the lowest number of reticulations, assuming all gene tree incongruence is due to reticulate evolution. He and his colleagues also applied similar approaches to language data to elucidate the (reticulate) evolutionary history of the Indo-European languages. Their paper on perfect phylogenetic networks was included as one of the 20 best papers published in Language, the flagship journal of the Linguistic Society of America, in the 30-year period 1986-2016.
Later, his work started focusing on statistical approaches, in order to account for other evolutionary processes that could be at play in genomic data sets, most notably incomplete lineage sorting. These approaches could be viewed as approximations of the multispecies coalescent with gene flow.
Additionally, Nakhleh has done research on biological networks (modeling and evolution) and, more recently, on computational questions arising in cancer genomics.
Nakhleh and his group have been developing PhyloNet, an open-source software package, implemented in Java, for inference and analysis of (explicit) phylogenetic networks.
Honors and awards
Nakhleh's honors and awards include:
The National Science Foundation CAREER Award, 2009.
The Sloan Fellowship in the Molecular Biology category, 2010.
The Guggenheim Fellowship in the Organismic Biology and Ecology category, 2012.
References
Living people
Rice University faculty
American computer scientists
1974 births |
11454119 | https://en.wikipedia.org/wiki/BIRT%20Project | BIRT Project | The Business Intelligence and Reporting Tools (BIRT) Project is an open source software project that provides reporting and business intelligence capabilities for rich client and web applications, especially those based on Java and Java EE. BIRT is a top-level software project within the Eclipse Foundation, an independent not-for-profit consortium of software industry vendors and an open source community.
The project's stated goals are to address a wide range of reporting needs within a typical application, ranging from operational or enterprise reporting to multi-dimensional online analytical processing (OLAP). Initially, the project has focused on and delivered capabilities that allow application developers to easily design and integrate reports into applications.
The project is supported by an active community of users at BIRT Developer Center and developers at the Eclipse.org BIRT Project page.
BIRT has two main components: a visual report designer within the Eclipse IDE for creating BIRT Reports, and a runtime component for generating reports that can be deployed to any Java environment. The BIRT project also includes a charting engine that is both fully integrated into the report designer and can be used standalone to integrate charts into an application.
BIRT Report designs are persisted as XML and can access a number of different data sources including JDO datastores, JFire Scripting Objects, POJOs, SQL databases, Web Services and XML.
History
The BIRT project was first proposed and sponsored by Actuate Corporation when Actuate joined the Eclipse Foundation as a Strategic Developer on August 24, 2004. The project was subsequently approved and became a top-level project within the Eclipse community on October 6, 2004. The project contributor community includes IBM and Innovent Solutions.
In 2007 IBM's Tivoli Division adopted BIRT as the infrastructure for its Tivoli Common Reporting (TCR) product. TCR produces historical reports on Tivoli-managed IT resources and processes.
The initial project code base was designed and developed by Actuate beginning in early 2004 and donated to the Eclipse Foundation when the project was approved.
Versions
References
Bibliography
External links
Eclipse BIRT project home page
2005 software
Business intelligence
Free reporting software
Free software programmed in Java (programming language)
Eclipse (software)
Eclipse technology
Eclipse software |
24703181 | https://en.wikipedia.org/wiki/Wi-Fi%20Direct | Wi-Fi Direct | Wi-Fi Direct (formerly Wi-Fi Peer-to-Peer) is a Wi-Fi standard for peer-to-peer wireless connections that allows two devices to establish a direct Wi-Fi connection without an intermediary wireless access point, router, or Internet connection. Wi-Fi Direct is single-hop communication, rather than multihop communication like wireless ad hoc networks.
Wi-Fi Direct makes Wi-Fi a general-purpose way of communicating wirelessly, much like Bluetooth. It is useful for everything from internet browsing to file transfer, and for communicating with one or more devices simultaneously at typical Wi-Fi speeds. One advantage of Wi-Fi Direct is the ability to connect devices even if they are from different manufacturers. Only one of the Wi-Fi devices needs to be compliant with Wi-Fi Direct to establish a peer-to-peer connection that transfers data directly between them with greatly reduced setup.
Wi-Fi Direct negotiates the link with a Wi-Fi Protected Setup system that assigns each device a limited wireless access point. The "pairing" of Wi-Fi Direct devices can be set up to require the proximity of a near field communication, a Bluetooth signal, or a button press on one or all the devices.
Background
Basic Wi-Fi
Conventional Wi-Fi networks are typically based on the presence of controller devices known as wireless access points. These devices normally combine three primary functions:
Physical support for wireless and wired networking
Bridging and routing between devices on the network
Service provisioning to add and remove devices from the network.
A typical Wi-Fi home network includes laptops, tablets and phones, devices like modern printers, music devices, and televisions. Most Wi-Fi networks are set up in infrastructure mode, where the access point acts as a central hub to which Wi-Fi capable devices are connected. All communication between devices goes through the access point.
In contrast, Wi-Fi Direct devices are able to communicate with each other without requiring a dedicated wireless access point. The Wi-Fi Direct devices negotiate when they first connect to determine which device shall act as an access point.
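The negotiation mentioned above works in terms of a "Group Owner intent" value: each peer advertises an intent from 0 to 15, the peer with the higher intent becomes the group owner (the device that acts as the access point), a tie-breaker bit resolves equal intents, and negotiation fails if both peers insist with intent 15. A minimal Python sketch of that selection rule follows; the function name and return convention are illustrative, not taken from the specification:

```python
def negotiate_group_owner(intent_a, intent_b, tie_breaker_a):
    """Decide which of two Wi-Fi Direct peers becomes Group Owner.

    intent_a, intent_b: GO intent values in 0..15 (15 means the peer
    strongly wants to host the group, e.g. a mains-powered device).
    tie_breaker_a: tie-breaker bit sent by peer A, consulted only
    when both peers advertise the same intent. Returns "A" or "B".
    """
    if not (0 <= intent_a <= 15 and 0 <= intent_b <= 15):
        raise ValueError("GO intent must be in 0..15")
    if intent_a == intent_b == 15:
        # Both peers insist on hosting: group formation fails.
        raise ValueError("negotiation fails: both peers demand the GO role")
    if intent_a != intent_b:
        return "A" if intent_a > intent_b else "B"
    return "A" if tie_breaker_a else "B"

# A phone (intent 7) meets a battery-powered camera (intent 2):
# the phone hosts the group and runs the soft access point.
print(negotiate_group_owner(7, 2, tie_breaker_a=False))  # prints A
```

The intent values are how implementations encode preferences such as power budget or existing connectivity; the sketch shows only the selection rule, not the surrounding frame exchange.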
Automated setup
With the increase in the number and type of devices attaching to Wi-Fi systems, the basic model of a simple router with smart computers became increasingly strained. At the same time, the increasing sophistication of the hot spots presented setup problems for the users. To address these problems, there have been numerous attempts to simplify certain aspects of the setup task.
A common example is the Wi-Fi Protected Setup system included in most access points built since 2007 when the standard was introduced. Wi-Fi Protected Setup allows access points to be set up simply by entering a PIN or other identification into a connection screen, or in some cases, simply by pressing a button. The Protected Setup system uses this information to send data to a computer, handing it the information needed to complete the network setup and connect to the Internet. From the user's point of view, a single click replaces the multi-step, jargon-filled setup experience formerly required.
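The PIN used by Wi-Fi Protected Setup is an eight-digit code whose last digit is a checksum over the first seven, letting a device reject a mistyped PIN before attempting any exchange. A sketch of the commonly documented checksum rule, assuming Python; the function name is an illustration, not from the standard:

```python
def wps_checksum_digit(pin7):
    """Checksum digit for a 7-digit Wi-Fi Protected Setup PIN.

    Digits in odd positions (counting from the most significant)
    are weighted 3, the others 1; the checksum digit brings the
    weighted sum up to a multiple of 10.
    """
    digits = [int(d) for d in f"{pin7:07d}"]
    weighted = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(digits))
    return (10 - weighted % 10) % 10

# The well-known example PIN 12345670 ends in its checksum digit:
print(wps_checksum_digit(1234567))  # prints 0
```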
While the Protected Setup model works as intended, it was intended only to simplify the connection between the access point and the devices that would make use of its services, primarily accessing the Internet. It provides little help within a network - finding and setting up printer access from a computer for instance. To address those roles, a number of different protocols have developed, including Universal Plug and Play (UPnP), Devices Profile for Web Services (DPWS), and zero-configuration networking (ZeroConf). These protocols allow devices to seek out other devices within the network, query their capabilities, and provide some level of automatic setup.
Wi-Fi Direct
Wi-Fi Direct has become a standard feature in smart phones and portable media players, and in feature phones as well. The process of adding Wi-Fi to smaller devices has accelerated, and it is now possible to find printers, cameras, scanners, and many other common devices with Wi-Fi in addition to other connections, like USB.
The widespread adoption of Wi-Fi in new classes of smaller devices made the need for ad hoc networking much more important. Even without a central Wi-Fi hub or router, it would be useful for a laptop computer to be able to wirelessly connect to a local printer. Although the ad hoc mode was created to address this sort of need, the lack of additional information for discovery makes it difficult to use in practice.
Although systems like UPnP and Bonjour provide many of the needed capabilities and are included in some devices, a single widely supported standard was lacking, and support within existing devices was far from universal. A guest using their smart phone would likely be able to find a hotspot and connect to the Internet with ease, perhaps using Protected Setup to do so. But, the same device would find that streaming music to a computer or printing a file might be difficult, or simply not supported between differing brands of hardware.
Wi-Fi Direct can provide a wireless connection to peripherals. Wireless mice, keyboards, remote controls, headsets, speakers, displays, and many other functions can be implemented with Wi-Fi Direct. This has begun with Wi-Fi mouse products, and Wi-Fi Direct remote controls that were shipping circa November 2012.
File sharing applications such as SHAREit on Android and BlackBerry 10 devices could use Wi-Fi Direct; it was supported on most devices running Android 4.1 (Jelly Bean), introduced in July 2012, and on BlackBerry 10.2. Android 4.2 (Jelly Bean) included further refinements to Wi-Fi Direct, including persistent permissions enabling two-way transfer of data between multiple devices.
The Miracast standard for the wireless connection of devices to displays is based on Wi-Fi direct.
Technical description
Wi-Fi Direct essentially embeds a software access point ("Soft AP"), into any device that must support Direct. The soft AP provides a version of Wi-Fi Protected Setup with its push-button or PIN-based setup.
When a device enters the range of the Wi-Fi Direct host, it can connect to it, and then gather setup information using a Protected Setup-style transfer. Connection and setup is so simplified that it may replace Bluetooth in some situations.
Soft APs can be as simple or as complex as the role requires. A digital picture frame might provide only the most basic services needed to allow digital cameras to connect and upload images. A smart phone that allows data tethering might run a more complex soft AP that adds the ability to bridge to the Internet. The standard also includes WPA2 security and features to control access within corporate networks.
Wi-Fi Direct-certified devices can connect one-to-one or one-to-many and not all connected products need to be Wi-Fi Direct-certified. One Wi-Fi Direct enabled device can connect to legacy Wi-Fi certified devices.
The Wi-Fi Direct certification program is developed and administered by the Wi-Fi Alliance, the industry group that owns the "Wi-Fi" trademark. The specification is available for purchase from the Wi-Fi Alliance.
Commercialization
Laptops
Intel included Wi-Fi Direct on the Centrino 2 platform, in its My WiFi technology by 2008. Wi-Fi Direct devices can connect to a notebook computer that plays the role of a software Access Point (AP). The notebook computer can then provide Internet access to the Wi-Fi Direct-enabled devices without a Wi-Fi AP. Marvell Technology Group, Atheros, Broadcom, Intel, Ralink, and Realtek announced their first products in October 2010. Redpine Signals's chipset was Wi-Fi Direct certified in November of the same year.
Mobile devices
Google announced Wi-Fi Direct support in Android 4.0 in October 2011. While some Android 2.3 devices like Samsung Galaxy S II have had this feature through proprietary operating system extensions developed by OEMs, the Galaxy Nexus (released November 2011) was the first Android device to ship with Google's implementation of this feature and an API for developers.
Ozmo Devices, which developed integrated circuits (chips) designed for Wi-Fi Direct, was acquired by Atmel in 2012.
Wi-Fi Direct became available with the Blackberry 10.2 upgrade.
As of March 2016, no iPhone device implements Wi-Fi Direct; instead, iOS has its own proprietary feature, Apple's MultipeerConnectivity. This protocol and others underpin AirDrop, which transfers large files between Apple devices using a similar (but proprietary) technology to Wi-Fi Direct.
Game consoles
The Xbox One, released in 2013, supports Wi-Fi Direct.
NVIDIA's SHIELD controller uses Wi-Fi Direct to connect to compatible devices. NVIDIA claims a reduction in latency and increase in throughput over competing Bluetooth controllers.
See also
Digital Living Network Alliance
TDLS
Miracast
Li-Fi
Ultra-wideband
FiRa Consortium
Wireless HDMI
References |
25183022 | https://en.wikipedia.org/wiki/Private-collective%20model%20of%20innovation | Private-collective model of innovation | The term private-collective model of innovation was coined by Eric von Hippel and Georg von Krogh in their 2003 publication in Organization Science. This innovation model represents a combination of the private investment model and the collective-action innovation model.
In the private investment model innovators appropriate financial returns from innovations through intellectual property rights such as patents, copyright, licenses, or trade secrets. Any knowledge spillover reduces the innovator's benefits, thus freely revealed knowledge is not in the interest of the innovator.
The collective-action innovation model explains the creation of public goods which are defined by the non-rivalry of benefits and non-excludable access to the good. In this case the innovators do not benefit more than anyone else not investing into the public good, thus free-riding occurs. In response to this problem, the cost of innovation has to be distributed; therefore governments typically invest into public goods through public funding.
As combination of these two models, the private-collective model of innovation explains the creation of public goods through private funding. The model is based on the assumption that the innovators privately creating the public goods benefit more than the free-riders only consuming the public good. While the result of the investment is equally available to all, the innovators benefit through the process of creating the public good. Therefore, private-collective innovation occurs when the process-related rewards exceed the process-related costs.
A laboratory study traced the initiation of private-collective innovation to the first decision to share knowledge in a two-person game with multiple equilibria. The results indicate fragility: when individuals face opportunity costs to sharing their knowledge with others they quickly turn away from the social optimum of mutual sharing. The opportunity costs of the "second player", the second person deciding whether to share, have a bigger (negative) impact on knowledge sharing than the opportunity costs of the first person to decide. Overall, the study also observed sharing behavior in situations where none was predicted.
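The two-person game described above can be illustrated with a stylized payoff computation. This is an illustrative toy model, not the study's actual parameterization: each player either shares or withholds knowledge, mutual sharing unlocks a collaboration benefit, and sharing always carries an opportunity cost.

```python
def payoff(my_choice, other_choice, benefit=5.0, cost=2.0):
    """Stylized payoff for one player in a knowledge-sharing game.

    Sharing pays off only when the other side also shares (the joint
    solution is worth `benefit`), while sharing always incurs an
    opportunity cost `cost`. All numbers are illustrative.
    """
    value = benefit if (my_choice == "share" == other_choice) else 0.0
    return value - (cost if my_choice == "share" else 0.0)

def is_equilibrium(a, b, **params):
    """True if neither player gains by unilaterally switching."""
    flip = {"share": "withhold", "withhold": "share"}
    return (payoff(a, b, **params) >= payoff(flip[a], b, **params) and
            payoff(b, a, **params) >= payoff(flip[b], a, **params))

# With benefit > cost, mutual sharing AND mutual withholding are
# both equilibria: the multiple equilibria the study starts from.
assert is_equilibrium("share", "share")
assert is_equilibrium("withhold", "withhold")
# When opportunity costs exceed the collaboration benefit, sharing
# stops being an equilibrium, matching the observed fragility.
assert not is_equilibrium("share", "share", benefit=2.0, cost=5.0)
```

Under these assumptions, private-collective innovation gets started only when the process-related benefit of mutual sharing outweighs each contributor's opportunity cost.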
Recent work shows that a project will not "take off" unless the right incentives are in place for innovators to contribute their knowledge to open innovation from the beginning. This work explores social preferences in the initiation of private-collective innovation through a simulation study that elucidates how inequality aversion, reciprocity, and fairness affect the underlying conditions under which it gets started.
While firms increasingly seek to cooperate with outside individuals and organizations to tap into their ideas for new products and services, mechanisms that motivate innovators to "open up" are critical in achieving the benefits of open innovation.
The theory of private collective innovation has recently been extended by a study on the exclusion rights for technology in the competition between private-collective and other innovators. The authors argue that the investment in orphan exclusion rights for technology serves as a subtle coordination mechanism against alternative proprietary solutions.
Additionally, the research on private-collective innovation has been extended with theoretical explanations and empirical evidence of egoism and altruism as significant explanations for cooperation in private-collective innovation. Benbunan-Fich and Koufaris show that contributions to a social bookmarking site are a combination of intentional and unintentional contributions. The intentional public contribution of bookmarks is driven by an egoistic motivation to contribute valuable information and thus showing competence.
Example: Development of Free and Libre Open Source Software
The development of open source software / Free Software (consequently named Free and Libre Open Source Software – FLOSS) is the most prominent example of private-collective innovation. By definition, FLOSS represents a public good. It is non-rival because copying and distributing software does not decrease its value. And it is non-excludable because FLOSS licenses enable everyone to use, change and redistribute the software without any restriction.
While FLOSS is created by many unpaid individuals, it has been shown that technology firms invest substantially in the development of FLOSS. These companies release previously proprietary software under FLOSS licenses, employ programmers to work on established FLOSS projects, and fund entrepreneurial firms to develop certain features. In this way, private entities invest into the creation of public goods.
References
Community building
Free software
Political science
Public economics
Science and technology studies |
52578073 | https://en.wikipedia.org/wiki/TumbleSeed | TumbleSeed | TumbleSeed is an indie action video game, created by developer Benedict Fritz and designer Greg Wohlwend, in which the player balances a rolling seed on an ascending, horizontally slanted vine past procedurally generated obstacles to reach the top of a mountain. It is based on the mechanical arcade game Ice Cold Beer and built partially through the Cards Against Humanity game incubation program. TumbleSeed was released in May 2017 to generally favorable reviews on MacOS, Nintendo Switch, PlayStation 4, and Windows platforms. Critics particularly appreciated the haptic sense of the rolling seed afforded by the Nintendo Switch's sensitive HD Rumble. Many reviewers noted TumbleSeed's intense and sometimes uneven difficulty, which the developers hoped to address in a post-release update. They credited this reputation for difficulty and a tepid critical reception for the game's slow sales, but were proud of their work.
Gameplay
TumbleSeed begins in a quiet forest village disturbed by angry creatures creating holes in the ground. A seed is tasked to fulfill a vague prophecy by ascending the mountain, progressing through the forest, jungle, desert, and snow, to save the town. The player controls an ascending, horizontal, and slanted vine to balance a rolling seed past procedurally generated holes and enemy obstacles to reach the top of a mountain. With a controller, the player uses two analog sticks to raise and lower the two edges of the horizontal vine while managing the momentum of the seed. If the seed falls into a hole or hits an enemy, the player loses a life ("heart") and resumes from an earlier position, but the game ends once the player runs out of hearts. Some punishments are more severe: A seed that falls into a hole will resume from the last flagged checkpoint, but if the seed collides with spikes or loses all hearts, the game ends and the player returns to the bottom of the mountain to try again.
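The balancing mechanic described above, a seed rolling along a vine whose two ends the player raises and lowers, behaves like a ball on a tiltable slope. A minimal per-frame physics sketch follows; the constants and structure are illustrative assumptions, not the game's actual code:

```python
import math

def step_seed(x, v, left_h, right_h, vine_len=10.0, g=9.8, dt=1 / 60):
    """Advance the seed one frame along the vine.

    x: seed position along the vine (0 = left end, vine_len = right).
    v: current velocity along the vine.
    left_h, right_h: heights of the vine's two ends, set by the
    player's two analog sticks; the seed accelerates toward the
    lower end in proportion to the sine of the tilt angle.
    """
    angle = math.atan2(right_h - left_h, vine_len)
    v += -g * math.sin(angle) * dt           # gravity pulls toward the low end
    x = min(max(x + v * dt, 0.0), vine_len)  # seed stays on the vine
    return x, v

# Raising the right end tilts the vine and the seed rolls left:
x, v = 5.0, 0.0
for _ in range(60):  # one second at 60 frames per second
    x, v = step_seed(x, v, left_h=0.0, right_h=1.0)
print(round(x, 2))  # prints 4.5: drifted left, with momentum still building
```

The momentum term is what makes the game hard: once the seed is moving, leveling the vine stops the acceleration but not the velocity, so the player must counter-tilt to brake.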
While ascending the mountain, the player activates helpful power-up abilities by collecting and planting minerals in marked plots. While there are over 30 abilities in total, the player starts with the basic four: Flagseed creates an additional checkpoint (in lieu of regressing further down the mountain upon losing a heart), Heartseed creates additional hearts from minerals, and the Crystal ability generates minerals from multiple plots. Thornvine's single-use, protective thorny vine can hurt enemies if aligned correctly, though some enemies require more than one hit to die, and the player forfeits all thorns upon losing a heart. As the game progresses, the player collects additional abilities, such as Flailflower, which turns the seed into an anchor for a spiky flail, and Floodfruit, which fills surrounding holes with water for easy passage. Thus the player constantly weighs whether to use minerals offensively (to eliminate enemies), defensively (to bypass difficult sections), or at all.
After completing a portion of the mountain, the player reaches a basecamp intermezzo with multiple opportunities for collecting new abilities. The player can trade minerals for abilities in the store, and choose between two abilities to receive for free. The basecamp also contains minigames, such as a shooting range, a gambling device, and optional sidequests, which reward the player with shortcuts to later stages in subsequent playthroughs for completing challenges such as finishing a level within a time limit or without receiving damage.
Some enemies move in predictable patterns and others hunt the player. They range in resemblance from larvae and vultures to spiders. While ascending the mountain, the player passes through themed biomes, such as a forest, a desert, and snow, and each section of the mountain introduces new enemies that force the player to modify their strategy. TumbleSeed is depicted in a simple, colorful art style. Its characters are one-eyed, seed-shaped creatures who occasionally have hats. Their dialogue is visualized with speech bubbles, in text and emoji. TumbleSeed also includes quests, leaderboards, and daily challenges.
The Four Peaks patch adds several elements to make the game friendlier, including training levels, a Weekly Challenge, and benefits that persist between playthroughs. The Battle Mode, exclusive to the Nintendo Switch release, is a King of the Hill-style mode for two players to compete to hold a set region for the longest.
Development
The game is based on Ice Cold Beer, a mechanical arcade game in which the player controls the ends of a metal rod to raise a rolling, metal ball vertically in a wooden cabinet while avoiding holes cut into the wood. The Chicago-based indie development team found the machine at an arcade where they competitively played the 2013 video game Killer Queen. Developer Benedict Fritz later prototyped a version of Ice Cold Beer in the Unity game engine with a simple white background and black, circular holes. His friend, designer Greg Wohlwend, who had previously worked on Threes and Ridiculous Fishing, saw a short, online video of Fritz's prototype and the two began to work together on the project by adding enemies, an in-game world, and procedurally generated levels. Their prototypes explored the dungeon crawl and open-world action-adventure genres. They spent two years designing the title and wanted to honor and contribute to the legacy of Ice Cold Beer. They also worked through a Cards Against Humanity game incubation program in 2015, and several other indie developers based in Chicago joined the production: David Laskey, Jenna Blazevich, and composer Joel Corelitz. Based on the project's loose, collaborative nature, Metro called the game's pedigree "as indie as indie gaming gets". The team released a promotional trailer in August 2016 ahead of a demo at the PAX West game show.
With the announcement of the Nintendo Switch, the development team sought to become a "flagship" demonstration of the console's HD Rumble feature, in which the player proprioceptively "feels" in-game textures through the controller's fine-tuned vibrations. The developers thought that the Switch's "high-fidelity vibration" afforded players a greater sense of in-game detail with better perception about the seed's speed and direction. They cold-called Nintendo and began work together in June 2016, prior to the Switch's announcement. Designer Greg Wohlwend saw the game as sharing classic Nintendo attributes, including a wide color palette, accessibility, and difficulty. Nintendo, at the time, was thawing its relations with indie developers by removing restrictions for development on their consoles. Wohlwend said that their partnership was positive and the port of the game's code to the Switch platform was painless. Originally planned for release in 2015, TumbleSeed was released on May 2, 2017, on Nintendo Switch, MacOS, PlayStation 4, and Windows platforms. The Nintendo Switch release includes an exclusive King of the Hill-style Battle Mode.
Reception
The game received "generally favorable" reviews, according to video game review aggregator Metacritic. Reviewers recommended the game, in particular, for short bursts of play with the portable Nintendo Switch. Reviewers considered the Nintendo Switch's HD Rumble a good fit for the game's core rolling mechanic and aesthetics. Nintendo Life said HD Rumble made the Switch's portable mode "the definitive way to play" TumbleSeed. Its haptic sensation approximated how it felt to play the mechanical Ice Cold Beer. Reviewers additionally compared the Ice Cold Beer conceit to the core mechanic of Shrek n' Roll (2007) and portable, plastic maze puzzles.
Multiple reviewers commented on the gameplay's chafing difficulty and uneven difficulty progression. TumbleSeed stops short of the "masocore" genre of punishingly difficult games, but its imprecise controls nevertheless demand patience to develop proficiency and mastery. At a fundamental level, Eurogamer found the controls "maddening and mesmerizing", and Polygon, out of frustration, wanted to directly control the seed. The multitasking player is made to "never feel safe" between balancing encroaching enemies, unpredictable enemy spawns, precarious controls, and easy deaths, all while managing crystal resources. Polygon considered the game's strategic elements, such as deciding whether to go on offense or defense, to be its most interesting aspect. Others focused on TumbleSeed's art and movement mechanics; Wired's reviewer appreciated the slow process of learning how best to navigate the game and found a new aspect to appreciate each time she played it. Some disagreed as to whether the game's challenge was appropriate and never malicious, or often unbalanced and frustrating compared to its rewards. The randomness of each basecamp's power-up offerings also contributed to uneven difficulty between playthroughs.
Metro contended that though TumbleSeed was marketed as a roguelike, apart from its procedurally generated levels, it was closer in genre to traditional arcade games. Like a roguelike, though, player mistakes are costly and unforgivingly punished by returning the player to the beginning of the game. Polygon wrote that while this mechanic is acceptable, losing all equipped upgrades upon losing a single heart was harsh.
The Guardian saw "an obvious throughline" between designer Greg Wohlwend's prior work and the colorful, simple, and cute visuals of TumbleSeed. Reviewers wrote that "bright art and cheery music" made the environment inviting and lively, though not so memorable as to distract from the gameplay. Some critics struggled to visually distinguish between cosmetic and active objects. It visually recalled the art of Patapon, according to Nintendo Life. TumbleSeed was an honorable mention for "Excellence in Audio" at the 2017 Game Developers Conference.
Post-release
A month after release, the developers worked to make the game less difficult, in response to criticism from reviewers. In a postmortem released alongside a set of updates, TumbleSeed designer Greg Wohlwend credited the game's slow sales to the title's tepid critical reception and stigma of high difficulty. Critics, he wrote, considered the game unfair and unforgivingly hard, as reflected in lukewarm scores from major gaming websites. Players were expected to withstand an overwhelming amount of simultaneous elements and as such, few reached the end of the game. The "4 Peaks Update" added four new areas and abilities to simplify the game, such as reducing incoming damage or increasing stealth near enemies. The game was unlikely to recoup its costs, Wohlwend wrote, and the update was doubtful to change that course, but he felt proud of their development effort and considered the update to be therapy. The June 2017 update was released for Windows and was anticipated for consoles soon after.
References
External links
2017 video games
Action video games
Indie video games
MacOS games
Single-player video games
Video games about plants
Video games developed in the United States
Video games scored by Joel Corelitz
Video games set in forests
Windows games
Nintendo Switch games |
30544835 | https://en.wikipedia.org/wiki/Marta%20Kwiatkowska | Marta Kwiatkowska | Marta Zofia Kwiatkowska is a Polish theoretical computer scientist based in the United Kingdom.
Kwiatkowska is Professor of Computing Systems in the Department of Computer Science at the University of Oxford, England, and a Fellow of Trinity College, Oxford. Her research focuses on developing modelling and automated verification techniques for computing systems in order to guarantee safe, secure, reliable, timely and resource-efficient operation.
Education
Kwiatkowska received her Bachelor of Science and Master of Science degrees in Computer Science with distinction summa cum laude from Jagiellonian University in Krakow, Poland. She obtained her PhD in Computer Science from the University of Leicester in 1989.
Career and research
Kwiatkowska was assistant professor at Jagiellonian University, Krakow, Poland (1980–1988); research scholar and lecturer in Computer Science at University of Leicester (1984–1994); and lecturer in Computer Science, reader in Semantics for Concurrency, and professor of Computer Science at University of Birmingham (1994–2007). Joining the University of Oxford in 2007, Kwiatkowska was the first female professor in the Department of Computer Science and now heads the Automated Verification research theme.
Kwiatkowska’s research develops models and analysis methods for complex systems, as found in computer networks, biological organisms and electronic devices. Kwiatkowska led development of the PRISM probabilistic model checker; PRISM has been downloaded over 79,000 times and there are over 400 papers by external research teams using PRISM (as at January 2021).
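The kind of analysis PRISM automates can be illustrated with a small, hypothetical example: computing, by value iteration, the probability of eventually reaching a target state in a discrete-time Markov chain — the core quantity behind a PRISM query such as P=? [ F "target" ]. This sketch is illustrative only and is not PRISM's implementation; the chain, state names, and function are made up.

```python
# Illustrative sketch only: this is not PRISM or its implementation.
# It computes, by value iteration, the probability of eventually reaching
# a target state in a small discrete-time Markov chain. The chain, state
# names, and function below are hypothetical.

def reachability_probabilities(transitions, target, iterations=1000):
    """Return Pr(eventually reach a state in `target`) for each state.

    transitions: dict mapping state -> {successor: probability}
    target: set of target states (treated as absorbing for the query)
    """
    prob = {s: (1.0 if s in target else 0.0) for s in transitions}
    for _ in range(iterations):
        for s in transitions:
            if s not in target:
                # The reachability probability of a state is the
                # probability-weighted average over its successors.
                prob[s] = sum(p * prob[t] for t, p in transitions[s].items())
    return prob

# A three-state chain: "start" moves to "ok" with probability 0.7
# and to "fail" with probability 0.3; both outcomes are absorbing.
chain = {
    "start": {"ok": 0.7, "fail": 0.3},
    "ok": {"ok": 1.0},
    "fail": {"fail": 1.0},
}
probs = reachability_probabilities(chain, {"ok"})
print(round(probs["start"], 3))  # 0.7
```

Real probabilistic model checkers go far beyond this toy loop — handling nondeterminism, rewards, and temporal-logic properties over much larger state spaces — but the underlying fixed-point computation is of this flavor.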
Kwiatkowska has been instrumental in the development of probabilistic and quantitative methods in verification on the international scene; her recent work incorporates synthesis from quantitative specifications with a focus on safety and robustness for machine learning and AI. A member of the Global Partnership on Artificial Intelligence (GPAI) 'Responsible AI Working Group', and the Royal Society's 'Digital Technology and the Planet Working Group', Kwiatkowska advocates responsible adoption of trustworthy AI.
As a senior member of OxWoCS, contributor to the Perspektywy Women in Tech Summit and adviser to the Suffrage Science Award (2016), Kwiatkowska encourages women to pursue careers in science.
Kwiatkowska serves on the editorial boards of Information and Computation, Formal Methods in System Design, Logical Methods in Computer Science, Science of Computer Programming and the Royal Society's Open Science.
Current projects
FUN2MODEL: From FUNction-based TO MOdel-based automated probabilistic reasoning for DEep Learning (2019-2024), a European Research Council (ERC) Advanced Grant.
Mobile Autonomy: Enabling a Pervasive Technology of the Future (2015–2021), an Engineering and Physical Sciences Research Council (EPSRC) Programme Grant (co-I).
Selected talks and lectures
'Probabilistic Model Checking for the Data-Rich World' BCS 2020 Lovelace Lecture, on-line event, May 2021.
'Probabilistic Model Checking for Strategic Equilibria-Based Decision Making' Conference on Principles of Knowledge Representation and Reasoning (KR 2020), on-line event, September 2020.
'When to Trust a Self-Driving Car...' - Milner Award Prize Lecture, November 2018.
'When to trust a robot' – Hay Festival talk on 30 May 2017.
'Model Checking and Strategy Synthesis for Stochastic Games: From Theory to Practice' – invited lecture at Simons Institute for the Theory of Computing, UC Berkeley, October 2016.
'Mobile Autonomous Robots' – invited lecture at IntelliSys, September 2016.
Awards and honours
Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS Society), 2020.
Awarded the BCS Ada Lovelace Medal for 'her research in probabilistic and quantitative verification. Since 2001 she has led the development of the highly influential probabilistic model checker PRISM', 2019.
Fellow of the Royal Society, (FRS), 2019.
Became the first female winner of the Royal Society Milner Award in recognition of ‘her contribution to the theoretical and practical development of stochastic and quantitative model checking’, 2018.
Jointly awarded the HVC 2016 Award for her ‘contributions to probabilistic model checking and, more generally, to formal verification’, 2016.
Kwiatkowska was awarded an Honorary Doctorate at the KTH Royal Institute of Technology in 2014, and is Fellow of ACM, Fellow of EATCS, Fellow of the BCS, a member of Academia Europaea, and Fellow of Polish Society of Arts & Sciences Abroad.
Personal life
Kwiatkowska lives in Oxford with her husband, with whom she has a daughter.
References
External links
Kwiatkowska's University of Oxford homepage
Kwiatkowska's Trinity College homepage
Living people
Jagiellonian University alumni
Jagiellonian University faculty
Academics of the University of Leicester
Academics of the University of Birmingham
Members of the Department of Computer Science, University of Oxford
Members of Academia Europaea
Fellows of Trinity College, Oxford
Polish computer scientists
British computer scientists
British women computer scientists
British people of Polish descent
Formal methods people
Polish women computer scientists
Polish women academics
Fellows of the Association for Computing Machinery
Fellows of the Royal Society
1957 births
Female Fellows of the Royal Society |
1913256 | https://en.wikipedia.org/wiki/Command%20%26%20Conquer%3A%20Generals | Command & Conquer: Generals | Command & Conquer: Generals is a real-time strategy video game and the seventh installment in the Command & Conquer series. It was released for Microsoft Windows and Mac OS in 2003 and 2004. The Windows version of Generals was developed by EA Pacific and published by EA Games, the Mac OS X version was developed and published by Aspyr Media. The Mac OS X version was re-released by Aspyr for the Mac App Store on March 12, 2015. In the game, the player can choose from three different factions: the United States, China and the Global Liberation Army (GLA).
Generals utilizes SAGE (Strategy Action Game Engine), an extended version of the 3D engine from Command & Conquer: Renegade. An expansion pack, entitled Command & Conquer: Generals – Zero Hour, was additionally released for PC in 2003, and for Mac OS in 2005. Both Generals and Zero Hour were met with highly positive reviews. A sequel, Command & Conquer: Generals 2, was in development, until it was repurposed to a free-to-play game known as Command & Conquer. The new game was part of the Generals franchise and was cancelled on October 29, 2013 by EA after negative feedback during the closed alpha test.
Gameplay
Command & Conquer: Generals operates in a similar manner to other titles in the series - players construct bases and train units from these, acquiring resources on one of the game's maps to fund this, and then defeat their opponents by eliminating their bases and armies. Various unit types are available for training, ranging from infantry to ground vehicles and aircraft, each focusing on specific roles (e.g. anti-vehicle), while base structures are divided between unit production, support facilities, and defensive counter-measures. Success in the game relies upon making the most of mixing units, utilising their advantages while countering their disadvantages with other units, in order to win against opponents - for example, rifle infantry are useful for countering anti-vehicle infantry, but need to rely on tanks to counter anti-infantry vehicles. Units that survive and manage to kill other units gain "veterancy" points, earning chevrons when they level up, effectively improving their abilities and making them more powerful; at the highest level, it also grants the ability to repair any damage when out of combat.
Training can be queued at production structures and units sent to rally points designated by the player, with the ability to research upgrades to improve certain units. In addition, players can also deploy superweapons which can decimate an opponent's forces, though they must wait for a cooldown period to end before using them again. The factions function similarly in how they operate, but maintain differences in units and strategies:
The United States rely on high-tech weaponry, such as drones, and a dominant air force to deal with opponents, and are able to use supply units to airdrop rifle infantry into occupied buildings, alongside flash grenades, to clear them out. In addition, they can improve power plants, their defensive structures link together to deal with enemy units, and they collect more supplies than the other two factions, but units are more expensive to produce.
China relies on stronger tank and artillery units, and can use hackers to claim buildings or produce additional funds. In addition, their troop transports can detect stealth units, while their tanks and infantry accrue horde bonuses when grouped together. However, their power plants can cause damage to surrounding units and buildings when destroyed, they maintain a weaker air force, and require large armies to make horde bonuses work effectively.
The Global Liberation Army relies on cheap units and terrorist-styled guerrilla combat to overcome opponents; several of its vehicle units can be upgraded by salvaging parts from defeated enemy vehicles (infantry can also claim salvage for funds), and specialised infantry units can set ambushes and inflict considerable damage. In addition, the GLA does not require power for base structures, and any buildings that are destroyed will be automatically rebuilt if the enemy fails to destroy the tunnel entrance that is left behind. However, the faction must use builder units to collect supplies, with several needed to keep funds steady, and cannot build air units; this is compensated by having more anti-air units than the other two factions.
Generals functions differently to other titles in the series, in that base construction relies on dedicated builder units rather than a central construction building, but with the added ability of being able to construct buildings anywhere on the map. Resources are restricted to supply docks that have a limited amount for collection, with each faction able to construct units or buildings that provide continual resources as long as they are not destroyed. In addition, players can also make use of "Generals Abilities" - a unique set of bonuses that can be purchased upon earning experience points during the game, which can confer additional abilities such as support powers (e.g. airstrikes), improvements to certain units, or access to additional units for construction.
Single-player
In single-player mode, players can tackle one of three campaigns, each dedicated to a faction and consisting of seven missions. While a training mission is provided to allow new players to become accustomed to the game, players can freely choose which campaign to tackle and at what difficulty, with each mission becoming moderately more difficult and featuring different scenarios to tackle.
Multiplayer
Games can be played both over the Internet or a local area network (LAN). It adopts a similar format to skirmish mode whereby the goal is to eliminate the other team. Games over the Internet can be completely random, in the form of a Quick Match. Players can also play in Custom Matches where the number of players, the map and rules are decided upon by the host.
The online feature originally worked via GameSpy servers. After the shutdown of GameSpy in 2014, these were no longer available.
The macOS version of the game released for the Mac App Store does not support multiplayer. Apple discontinued Game Center for online play with the release of macOS Sierra.
Soundtrack
Generals presents players with a separate musical score for each faction. The United States' theme music consists of grand, militaristic scores composed by Bill Brown and Mikael Sandgren. China's musical themes feature apocalyptic, orchestral scores combined with East Asian instrumentation. The GLA faction's theme soundtrack can be described as a combination of Middle Eastern and few South Asian sounds coupled with heavy metal music.
World Builder
Generals includes a map editor named World Builder for the PC edition only. The World Builder includes features such as:
A terraforming tool
An intelligent road system, able to detect when the player wants an intersection
A tool to scatter flora around the map
Waypoints and area triggers that the AI can use. Waypoints also determine starting points for the players on a skirmish map
A scripting system that was meant for the missions in the single-player campaign
Plot
Setting
Generals takes place in the near future, where the world's two superpowers, China and the United States, fight as loose allies against the Global Liberation Army (GLA), a terrorist organisation primarily based in the Middle East, North Africa and Central Asia. In chronological order, the campaign is played through the Chinese, GLA, and United States perspectives respectively.
China
A military parade in Beijing is attacked by GLA forces, culminating in the detonation of a stolen Chinese nuclear warhead and the beginning of the GLA's incursion inside China's borders. The Chinese mobilise to stall and contain the GLA, having to destroy the Three Gorges Dam as well as the Hong Kong Convention and Exhibition Center in the process. Now on the offensive, the Chinese launch into GLA strongholds, arriving at the terror cell's main headquarters in Dushanbe. Utilising nuclear weapons, the Chinese put an end to the GLA's offensive.
GLA
Despite losses to China, the GLA maintains its presence across Central Asia and the Middle East. In efforts to revive itself, the GLA raid UN convoys and incite riots in Astana. The United States enters the war, occupying GLA toxin depositories in the Aral Sea and a GLA renegade sides with the Chinese with the intention to destroy the GLA. The GLA retaliate by attacking the Baikonur Cosmodrome, and uses the platform to launch devastating toxin attacks at highly populated cities.
USA
The United States mobilises its forces to the Middle East, Hindu Kush and then Kazakhstan to finally put an end to the GLA. Despite losses incurred from GLA Anthrax attacks and ambushes, the USA are able to push the GLA back to their final stronghold in Akmola Region. With Chinese support, the USA destroys the last GLA stronghold, ending the GLA's reign of tyranny.
Reception
After its release, Generals received mostly positive reviews. Based on 34 reviews, Metacritic gives it a score of 84/100, which includes a score of 9.3/10 from IGN. Generals has received the E3 2002 Game Critics Awards Best Strategy Game award. GameSpot named Generals the best computer game of February 2003.
In the United Kingdom, it sold over 100,000 units during the first half of 2003. This made it the United Kingdom's second-best-selling computer game for the period, or seventh across all platforms. At the time, Kristan Reed of GamesIndustry.biz wrote that its performance proved "you can still have big hits on PC". Generals received a "Silver" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), indicating sales of at least 100,000 copies in the United Kingdom. The game's Deluxe release received another "Silver" award from ELSPA.
Ban in China
The Generals series is banned in mainland China. Throughout the Chinese campaign, the player is occasionally made to utilize heavy-handed tactics such as leveling the Hong Kong Convention and Exhibition Centre after it becomes a GLA base and destroying the Three Gorges Dam to release a flood on GLA forces. Chinese forces also liberally use nuclear weaponry in-game, albeit restricted to the lower tactical nuclear weapon yield range. Furthermore, in the introduction of the game, Tiananmen Square and much of Beijing is destroyed by a stolen nuclear warhead.
Ban in Germany
Initially, the game was released in Germany under its international title Command & Conquer: Generals. However, the Bundesprüfstelle für jugendgefährdende Medien (Federal Department for Media Harmful to Young People) placed the game onto the "List of Media Harmful to Young People" two months after the initial release, which, by law, forbids further public advertising and any sale to people under 18 years of age. The BPjM stated that the game allowed underage people to play out the war depicted in the game, and that the player was able to kill civilians. On these two grounds the BPjM put the game on the Index, because they believed it glorified war.
Therefore, sale to minors and marketing the original version of the game were prohibited throughout the Federal Republic of Germany.
Due to the ban, in mid-2003 EA released a German version with a localized title, Command & Conquer: Generäle, specifically for the German market, which did not incorporate real-world factions or any relation to terrorism. For example, the "terrorist" bomber unit was transformed into a rolling bomb, and all other infantry units were changed into "cyborgs" (e.g. the Red Guard became the Standard Cyborg) in appearance and unit responses, similar to earlier releases of the Command & Conquer franchise.
Sequel
In September 2003, an expansion pack called Generals – Zero Hour was released, which continues the story of Generals. In December 2011, a sequel, Command & Conquer: Generals 2, was announced, due to be released in 2013. Generals 2 was repurposed to a free-to-play game known as simply Command & Conquer. The new game would have started with the Generals franchise and may have expanded to the rest of the franchise post-release. The game's project was cancelled on October 29, 2013. Later in November, EA said that the game would still be developed by a new game studio, but no further news emerged and the project appears to have been abandoned.
References
2003 video games
Alternate history video games
Aspyr games
Generals
Interactive Achievement Award winners
MacOS games
Multiplayer and single-player video games
Real-time strategy video games
SAGE (game engine) games
Terrorism in fiction
Video games developed in the United States
Video games scored by Bill Brown
Video games set in China
Video games set in Hong Kong
Video games set in Iraq
Video games set in Kazakhstan
Video games set in Kyrgyzstan
Video games set in Tajikistan
Video games set in Turkey
Video games set in Yemen
Video games with expansion packs
Works banned in China
Windows games |
16871 | https://en.wikipedia.org/wiki/Kazaa | Kazaa | Kazaa Media Desktop (once stylized as "KaZaA", but later usually written "Kazaa") is a discontinued peer-to-peer file sharing application using the FastTrack protocol licensed by Joltid Ltd. and operated as Kazaa by Sharman Networks. Kazaa was subsequently under license as a legal music subscription service by Atrinsic, Inc. According to one of its creators, Jaan Tallinn, Kazaa is pronounced ka-ZAH (/kəˈzaː/).
Kazaa Media Desktop was commonly used to exchange MP3 music files and other file types, such as videos, applications, and documents over the Internet. The Kazaa Media Desktop client could be downloaded free of charge; however, it was bundled with adware and for a period there were "No spyware" warnings found on Kazaa's website. During the years of Kazaa's operation, Sharman Networks and its business partners and associates were the target of copyright-related lawsuits, related to the copyright of content distributed via Kazaa Media Desktop on the FastTrack protocol.
By August 2012, the Kazaa website was no longer active.
History
Kazaa and FastTrack were originally created and developed by Estonian programmers from BlueMoon Interactive including Jaan Tallinn and sold to Swedish entrepreneur Niklas Zennström and Danish programmer Janus Friis (who were later to create Skype and later still Joost and Rdio). Kazaa was introduced by the Dutch company Consumer Empowerment in March 2001, near the end of the first generation of P2P networks typified by the shutdown of Napster in July 2001. Skype itself was based on Kazaa's P2P backend, which allowed users to make a call by directly connecting them with each other.
Initially, some users of the Kazaa network were users of the Morpheus client program, formerly made available by MusicCity. Eventually, the official Kazaa client became more widespread. In February 2002, when Morpheus developers failed to pay license fees, Kazaa developers used an automatic update ability to shut out Morpheus clients by changing the protocol. Morpheus later became a client of the gnutella network.
Lawsuits
Consumer Empowerment was sued in the Netherlands in 2001 by the Dutch music publishing body, Buma/Stemra. The court ordered Kazaa's owners to take steps to prevent its users from violating copyrights or else pay a heavy fine. In October 2001 a lawsuit was filed against Consumer Empowerment by members of the music and motion picture industry in the USA. In response Consumer Empowerment sold the Kazaa application to Sharman Networks, headquartered in Australia and incorporated in Vanuatu. In late March 2002, a Dutch court of appeal reversed an earlier judgment and stated that Kazaa was not responsible for the actions of its users. Buma/Stemra lost its appeal before the Dutch Supreme Court in December 2003.
In 2003, Kazaa signed a deal with Altnet and Streamwaves to try to convert users to paying, legal customers. Searchers on Kazaa were offered a free 30-second sample of songs for which they were searching and directed to sign up for the full-featured Streamwaves service.
However, Kazaa's new owner, Sharman, was sued in Los Angeles by the major record labels and motion pictures studios and a class of music publishers. The other defendants in that case (Grokster and MusicCity, makers of the Morpheus file-sharing software) initially prevailed against the plaintiffs on summary judgment (Sharman joined the case too late to take advantage of that ruling). The summary judgment ruling was upheld by the Ninth Circuit Court of Appeals, but was unanimously reversed by the US Supreme Court in a decision titled MGM Studios, Inc. v. Grokster, Ltd.
Following that ruling in favor of the plaintiff labels and studios, Grokster almost immediately settled the case. Shortly thereafter, on 27 July 2006, it was announced that Sharman had also settled with the record industry and motion picture studios. As part of that settlement, the company agreed to pay $100 million in damages to the four major music companies—Universal Music, Sony BMG, EMI and Warner Music—and an undisclosed amount to the studios. Sharman also agreed to convert Kazaa into a legal music download service. Like the creators of similar products, Kazaa's owners have been taken to court by music publishing bodies to restrict its use in the sharing of copyrighted material.
While the U.S. action was still pending, the record industry commenced proceedings against Sharman on its home turf. In February 2004, the Australian Record Industry Association (ARIA) announced its own legal action against Kazaa, alleging massive copyright breaches. The trial began on 29 November 2004. On 6 February 2005, the homes of two Sharman Networks executives and the offices of Sharman Networks in Australia were raided under a court order by ARIA to gather evidence for the trial.
On 5 September 2005, the Federal Court of Australia issued a landmark ruling that Sharman, though not itself guilty of copyright infringement, had "authorized" Kazaa users illegally to swap copyrighted songs. The court ruled six defendants—including Kazaa's owners Sharman Networks, Sharman's Sydney-based boss Nikki Hemming and associate Kevin Bermeister—had knowingly allowed Kazaa users illegally to swap copyrighted songs. The company was ordered to modify the software within two months (a ruling enforceable only in Australia). Sharman and the other five parties faced paying millions of dollars in damages to the record labels that instigated the legal action.
On 5 December 2005, the Federal Court of Australia barred downloads of Kazaa in Australia after Sharman Networks failed to modify its software by the 5 December deadline. Users with an Australian IP address were greeted with the message "Important Notice: The download of the Kazaa Media Desktop by users in Australia is not permitted" when visiting the Kazaa website. Sharman planned to appeal against the Australian decision, but ultimately settled the case as part of its global settlement with the record labels and studios in the United States.
In yet another set of related cases, in September 2003, the Recording Industry Association of America (RIAA) filed suit in civil court against several private individuals who had shared large numbers of files with Kazaa; most of these suits were settled with monetary payments averaging $3,000. Sharman Networks responded with a lawsuit against the RIAA, alleging that the terms of use of the network were violated and that unauthorized client software (such as Kazaa Lite, see below) was used in the investigation to track down the individual file sharers. An effort to throw out this suit was denied in January 2004; however, that suit was also settled in 2006 (see above). In a later case in Duluth, Minnesota, the recording industry sued Jammie Thomas-Rasset, a 30-year-old single mother. On 5 October 2007, Thomas-Rasset was ordered to pay the six record companies (Sony BMG, Arista Records LLC, Interscope Records, UMG Recordings Inc., Capitol Records Inc. and Warner Bros. Records Inc.) $9,250 for each of the 24 songs they had focused on in this case. She was accused of sharing a total of 1,702 songs through her Kazaa account. Along with attorney fees, Thomas-Rasset may owe as much as half a million dollars. She testified that she did not have a Kazaa account, but her testimony was complicated by the fact that she had replaced her computer's hard drive after the alleged downloading took place, and did so later than she had originally stated in a pre-trial deposition.
Thomas-Rasset appealed the verdict and was given a new trial. In June 2009 that jury awarded the recording industry plaintiffs a judgment of $80,000 per song, or $1.92 million. This is less than half of the $150,000 amount authorized by statute.
The federal court found the award "monstrous and shocking" and reduced it to $54,000. The recording industry offered to accept a settlement of $25,000, with the money going to charities that support musicians. Apparently undaunted, Thomas-Rasset was able to obtain a third trial on the issue of damages. In November 2010 she was again ordered to pay for her violation, this time $62,500 per song, for a total of $1.5 million. At last word, her attorneys were examining a challenge to the constitutional validity of massive statutory damages, where actual damages would have been $24.
Bundled malware
In 2006 StopBadware.org identified Kazaa as a spyware application. They identified the following components:
Cydoor (spyware): Collects information on the PC's surfing habits and passes it on to Cydoor Desktop Media.
B3D (adware): An add-on which causes advertising popups if the PC accesses a website which triggers the B3D code.
Altnet (adware): A distribution network for paid "gold" files.
The Best Offers (adware): Tracks user's browsing habits and internet usage to display advertisements similar to their interests.
InstaFinder (hijacker): Redirects URL typing errors to InstaFinder's web page instead of the standard search page.
TopSearch (adware): Displays paid songs and media related to a Kazaa search.
RX Toolbar (spyware): The toolbar monitors all sites visited with Microsoft Internet Explorer and provides links to competitors' websites.
New.net (hijacker): A browser plugin that allowed users to access several of its own unofficial Top Level Domain names, e.g., .chat and .shop. The main purpose of this was to sell domain names such as www.record.shop which is actually www.record.shop.new.net (ICANN did not allow third-party registration of generic top level domains until 2012).
Transitional period
Kazaa's legal issues ended after a settlement of $100 million in reparations to the recording industry. Kazaa, including the domain name, was then sold off to Brilliant Digital Entertainment, Inc. Kazaa then operated as a monthly music subscription service allowing users to download unlimited songs, before finally ending the service in 2012. The Kazaa.com website is no longer accessible as of 2017, however Brilliant Digital Entertainment, Inc. continues to own the domain name.
Some users still use the old network on the unauthorized versions of Kazaa, either Kazaa Lite or Kazaa Resurrection, which is still a self-sustaining network where thousands of users still share unrestricted media. This fact was previously stated by Kazaa when they claimed their FastTrack network was not centralized (like the old Napster), but instead a link between millions of computers around the world.
However, in the wake of the bad publicity and lawsuits, the number of users on Kazaa Lite has dropped dramatically. They have gone from several millions of users at a given time to mere thousands.
Without further recourse, and until the lawsuit was settled, the RIAA actively sued thousands of people and college campuses across the U.S. for sharing copyrighted music over the network. Particularly, students were targeted and most were threatened with a penalty of $750 per song. Although the lawsuits were mainly in the U.S., other countries also began to follow suit. Beginning in 2008, however, RIAA announced an end to individual lawsuits.
While Napster lasted just three years, Kazaa survived much longer. However, the lawsuits eventually ended the company.
Variations
Kazaa Lite was an unauthorized modification of the Kazaa Media Desktop application which excluded adware and spyware and provided slightly extended functionality. It became available in April 2002. It was available free of charge, and as of mid-2005 was almost as widely used as the official Kazaa client itself. It connected to the same FastTrack network and thus allowed its users to exchange files with all Kazaa users; it was created by third-party programmers by modifying the binary of the original Kazaa application. Later versions of Kazaa Lite included K++, a memory patcher that removed search limit restrictions, and set one's "participation level" to the maximum of 1000. Sharman Networks considers Kazaa Lite to be a copyright violation.
After development of Kazaa Lite stopped, K-Lite v2.6, Kazaa Lite Resurrection and Kazaa Lite Tools appeared. Unlike Kazaa Lite, which is a modification of an old version of Kazaa, K-Lite v2.6 and later require the corresponding original KMD executable to run. K-Lite does not include any code by Sharman: instead, it runs the user's original Kazaa Media Desktop executable in an environment which removes the malware, spyware and adware and adds features.
In November 2004, the developers of K-Lite released K-Lite v2.7, which similarly requires the KMD 2.7 executable.
Other clean variants used an older core (2.02), so K-Lite had some features that the others lacked: multiple search tabs, a custom toolbar, autostart, a download accelerator, an optional splash screen, a preview option (to view files currently being downloaded), an IP blocker, magnet-link support, and ad blocking. Clients based on the 2.02 core delegate these functions to third-party programs.
Kazaa Lite Tools was a newer update of the original Kazaa Lite, with modifications to the included third-party programs and additional tools.
Kazaa Lite Resurrection (KLR) appeared almost immediately after Kazaa Lite development was stopped in August 2003. KLR was a copy of Kazaa Lite 2.3.3.
See also
μTorrent
WinMX
Bearshare
eMule
iMesh
LimeWire
Napster
The Pirate Bay
References
External links
"Malware prevalence in the KaZaA file-sharing network". Seungwon Shin, Jaeyeon Jung, and Hari Balakrishnan. 2006.
2001 software
Adware
Discontinued software
Estonian inventions
File sharing software
Internet services shut down by a legal challenge
Music retailers of the United States
Online music database clients
Online music stores of Australia
United States Internet case law
Windows file sharing software
Peer-to-peer software |
1802713 | https://en.wikipedia.org/wiki/Mel%20Hein | Mel Hein | Melvin Jack Hein (August 22, 1909 – January 31, 1992), sometimes known as "Old Indestructible", was an American football player and coach. In the era of one-platoon football, he played as a center (then a position on both offense and defense) and was inducted into the College Football Hall of Fame in 1954 and the Pro Football Hall of Fame in 1963 as part of the first class of inductees. He was also named to the National Football League (NFL) 50th, 75th, and 100th Anniversary All-Time Teams.
Hein played college football as a center for the Washington State Cougars from 1928 to 1930, leading the 1930 team to the Rose Bowl after an undefeated regular season. He received first-team All-Pacific Coast and All-American honors.
Hein next played fifteen seasons in the NFL as a center for the New York Giants from 1931 to 1945. He was selected as a first-team All-Pro for eight consecutive years from 1933 to 1940 and won the Joe F. Carr Trophy as the NFL's Most Valuable Player in 1938. He was the starting center on NFL championship teams in 1934 and 1938 and played in seven NFL championship games (1933–1935, 1938–1939, 1941, and 1944).
Hein also served as the head football coach at Union College from 1943 to 1946 and as an assistant coach for the Los Angeles Dons of the All-America Football Conference (AAFC) from 1947 to 1948, the New York Yankees of the AAFC in 1949, the Los Angeles Rams in 1950, and the USC Trojans from 1951 to 1965. He was also the supervisor of officials for the American Football League from 1966 to 1969 and for the American Football Conference from 1970 to 1974.
Early years
Hein was born in 1909 at Redding in Shasta County, California. His father, Herman Hein (1886-1940), was a California native of German and Dutch ancestry who worked as an electrician for a power house operator. His mother, Charlotte Hein (1887-1967), was a California native of English and German ancestry. As of 1910, the family was living at Round Mountain, about 30 miles northeast of Redding.
By 1920, the family was living in Glacier in Whatcom County, Washington, where Hein's father was working as a lineman on transmission lines. Hein had an older brother, Lloyd, and two younger brothers, Homer and Clayton. The family later moved to Fairhaven and Burlington, where Hein's father worked as an insurance agent and where Hein attended both Fairhaven and Burlington High Schools. He also played basketball as a center at Burlington High.
College career
In 1927, Hein enrolled at Washington State College in Pullman, where he joined the Sigma Nu fraternity, and played center for the Cougars from 1928 to 1930. With Hein as the starting center, the Cougars compiled a 10–2 record in 1929 and 9–1 in 1930. The 1930 team won the Pacific Coast Conference championship and was undefeated in the regular season, but fell to Alabama in the Rose Bowl. Hein played all sixty minutes of the Cougars' victories over California and USC on October 4 and 11.
At the end of his senior year, Hein was selected by the Associated Press and United Press as the first-team center on the All-Pacific Coast team. He was also selected by the Central Press as the first-team center, and by the All-America Board in a tie for the first-team center position, on the All-American team.
While at Washington State, Hein also played for three years (freshman, sophomore, and junior years) on the basketball team and for one year on the Cougars track team as a freshman.
Professional career
In 1931, Hein signed a contract with the New York Giants, married his college sweetheart, and packed all of their belongings into a 1929 Ford and drove from Pullman to New York. He played for 15 years as a center and a defensive lineman. Hein was a first-team All-Pro center eight straight years from 1933 to 1940. He was also selected as the NFL's most valuable player in 1938. He was the starting center on two NFL championship teams — in 1934 (NYG 30, Chicago 13) and again in 1938 (NYG 23, Green Bay 17). Hein was also a member of five Giants teams that lost NFL championship games — 1933, 1935, 1939, 1941, and 1944.
Hein had planned to retire after a dozen years in the NFL and become the head coach at Union College in Schenectady, New York. When Union's program went on hiatus due to World War II, Hein returned to the Giants on weekends for three more seasons and retired after the 1945 season.
Coaching and administrative career
Hein worked as a football coach and league administrator for more than 30 years. He began coaching in 1943 as the head football coach at Union College in Schenectady, New York. For the next three years, he held that position, though the 1943 and 1945 Union College teams had their seasons cancelled due to the disruption of losing many players to World War II. In 1944, the team compiled an 0–5 record, as Hein coached the team on Saturdays and played for the Giants on Sundays. In 1946, Hein continued as Union College's head coach after retiring from the Giants. He led the 1946 team to a 3–5 record.
In March 1947, Hein was hired as an assistant coach with the Los Angeles Dons of the All-America Football Conference (AAFC). He served initially under head coach Dudley DeGroot on the 1947 Dons team. However, on November 18, 1947, DeGroot was fired as head coach, and assistant coaches Hein and Ted Shipkey were appointed as co-coaches to lead the team for the final three games of the season. The 1947 Dons compiled a 5–6 record under DeGroot and a 2–1 record under Hein and Shipkey. Hein resumed his position as an assistant coach under Jimmy Phelan on the 1948 Dons team that again compiled a 7–7 record.
After two years with the Dons, Hein was hired in February 1949 as an assistant coach for the New York Yankees of the AAFC under head coach Red Strader. The 1949 Yankees compiled an 8–4 record and finished in second place in the AAFC. The Yankees' forward wall, which was coached by Hein, was rated as the toughest in the AAFC.
Hein returned to Los Angeles in 1950 as the line coach for the Los Angeles Rams. Under head coach Joe Stydahar, the 1950 Rams won the NFL National Conference championship with a 9–3 record but lost to the Cleveland Browns in the 1950 NFL Championship Game.
Hein left the Rams in February 1951 to join the USC Trojans football team as its line coach under head coach Jess Hill. Hein remained with the Trojans for 15 years through the 1965 season. During his tenure with the program, the Trojans won a national championship (1962) and four conference championships (1952, 1959 [co-championship], 1962, and 1964 [co-championship]).
In June 1966, Hein was hired by commissioner Al Davis as the supervisor of officials for the American Football League. He remained in that position from 1966 to 1969 and continued thereafter as the supervisor of officials for the American Football Conference from 1970 to 1974. He retired in May 1974 after more than 45 years in college and professional football.
Honors
Hein received numerous honors for his accomplishments as a football player. His honors include the following:
In 1954, Hein was inducted into the National Football Foundation's Hall of Fame (later renamed the College Football Hall of Fame) as part of the fourth class of inductees.
In 1960, he was inducted into the Helms Athletic Foundation's Football Hall of Fame.
In 1961, he was inducted into the Washington Sports Hall of Fame. That same year, he also became the first athlete to receive Washington State's Distinguished Alumnus Award.
In 1963, he was one of the 17 players, coaches, and founders who were inducted into the Pro Football Hall of Fame as part of the charter class.
In 1969, as part of the NFL's 50th anniversary, the Pro Football Hall of Fame selected all-decade teams for each of the league's first five decades. Hein was selected as a center on the NFL 1930s All-Decade Team. He was also named to the NFL 50th Anniversary All-Time Team.
Also in 1969, he was selected by the Football Writers Association of America as the center on the 11-member modern all-time college football team.
In 1979, he was inducted as a charter member into the Washington State University Athletic Hall of Fame.
In 1999, he was named as one of two centers on the NFL 75th Anniversary All-Time Team.
Also in 1999, he was one of three centers named to the Walter Camp Football Foundation's All-Century Team for college players.
In 1999, he was also ranked 74th on The Sporting News' list of the 100 Greatest Football Players.
In 2010, the NFL Network ranked Hein 96th on its list of the 100 greatest players of all time.
Hein's jersey number 7 was retired by both the Washington State Cougars and New York Giants.
Family and later years
Hein was married in August 1931 to Florence Emma Porter of Pullman, Washington. They had two children, Sharen Lynn, born c. 1939, and Mel Hein, Jr. (1941-2020). Mel, Jr., once held the United States indoor record in the pole vault in the 1960s.
In his later years, Hein lived in San Clemente, California. By 1991, Hein was suffering from stomach cancer, and his weight dropped from 225 to 130 pounds. Hein died of stomach cancer in 1992 at age 82 at his home in San Clemente.
References
Further reading
(Mel Hein autobiographical piece)
External links
1909 births
1992 deaths
American football centers
College Football Hall of Fame inductees
National Football League players with retired numbers
New York Giants players
Pro Football Hall of Fame inductees
Union Dutchmen football coaches
USC Trojans football coaches
Washington State Cougars football players
People from Burlington, Washington
People from Whatcom County, Washington
People from Redding, California
Players of American football from Washington (state)
Deaths from stomach cancer
Shasta County Sports Hall of Fame inductees
National Football League Most Valuable Player Award winners |
8105109 | https://en.wikipedia.org/wiki/Common%20Vulnerability%20Scoring%20System | Common Vulnerability Scoring System | The Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of computer system security vulnerabilities. CVSS attempts to assign severity scores to vulnerabilities, allowing responders to prioritize responses and resources according to threat. Scores are calculated based on a formula that depends on several metrics that approximate ease and impact of an exploit. Scores range from 0 to 10, with 10 being the most severe. While many utilize only the CVSS Base score for determining severity, temporal and environmental scores also exist, to factor in availability of mitigations and how widespread vulnerable systems are within an organization, respectively.
The current version of CVSS (CVSSv3.1) was released in June 2019.
History
Research by the National Infrastructure Advisory Council (NIAC) in 2003/2004 led to the launch of CVSS version 1 (CVSSv1) in February 2005, with the goal of providing "open and universally standard severity ratings of software vulnerabilities". This initial draft had not been subject to peer review or review by other organizations. In April 2005, NIAC selected the Forum of Incident Response and Security Teams (FIRST) to become the custodian of CVSS for future development.
Feedback from vendors utilizing CVSSv1 in production suggested there were "significant issues with the initial draft of CVSS". Work on CVSS version 2 (CVSSv2) began in April 2005 with the final specification being launched in June 2007.
Further feedback resulted in work beginning on CVSS version 3 in 2012, ending with CVSSv3.0 being released in June 2015.
Terminology
The CVSS assessment measures three areas of concern:
Base Metrics for qualities intrinsic to a vulnerability
Temporal Metrics for characteristics that evolve over the lifetime of vulnerability
Environmental Metrics for vulnerabilities that depend on a particular implementation or environment
A numerical score is generated for each of these metric groups. A vector string (or simply "vector" in CVSSv2) represents the values of all the metrics as a block of text.
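To make the vector-string format concrete, the following is a minimal Python sketch (the helper name is illustrative, not part of the standard) that splits a CVSSv2 base vector into its metric/value pairs:

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS vector string like 'AV:N/AC:L/Au:N/C:P/I:P/A:C'
    into a {metric: value} mapping."""
    return dict(part.split(":", 1) for part in vector.split("/"))

metrics = parse_cvss_vector("AV:N/AC:L/Au:N/C:P/I:P/A:C")
print(metrics)  # {'AV': 'N', 'AC': 'L', 'Au': 'N', 'C': 'P', 'I': 'P', 'A': 'C'}
```

The same slash-separated metric:value layout is used for temporal and environmental vectors, so the parser applies unchanged to longer vectors.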
Version 2
Complete documentation for CVSSv2 is available from FIRST. A summary is provided below.
Base metrics
Access Vector
The access vector (AV) shows how a vulnerability may be exploited.
Access Complexity
The access complexity (AC) metric describes how easy or difficult it is to exploit the discovered vulnerability.
Authentication
The authentication (Au) metric describes the number of times that an attacker must authenticate to a target to exploit it. It does not include (for example) authentication to a network in order to gain access. For locally exploitable vulnerabilities, this value should only be set to Single or Multiple if further authentication is required after initial access.
Impact metrics
Confidentiality
The confidentiality (C) metric describes the impact on the confidentiality of data processed by the system.
Integrity
The Integrity (I) metric describes the impact on the integrity of the exploited system.
Availability
The availability (A) metric describes the impact on the availability of the target system. Attacks that consume network bandwidth, processor cycles, memory or any other resources affect the availability of a system.
Calculations
These six metrics are used to calculate the exploitability and impact sub-scores of the vulnerability. These sub-scores are used to calculate the overall base score.
The metrics are concatenated to produce the CVSS Vector for the vulnerability.
Example
A buffer overflow vulnerability affects web server software that allows a remote user to gain partial control of the system, including the ability to cause it to shut down:
This would give an exploitability sub-score of 10, and an impact sub-score of 8.5, giving an overall base score of 9.0.
The vector for the base score in this case would be AV:N/AC:L/Au:N/C:P/I:P/A:C. The score and vector are normally presented together to allow the recipient to fully understand the nature of the vulnerability and to calculate their own environmental score if necessary.
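The arithmetic behind this example can be reproduced directly. The following Python sketch uses the metric weights and base equation published in the CVSSv2 specification; the function name and return structure are illustrative choices, not part of the standard:

```python
# Metric weights as published in the CVSSv2 specification
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}     # Local / Adjacent / Network
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}  # High / Medium / Low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}    # Multiple / Single / None
IMPACT = {"N": 0.0, "P": 0.275, "C": 0.660}            # None / Partial / Complete (C, I, A)

def base_score(av, ac, au, c, i, a):
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    f_impact = 0 if impact == 0 else 1.176  # f(Impact) factor from the specification
    score = (0.6 * impact + 0.4 * exploitability - 1.5) * f_impact
    return round(score, 1), round(exploitability, 1), round(impact, 1)

# AV:N/AC:L/Au:N/C:P/I:P/A:C -- the buffer-overflow example above
score, exploit_sub, impact_sub = base_score("N", "L", "N", "P", "P", "C")
print(score, exploit_sub, impact_sub)  # 9.0 10.0 8.5
```

The sub-scores are rounded to one decimal place for display, matching the 10 and 8.5 quoted above.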
Temporal metrics
The value of temporal metrics change over the lifetime of the vulnerability, as exploits are developed, disclosed and automated and as mitigations and fixes are made available.
Exploitability
The exploitability (E) metric describes the current state of exploitation techniques or automated exploitation code.
Remediation Level
The remediation level (RL) of a vulnerability allows the temporal score of a vulnerability to decrease as mitigations and official fixes are made available.
Report Confidence
The report confidence (RC) of a vulnerability measures the level of confidence in the existence of the vulnerability and also the credibility of the technical details of the vulnerability.
Calculations
These three metrics are used in conjunction with the base score that has already been calculated to produce the temporal score for the vulnerability with its associated vector.
The formula used to calculate the temporal score is:

TemporalScore = round_to_1_decimal(BaseScore × Exploitability × RemediationLevel × ReportConfidence)
Example
To continue with the example above, if the vendor was first informed of the vulnerability by a posting of proof-of-concept code to a mailing list, the initial temporal score would be calculated using the values shown below:
This would give a temporal score of 7.3, with a temporal vector of E:P/RL:U/RC:UC (or a full vector of AV:N/AC:L/Au:N/C:P/I:P/A:C/E:P/RL:U/RC:UC).
If the vendor then confirms the vulnerability, the score rises to 8.1, with a temporal vector of E:P/RL:U/RC:C
A temporary fix from the vendor would reduce the score back to 7.3 (E:P/RL:T/RC:C), while an official fix would reduce it further to 7.0 (E:P/RL:O/RC:C). As it is not possible to be confident that every affected system has been fixed or patched, the temporal score cannot reduce below a certain level based on the vendor's actions, and may increase if an automated exploit for the vulnerability is developed.
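The progression of temporal scores above can be sketched in Python using the temporal weights published in the CVSSv2 specification; the abbreviations (POC, TF, OF, etc.) follow the spec's vector values, while the function itself is illustrative:

```python
# Temporal metric weights as published in the CVSSv2 specification
EXPLOITABILITY = {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0, "ND": 1.0}
REMEDIATION_LEVEL = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0}
REPORT_CONFIDENCE = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}

def temporal_score(base, e, rl, rc):
    return round(base * EXPLOITABILITY[e] * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc], 1)

base = 9.0  # base score from the running example
print(temporal_score(base, "POC", "U", "UC"))  # 7.3 -- proof of concept, no fix, unconfirmed
print(temporal_score(base, "POC", "U", "C"))   # 8.1 -- vendor confirms the vulnerability
print(temporal_score(base, "POC", "TF", "C"))  # 7.3 -- temporary fix available
print(temporal_score(base, "POC", "OF", "C"))  # 7.0 -- official fix available
```

Because the three temporal multipliers never exceed 1.0, the temporal score can only lower (or equal) the base score, which is why an official fix bottoms out at 7.0 rather than 0.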
Environmental metrics
The environmental metrics use the base and current temporal score to assess the severity of a vulnerability in the context of the way that the vulnerable product or software is deployed. This measure is calculated subjectively, typically by affected parties.
Collateral Damage Potential
The collateral damage potential (CDP) metric measures the potential loss or impact on either physical assets such as equipment (and lives), or the financial impact upon the affected organisation if the vulnerability is exploited.
Target Distribution
The target distribution (TD) metric measures the proportion of vulnerable systems in the environment.
Impact Subscore Modifier
Three further metrics assess the specific security requirements for confidentiality (CR), integrity (IR) and availability (AR), allowing the environmental score to be fine-tuned according to the users' environment.
Calculations
The five environmental metrics are used in conjunction with the previously assessed base and temporal metrics to calculate the environmental score and to produce the associated environmental vector.
Example
If the aforementioned vulnerable web server were used by a bank to provide online banking services, and a temporary fix was available from the vendor, then the environmental score could be assessed as:
This would give an environmental score of 8.2, and an environmental vector of CDP:MH/TD:H/CR:H/IR:H/AR:L. This score is within the range 7.0-10.0, and therefore constitutes a critical vulnerability in the context of the affected bank's business.
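The environmental calculation can be sketched end-to-end in Python using the weights and equations from the CVSSv2 specification; everything apart from those published weights and formulas (the function and variable names) is an illustrative choice:

```python
# Weights as published in the CVSSv2 specification
AV = {"L": 0.395, "A": 0.646, "N": 1.0}
AC = {"H": 0.35, "M": 0.61, "L": 0.71}
AU = {"M": 0.45, "S": 0.56, "N": 0.704}
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}                    # C, I and A impacts
E = {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0}            # Exploitability
RL = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0}          # Remediation Level
RC = {"UC": 0.90, "UR": 0.95, "C": 1.0}                     # Report Confidence
CDP = {"N": 0.0, "L": 0.1, "LM": 0.3, "MH": 0.4, "H": 0.5}  # Collateral Damage Potential
TD = {"N": 0.0, "L": 0.25, "M": 0.75, "H": 1.0}             # Target Distribution
REQ = {"L": 0.5, "M": 1.0, "H": 1.51}                       # CR / IR / AR requirements

def env_score(av, ac, au, c, i, a, e, rl, rc, cdp, td, cr, ir, ar):
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    # Impact is recomputed with the security requirements, capped at 10
    adj_impact = min(10, 10.41 * (1 - (1 - CIA[c] * REQ[cr])
                                    * (1 - CIA[i] * REQ[ir])
                                    * (1 - CIA[a] * REQ[ar])))
    f_impact = 0 if adj_impact == 0 else 1.176
    adj_base = round((0.6 * adj_impact + 0.4 * exploitability - 1.5) * f_impact, 1)
    adj_temporal = round(adj_base * E[e] * RL[rl] * RC[rc], 1)
    return round((adj_temporal + (10 - adj_temporal) * CDP[cdp]) * TD[td], 1)

# Running example: AV:N/AC:L/Au:N/C:P/I:P/A:C with a confirmed temporary fix
# (E:P/RL:T/RC:C), assessed as CDP:MH/TD:H/CR:H/IR:H/AR:L
print(env_score("N", "L", "N", "P", "P", "C",
                "POC", "TF", "C", "MH", "H", "H", "H", "L"))  # 8.2
```

Note how the high confidentiality and integrity requirements raise the adjusted impact above the original 8.5-based figure, while the collateral-damage term pushes the final score up toward 10 in proportion to CDP.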
Criticism of Version 2
Several vendors and organizations expressed dissatisfaction with CVSSv2.
Risk Based Security, which manages the Open Sourced Vulnerability Database, and the Open Security Foundation jointly published a public letter to FIRST regarding the shortcomings and failures of CVSSv2. The authors cited a lack of granularity in several metrics which results in CVSS vectors and scores that do not properly distinguish vulnerabilities of different type and risk profiles. The CVSS scoring system was also noted as requiring too much knowledge of the exact impact of the vulnerability.
Oracle introduced the new metric value of "Partial+" for Confidentiality, Integrity, and Availability, to fill perceived gaps in the description between Partial and Complete in the official CVSS specifications.
Version 3
To address some of these criticisms, development of CVSS version 3 was started in 2012. The final specification was named CVSS v3.0 and released in June 2015. In addition to a Specification Document, a User Guide and Examples document were also released.
Several metrics were changed, added, and removed. The numerical formulas were updated to incorporate the new metrics while retaining the existing scoring range of 0-10. Textual severity ratings of None (0), Low (0.1-3.9), Medium (4.0-6.9), High (7.0-8.9), and Critical (9.0-10.0) were defined, similar to the categories NVD defined for CVSS v2 that were not part of that standard.
Changes from Version 2
Base metrics
In the Base vector, the new metrics User Interaction (UI) and Privileges Required (PR) were added to help distinguish vulnerabilities that required user interaction or user or administrator privileges to be exploited. Previously, these concepts were part of the Access Vector metric of CVSSv2. The Base vector also saw the introduction of the new Scope (S) metric, which was designed to make clear which vulnerabilities may be exploited and then used to attack other parts of a system or network. These new metrics allow the Base vector to more clearly express the type of vulnerability being evaluated.
The Confidentiality, Integrity and Availability (C, I, A) metrics were updated to have scores consisting of None, Low, or High, rather than the None, Partial, Complete of CVSSv2. This allows more flexibility in determining the impact of a vulnerability on CIA metrics.
Access Complexity was renamed Attack Complexity (AC) to make clear that access privileges were moved to a separate metric. This metric now describes how repeatable exploitation of this vulnerability may be; AC is High if the attacker requires perfect timing or other circumstances (other than user interaction, which is also a separate metric) which may not be easily duplicated on future attempts.
Attack Vector (AV) saw the inclusion of a new metric value of Physical (P), to describe vulnerabilities that require physical access to the device or system to perform.
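Taken together, these v3 base metrics feed the base-score equations published in the CVSS v3.1 specification. The following Python sketch covers only the common Scope: Unchanged case (Scope: Changed uses different PR weights and a different impact curve); the function names are illustrative:

```python
# CVSS v3.1 base-metric weights for the Scope: Unchanged case
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}  # Network / Adjacent / Local / Physical
AC = {"L": 0.77, "H": 0.44}                       # Low / High
PR = {"N": 0.85, "L": 0.62, "H": 0.27}            # None / Low / High (Scope: Unchanged)
UI = {"N": 0.85, "R": 0.62}                       # None / Required
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}            # None / Low / High

def roundup(x):
    """CVSS v3.1 'Roundup': the smallest number, to one decimal place, >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score_v31(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss                            # Scope: Unchanged impact equation
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# A network-exploitable, low-complexity flaw needing no privileges or user
# interaction, with high C/I/A impact: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score_v31("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

The explicit integer-based Roundup function was itself a v3.1 clarification, introduced to avoid floating-point discrepancies between different scoring implementations.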
Temporal metrics
The Temporal metrics were essentially unchanged from CVSSv2.
Environmental metrics
The Environmental metrics of CVSSv2 were completely removed and replaced with essentially a second Base score, known as the Modified vector. The Modified Base is intended to reflect differences within an organization or company compared to the world as a whole. New metrics to capture the importance of Confidentiality, Integrity and Availability to a specific environment were added.
Criticism of Version 3
In a blog post in September 2015, the CERT Coordination Center discussed limitations of CVSSv2 and CVSSv3.0 for use in scoring vulnerabilities in emerging technology systems such as the Internet of Things.
Version 3.1
A minor update to CVSS was released on June 17, 2019. The goal of CVSS version 3.1 was to clarify and improve upon the existing CVSS version 3.0 standard without introducing new metrics or metric values, allowing for frictionless adoption of the new standard by both scoring providers and scoring consumers alike. Usability was a prime consideration when making improvements to the CVSS standard. Several of the changes made in CVSS v3.1 improve the clarity of concepts introduced in CVSS v3.0, and thereby improve the overall ease of use of the standard.
FIRST has used input from industry subject-matter experts to continue to enhance and refine CVSS, making it increasingly applicable to the vulnerabilities, products, and platforms developed over the past 15 years and beyond. The primary goal of CVSS is to provide a deterministic and repeatable way to score the severity of a vulnerability across many different constituencies, allowing consumers of CVSS to use this score as input to a larger decision matrix of risk, remediation, and mitigation specific to their particular environment and risk tolerance.
Updates to the CVSS version 3.1 specification include clarification of the definitions and explanation of existing base metrics such as Attack Vector, Privileges Required, Scope, and Security Requirements. A new standard method of extending CVSS, called the CVSS Extensions Framework, was also defined, allowing a scoring provider to include additional metrics and metric groups while retaining the official Base, Temporal, and Environmental Metrics. The additional metrics allow industry sectors such as privacy, safety, automotive, healthcare, etc., to score factors that are outside the core CVSS standard. Finally, the CVSS Glossary of Terms has been expanded and refined to cover all terms used throughout the CVSS version 3.1 documentation.
Adoption
Versions of CVSS have been adopted as the primary method for quantifying the severity of vulnerabilities by a wide range of organizations and companies, including:
The National Vulnerability Database (NVD)
The Open Source Vulnerability Database (OSVDB)
CERT Coordination Center, which in particular makes use of CVSSv2 Base, Temporal and Environmental metrics
See also
Common Weakness Enumeration (CWE)
Common Vulnerabilities and Exposures (CVE)
Common Attack Pattern Enumeration and Classification (CAPEC)
References
External links
The Forum of Incident Response and Security Teams (FIRST) CVSS site
National Vulnerability Database (NVD) CVSS site
Common Vulnerability Scoring System v2 Calculator
Computer security standards
Computer network security |
28012637 | https://en.wikipedia.org/wiki/Laodice%20%28daughter%20of%20Priam%29 | Laodice (daughter of Priam) | In Greek mythology, Laodice ("people-justice") was the daughter of Priam of Troy and Hecuba. She was described as the most beautiful of Priam's daughters. The Iliad mentions Laodice as the wife of Helicaon, son of Antenor, although according to Hyginus she was the wife of Telephus, king of Mysia and son of Heracles.
Mythology
Before the outbreak of the Trojan War, Laodice fell in love with Acamas, son of Theseus, who had come to Troy to try to recover Helen through diplomatic means. She became pregnant and bore him the son Munitus. Munitus was given to Acamas' grandmother Aethra, who was then a slave to Helen. After the war had ended, Acamas took his son with him. Much later, Munitus was bitten by a snake while hunting with his father in Thrace and died.
According to the Bibliotheca and several other sources, in the night of the fall of Troy Laodice feared she might become one of the captive women and prayed to the gods. She was swallowed up in a chasm that opened on the earth.
Pausanias, however, mentions her among the captive Trojans painted in the Lesche of Delphi. He assumes that she was subsequently set free because no poet mentions her as a captive, and he further surmises that the Greeks would have done her no harm, since she was married to a son of Antenor, who was a guest-friend of the Greeks Menelaus and Odysseus.
Notes
References
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Parthenius, Love Romances translated by Sir Stephen Gaselee (1882-1943), S. Loeb Classical Library Volume 69. Cambridge, MA. Harvard University Press. 1916. Online version at the Topos Text Project.
Parthenius, Erotici Scriptores Graeci, Vol. 1. Rudolf Hercher. in aedibus B. G. Teubneri. Leipzig. 1858. Greek text available at the Perseus Digital Library.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Pseudo-Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Greek text available from the same website.
Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com
Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library.
Tryphiodorus, Capture of Troy translated by Mair, A. W. Loeb Classical Library Volume 219. London: William Heinemann Ltd, 1928. Online version at theoi.com
Tryphiodorus, Capture of Troy with an English Translation by A.W. Mair. London, William Heinemann, Ltd.; New York: G.P. Putnam's Sons. 1928. Greek text available at the Perseus Digital Library.
Princesses in Greek mythology
Queens in Greek mythology
Children of Priam
Women of the Trojan war
Trojans
Women in Greek mythology
Characters in Greek mythology
Prisoners of war in popular culture |
18073194 | https://en.wikipedia.org/wiki/Lynis | Lynis | Lynis is an extensible security audit tool for computer systems running Linux, FreeBSD, macOS, OpenBSD, Solaris, and other Unix derivatives. It assists system administrators and security professionals with scanning a system and its security defenses, with the final goal being system hardening.
Software
The tool was created by Michael Boelen, the original author of rkhunter, along with several contributors and translators. Lynis is available under the GPLv3 license.
The software determines various system information, such as the specific OS type, kernel parameters, authentication and accounting mechanism, installed packages, installed services, network configuration, logging and monitoring (e.g. syslog-ng), cryptography (e.g. SSL/TLS certificates) and installed malware scanners (e.g. ClamAV or rkhunter). Additionally, it will check the system for configuration errors and security issues. By request of the auditor, those checks may conform to international standards such as ISO 27001, PCI-DSS 3.2 and HIPAA.
The software also helps with fully automated or semi-automatic auditing, software patch management, evaluation of server hardening guidelines and vulnerability/malware scanning of Unix-based systems. It can be locally installed from most system repositories, or directly started from disk, including USB stick, CD or DVD.
Audience
The intended audience is auditors, security specialists, penetration testers, and sometimes system or network administrators. Members of a first line of defense within a company or larger organization typically employ such audit tools. According to the official documentation, there is also a Lynis Enterprise version, available with support for more than 10 computer systems, providing malware scanning, intrusion detection and additional guidance for auditors.
Reception
In 2016, Lynis won an InfoWorld Bossie Award.
See also
chkrootkit
Host-based intrusion detection system comparison
List of free and open-source software packages
Kali Linux
References
External links
Lynis on free(code)
Free security software
Unix security-related software
Unix package management-related software
MacOS security software |
46476054 | https://en.wikipedia.org/wiki/Hoopla%20Software | Hoopla Software | Hoopla Software, Inc. is a privately held SaaS company founded in 2010, headquartered in Silicon Valley, San Jose, California. The company is backed by Trinity Ventures, Safeguard Scientifics, Illuminate Ventures and Salesforce. The company currently has over 500 customers and between 11 and 50 employees. The Hoopla software uses gamification technology to help motivate sales teams.
History
The company was founded in Philadelphia in 2010. Michael Smalls, the founder and CEO of the SaaS company, moved headquarters to San Jose in 2012. The company received $2.8M of funding in its Series A round, and in February 2014, Hoopla raised an $8M Series B round. The total funding of $10.8M was received from Safeguard Scientifics, Salesforce Ventures, Illuminate Ventures and Trinity Ventures.
Product
The initial product launch included a cloud-based application that integrates with Salesforce CRM and was available on Hoopla TV. With the 2014 release, Hoopla launched a mobile version.
The platform is built using modern game mechanics (gamification) and motivation psychology. A team can create customizable leaderboards and challenges.
References
External links
Official website
Software companies based in the San Francisco Bay Area
Companies based in San Jose, California
Software companies established in 2010
2010 establishments in California
Lists of software
Web applications
Software companies of the United States |
5019041 | https://en.wikipedia.org/wiki/Legalization%20%28international%20law%29 | Legalization (international law) | In international law, legalization is the process of authenticating or certifying a legal document so a foreign country's legal system will recognize it as having full legal effect.
Rationale and procedure
Authentication by legalization is widely used in international commerce and civil law matters in those jurisdictions where the simpler apostille system has not been adopted (e.g., Canada, China). The procedure of legalization (also known as attestation or authentication, though these are essentially the same process) can be simply split into three stages, though each stage can vary in the number of steps required:
Verification by the government of the issuing country
Verification by the embassy of the destination country within the issuing country
A final verification of all steps within the destination country itself
Broadly speaking, the aim of any international document authentication process is to solve a fundamentally practical problem: how can civil and judicial officials reliably verify the authenticity of a document that was issued abroad?
Legalization attempts to solve this dilemma by creating a chain of authentications, each by a progressively higher government authority so as to ultimately narrow the point of contact between countries to a single designated official (usually in the national department responsible for foreign affairs). Therefore, by authenticating the signature and seal of this final official, a foreign jurisdiction can authenticate the entire chain of verifications back to the entity responsible for issuing the original document without scrutinizing each "link" individually.
For example, since Canada is not part of the apostille convention, an individual who wishes to use a Canadian document in Indonesia must first authenticate the document with the appropriate provincial authority or Global Affairs Canada, then send the authenticated document to be legalized at the appropriate Indonesian diplomatic post in Canada.
This is not always the case vice versa, though. An individual who wishes to use an Indonesian document in Canada does not need to do any such thing, as Canada does not require authentication of foreign documents for their acceptance in the country.
Apostille Convention
The Hague Convention Abolishing the Requirement for Legalisation for Foreign Public Documents has replaced legalization as the default procedure with a system of apostille. It is available if both the origin country of the document and the destination country are party to the treaty. The apostille is a stamp on which standard validating information is supplied. It is available (dependent on the document) from the competent authority of the origin country, and often the document has to be notarized before it can be apostilled. In the United States, the secretaries of state for the various states are the competent authorities who can apply an apostille. A list of the competent authorities designated by each country that has joined the treaty is maintained by the Hague Conference on Private International Law.
References
External links
Hague Apostille Countries
Legalization of documents in China
Legalization of documents in Thailand
Legalization of documents in United States
Embassy and Consulate Legalization
Legalisation of UK documents
International law legal terminology
Conflict of laws
Notary |
43866229 | https://en.wikipedia.org/wiki/David%20J.%20Malan | David J. Malan | David J. Malan is an American computer scientist and professor. Malan is the Gordon McKay Professor of the Practice of Computer Science at Harvard University and is best known for teaching Computer Science 50 (known as CS50), which is the largest course at Harvard and the largest Massive Open Online Course (MOOC) at edX, with lectures viewed by over a million people on the edX platform up to 2017.
Malan is a member of faculty at the Harvard John A. Paulson School of Engineering and Applied Sciences, where his research interests include cybersecurity, digital forensics, botnets, computer science education, distance learning, collaborative learning, and computer-assisted instruction.
Education
Malan enrolled at Harvard College, initially studying government, and took CS50 in the fall of 1996, which was taught by Brian Kernighan at the time. Inspired by Kernighan, Malan began his education in computer science, graduating with a Bachelor of Science degree in Computer Science in 1999. After a period working outside of academia, he returned to postgraduate studies to complete a Master of Science degree in 2004, followed by a PhD in 2007 for research into cybersecurity and computer forensics, supervised by Michael D. Smith.
Teaching
Malan is known for teaching CS50, an introductory course in computer science for majors and non-majors that aims to develop computational thinking skills using tools like Scratch, C, Python, SQL, and JavaScript. The course has 800 freshman and sophomore students enrolled at Harvard College each year, making it the largest course there. CS50 is available on edX as CS50x, whose lectures have drawn over a million views. His edX courses are known for being taken by people of all ages. All of his courses are freely available and licensed for re-use with attribution using OpenCourseWare, for example at cs50.tv. CS50 also exists as CS50 AP (Advanced Placement), an adaptation for high schools that satisfies the College Board's AP Computer Science Principles curriculum framework.
Besides CS50, Malan also teaches at Harvard Extension School and Harvard Summer School. Prior to teaching at Harvard, Malan taught mathematics and computer science at Franklin High School and Tufts University.
Career and research
Malan worked for Mindset Media, LLC from 2008 to 2011 as Chief Information Officer (CIO), where he was responsible for the advertising network's scalability, security, and capacity planning. He designed infrastructure for the collection of massive datasets, capable of handling 500 million HTTP requests per day with peaks of 10,000 per second. In 2011, Mindset Media was acquired by Meebo, Inc.
From 2001 to 2002, he worked for AirClic as an Engineering Manager.
Malan was also the founder and chairman of Diskaster, a data recovery firm that offered professional recovery of data from hard drives and memory cards, as well as forensic investigations for civil matters.
During his undergraduate studies, Malan worked part-time for the District Attorney's Office in Middlesex County, Virginia as a forensic investigator, after which he founded two startups of his own. Since 2003, he has volunteered on the side as an emergency medical technician (EMT-B) for MIT Emergency Medical Services (EMS), and he continues to volunteer as an EMT-B for the American Red Cross.
Malan is also an active member of the SIGCSE community, a Special Interest Group (SIG) concerned with Computer Science Education (CSE) organised by the Association for Computing Machinery (ACM).
References
Computer scientists
Computer science educators
John A. Paulson School of Engineering and Applied Sciences faculty
Chief information officers
Living people
Harvard School of Engineering and Applied Sciences alumni
Year of birth missing (living people)
Harvard Extension School faculty
Date of birth missing (living people) |
23716350 | https://en.wikipedia.org/wiki/Barnacle%20%28slang%29 | Barnacle (slang) | The word barnacle is a slang term used in electrical engineering to indicate a change made to a product on the manufacturing floor that was not part of the original product design. A barnacle is typically used to correct a defect in the product or as a way of enhancing the product with new functionality. A barnacle is normally a quick fix that is used until the product design can be redone incorporating the barnacle into the actual product so that when manufactured, the barnacle step in manufacturing is no longer required.
A barnacle may also be added in the field in order to correct a design or manufacturing defect.
Origin
The term appears to have originated from the crustacean barnacle, an animal that attaches itself to rocks, docks, ships, whales, and other objects where it grows. A barnacle in electronics is something added to the manufactured product. Typically a barnacle on a circuit board is very noticeable, much like the crustacean variety on a rock in the sea.
Use in software
While the term was originally used with electronic hardware, it has also migrated into the software industry, where it is used to describe software that is added to a system. The connotation in the software industry is that a software barnacle is code added as an expedient without regard to the original design intent. A software barnacle may also refer to malware or spyware that has been inserted into a computing system illegally.
Examples
On printed circuit boards, a barnacle may be as simple as cutting a trace, soldering a wire in order to connect two points on the circuit board, or adding a component such as a resistor or capacitor. A barnacle may also be a complex subassembly or daughterboard.
Barnacles in hardware assemblies allow an engineer to repair design errors, experiment with design changes or enhancements, or otherwise alter circuit behaviour.
Although usually a barnacle-implemented change is incorporated into a new fabrication cycle circuit before production, occasionally there are final-assembly barnacles. In such cases it is determined to be less expensive to add a barnacle to a final, shipping product rather than re-spin the circuit to ship without these interventions left in place.
Use
The normal development cycle for electronic hardware contains two main phases. The first phase is the development and prototype phase in which the hardware is first designed (and often simulated using a computer program such as PSpice) and the design manufactured in low quantity as prototypes for testing. The second phase is the updating of design documents based on the testing experience and the beginning of general manufacturing of the product.
During the testing phase, problems are usually found, as the design and simulation tools cannot duplicate some types of environmental as well as electrical circumstances in which the product may be used. During the testing phases, barnacles are often used to patch or correct the hardware so that testing can continue in the face of defects (failures or faults) found. The goals of adding barnacles at this phase are to reduce development costs by using the prototype hardware for as long as it can be used, to test hardware changes before the design documentation is updated, and to reduce development time by not requiring a new version of the prototype hardware to be manufactured.
During general manufacturing of the product, the product may sometimes be used in circumstances which the specifications indicate would be acceptable however when the product is actually used in those circumstances, a problem is encountered. Engineering will typically perform a root cause analysis in order to determine the root cause of the problem. In some cases, manufacturing changes may need to be made such as trace contaminants being introduced during some phase of manufacturing. In other cases, the problem has to do with the design of the product and a change has to be made in the product design.
When a product design change is required, a barnacle is often designed, when possible, so that existing products can be modified with the design change. The idea is that, by using a barnacle, existing products do not need to be scrapped and replaced, so in this case the use of a barnacle is an economic decision. The barnacle work may be done in the field using portable tools and components, or it may require a product recall with the barnacle work being done on the manufacturing floor.
See also
Electronic design automation
Electronic engineering
Engineering
Product design
Product lifecycle management
Product management
Technical standard
Electrical components |
68420623 | https://en.wikipedia.org/wiki/Garuda%20Linux | Garuda Linux | Garuda Linux is a Linux distribution based on the Arch Linux operating system. Garuda Linux is available in a wide range of popular Linux desktop environments, including modified versions of the KDE Plasma 5 desktop environment. It features a rolling release update model using Pacman as its package manager. The term Garuda originates from Hinduism and refers to a divine eagle-like sun bird, the king of birds.
Desktop Environments
Garuda Linux KDE comes in three variants: Dragonized, Dragonized Gaming, and Dragonized BlackArch.
Garuda Linux GNOME features Garuda Linux's own customized dark theme and light theme of GNOME.
Garuda Linux Xfce, which features Garuda Linux's own dark theme and light theme of Xfce.
Garuda Linux LXQt-kwin, which features Garuda Linux's own dark theme and light theme of LXQt-kwin.
Garuda Linux Wayfire, which features Garuda Linux's own dark theme and light theme of Wayfire.
Garuda Linux Qtile, which features Garuda Linux's own dark theme and light theme of Qtile.
Garuda Linux i3wm, which features Garuda Linux's own dark theme and light theme of i3wm.
Garuda Linux Sway, which features Garuda Linux's own dark theme and light theme of Sway.
Community editions of Garuda Linux, such as Garuda Linux Mate and Garuda Linux Cinnamon, also exist.
Garuda Linux also offers an option called Barebones for advanced users who do not want extra software and functionality. Barebones KDE and GNOME include only the minimum packages that allow the operating system to perform basic functions. Garuda Linux does not provide any support for Barebones users, as the user has full control over their installation. Garuda Linux provides multiple desktop environments, but each ISO is packed with only one of them; others can be installed manually.
System Requirements
Garuda Linux's hardware requirements vary with the desktop environment used, but are very similar.
Minimum requirements:
30 GB storage
4 GB RAM
Recommended requirements:
40 GB storage
8 GB RAM
Installing Garuda Linux also requires a thumbdrive with 4 GB of space for the standard versions; the gaming editions require a thumbdrive with 8 GB of storage space available.
Installation
The official Garuda Linux website supplies ISO images that can be written to a USB drive with 4 GB or 8 GB of storage space, depending on the image chosen. After the user has set up partitions and formats on their drive, they can insert the thumbdrive and boot into it from the BIOS. Calamares will then begin its process and present the user with a GUI installer.
Garuda Linux uses a rolling release update model in which new packages are supplied throughout the day. Pacman, the package manager, allows users to easily update their system.
Features
The Garuda Linux installation process is handled by Calamares, a graphical installer. The rolling release model means that the user does not need to upgrade or reinstall the whole operating system to keep it up to date in line with the latest release. Package management is handled by Pacman via the command line, and by front-end UI package manager tools such as the pre-installed Pamac. The system can be configured as either stable (the default) or bleeding edge in line with Arch. Garuda Linux includes a colorized UI that comes in various options, which users can further customize to their preferences.
History
Garuda Linux was released on 26 March 2020.
Garuda Linux is developed by developers around the world. It was founded by Shrinivas Vishnu Kumbhar (India) and SGS (Germany).
References
Linux distributions
Arch-based Linux distributions |
51917228 | https://en.wikipedia.org/wiki/Podesta%20emails | Podesta emails | In March 2016, the personal Gmail account of John Podesta, a former White House chief of staff and chair of Hillary Clinton's 2016 U.S. presidential campaign, was compromised in a data breach accomplished via a spear-phishing attack, and some of his emails, many of which were work-related, were hacked. Cybersecurity researchers as well as the United States government attributed responsibility for the breach to the Russian cyber spying group Fancy Bear, allegedly two units of a Russian military intelligence agency.
Some or all of the Podesta emails were subsequently obtained by WikiLeaks, which published over 20,000 pages of emails, allegedly from Podesta, in October and November 2016. Podesta and the Clinton campaign have declined to authenticate the emails. Cybersecurity experts interviewed by PolitiFact believe the majority of the emails are probably unaltered, while stating it is possible that the hackers inserted at least some doctored or fabricated emails; PolitiFact also noted, however, that the Clinton campaign had yet to produce any evidence that specific emails in the leak were fraudulent. A subsequent investigation by U.S. intelligence agencies also reported that the files obtained by WikiLeaks during the U.S. election contained no "evident forgeries".
Podesta's emails, once released by WikiLeaks, shed light on the inner workings of the Clinton campaign, suggested that CNN commentator Donna Brazile had shared audience questions with the Clinton campaign in advance of town hall meetings, and contained excerpts from Hillary Clinton's speeches to Wall Street firms. Since some of the emails contained references to "hot dogs" and "pizza," they also formed the basis for Pizzagate, a conspiracy theory that posited that Podesta and other Democratic Party officials were involved in a child trafficking ring based out of pizzerias in Washington, D.C.
Data theft
Researchers from the Atlanta-based cybersecurity firm Dell SecureWorks reported that the emails had been obtained through a data theft carried out by the hacker group Fancy Bear, a group of Russian intelligence-linked hackers that were also responsible for cyberattacks that targeted the Democratic National Committee (DNC) and Democratic Congressional Campaign Committee (DCCC), resulting in WikiLeaks publishing emails from those hacks.
SecureWorks concluded Fancy Bear had sent Podesta an email on March 19, 2016, that had the appearance of a Google security alert, but actually contained a misleading link—a strategy known as spear-phishing. (This tactic has also been used by hackers to break into the accounts of other notable persons, such as Colin Powell). The link—which used the URL shortening service Bitly—brought Podesta to a fake log-in page where he entered his Gmail credentials. The email was initially sent to the IT department as it was suspected of being a fake but was described as "legitimate" in an e-mail sent by a department employee, who later said he meant to write "illegitimate".
SecureWorks had tracked the activities of Fancy Bear for more than a year before the cyberattack, and in June 2016, had reported the group made use of malicious Bitly links and fake Google login pages to trick targets into divulging their passwords. However, the hackers left some of their Bitly accounts public, allowing SecureWorks to trace many of their links to e-mail accounts targeted with spear-phishing attacks. Of this list of targeted accounts, more than one hundred were policy advisors to Clinton, or members of her presidential campaign, and by June, twenty staff members had clicked on the phishing links.
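As a loose, hypothetical illustration of one small ingredient of this kind of analysis, namely spotting shortened URLs of the sort the hackers used, the Python sketch below flags known URL-shortener links in a message body. The shortener list, message text, and link are invented for the example; this is not SecureWorks' actual tooling.

```python
import re

# Domains of common URL shorteners (illustrative list, not exhaustive)
SHORTENER_DOMAINS = ("bit.ly", "tinyurl.com", "goo.gl")

def find_shortened_urls(text):
    """Return every URL in `text` whose host is a known shortener."""
    urls = re.findall(r"https?://[^\s\"'>]+", text)
    return [u for u in urls
            if u.lower().split("/")[2] in SHORTENER_DOMAINS]

# Hypothetical message echoing the style of the phishing email
message = (
    "Someone has your password. You should change it immediately:\n"
    "https://bit.ly/example-reset\n"
    "Best, The Gmail Team"
)
print(find_shortened_urls(message))  # ['https://bit.ly/example-reset']
```

In the actual campaign, the giveaway was not the links themselves but the hackers' public Bitly accounts, which exposed the full list of shortened links and thus the targeted email addresses.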
On December 9, 2016, the CIA told U.S. legislators the U.S. Intelligence Community concluded that the Russian government was behind the hack, and gave to WikiLeaks a collection of hacked emails from John Podesta.
Authenticity
A declassified report by the Central Intelligence Agency (CIA), Federal Bureau of Investigation (FBI), and National Security Agency (NSA) noted that, "Moscow most likely chose WikiLeaks because of its self-proclaimed reputation for authenticity. Disclosures through WikiLeaks did not contain any evident forgeries."
Cybersecurity experts interviewed by PolitiFact believe that while most of the emails are probably unaltered, it is possible the hackers inserted some doctored or fabricated material into the collection.
Jeffrey Carr, CEO of the cybersecurity company Taia Global, stated: "I've looked at a lot of document dumps provided by hacker groups over the years, and in almost every case you can find a few altered or entirely falsified documents. But only a few. The vast majority were genuine. I believe that's the case with the Podesta emails, as well." Jamie Winterton of the Arizona State University Global Security Initiative stated, "I would be shocked if the emails weren't altered," noting the longstanding Russian practice of promoting disinformation.
Cybersecurity expert Robert Graham verified the contents of some of the emails as authentic by using the DomainKeys Identified Mail (DKIM) signatures contained in their headers. However, not all of the emails carry these signatures, and those could not be verified with this method.
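To show what such a DKIM check starts from, the sketch below parses the tag=value fields of a DKIM-Signature header using Python's standard library; the message is hypothetical. Full verification, as Graham performed it, additionally fetches the signer's public key from DNS (at the record named by the s= and d= tags) and checks the b= signature over the listed headers and the body hash; third-party libraries such as dkimpy implement that step.

```python
from email import message_from_string

# Hypothetical raw message; the bh= and b= values are placeholders.
RAW = """\
DKIM-Signature: v=1; a=rsa-sha256; d=gmail.com; s=20120113;
 h=mime-version:from:date:subject:to; bh=BODYHASH=; b=SIGNATURE=
From: example@gmail.com
Subject: hello

body text
"""

def dkim_tags(raw_message):
    """Return the tag=value fields of a message's DKIM-Signature header."""
    sig = message_from_string(raw_message)["DKIM-Signature"]
    if sig is None:
        return {}
    return dict(part.strip().split("=", 1)
                for part in sig.split(";") if "=" in part)

tags = dkim_tags(RAW)
print(tags["d"], tags["s"])  # gmail.com 20120113
```

Because the signature covers both the listed headers and a hash of the body, a message that still validates against the signing domain's published key is strong evidence it was not altered after Google sent it.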
Publication
On October 7, 2016, 30 minutes after the Access Hollywood tape was first published, WikiLeaks began publishing thousands of emails from Podesta's Gmail account. Throughout October, WikiLeaks released installments of these emails on a daily basis. On December 18, 2016, John Podesta stated in Meet the Press that the FBI had contacted him about the leaked emails on October 9, 2016, but had not contacted him since.
On October 17, 2016, the government of Ecuador severed the internet connection of WikiLeaks founder Julian Assange at the Ecuadorian embassy in London. The Ecuadorian government stated that it had temporarily severed Assange's internet connection because of WikiLeaks' release of documents "impacting on the U.S. election campaign", although it also stated this was not meant to prevent WikiLeaks from operating. WikiLeaks continued releasing installments of the Podesta emails during this time.
Contents
Some of the emails provide some insight into the inner workings of the Clinton campaign. For example, the emails show a discussion among campaign manager Robby Mook and top aides about possible campaign themes and slogans. Other emails revealed insights about the internal conflicts of the Clinton Foundation. The BBC published an article detailing 18 "revelations" revealed from their initial review of the leaked emails, including excerpts from Clinton's speeches and politically motivated payments to the Clinton Foundation.
One of the emails released on October 12, 2016, included Podesta's iCloud account password. His iCloud account was hacked, and his Twitter account was then briefly compromised. Some of the released messages were emails that Barack Obama and Podesta exchanged in 2008.
Clinton's Wall Street speeches
WikiLeaks published transcripts of three Clinton speeches to Goldman Sachs and an 80-page internal campaign document cataloging potentially problematic portions of over 50 paid speeches. During the Democratic primary campaign, Bernie Sanders had criticized Hillary Clinton for refusing to release transcripts of speeches given to financial firms, portraying her as too close to Wall Street. Donald Trump first publicly called for the transcripts during a rally on October 3, 2016, four days before their publication by WikiLeaks: "I would like to see what the speeches said. She doesn't want to release them. Release the papers, Hillary, release those papers."
In the October 2016 presidential debate, Clinton voiced her support for a "no-fly" zone in Syria. In a 2013 speech, Clinton had discussed the difficulties involved. In particular, she noted that in order to establish a no-fly zone, Syria's air defenses would need to be destroyed. Because the Assad government had located these anti-aircraft batteries in populated civilian areas, their destruction would cause many collateral civilian deaths. Clinton's staff additionally flagged comments about regulation of Wall Street, as well as her relationship with the industry, as potentially problematic.
The excerpts came up in the two subsequent presidential debates between Clinton and Trump. In one of the debates, the moderator Martha Raddatz quoted an excerpt saying that politicians "need both a public and a private position" and asked Clinton if it was okay for politicians to be "two-faced". Clinton replied, "As I recall, that was something I said about Abraham Lincoln after having seen the wonderful Steven Spielberg movie called Lincoln. It was a master class watching president Lincoln get the Congress to approve the 13th amendment, it was principled and strategic. I was making the point that it is hard sometimes to get the Congress to do what you want to do." In the third presidential debate, the moderator Chris Wallace quoted a speech excerpt where Clinton says, "My dream is a hemispheric common market with open trade and open borders," and asked if she was for open borders. Clinton replied, "If you went on to read the rest of the sentence, I was talking about energy. We trade more energy with our neighbors than we trade with the rest of the world combined. And I do want us to have an electric grid, an energy system that crosses borders."
Discussions of Catholic religious activities
Sandy Newman wrote to Podesta: "I have not thought at all about how one would 'plant the seeds of the revolution', or who would plant them." Podesta agreed with Newman's suggestion and wrote back to note that they had created groups like Catholics in Alliance for the Common Good and Catholics United to push for a more progressive approach to the faith, but that change would "have to be bottom up".
Raymond Arroyo responded: "It makes it seem like you're creating organizations to change the core beliefs of the church," he said. "For someone to come and say, 'I have a political organization to change your church to complete my political agenda or advance my agenda', I don't know how anybody could embrace that." Professor Robert P. George added that "these groups are political operations constructed to masquerade as organizations devoted to the Catholic faith".
The leak revealed an email sent by John Halpin, a senior fellow at the Center for American Progress. The email discussed conservative media mogul Rupert Murdoch's decision to raise his kids in the Catholic Church. He wrote, "Many of the most powerful elements of the conservative movement are all Catholic (many converts) ... It's an amazing bastardization of the faith. They must be attracted to the systematic thought and severely backwards gender relations and must be totally unaware of Christian democracy." Palmieri responded: "I imagine they think it is the most socially acceptable, politically conservative religion—their rich friends wouldn't understand if they became evangelical." Supporters and members of Donald Trump's campaign called the email exchange evidence of anti-Catholic sentiment in the Democratic Party. Halpin confirmed that he had written the email, though he contested claims that it was "anti-Catholic" and said that it was taken out of context and that he had sent the email to his Catholic colleagues "to make a fleeting point about perceived hypocrisy and the flaunting of one's faith by prominent conservative leaders."
Presidential debate questions shared by Donna Brazile
On October 11, 2016, WikiLeaks released the text of an email sent by Donna Brazile on March 12, 2016, to Clinton communications director Jennifer Palmieri with the subject header "From time to time I get questions in advance." The email included a question about the death penalty. The following day Clinton received a similar question from the town hall host, Roland Martin. Brazile initially denied coordinating with the Clinton campaign, and a CNN spokesperson said "CNN did not share any questions with Donna Brazile, or anyone else for that matter, prior to the town hall" and that "we have never, ever given a town hall question to anyone beforehand". According to CNNMoney, the town hall moderator Roland Martin did not deny that he shared questions with Brazile. In another leaked email, Brazile wrote: "One of the questions directed to HRC tomorrow is from a woman with a rash. Her family has lead poison and she will ask what, if anything, will Hillary do as president to help the ppl of Flint." At a debate in Flint the following day, a woman whose "son had developed a rash from the contaminated water" asked Clinton: "If elected president, what course will you take to regain my trust in government?" In a third email, Brazile added: "I'll send a few more."
CNN severed ties with Brazile on October 14, 2016. Brazile later said that CNN did not give her "the ability to defend myself" after the email release and referred to WikiLeaks as "WikiLies". Brazile repeatedly denied that she had received the question on the death penalty in advance and has said that the documents released by WikiLeaks were "altered". In an essay for Time written on March 17, 2017, Brazile wrote that the emails revealed that "among the many things I did in my role as a Democratic operative and D.N.C. Vice Chair [...] was to share potential town hall topics with the Clinton campaign." She wrote, "My job was to make all our Democratic candidates look good, and I worked closely with both campaigns to make that happen. But sending those emails was a mistake I will forever regret."
Saudi Arabia and Qatar
One leaked email from August 2014, addressed to Podesta, identifies Saudi Arabia and Qatar as providing "clandestine", "financial and logistic" aid to ISIS and other "radical Sunni groups". The email outlines a plan of action against ISIS and urges putting pressure on Saudi Arabia and Qatar to end their alleged support for the group. Whether the email was originally written by Hillary Clinton, her advisor Sidney Blumenthal, or another person is unclear.
Reactions
The American public's interest in WikiLeaks in October roughly coincided with a tightening presidential race between Trump and Clinton. According to an analysis of opinion polling by Harry Enten of FiveThirtyEight, the release of the emails roughly matched Clinton's decline in the polls, though it did not seem to have an effect on public perceptions of her trustworthiness. Enten concluded that WikiLeaks' activities were "among the factors that might have contributed to [Clinton's] loss."
Sociology professor Zeynep Tufekci criticized how WikiLeaks handled the release of these emails, writing, "Taking one campaign manager's email account and releasing it with zero curation in the last month of an election needs to be treated as what it is: political sabotage, not whistle-blowing." In an op-ed for The Intercept, James Risen criticized the media for its reporting on emails, arguing that the hacking of the emails was a more significant story than the content of the emails themselves. Thomas Frank, writing in an editorial column for The Guardian, argued that the emails gave an "unprecedented view into the workings of the elite, and how it looks after itself".
Glen Caplin, a spokesman for the Clinton campaign, said, "By dribbling these out every day WikiLeaks is proving they are nothing but a propaganda arm of the Kremlin with a political agenda doing [Vladimir] Putin's dirty work to help elect Donald Trump." When asked to comment on the emails release, president Vladimir Putin replied that Russia was being falsely accused. He said, "The hysteria is merely caused by the fact that somebody needs to divert the attention of the American people from the essence of what was exposed by the hackers."
See also
2016 Democratic National Committee email leak
Democratic National Committee cyber attacks
October surprise
Pizzagate conspiracy theory
The Plot to Hack America
References
External links
Podesta emails blog on Politico
2016 in American politics
March 2016 crimes
October 2016 events in the United States
Controversies of the 2016 United States presidential election
Data breaches in the United States
Email hacking
Hillary Clinton controversies
Information published by WikiLeaks
Russian interference in the 2016 United States elections |
7264207 | https://en.wikipedia.org/wiki/SOCET%20SET | SOCET SET | SOCET SET is a software application that performs functions related to photogrammetry. It is developed and published by BAE Systems. SOCET SET was among the first commercial digital photogrammetry software programs. Prior to the development of digital solutions, photogrammetry programs were primarily analog or custom systems built for government agencies.
Features
SOCET SET inputs digital aerial photographs, taken in stereo (binocular) fashion, and from those photos it automatically generates a digital elevation model, digital feature (vector data), and orthorectified images (called orthophotos). The output data is used by customers to create digital maps, and for mission planning and targeting purposes.
The source images can come from film-based cameras, or digital cameras. The cameras can be mounted in an airplane, or on a satellite. A key requirement of the imagery is that there must be two or more overlapping images, taken from different vantage points. This "binocular" characteristic is what makes it mathematically possible to extract the 3-dimensional terrain and feature data from the imagery.
A key step, involving complex least-squares mathematics, is triangulation, which determines exactly where the cameras were positioned when the photographs were taken. Photogrammetrists who contributed to SOCET SET's triangulation include Scott Miller, Bingcai Zhang, John Dolloff, and Fidel Paderes. If the quality of the triangulation is poor, all subsequent data will have correspondingly poor positional accuracy.
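The least-squares principle behind triangulation can be shown with a toy example. The sketch below is an illustration only (the function name is invented here; SOCET SET's actual bundle adjustment also solves for the camera positions and orientations themselves and is far more involved): it finds the single 2-D point that minimizes the summed squared distance to a set of sighting rays.

```python
def triangulate_2d(rays):
    """Least-squares intersection of 2-D rays.

    Each ray is (cx, cy, dx, dy): an origin and a direction.  Minimizing
    sum_i ||(I - d_i d_i^T)(x - c_i)||^2 over x yields a 2x2 linear
    system A x = b, accumulated ray by ray below.
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for cx, cy, dx, dy in rays:
        norm = (dx * dx + dy * dy) ** 0.5
        dx, dy = dx / norm, dy / norm
        # Projector I - d d^T maps vectors onto the direction normal to the ray.
        p11, p12, p22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy
        a11 += p11; a12 += p12; a22 += p22
        b1 += p11 * cx + p12 * cy
        b2 += p12 * cx + p22 * cy
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Two cameras at (0,0) and (4,0) both sight the same ground point:
print(triangulate_2d([(0.0, 0.0, 1.0, 1.0), (4.0, 0.0, -1.0, 1.0)]))
# ≈ (2.0, 2.0): the rays intersect exactly, so the residual is zero.
```

In real photogrammetry the unknowns include the exterior orientation of every camera and the system is solved iteratively, but the normal-equation structure is the same, which is why poor triangulation quality degrades everything derived from it.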
The most recent major version, released in 2011, is version 5.6.
Stereo display
SOCET SET, like all high-end photogrammetry applications, requires a stereo display to be used to its fullest potential. Although SOCET SET can run and generate all its products on a computer with only a conventional display, a typical user will require a stereo display to view the digital data overlaid on the imagery. Interactive (manual) quality assurance requires this capability.
File formats
SOCET SET has the ability to read and write the following formats: VITec, Sun Raster, TIFF, TIFF 6.0 (Raster, Tiled, Tiled JPEG, and LZW), JFIF, NITF 2.0, NITF 2.0 JPEG, NITF 2.1, NITF 2.1 JPEG, ERDAS IMAGINE, JPEG 2000, Targa, COT, DGN, USGS DOQ, MrSID, Plain Raster.
SOCET SET has the ability to read terrain data formats, including: DTED, USGS DEM, ASCII (user-defined), LIDAR LAS, ArcGrid, SDTS, NED, GSI, GeoTIFF.
Vector formats supported include: DXF, Shapefile, ASCII (ArcGen), ASCII, TOPSCENE.
Applications
SOCET SET, like some photogrammetry tools, is used for the following applications:
Cartography (map making) – especially topographic maps
Targeting (warfare)
Mission planning
Mission rehearsal
Remote sensing
Building a 3D model of the Earth's surface for computer simulation
Astrophysics
Conservation-restoration
About half of SOCET SET users are commercial, and half are government/military.
History
Development started as a Research and Development project around 1989, with Jim Gambale as the sole developer. At the time, the parent corporation was GDE Systems (formerly a subsidiary of General Dynamics). The hardware platform was a PC running Interactive Unix.
After the prototype proved successful, a larger R&D effort was initiated in 1990, led by Herman Kading. One of the primary accomplishments of this effort was to migrate the product to UNIX Platforms, including Sun, SGI, HP, and IBM.
Technical knowledge was provided by Helava Inc, a company based in Detroit, Michigan that specialized in photogrammetry. Helava employees Scott Miller, Janis Thiede, and Kurt Devenecia brought in-depth experience in the field.
Leadership of the project passed to Neal Olander around 1992, and after this time, SOCET SET (which before then was only sold to government customers) began to be distributed commercially. Around 1996, SOCET SET was migrated to the Microsoft Windows operating system, although the Unix system continued to be supported as well.
Technical skills were provided by Tom Dawson, Kurt Reindel, Dave Mayes, Jim Colgate, Bingcai Zhang and Dave Miller.
Future
Starting in 2008, SOCET SET photogrammetric functionality is migrating to the next generation product, SOCET GXP (Geospatial eXploitation Product).
Meaning of SOCET SET
SOCET SET is an acronym that stands for SOftCopy Exploitation Toolkit. The phrase is a play on the actual tool socket set.
Release history
v1.0 – 1991
v2.0 – 1993
v3.0 – 1995
v4.0 – June 1997
v4.1 – Sept 1998
v4.2 – July 1999
v4.3 – Sept 2000
v4.4 – Dec 2001
v5.0 – Sept 2003
v5.1 – Apr 2004
v5.2 – Nov 2004
v5.3 – June 2006
v5.4 – Summer 2007
v5.4.1 – Jan 2008
v5.5 – Jun 2009
v5.6 – Jun 2011
Alternatives
The chief competitors to SOCET SET are the Leica Photogrammetry Suite (also known as LPS, owned by ERDAS), INPHO, PHOTOMOD, and Intergraph, all leaders in the field of photogrammetry.
Other related applications that have some photogrammetry functionality include ArcGIS, ENVI, and ERDAS IMAGINE, all of which are primarily GIS or remote sensing applications.
See also
Photogrammetry
Triangulation
Orthophoto
Binocular vision
Reconnaissance
Remote Sensing
Imaging Spectroscopy
Least squares
Related terms
Image Processing
GIS
Topography
Multispectral
References
External links
Paper on terrain extraction
Official page
Photogrammetry software
BAE Systems |
57739637 | https://en.wikipedia.org/wiki/Anomali | Anomali | Anomali is an American cybersecurity company that develops and provides threat intelligence products.
History
Anomali was founded in 2013 as ThreatStream by Greg Martin and Colby Derodeff. In 2016, the company rebranded as Anomali. Anomali has received $96.3 million in funding from 11 investors, including Paladin Capital Group, Institutional Venture Partners (IVP), GV (formerly Google Ventures), General Catalyst, Telstra Ventures, and Lumina Capital.
In 2013, the company launched the first version of ThreatStream, a product recognized by global research and advisory firm Gartner as a threat intelligence platform (TIP), in its Market Guide for Security Threat Intelligence Products and Services. In 2016, when the company became known as Anomali, it launched its second product, Anomali Enterprise, which later became known as Anomali Match, an enterprise threat detection solution. In 2019, Anomali introduced Anomali Lens, a web browser-based plugin that uses natural language processing (NLP) to scan structured and unstructured internet content to automate the identification of adversaries, malware, and cyber threats that are present in the users' network, actively attacking the user's network, or newly detected. Since being founded, Anomali has collaborated with partners spanning channel resellers, Managed Security Services Providers (MSSPs), Systems Integrators, and Commercial Threat Intelligence Feed providers to build out the Anomali Preferred Partner Store (Anomali APP Store). Anomali has established a collaborative relationship with Microsoft.
See also
AT&T Cybersecurity
References
Official website
Companies based in Redwood City, California
Software companies established in 2013
Security companies of the United States
Computer security companies |
17498141 | https://en.wikipedia.org/wiki/Dan%20Kaminsky | Dan Kaminsky | Daniel Kaminsky (February 7, 1979 – April 23, 2021) was an American computer security researcher. He was a co-founder and chief scientist of WhiteOps, a computer security company. He previously worked for Cisco, Avaya, and IOActive, where he was the director of penetration testing. The New York Times labeled Kaminsky an "Internet security savior" and "a digital Paul Revere".
Kaminsky was known among computer security experts for his work on DNS cache poisoning, for showing that the Sony Rootkit had infected at least 568,000 computers, and for his talks at the Black Hat Briefings. On June 16, 2010, he was named by ICANN as one of the Trusted Community Representatives for the DNSSEC root.
Early life
Daniel Kaminsky was born in San Francisco on February 7, 1979, to Marshall Kaminsky and Trudy Maurer. His mother told The New York Times that after his father bought him a RadioShack computer at age four, Kaminsky had taught himself to code by age five. When Kaminsky was 11, his mother received a call from a government security administrator who told her that Kaminsky had used penetration testing to intrude into military computers, and that the family's Internet access would be cut off. His mother responded by saying that if their access was cut, she would take out an advertisement in the San Francisco Chronicle to publicize the fact that an 11-year-old could break military computer security. Instead, a three-day Internet "timeout" for Kaminsky was negotiated. In 2008, after Kaminsky found and coordinated a fix for a fundamental DNS flaw, he was approached by the administrator, who thanked him and asked to be introduced to his mother.
Kaminsky attended St. Ignatius High School and Santa Clara University. After graduating from college, he worked for Cisco, Avaya, and IOActive, before founding White Ops, his own firm.
Career
Sony rootkit
During the Sony BMG copy protection rootkit scandal, where Sony BMG was found to be covertly installing anti-piracy software onto PCs, Kaminsky used DNS cache snooping to discover whether servers had recently contacted any of the domains accessed by the Sony rootkit. He used this technique to estimate that there were at least 568,000 networks that had computers with the rootkit. Kaminsky then used his research to bring more awareness to the issue while Sony executives were trying to play it down.
Earthlink and DNS lookup
In April 2008, Kaminsky realized a growing practice among ISPs potentially represented a security vulnerability. Various ISPs had experimented with intercepting return messages of non-existent domain names and replacing them with advertising content. This could allow hackers to set up phishing schemes by attacking the server responsible for the advertisements and linking to non-existent subdomains of the targeted websites. Kaminsky demonstrated this process by setting up Rickrolls on Facebook and PayPal. While the vulnerability used initially depended in part on the fact that Earthlink was using Barefruit to provide its advertising, Kaminsky was able to generalize the vulnerability to attack Verizon by attacking its ad provider, Paxfire.
Kaminsky went public after working with the ad networks in question to eliminate the immediate cross-site scripting vulnerability.
Flaw in DNS
In 2008, Kaminsky discovered a fundamental flaw in the Domain Name System (DNS) protocol that could allow attackers to easily perform cache poisoning attacks on most nameservers (djbdns, PowerDNS, MaraDNS, Secure64 and Unbound were not vulnerable). With most Internet-based applications depending on DNS to locate their peers, a wide range of attacks became feasible, including website impersonation, email interception, and authentication bypass via the "Forgot My Password" feature on many popular websites. After discovering the problem, Kaminsky initially contacted Paul Vixie, who described the severity of the issue as meaning "everything in the digital universe was going to have to get patched." Kaminsky then alerted the Department of Homeland Security and executives at Cisco and Microsoft to work on a fix.
Kaminsky worked with DNS vendors in secret to develop a patch to make exploiting the vulnerability more difficult, releasing it on July 8, 2008. To date, the DNS design flaw vulnerability has not been fully fixed.
Kaminsky had intended not to publicize details of the attack until 30 days after the release of the patch, but details were leaked on July 21, 2008. The information was quickly pulled down, but not before it had been mirrored by others. He later presented his findings at the Black Hat Briefings, at which he wore both a suit and rollerskates.
Kaminsky received a substantial amount of mainstream press after disclosing this vulnerability, but experienced some backlash from the computer security community for not immediately disclosing his attack. When a reporter asked him why he had not used the DNS flaw for his own financial benefit, Kaminsky responded that he felt it would be morally wrong, and he did not wish for his mother to visit him in prison.
The actual vulnerability was related to DNS only having 65,536 possible transaction IDs, a number small enough to simply guess given enough opportunities. Dan Bernstein, author of djbdns, had reported this as early as 1999. djbdns dealt with the issue using Source Port Randomization, in which the UDP port was used as a second transaction identifier, thus raising the possible ID count into the billions. Other more popular name server implementations left the issue unresolved due to concerns about performance and stability, as many operating system kernels simply weren't designed to cycle through thousands of network sockets a second. Instead, other implementers assumed that DNS's time to live (TTL) field would limit a guesser to only a few attempts a day.
Kaminsky's attack bypassed this TTL defense by targeting "sibling" names like "83.example.com" instead of "www.example.com" directly. Because the name was unique, it had no entry in the cache, and thus no TTL. But because the name was a sibling, the transaction-ID guessing spoofed response could not only include information for itself, but for the target as well. By using many "sibling" names in a row, he could induce a DNS server to make many requests at once. This tactic provided enough opportunities to guess the transaction ID to successfully spoof a reply in a reasonable amount of time.
To fix this issue, all major DNS servers implemented Source Port Randomization, as djbdns and PowerDNS had done before. This fix makes the attack up to 65,536 times harder. An attacker willing to send billions of packets can still corrupt names. DNSSEC has been proposed as the way to bring cryptographic assurance to results provided by DNS, and Kaminsky had spoken in favor of it.
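The arithmetic behind that "65,536 times harder" figure can be sketched directly. The toy calculation below is an illustration only (the function name is invented here, and this is not Kaminsky's actual tooling): it gives the chance that at least one of a batch of blind spoofed responses matches the resolver's secret state, before and after Source Port Randomization.

```python
def spoof_success_probability(guesses, secret_space):
    """Chance that at least one of `guesses` blind forgeries hits the one
    valid (transaction ID [, source port]) combination out of `secret_space`."""
    miss = (secret_space - 1) / secret_space
    return 1.0 - miss ** guesses

TXID_SPACE = 2 ** 16          # 65,536 possible 16-bit transaction IDs
PORT_SPACE = 2 ** 16          # roughly 65,536 usable UDP source ports

# Before the fix: only the transaction ID protects the resolver.
p_before = spoof_success_probability(65_536, TXID_SPACE)
# After the fix: ID and port together form a ~32-bit secret.
p_after = spoof_success_probability(65_536, TXID_SPACE * PORT_SPACE)

print(f"{p_before:.3f}")   # ≈ 0.632 -- near-certain success within a few batches
print(f"{p_after:.6f}")    # ≈ 0.000015 -- billions of packets now required
```

This is why the fix raised the bar without closing the hole: an attacker willing to send billions of packets can still win the race, which motivated the cryptographic guarantees of DNSSEC.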
Automated detection of Conficker
On March 27, 2009, Kaminsky discovered that Conficker-infected hosts have a detectable signature when scanned remotely. Signature updates for a number of network scanning applications are now available, including NMap and Nessus.
Flaws in Internet X.509 infrastructure
In 2009, in cooperation with Meredith L. Patterson and Len Sassaman, Kaminsky discovered numerous flaws in the SSL protocol. These include the use of the weak MD2 hash function by Verisign in one of their root certificates and errors in the certificate parsers in a number of Web browsers that allow attackers to successfully request certificates for sites they do not control.
Attack by "Zero for 0wned"
On July 28, 2009, Kaminsky, along with several other high-profile security consultants, experienced the publication of their personal email and server data by hackers associated with the "Zero for 0wned" online magazine. The attack appeared to be designed to coincide with Kaminsky's appearance at the Black Hat Briefings.
Interpolique
In June 2010, Kaminsky released Interpolique, a beta framework for addressing injection attacks such as SQL injection and cross-site scripting in a manner comfortable to developers.
Personal life and death
The New York Times wrote that "in a community known for its biting, sometimes misogynistic discourse on Twitter, Mr. Kaminsky stood out for his empathy." He was known for regularly paying for hotels or travel bills for other people going to Black Hat, and once paid for a plane ticket for a friend of his after she had broken up with her boyfriend; the pair later married. At various points in his career, Kaminsky shifted his focus to work on projects related to his friends' and family's health, developing an app that helps colorblind people, working on hearing aid technology, and developing telemedicine tools related to AIDS among refugees for Academic Model Providing Access to Healthcare (AMPATH). According to his mother, "he did things because they were the right thing to do, not because they would elicit financial gain."
Kaminsky was also an outspoken privacy rights advocate. During the FBI–Apple encryption dispute, he criticized comments by then-FBI director James Comey, saying "what is the policy of the United States right now? Is it to make things more secure or to make them less secure?" In a 2016 interview, Kaminsky said, "the Internet was never designed to be secure. The Internet was designed to move pictures of cats ... We didn’t think you’d be moving trillions of dollars onto this. What are we going to do? And here’s the answer: Some of us got to go out and fix it."
Kaminsky died of diabetic ketoacidosis on April 23, 2021, at his home in San Francisco. He had been frequently hospitalized for the disease in prior years. After his death, he received tributes from the Electronic Frontier Foundation, which called him a "friend of freedom and embodiment of the true hacker spirit", and from Jeff Moss, who said Kaminsky should be in the Internet Hall of Fame. On December 14, 2021, that wish came to fruition.
Works
References
External links
Dan Kaminsky; Scott Rose; Cricket Liu; (June 2009) DNSSEC: What it Means for DNS Security and Your Network
White Ops - security company, of which Dan Kaminsky was a founder
1979 births
2021 deaths
Activists from San Francisco
American computer specialists
Avaya employees
Cisco people
Computer security specialists
Computer systems researchers
Deaths from diabetes
Ethical hackers
Internet activists
Privacy activists
Santa Clara University alumni |
10996770 | https://en.wikipedia.org/wiki/Fat%20Guy%20Stuck%20in%20Internet | Fat Guy Stuck in Internet | Fat Guy Stuck in Internet is an American science-fiction comedy television series created by John Gemberling and Curtis Gwinn for Cartoon Network's late-night adult-oriented programming block Adult Swim; and ended with a total of ten episodes.
An adaptation/remake of Gemberling and Gwinn's 2005 Channel 102 web series Gemberling, Fat Guy Stuck in Internet follows computer programmer Ken Gemberling – the titular "Fat Guy" – who is accidentally sucked into his computer and learns he is destined to save cyberspace from a variety of evils. After a pilot aired in May 2007, Adult Swim commissioned a full season of Fat Guy Stuck in Internet which lasted ten episodes, airing from June 2008 to August 2008. Following the run of its first season, the channel chose not to renew the show for a second, effectively cancelling the series.
Setting and premise
Hotshot computer programmer Ken Gemberling is the top programmer at Ynapmoclive Interactive, but is also remarkably rude, selfish, and arrogant. After dumping beer on his computer keyboard, Gemberling is inexplicably sucked into his computer, landing in the farthest reaches of the internet where he soon discovers that he is in fact "The Chosen One", prophesied to save cyberspace from a devastating virus known as the nanoplague. Joined by a pair of humanoid "programs" named Bit and Byte, Gemberling embarks on an epic adventure to save the internet, return home and unleash the hero within, all the while hunted through cyberspace by ruthless white trash bounty hunter Chains.
Production
Fat Guy Stuck in Internet first appeared as a 2005 web series created by Upright Citizens Brigade alumni John Gemberling and Curtis Gwinn, known as the comedy duo of The Cowboy & John. Originally titled Gemberling, the series followed the same storyline and major plot points of its television adaptation, featuring much of the same cast (with the notable exception of Katie Dippold in the role of Byte) though with a considerably lower budget and more profanity. Aired in five-minute episodes as part of Channel 102, Gemberling became Channel 102's longest-running original series, lasting a total of eight episodes including a 17-minute finale. The shorts also aired as part of Fuse TV's Munchies.
In January 2007, Cartoon Network announced that they had commissioned Gemberling and Gwinn to create a series based on Gemberling as part of the Adult Swim programming block. Adult Swim made the decision to rename the series as Fat Guy Stuck in Internet, a title which both creators disliked. The entirety of the series was filmed and produced inside of a warehouse in Bushwick, Brooklyn. On May 30, 2008, Gemberling and Gwinn premiered Fat Guy Stuck in Internet before a live audience at the Upright Citizens Brigade theatre in New York City.
Fat Guy Stuck in Internet uses a mix of greenscreen effects, hard sets, miniatures, matte paintings and computer animation to create the show's cyberspace environment, though carries over many of the intentionally low-budget props and special effects from the web series (e.g. broom handles being used as laser-shooting weapons, etc.).
The basic premise of Fat Guy Stuck in Internet is a parody of the film Tron, from which the series also borrows several visual elements. Gemberling and Gwinn have noted further parody and stylistic influence taken from The Matrix, Star Wars, The Lord of the Rings and Stephen King, in particular his Dark Tower series.
Characters
Gemberling (John Gemberling) - A skilled but slovenly programmer in real life otherwise recognized by his given internet handle of "Fat Guy Stuck in Internet", Gemberling is actually the prophesied savior of the internet, destined to save cyberspace from evil. At first a selfish jerk, Gemberling's quest brings out his inner goodness and strength as he learns the value of heroism and friendship.
Chains (Curtis Gwinn) - A dim-witted redneck bounty hunter hired by the C.E.O. to track Gemberling through cyberspace, Chains is more interested in smoking pot and eating cyberchicken than doing his job. Chains eventually becomes Gemberling's troublesome sidekick.
Byte (Liz Cackowski) - The sister of Bit, Byte is a humanoid computer program who aids Gemberling early in his quest, placing great confidence and belief in his abilities. She is later kidnapped by Chains and converted to evil by the C.E.O.
Bit (Neil Casey) - Brother of Byte, Bit is another humanoid program who assists Gemberling in his adventure. Along with Byte, he is kidnapped by Chains and ultimately suffers his fate at the hands of his newly evil sister.
The C.E.O. (John Gemberling) - The villainous (albeit inept) C.E.O. of Ynapmoclive Interactive, he controls the nanoplague which will destroy the internet.
Episodes
International broadcast
In Canada, Fat Guy Stuck in Internet previously aired on G4's Adult Digital Distraction block, and on the Canadian version of Adult Swim.
Reception
Fat Guy Stuck in Internet received mixed reviews from viewers and critics. The A.V. Club, having reviewed every episode as part of their "Adult Swim Sunday" column, was one of the series' harsher critics, primarily criticizing the show's "cheap" writing, poor movie parodies and John Gemberling's "smirking, mediocre" performance. As the series progressed, however, the reception became a bit warmer, with later episodes being called "not completely terrible" and "not entirely unpleasant" to the final episode being described as "going out with a bang", the reviewer admitting "for a couple minutes, I actually found myself engaged". Time looked positively on the series, noting the show "delivers", being "striking-looking and even good-hearted in its own bizarre way".
Notes
References
External links
2000s American comic science fiction television series
2007 American television series debuts
2008 American television series endings
Mass media about Internet culture
Adult Swim original programming
Channel 101
English-language television shows
Television series by Williams Street |
2834844 | https://en.wikipedia.org/wiki/Engineering%20Animation | Engineering Animation | Engineering Animation, Inc., or EAI, was a services and software company based in Ames, Iowa, United States. It remained headquartered there from its incorporation in 1990 until it was acquired in 2000 by Unigraphics Solutions, Inc., now a subsidiary of the German technology multinational Siemens AG. During its existence, EAI produced animations to support litigants in court, wrote and sold animation and visualization software, and developed a number of multimedia medical and computer game titles. Part of EAI's business now exists in a spin-off company, Demonstratives.
History
EAI was incorporated in 1990 by Martin Vanderploeg, Jay Shannan, Jim Bernard, and Jeff Trom, all Ames-based engineers closely involved with Iowa State University's Virtual Reality Applications Center (VRAC), founded by Vanderploeg and Bernard. Later that year they were joined by a former colleague of Vanderploeg's, Matthew Rizai, a mechanical engineer and software entrepreneur, who became CEO.
EAI got its start by producing computer animations to help illustrate crime scenes and other technical courtroom testimony for lawyers and expert witnesses, eventually branching out into visualization applications in medicine, product design, and a wide range of other applications. In 1994, EAI launched VisLab, an animation package initially written to leverage the graphics capabilities of the SGI UNIX computer platform. At the time, it was considered unusual in its ability to render complex animation in hardware rather than in software. Steve Ursenbach, general manager of SGI's Application Division, commented, "VisLab is the first software program to take such advantage of our hardware rendering capabilities." VisLab's UI was based on the widely used Motif software.
EAI's computer-generated animations were used in reconstructing the TWA Flight 800 plane crash scenario and numerous crime scene investigations, including the murder of Nicole Brown Simpson and the Oklahoma City bombing for NBC's Inside Edition. In 1997, EAI collaborated with the American Bar Association Judicial Division Lawyers Conference to produce "Computer Animation in the Courtroom – A Primer," a CD-ROM introduction and guide to the use of computer animations in reconstructing crimes.
EAI's manufacturing clients included Ford, Motorola, Lockheed Martin, and 3M.
Visualization and collaboration
Based on the initial success of VisLab with automotive companies, EAI developed and released the first commercially viable 3D interactive visualization software package, VisFly, first on the SGI and later the HP and Sun platforms in 1995 and 1996. VisFly was eventually ported to Microsoft Windows and IBM AIX and expanded into the VisView and VisMockup product lines. Networking capabilities were subsequently added to VisFly via NetFly and to VisView/VisMockup via VisNetwork. Providing the visualization software, tools, and network access methods to convert common CAD data into the JT visualization format were keys to this most successful of EAI's business ventures. Networking capabilities were eventually expanded further with e-Vis.com, which provided an internet-hosted environment and many of the features now seen in mainstream collaboration software.
EAI Interactive
By 1996, EAI began to broaden its scope of development. This led to EAI's purchase that year of a small video game developer in Salt Lake City, Utah, headed by Bryan Brandenburg. This new studio became the primary location of EAI Interactive's activities. The Salt Lake City office worked more or less independently, though from time to time it used the services of the main Ames office for overflow work.
As an independent game developer, EAI Interactive produced a variety of titles, including Barbie Magic Hair Styler, Trophy Buck for Sierra On-Line, Championship Bass for Electronic Arts, A Bug's Life for Disney Interactive, Clue and Outburst for Hasbro Interactive, and Scooby Doo: Mystery of the Fun Park Phantom and Animaniacs: A Gigantic Adventure for SouthPeak Games.
In addition to game development, EAI's medical and scientific illustration team developed a variety of 3D interactive educational products including The Dynamic Human for McGraw Hill and The Dissectible Human for Elsevier.
Recognition
In the September 1997 issue of Individual Investor magazine, EAI was named one of "America's Fastest Growing Companies." In the January 12, 1998, issue of Businessweek, the magazine recognized company CEO Rizai as one of seven notable entrepreneurs of 1997. In 1999, Forbes ASAP magazine ranked the company as one of the 100 most dynamic technology companies in the US, placing it twenty-third overall.
An international company
In its heyday, EAI had offices located across the US and around the world. The company's primary financial success was its visualization software VisFly, later renamed VisView. This product line lives on today as part of Teamcenter from Siemens PLM Software after a series of acquisitions starting in 1999. The litigation supporting animation services portion of EAI continues as a spin-off company called Demonstratives, today a division of Engineering Systems Inc. (ESI), in Aurora, Illinois.
Former EAI employees have gone on to work at Pixar, Disney, Zygote Media Group, Hasbro, MediaTech, Milkshake Media, DreamWorks, and Maxis, among others. In 2008, Vanderploeg, Rizai, and others in the EAI management team founded Webfilings (now known as Workiva), a SaaS company specializing in corporate compliance solutions software, also headquartered in Ames.
Products and services
Animation services and Special Effects
The Discovery Channel Skyscraper at Sea (1995?)
National Geographic Asteroids: Deadly Impact (1997)
Animation software
VisLab (1994)
VisModel (1996?)
Visualization software
VisFly (1995)
NetFly (1996)
VisView (1997?)
VisMockup (1998?)
VisFactory (1998?)
VisNetwork (1998)
eVis.com (1999)
Multimedia titles
Barbie Magic HairStyler
Crayola Magic Coloring Book
MicroType Multimedia
The Dynamic Human
Computer games
X-Fire (1997, unpublished)
Legend of the Five Rings (1998, unpublished)
Clue (1998)
K'NEX: K'NEX Lost Mines (1998)
Scooby Doo: Mystery of the Fun Park Phantom (1999)
Trans-Am '68–'72 (1999, unpublished)
Small Soldiers
Disney: A Bug's Life: ActivePlay
Disney: Toy Story 2: Activity Center
Animaniacs: A Gigantic Adventure
Crazy Paint
Clue Chronicles: Fatal Illusion (2000)
Sierra: Trophy Buck
Sierra: Trophy Hunting
Hasbro: Outburst
Championship Bass (2000)
References
External links
EAI Interactive details at MobyGames
XEAI Community Site
Demonstratives, Inc.
Siemens PLM Software
Defunct software companies
Defunct video game companies of the United States |
14888910 | https://en.wikipedia.org/wiki/BitArmor | BitArmor | BitArmor Systems Inc. was a firm based in the Gateway Center of downtown Pittsburgh, Pennsylvania. Founded in 2003 by two Carnegie Mellon University alumni, BitArmor sold software-based encryption and data management technologies. The company mainly focused on industries that required protection of sensitive data, such as in retail, education, and health care.
BitArmor's primary product was BitArmor DataControl, a software solution that combined full disk encryption with persistent file encryption technology.
The company completed a $5 million round of venture capital funding in May 2009. BitArmor used the venture capital to fund development efforts and expand marketing and sales. At the time of the round of financing BitArmor employed 35 people.
BitArmor was acquired by Trustwave in January 2010, in order to strengthen the latter's PCI services.
Notes
Computer security companies
Companies established in 2003 |
12308987 | https://en.wikipedia.org/wiki/GEC%20Series%2063 | GEC Series 63 | The GEC Series 63 was a 32-bit minicomputer produced by GEC Computers Limited of the UK during the 1980s in conjunction with A.B. Dick in USA. During development, the computer was known as the R Project. The hardware development (under Dick Ruth and Ed Mack) was done in Scottsdale, Arizona whilst the software was the responsibility of GEC in Dunstable, UK. The hardware made early use of pipeline concepts, processing one instruction whilst completing the preceding one.
Announced in 1983, the machine was to be offered with two operating systems: UX63 and OS6000. UX63 was a Unix port derived from UNIX System III, whereas OS6000 was a port of the OS4000 operating system from the GEC 4000 series (under pressure from the marketing department, concerned about compatibility with its existing user base). Subsequently, a version of UNIX System V Release 2 was added, largely to compete with the VAX machines that were becoming the fashionable computer of choice amongst academics concerned about being able to access software from US colleagues. The C compiler, necessary to effect the implementation, was first produced for OS4000 and cross-compiled.
The Unix product was one of the first ports to a different processor architecture undertaken in the UK, with large chunks of the GEC 63 Unix port done at the University of Edinburgh. (Other comparable early Unix ports included that of the High Level Hardware Orion system which launched with 4.1BSD Unix in 1984, ICL's PNX for the PERQ workstation in 1983, and a reported port to a Bleasdale Computer Systems product by Root Computer in early 1983. These ports were likely to have been fully operational before GEC 63 Unix was.)
There were plans for six models, but only two models of the GEC Series 63 were ever produced: the 63/30 and the 63/40. The 63/40 added an embedded GEC 4160 minicomputer running OS4000 to provide additional communications features (such as X.25 and X.29 access).
The Series 63 was used by several UK universities, and was also procured with some controversy as part of the Alvey Project, having been chosen as a British-made alternative (along with Systime-produced VAX machines) to the DEC VAX, with DEC's machine being the only one available at the time that was capable of running the specified Berkeley Unix operating system. One of the first student-run university computing facilities in the UK, The Tardis Project, was established in 1988 in the Department of Computer Science of the University of Edinburgh using a Series 63. The name came from the resemblance of the Series 63's large blue cabinet to Doctor Who's time machine.
The Series 63 was discontinued in August 1987 after disappointing sales. Approximately 22 systems were sold during the lifetime of the system.
See also
GEC Computers
References
External links
Computing at Chilton, GEC Series 63
Minicomputers
GEC Computers |
53557242 | https://en.wikipedia.org/wiki/Android%20Oreo | Android Oreo | Android Oreo (codenamed Android O during development) is the eighth major release and the 15th version of the Android mobile operating system. It was first released as an alpha quality developer preview in March 2017 and released to the public on August 21, 2017.
It contains a number of major features, including notification grouping, picture-in-picture support for video, performance improvements, and battery usage optimization, and support for autofillers, Bluetooth 5, system-level integration with VoIP apps, wide color gamuts, and Wi-Fi Aware. Android Oreo also introduces two major platform features: Android Go – a software distribution of the operating system for low-end devices – and support for implementing a hardware abstraction layer.
Support for Android Oreo ended in 2021.
10.67% of Android devices run Oreo (no longer receiving security updates), with 3.23% on Android 8.0 (API 26) and 7.44% using Android 8.1 (API 27).
History
Android Oreo was internally codenamed "Oatmeal Cookie." On March 21, 2017, Google released the first developer preview of Android "O", available for the Nexus 5X, Nexus 6P, Nexus Player, Pixel C, and both Pixel smartphones. The second, considered beta quality, was released on May 17, 2017. The third developer preview was released on June 8, 2017 and offered a finalized version of the API. DP3 finalized the release's API to API level 26, changed the camera UI, reverted the Wi-Fi and cellular connectivity levels in the status bar back to Wi-Fi being on the left, added themed notifications, added a battery animation in Settings: Battery, a new icon and darker background for the Clock app, and a teardrop icon shape for apps.
On July 24, 2017, a fourth developer preview was released which included the final system behaviors and the latest bug fixes and optimizations. Android "O" was officially released on August 21, 2017 under the name "Oreo". Its lawn statue was unveiled at a promotional event across from Chelsea Market in New York City—a building which formerly housed a Nabisco factory where Oreo cookies were first produced. Factory images were made available for compatible Pixel and Nexus devices later that day. The Sony Xperia XZ1 and Sony Xperia XZ1 Compact were the first devices available with Oreo pre-installed.
Android 8.1 was released in December 2017 for Pixel and Nexus devices, which features minor bug fixes and user interface changes.
Features
User experience
Notifications can be snoozed, and batched into topic-based groups known as "channels". The 'Major Ongoing' feature orders the alerts by priority, pinning the most important application to the top slot. Android Oreo contains integrated support for picture-in-picture modes (supported in the YouTube app for YouTube Premium subscribers). The "Settings" app features a new design which has been reduced in size, with a white theme and deeper categorization of different settings, while its ringtone, alarm and notification sound settings now contain an option for adding custom sounds to the list.
The Android 8.1 update supports the display of battery percentages for connected Bluetooth devices, makes the notification shade slightly translucent, and dims the on-screen navigation keys in order to reduce the possibility of burn-in.
Platform
Android Oreo adds support for Neighborhood Aware Networking (NAN) for Wi-Fi based on Wi-Fi Aware, Bluetooth 5, wide color gamuts in apps, an API for autofillers, multiprocess and Google Safe Browsing support for WebViews, an API to allow system-level integration for VoIP apps, and launching activities on remote displays. Android Runtime (ART) features performance improvements. Android Oreo contains additional limits on apps' background activities in order to improve battery life. Apps can specify "adaptive icons" for differently-shaped containers specified by themes, such as circles, squares, and squircles.
Android Oreo adds native support for Advanced Audio Coding, aptX, aptX HD and LDAC Bluetooth codecs. Android Oreo supports new emoji that were included in the Unicode 10 standard. A new emoji font was also introduced, which notably redesigns its face figures to use a traditional circular shape, as opposed to the "blob" design that was introduced on KitKat.
The underlying architecture of Android was revised so that low-level, vendor-specific code for supporting a device's hardware can be separated from the Android OS framework using a hardware abstraction layer known as the "vendor interface". Vendor interfaces must be made forward compatible with future versions of Android. This new architecture, called Project Treble, allows the quicker development and deployment of Android updates for devices, as vendors would only need to make the necessary modifications to their bundled software. All devices shipping with Oreo must support a vendor interface, but this feature is optional for devices being updated to Oreo from an earlier version. The "seamless updates" system introduced in Android 7.0 was also modified to download update files directly to the system partition, rather than requiring them to be downloaded to the user partition first. This reduces storage space requirements for system updates.
Android Oreo introduces a new automatic repair system known as "Rescue Party"; if the operating system detects that core system components are persistently crashing during startup, it will automatically perform a series of escalating repair steps. If all automatic repair steps are exhausted, the device will reboot into recovery mode and offer to perform a factory reset.
The Android 8.1 update also introduces a neural network API, which is designed to "[provide] apps with hardware acceleration for on-device machine learning operations." This API is designed for use with machine learning platforms such as TensorFlow Lite, and specialized co-processors such as the Pixel Visual Core (featured in Google's Pixel 2 smartphones, but dormant until 8.1 is installed), but it also provides a CPU fallback mode.
Android Go
A tailored distribution for low-end devices known as Android Go was unveiled for Oreo; it is intended for devices with 1 GB of RAM or less. This mode has platform optimizations designed to reduce mobile data usage (including enabling Data Saver mode by default), and a special suite of Google Mobile Services designed to be less resource- and bandwidth-intensive. The Google Play Store would also highlight lightweight apps suited for these devices. The operating system's interface is also modified, with the quick settings panel providing greater prominence to information regarding the battery, mobile data limit, and available storage, the recent apps menu using a modified layout and being limited to four apps (in order to reduce RAM consumption), and an API for allowing mobile carriers to implement data tracking and top-ups within the Android settings menu. Google Play Services was also modularized to reduce its memory footprint.
Android Go was made available to OEMs for Android 8.1.
Security
Android Oreo re-brands automatic scanning of Google Play Store and sideloaded apps as "Google Play Protect", and gives the feature, as well as Find My Device (formerly Android Device Manager), higher prominence in the Security menu of the Settings app. As opposed to a single, system-wide setting for enabling the installation of apps from sources outside of the Google Play Store, this function is now implemented as a permission that can be granted to individual apps (i.e. clients for third-party app repositories such as Amazon Appstore and F-Droid). Verified Boot now includes a "Rollback Protection" feature, which enforces a restriction on rolling back the device to a previous version of Android, aimed at preventing a potential thief from bypassing security measures by installing a previous version of the operating system that doesn't have them in place.
See also
Android version history
iOS 11
macOS High Sierra
Windows 10
Windows 10 Mobile
References
External links
2017 software
Android (operating system)
Oreo |
27648658 | https://en.wikipedia.org/wiki/World%20Class%20IT | World Class IT | World Class IT: Why Businesses Succeed When IT Triumphs is a 2009 IT management book by Peter A. High that aims to provide a framework by which CIOs and other executives can promote IT within a business. High outlines five principles that align IT with business strategy and allow companies to monitor and improve IT's performance. The book highlights a 2000s trend that views IT as a digital nervous system that delivers corporate thinking to business units, partners and customers. Since the 2009 publication, the book has also been published in Mandarin and Korean editions.
Five principles of World Class IT
People form the foundation of an organization. Without the right people doing the right jobs at the right time, it is difficult to achieve excellent performance.
Infrastructure distinguishes between a reactive organization and a proactive one. If software, hardware, networks, and so on are not performing consistently, the IT organization will become lodged in reactive mode. If the infrastructure works reliably, then a greater percentage of the organization can think about the future.
Project and portfolio management allows new capabilities to emerge within the company. It is important to ensure that the portfolio collectively supports the goals of the business and that projects are delivered on time and on budget.
IT and business partnerships are vital. It is the IT executive's role to ensure that different groups within IT function as a team, communicating efficiently and effectively. It is equally important that IT develop partnering relationships with executive management, lines of business and key business functions to ensure ownership of and success for IT initiatives.
External partnerships are important as outsourcing becomes more common. By contributing to the discussion about business strategy, IT is in a strong position to determine which aspects of IT are best handled by external partners. Further, IT must be adept at managing those relationships to be sure the company gains the expected value from its outsourcing activity.
Relationship between IT and the business
Using a set of metrics and models, World Class IT crystallizes a strong trend that has emerged across industries in the past decade: the integration of IT and corporate business strategy. CIO Digest recognizes this pattern as key to the long-term organizational growth and development of a company.
In a 2009 IT study, Deloitte finds that IT departments are consistently under pressure to deliver business value in the face of increasingly tight budgets. IT departments are now expected to synchronize and connect various dimensions of a business (infrastructure, customer service, and project management), yet there is debate within the industry over how best to align information technology with corporate governance. Since corporations spend large portions of their operating budgets on IT, this debate holds a prominent position in business and information technology theory.
CIO Insight notes that the increasing depth and width of IT-business relationships has led to the emergence of IT executives from the corporate structure. Whereas IT executives typically rose from within their technology departments, the need for business-minded CIOs and CTOs has created a demand for leaders versed in corporate strategy. Business theory suggests that this trend will continue as IT moves into the realm of strategic planning.
Forum on World Class IT podcast
IT professionals are increasingly looking to congregate around social media as a forum to raise awareness of the importance of IT-business strategy. Brian Blanchard of CIO Magazine notes that CIOs turn to social media not only to share their own innovative ideas, but also to follow industry trends and form valuable connections.
In an effort to expand the discussion surrounding IT-business relationships, prominent IT professionals participate in a series of informational podcast interviews based on the five principles outlined in World Class IT. The discussions are conducted by Peter High and the over 400 interviews are made available on iTunes and other forums such as Technovation with Peter High (formerly The Forum on World Class IT).
Reviews
IT Business Edge published a review and excerpt of the book.
Related work by the author
The author Peter High has followed up the publishing of his book with numerous pieces on related World Class IT topics. Some of those include the following titles:
"How CIOs at Microsoft and Cisco Realigned IT For A New Era"
"CIOs Can Be Chief Innovation Officers, Too"
"Inside Qualcomm's Integrated IT Infrastructure"
"Ameristar Casino CIO Sheleen Quish featured in 'Should the CIO Run HR, Too?'"
"McKesson's Randy Spratt: Where IT Nirvana Meets Business Nirvana"
"Great American Insurance CIO Piyush Singh: Taking Calculated Risks"
"The New Normal: Proceed With Caution"
"The New CIO's First Steps"
"ADP: IT Innovation in Hard Times Pays Off"
"Flextronics: Keeping IT Running on 'Just Enough'"
"World Class IT: The best CIOs are full partners in driving and executing business strategy"
See also
Digital nervous system
Information technology
Project portfolio management
References
Sources
External links
What it Means to Run IT Like a Business - CIO Magazine
Prioritizing IT Projects Based on Business Strategy - CIO.com
Business books
2009 non-fiction books |
10076115 | https://en.wikipedia.org/wiki/GPM%20%28software%29 | GPM (software) | GPM ("General Purpose Mouse") software provides support for mouse devices in Linux virtual consoles. It is included in most Linux distributions.
ncurses supports GPM; many applications use ncurses mouse-support.
Other applications that work with GPM include Midnight Commander, Emacs, and JED.
See also
moused, a mouse-driver for FreeBSD
Sources
External links
Free system software
Free software programmed in C
Linux software |
2070675 | https://en.wikipedia.org/wiki/Mixminion | Mixminion | Mixminion is the standard implementation of the Type III anonymous remailer protocol. Mixminion can send and receive anonymous e-mail.
Mixminion uses a mix network architecture to provide strong anonymity, and prevent eavesdroppers and other attackers from linking senders and recipients. Volunteers run servers (called "mixes") that receive messages, decrypt them, re-order them, and re-transmit them toward their eventual destination. Every e-mail passes through several mixes so that no single mix can link message senders with recipients.
To send an anonymous message, mixminion breaks it into uniform-sized chunks (also called "packets"), pads the packets to a uniform size, and chooses a path through the mix network for each packet. The software encrypts every packet with the public keys for each server in its path, one by one. When it is time to transmit a packet, mixminion sends it to the first mix in the path. The first mix decrypts the packet, learns which mix will receive the packet, and relays it. Eventually, the packet arrives at a final (or "exit") mix, which sends it to the chosen recipient. Because no mix sees any more of the path besides the immediately adjacent mixes, they cannot link senders to recipients.
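The layered wrapping described above can be sketched as a toy in Python. This is not Mixminion's actual Type III packet format: the real protocol uses public-key cryptography, fixed packet sizes and a carefully designed header, whereas the hash-based "cipher", the 16-byte next-hop field and the mix names below are invented purely for illustration of the onion-layering idea.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    (Illustration only; it is not cryptographically sound.)"""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def wrap(message: bytes, path: list[tuple[str, bytes]]) -> bytes:
    """Encrypt a packet in layers, innermost layer first, so that each
    mix strips exactly one layer and learns only the next hop."""
    packet = message
    for next_hop, key in reversed(path):
        header = next_hop.encode().ljust(16, b"\x00")  # toy next-hop field
        packet = keystream_xor(key, header + packet)
    return packet

def unwrap(packet: bytes, key: bytes) -> tuple[str, bytes]:
    """One mix's step: decrypt its layer, read the next hop, relay the rest."""
    plain = keystream_xor(key, packet)
    next_hop = plain[:16].rstrip(b"\x00").decode()
    return next_hop, plain[16:]

# Each path entry pairs the hop this mix should forward to with that mix's key.
path = [("mix2", b"key-of-mix-1"), ("mix3", b"key-of-mix-2"), ("exit", b"key-of-mix-3")]
packet = wrap(b"hello, anonymous world", path)
for _, key in path:            # each mix peels one layer in turn
    hop, packet = unwrap(packet, key)
print(hop, packet)  # exit b'hello, anonymous world'
```

Because each unwrap step reveals only the next hop, no single mix in the chain can link the original sender to the final recipient, which is the property the mix network is built around.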
Mixminion supports Single-Use Reply Blocks (or SURBs) to allow anonymous recipients. A SURB encodes a half-path to a recipient, so that each mix in the sequence can unwrap a single layer of the path, and encrypt the message for the recipient. When the message reaches the recipient, the recipient can decode the message and learn which SURB was used to send it; the sender does not know which recipient has received the anonymous message.
The most current version of Mixminion Message Sender is 1.2.7 and was released on 11 February 2009.
On 2 September 2011, a news announcement stated that the source had been uploaded to GitHub.
See also
Anonymity
Anonymous P2P
Anonymous remailer
Cypherpunk anonymous remailer (Type I)
Mixmaster anonymous remailer (Type II)
Onion routing
Tor (anonymity network)
Pseudonymous remailer (a.k.a. nym servers)
Penet remailer
Data privacy
Traffic analysis
References
External links
Windows GUI Frontend for Mixminion
Apple OSX, Macport File for Mixminion
Network stats
Noreply number of mixminion nodes
Internet Protocol based network software
Anonymity networks
Routing
Network architecture
Mix networks |
81943 | https://en.wikipedia.org/wiki/Protesilaus | Protesilaus | In Greek mythology, Protesilaus (; Ancient Greek: Πρωτεσίλᾱος Prōtesilāos) was a hero in the Iliad who was venerated at cult sites in Thessaly and Thrace. Protesilaus was the son of Iphiclus, a "lord of many sheep"; as grandson of the eponymous Phylacos, he was the leader of the Phylaceans. Hyginus surmised that he was originally known as Iolaus—not to be confused with Iolaus, the nephew of Heracles—but was referred to as "Protesilaus" after being the first (, protos) to leap ashore at Troy, and thus the first to die in the war.
Trojan War
Protesilaus was one of the suitors of Helen. He brought forty black ships with him to Troy, drawing his men from "flowering" Pyrasus, coastal Antron and Pteleus, "deep in grass", in addition to his native Phylace. Protesilaus was the first to land: "the first man who dared to leap ashore when the Greek fleet touched the Troad", Pausanias recalled, quoting the author of the epic called the Cypria. An oracle had prophesied that the first Greek to walk on the land after stepping off a ship in the Trojan War would be the first to die, and so, after killing four men, he was himself slain by Hector. Alternate sources have him slain by either Aeneas, Euphorbus, Achates, or Cycnus. After Protesilaus' death, his brother, Podarces, joined the war in his place. The gods had pity on his widow, Laodamia, daughter of Acastus, and brought him up from Hades to see her. She was at first overjoyed, thinking he had returned from Troy, but after the gods returned him to the underworld, she found the loss unbearable. She had a bronze statue of her late husband constructed, and devoted herself to it. After her worried father had witnessed her behavior, he had it destroyed; however, Laodamia jumped into the fire with it. Another source claims his wife was Polydora, daughter of Meleager.
According to legend, the Nymphs planted elms on the tomb, in the Thracian Chersonese, of "great-hearted Protesilaus" («μεγάθυμου Πρωτεσιλάου»), elms that grew to be the tallest in the known world; but when their topmost branches saw far off the ruins of Troy, they immediately withered, so great still was the bitterness of the hero buried below. The story is the subject of a poem by Antiphilus of Byzantium (1st century A.D.) in the Palatine Anthology:
Cult of Protesilaus
Only two sanctuaries to Protesilaus are attested. There was a shrine of Protesilaus at Phylace, his home in Thessaly, where his widow was left lacerating her cheeks in mourning him, and games were organised there in his honour, Pindar noted. The tomb of Protesilaus at Elaeus in the Thracian Chersonese is documented in the 5th century BCE, when, during the Persian War, votive treasure deposited at his tomb was plundered by the satrap Artayctes, under permission from Xerxes. The Greeks later captured and executed Artayctes, returning the treasure. The tomb was mentioned again when Alexander the Great arrived at Elaeus on his campaign against the Persian Empire. He offered a sacrifice on the tomb, hoping to avoid the fate of Protesilaus when he arrived in Asia. Like Protesilaus before him, Alexander was the first to set foot on Asian soil during his campaign. Philostratus, writing of this temple in the early 3rd century CE, speaks of a cult statue of Protesilaus "standing on a base which was shaped like the prow of a boat;" Gisela Richter noted coins of Elaeus from the time of Commodus that show on their reverses Protesilaus on the prow of a ship, in helmet, cuirass and short chiton. Strabo also mentions the sanctuary.
A founder-cult of Protesilaus at Scione, in Pallene, Chalcidice, was given an etiology at variance with the epic tradition by the Augustan-era Greek grammarian and mythographer Conon. In it, Conon asserts that Protesilaus survived the Trojan War and was returning with Priam's sister Aethilla as his captive. When the ships put ashore for water on the coast of Pallene, between Scione and Mende, Aethilla persuaded the other Trojan women to burn the ships, forcing Protesilaus to remain and found the city of Scione. A rare tetradrachm of Scione ca. 480 BCE acquired by the British Museum depicts Protesilaus, identified by the retrograde legend PROTESLAS.
Protesilaus, speaking from beyond the grave, is the oracular source of the corrected eye-witness version of the actions of heroes at Troy, related by a "vine-dresser" to a Phoenician merchant in the framing device that gives an air of authenticity to the narratives of Philostratus' Heroicus, a late literary representation of Greek hero-cult traditions that developed independently of the epic tradition.
Cultural depictions
Among the very few representations of Protesilaus, a sculpture by Deinomenes is just a passing mention in Pliny's Natural History; the outstanding surviving examples are two Roman copies of a lost mid-fifth-century Greek bronze original that represented Protesilaus at his defining moment, one a torso in the British Museum, the other at the Metropolitan Museum of Art. The Metropolitan's sculpture, a heroically nude helmeted warrior standing on a forward-slanting base, looking down and slightly to his left, with his right arm raised and prepared to strike, would not be identifiable save by the comparison Gisela Richter made with a torso of the same model and its associated slanting base, schematically carved as the prow of a ship encircled by waves: Protesilaus about to leap ashore.
If Euripides' tragedy, Protesilaos, had survived, his name would be more familiar today.
The poem in the Palatine Anthology (VII.141) on Protesilaus by Antiphilus of Byzantium in turn inspired F. L. Lucas's poem "The Elms of Protesilaus" (1927).
Works employing this myth
"Dialogues of the Dead", by Lucian,
Protesilaos, a lost tragedy of Euripides of which only fragments survive
"Protesilaodamia", a lost work of Laevius
"carmen 61", "carmen 68", by Catullus
"Elegies, to Cynthia", by Propertius
"Heroicus", by Philostratus
"The Epistles", 13, by Ovid
"Laodamia", by William Wordsworth
" Veeraanganaa", by Michael Madhusudan Dutt
"Protesilas i Laodamia", by Stanisław Wyspiański
"Protesilaos, the Tragedy" (ΠΡΩΤΕΣΙΛΑΟΣ, Η ΤΡΑΓΩΔΙΑ), by Konstantinos Ath. Oikonomou, Larissa, 2010. [www.scribd.com/oikonomoukon]
References
External links
"Laodamia," poem by William Wordsworth.
"Laodamia to Protesilaus," poem by Jared Carter.
Achaean Leaders
Greek mythological heroes
Thessalian characters in Greek mythology
Characters in Greek mythology |
570836 | https://en.wikipedia.org/wiki/GraphicConverter | GraphicConverter | GraphicConverter is computer software that displays and edits raster graphics files. It also converts files between different formats. For example, one can convert a GIF file to a JPEG file.
The program has a long history of supporting the Apple Macintosh platform, and at times it has been bundled with new Mac purchases.
GraphicConverter can import about 200 file types and export 80. Images can also be retouched, edited, and transformed using tools, effects and filters. The software supports most Adobe Photoshop plug-ins, as well as TWAIN image acquisition. The application features a batch processor, slideshow mode, image preview browser, and access to metadata comments (such as XMP, Exif, and IPTC).
GraphicConverter is shareware that runs on both the classic Mac OS and macOS and is maintained by Germany-based LemkeSoft. GraphicConverter is available in a dozen languages including English, French, German, Czech and Spanish.
See also
Comparison of raster graphics editors
References
External links
Lemke Software
List of supported file formats
Classic Mac OS software
MacOS graphics software
Raster graphics editors |
21046285 | https://en.wikipedia.org/wiki/2009%20Arizona%20Wildcats%20football%20team | 2009 Arizona Wildcats football team | The 2009 Arizona Wildcats football team represented the University of Arizona in the 2009 NCAA Division I FBS college football season. The Wildcats, led by sixth-year head coach Mike Stoops, played their home games at Arizona Stadium.
Arizona hosted Central Michigan of the Mid-American Conference to begin the season on September 5, 2009 (with a 19–6 win), and ended the regular season with a 21–17 win over perennial conference power, then-ranked #20 Southern California on December 5, 2009; this was the first victory over USC by the Wildcats in the Mike Stoops era.
In addition to the slate of nine conference games, four at home and five on the road, the Wildcats traveled to Iowa City, Iowa and lost to the Iowa Hawkeyes of the Big Ten (who eventually finished with a #10 AP Poll ranking and an invitation to the Orange Bowl), and hosted in-state sister school Northern Arizona of the Big Sky Conference the preceding week.
After posting an 8–4 regular season record (6–3 in the Pac-10, good for a second-place tie in the conference with Oregon State and Stanford), the Wildcats were invited to appear in the 2009 Holiday Bowl in San Diego, the second consecutive postseason bowl game for the Arizona football program under Stoops. The Wildcats were shut out 33–0 by Nebraska.
The Wildcats finished the regular season with an Associated Press poll ranking of #22, their first national ranking since the 2000 season.
Schedule
Schedule source: 2009 Arizona Wildcats football schedule and Arizona Official Athletic Site.
Rankings
Game summaries
Central Michigan
at Arizona Stadium, Tucson, Arizona
The Wildcats’ season began following a 43-minute lightning delay, the second straight year the Arizona season opener was delayed by lightning.
Central Michigan was flagged for a false start on its first play from scrimmage. After a short completion and a trap-play run, CMU quarterback Dan LeFevour was intercepted by LB Vuna Tuihalamaka at the Chippewas’ 31, but the Wildcats, on their third play from scrimmage (in the person of WR Bug Wright), coughed up the ball at CMU's 16. The UA forced Central Michigan to go three-and-out on its second drive, then drove to score.
The 'Cats forced a second turnover just before the end of the quarter. Linebacker C.J. Parish drilled CMU's Antonio Brown on a punt return, forcing the ball loose. UA longsnapper Jason Bertoni, who started his career at Central Michigan (before leaving the Chippewa program in 2007 for personal reasons), recovered at CMU's 28.
The Wildcats gained seven yards on three plays, setting up a 37-yard Alex Zendejas field goal. Zendejas hit a total of four field goals in his first college start.
Arizona's first touchdown of the season came midway through the second quarter.
Freshman QB Matt Scott led the Wildcats on a nine-play, 63-yard drive. RB Nic Grigsby capped it with a three-yard run up the middle. Zendejas’ PAT made it a 13-point lead.
Central Michigan was hampered by the quickness of the Arizona defense and didn’t score until there were 12 minutes 21 seconds remaining in the game; even then, the team failed to convert on a two-point conversion that would have made it a one-possession game.
Scott completed 10 of 17 first-half passes for 110 yards. Sophomore QB Nick Foles did not play a snap in the entire game. TE Rob Gronkowski, a key offensive weapon for Arizona in 2008 and speculated to be a future top NFL draft pick, did not play in the opener because of a back injury.
Northern Arizona
at Arizona Stadium, Tucson, Arizona
The Wildcats were effective, if not totally crisp, in a 34–17 win over in-state rival Northern Arizona (NAU), an FCS (formerly Division I-AA) member of the Big Sky Conference.
Junior RB Nicolas Grigsby rushed for two touchdowns; Grigsby's two scores — a 25-yard run and a 30-yard run — helped pace the Wildcats through a sometimes-choppy first half in front of 50,623 at Arizona Stadium.
He was key to the Wildcats' final drive of the second quarter, an 18-play, 99-yard march. WR Terrell Turner gave the Wildcats a 21–10 lead with a 2-yard touchdown catch from starting QB Matt Scott, a play after WR Delashaun Dean hauled in a 23-yard grab. Dean was drilled by two NAU players at the end of the play, and had to be helped off the field. Scott then found Turner on a short pass to pad the Wildcats' lead.
Arizona added two scores in the second half to put the Lumberjacks away. Sophomore RB Keola Antolin punched in a 1-yard score on the first play following Grigsby's 94-yard run to give the Wildcats a 27–10 lead with 10 minutes left in the quarter.
Backup quarterback Nick Foles hit Juron Criner for a 5-yard touchdown pass on the second play of the fourth quarter to make it 34–10.
The Wildcats began emptying their bench midway through the third quarter. Foles entered the game with 7:15 remaining in the quarter, and drove the team about 40 yards before fumbling a snap and turning the ball over. CB Trevin Wade's second interception of the third quarter gave Arizona the ball back.
Arizona has not lost to NAU since 1937. All three of Arizona's state universities (Arizona, NAU and Arizona State) are obligated under state law to play one another in athletic contests each year.
Iowa
at Kinnick Stadium, Iowa City, Iowa
The Hawkeyes scored on the opening drive, with a 2-yard touchdown run by Adam Robinson. But Arizona tied the score at 7 after Trevin Wade returned a Ricky Stanzi interception 38 yards into the end zone.
The Wildcats struggled with their few offensive chances.
QB Matt Scott missed a handful of open receivers in the third quarter, and — on a play that could have changed the momentum of the game — WR Delashaun Dean dropped what would have been a 50-yard gain. Dean appeared to trap the ball between his leg and the ground; a video replay rule confirmed that it was an incomplete pass.
Iowa's defense would again prove to be the difference-maker in this game, not allowing a touchdown until 1:53 remained in the game, with the Hawkeyes well ahead. Iowa safety Tyler Sash, with a grab of a Matt Scott pass intended for Terrell Turner, also netted his seventh interception in five games (dating back to the previous year).
Nick Foles came in at QB in the fourth quarter and went 6 for 11 with a 10-yard touchdown pass to Juron Criner, but, as noted above, the damage was already done.
The loss dropped Arizona to 0–7 against Big Ten teams in the last decade; the Wildcats’ last nonconference road win of any kind came in 2001, when John Mackovic's Wildcats defeated San Diego State.
Oregon State
at Reser Stadium, Corvallis, Oregon
Running behind a third-string tailback and second-team left tackle, right guard and wide receiver (as well as a new starting QB, sophomore Nick Foles, the transfer from Michigan State) the Wildcats defeated Oregon State 35–32 at Reser Stadium in Corvallis, in the Pac-10 opener for Arizona. The Wildcats started the game without a half-dozen starters, including tight end Rob Gronkowski (who would end up being out the rest of the 2009 season due to back surgery). Tailbacks Nicolas Grigsby and Keola Antolin were out of the game by halftime with shoulder and leg injuries, respectively (Grigsby would not fully recover until the end of the season).
That left the Wildcats with third-stringer Greg Nwoko at running back. The redshirt freshman from the Austin, Texas area delivered: His 52-yard catch-and-run on a screen pass set Arizona up for its second touchdown of the quarter, a 3-yard pass from quarterback Nick Foles to receiver Juron Criner. Nwoko rushed nine times for 44 yards and a touchdown in the first extended action of his career.
Arizona took the lead early in the third quarter, when Foles — making his first college start — dove in on a sneak from the 1-yard line. Foles led his team on a 15-play, 71-yard scoring drive in the first quarter, connecting with WR Delashaun Dean for a 2-yard touchdown pass.
The Beavers scored their first touchdown with 2 minutes remaining in the quarter, when Damola Adeniji caught a tipped pass from QB Sean Canfield for an 11-yard score. James Rodgers gave OSU a 14–7 lead with a 2-yard run midway through the second quarter. Arizona tied the game on Nwoko's touchdown, but the Beavers re-took the lead on the final play before halftime. Justin Kahut's 21-yard field goal made it 17–14.
Foles appeared to put the game away when he found WR Terrell Turner for a 13-yard touchdown pass with eight minutes remaining, but the Beavers — resilient and persistent — proved tough to put away. Canfield found Aaron Nichols for a 13-yard score with 4:09 remaining, cutting Arizona's lead to 3.
CB Devin Ross intercepted Canfield with 1 minute 33 seconds remaining.
Even after surrendering a safety with 25 seconds left, Oregon State recovered an onside kick and had the ball, down five points, in their zone. This time, Arizona made the plays. Sacks by Earl Mitchell and Ricky Elmore ended the game.
Grigsby left the game after just one rushing attempt with what was discovered to be a sprained AC joint in his right shoulder. He was never fully healthy the rest of the season.
Washington
at Husky Stadium, Seattle, Washington
In a controversial play, Washington LB Mason Foster intercepted a deflected pass off the foot of Arizona's Delashaun Dean (who insisted the pass hit the ground) and returned the carom 37 yards for a touchdown with 2:37 left, and the Huskies rallied with two touchdowns in the final three minutes to beat the Wildcats 36–33 in Seattle.
On the call, Dean commented: "I felt it graze my foot, but the way the ball bounced up, it would have hit my foot a lot harder. I figured it had to hit the ground, then after seeing the pictures you could actually see the black beads from the turf jump up when the ball hit the ground. It's pretty obvious when you look at it. I don't know how it got missed."
Led by quarterback Jake Locker, Washington overcame a 12-point deficit in the final 3 minutes to hand Arizona the loss. Arizona lost the game despite a torrid third quarter. QB Nick Foles connected with David Roberts on a 9-yard touchdown pass on the Wildcats’ first possession of the second half. The Wildcats got the ball back — and scored again — following a strange play.
Washington punter Will Mahan muffed a snap on fourth down deep in the Huskies' zone on their first possession of the second half. He recovered the muff, took a few steps and kicked a ball that rolled 12 yards behind the original line of scrimmage. Mahan was flagged for an illegal kick, and the Huskies were penalized half the distance to the goal line – 9 yards.
Arizona turned the good fortune into a 23-yard Alex Zendejas field goal. The Wildcats capped their 17-point quarter with an eight-play, 36-yard drive; Foles delivered the crushing blow on third-and-goal from the Huskies’ 1, faking a hand-off up the middle and bootlegging into the end zone for a touchdown.
Washington cut Arizona's lead to six points just before the quarter's end, when Devin Aguilar caught his second touchdown pass of the night, a 29-yarder from Jake Locker. But Zendejas nailed a 29-yarder on the first possession of the fourth quarter.
The UA was leading 33–21 when Washington took over with the ball with 4:16 remaining in the contest. But the Wildcats' LB Vuna Tuihalamaka was flagged for a late hit on an incomplete pass, getting Washington past midfield. It took quarterback Jake Locker six plays to travel 59 yards; his 25-yard touchdown pass to tight end Kavario Middleton cut the Wildcats' lead to 33–28 with 2:55 left.
The Huskies then chose to kick the ball deep, figuring that — with two timeouts left — they could try to force Arizona to punt. The Wildcats instead went for the kill. On first down, the Wildcats called what Stoops dubbed "a run-pass" option play, meaning Foles could check down to a run or choose to throw. He threw.
When his first few options weren't there, the quarterback attempted a short screen route to Dean. The ball glanced off the side of his right shoe and into Mason Foster's hands. The Huskies' linebacker ran in untouched. The defeat of Arizona came as the Husky football program, winless during the 2008 season, was trying to rebuild under their new head coach, former USC offensive coordinator Steve Sarkisian (for their part, the Huskies finished the season 5–7 and 4–5 in Pac-10 play).
Following the game, head coach Mike Stoops said Arizona coaches were to blame for the poor call — even though Foles had been running the play all night with great success. He completed 39 of 53 passes for 384 yards and a touchdown. Offensive coordinator Sonny Dykes, however, defended the call. Arizona settled for four field goals and continued to struggle in the red zone. The Wildcats limited Locker to just 140 passing yards, but let him drive at will when one stop would have ended the game.
Stanford
at Arizona Stadium, Tucson
In a game that featured more than a thousand offensive yards, the Wildcats rallied back late to defeat the Cardinal. The quarterbacks were evenly matched: Stanford quarterback Andrew Luck threw for 423 yards and three touchdowns, while Arizona's Nick Foles also tossed three touchdown passes and had a total of 415 yards. Stanford's Toby Gerhart had 123 yards and two touchdowns. Arizona managed a total of just 138 rush yards, but 57 of those came on Nic Grigsby's go-ahead touchdown with under three minutes left. Stanford drove to the Arizona 17 with seconds to play, but a fourth-down pass to Chris Owusu was batted away and the Wildcats escaped with a home victory.
UCLA
at Arizona Stadium, Tucson
In the first quarter, Arizona's first drive ended when UCLA safety Rahim Moore intercepted a Nick Foles pass. But in their second drive, Foles passed to Juron Criner for a 41-yard touchdown to give the Wildcats a lead. After Arizona recovered a Bruin fumble, Grigsby rushed into the end zone for a 6-yard touchdown, extra point blocked.
Both Moore and Jerzy Siewierski intercepted a Wildcats pass in the second quarter. Kai Forbath kicked a 53-yard field goal to put UCLA on the board before the half. UCLA's Datone Jones recovered a Foles fumble and Forbath kicked a field goal to begin the third quarter. Kevin Craft came in to replace Kevin Prince in UCLA's second possession, but Christian Ramirez fumbled the ball to Arizona, which led to the Wildcats' third touchdown, a Nick Foles 25-yard pass to Criner. Tony Dye recovered a Wildcats fumble and ran in for a 28-yard UCLA touchdown. Late in the third quarter, Nick Booth rushed for 6 yards for a score to give Arizona a 27–13 lead.
In the fourth quarter, the Bruins were unable to do anything and lost their fourth game in a row.
Washington State
at Arizona Stadium, Tucson
California
at California Memorial Stadium, Berkeley
In California's final home game of the season, the Golden Bears started backup RB Shane Vereen in place of their star RB Jahvid Best, who was still recovering from a concussion sustained the previous week. The then-#18-ranked Wildcats were also missing their starting RB, Nic Grigsby, with ongoing shoulder troubles.
Cal's Giorgio Tavecchio hit four field goals, including a 22-yard go-ahead kick with 4 minutes 46 seconds remaining, to boost the Golden Bears past a desperate UA team.
Tavecchio contributed in other ways, too. After hitting the field goal that put Cal ahead for good, the kicker tackled Arizona's Travis Cobb on a kickoff return. Bolstered by Cobb's return, the Wildcats drove deep into Cal territory.
Arizona faced third-and-three from the Cal 25 when QB Nick Foles dropped back and attempted a short pass. The ball deflected off a Cal defender and back into the hands of Foles, who rolled right and threw it forward again — this time for a completion to WR Delashaun Dean. Foles was flagged for an illegal forward pass, and Arizona was penalized 5 yards from the spot of the penalty, 9 yards behind the line of scrimmage. The Wildcats' field goal unit stood on the sideline when, on fourth-and-17 from the Golden Bears' 39, Foles attempted a desperate pass to David Roberts that was broken up.
Foles' mistake was emblematic of the 'Cats' struggles. Arizona gained just 274 yards, 174 below its season average. Foles completed 25 of 41 passes for 201 yards and a touchdown, but was intercepted once and sacked three times; he had been sacked only four times all season before the game. The Wildcats rushed for just 73 yards as a team, a 2.6 yards-per-carry average that rarely equates to wins. And yet Arizona had its chances.
Cal scored a late touchdown but botched a PAT attempt, leaving Arizona a chance to drive for a game-tying touchdown and two-point conversion. The Wildcats had hope, but struggled to make the simplest of plays. Their last drive included an incomplete pass, a holding call (which was declined) and two sacks.
Things weren't much better on defense.
California QB Kevin Riley threw for 181 yards and a score, with two interceptions. Vereen had 30 carries for 159 yards, both career highs, including one score. Keola Antolin, who had rushed for 149 yards against the Bears in 2008, was held to 78 yards and a score. This was Cal's fourth straight victory over the Wildcats in Berkeley, and moved them up to #25 in the BCS rankings (they would go on to an 8–5 (5–4 Pac-10) regular season and lose 37–27 to Utah in the 2009 Poinsettia Bowl).
Arizona, on the other hand, seemed to have its relatively high hopes for its first-ever appearance in the Rose Bowl placed in jeopardy with the frustrating road loss.
Oregon
at Arizona Stadium, Tucson
Oregon uniform combination: green helmet, white jersey with silver numbers, black pants
In a Pac-10 conference showdown that prompted College GameDay to pay its first visit to the University of Arizona campus, the outcome of the game would ultimately send the winning team to the Rose Bowl Game. The fans and the Arizona Wildcats players all wore red with the intent to "Red Out Oregon".
Oregon QB Jeremiah Masoli's touchdown plunge on third-and-goal in the second overtime period gave the Ducks a come-from-behind, 44–41 victory over the U of A before 57,863 fans at Arizona Stadium and a national TV audience (ESPN on ABC). The Ducks overcame a 10-point fourth-quarter deficit to force overtime, then scored touchdowns on both their possessions. Hundreds of red-clad Arizona students from the "Zona Zoo" student section were on the sidelines, preparing for a victory party, when Masoli hit tight end Ed Dickson for an 8-yard touchdown with six seconds left in regulation to tie the game.
The teams traded scores in the first overtime; Arizona started the second OT with the ball and hit a field goal only to watch the Ducks drive 25 yards in four plays for the win.
Masoli was brilliant in a 15-play, 80-yard drive at the end of regulation, converting on two fourth-down plays to get deep into Arizona territory. With Oregon trailing 31–24, Masoli found Dickson in the middle of the end zone. Kicker Morgan Flint's PAT tied it at 31, forcing the first overtime game at Arizona Stadium since 2003.
Oregon started overtime with the ball, and drove quickly. LaMichael James' 21-yard run moved the Ducks to Arizona's 4. On third-and-goal, Masoli found Jeff Maehl for a 4-yard touchdown. The Wildcats tied the game on their ensuing possession when QB Nick Foles hit Juron Criner for a 3-yard score, the receiver's third touchdown of the night. Arizona HC Mike Stoops instantly sent place-kicker Alex Zendejas out to tie it, though a two-point conversion could have won the game.
The Wildcats had the ball to start the second overtime but gained just 1 yard on three plays. Zendejas gave the UA a 41–38 lead with a 41-yard field goal. It didn't last. Masoli found Dickson on a 23-yard pass on Oregon's first play of the second overtime. The Ducks gained a yard on their next two plays, setting up third-and-goal from the 1. Masoli faked on a lead-option play and ran in the score.
The result left Arizona reeling, especially since the Wildcats appeared to have the game in hand in the fourth quarter. Arizona led by a touchdown late when Foles attempted to put the game away with a pass into the end zone; WR Terrell Turner tipped the ball, and Oregon's Talmadge Jackson III intercepted it and took a touchback.
Wide receiver David Douglas fumbled at the Ducks' 2 in the first quarter. CB Trevin Wade dropped a sure interception in the second half, and Zendejas missed a short field goal that would have put the UA up by 6.
The Wildcats' defense, so solid all night, forced two Masoli fumbles but couldn't recover them.
That only made it worse. "We let a lot of good things slip away", safety Cam Nelson said. "I don't really know what else to say."
In winning, Oregon eliminated Arizona from Rose Bowl contention. Masoli finished with 345 yards of total offense and six touchdowns.
Arizona State
Alex Zendejas kicked a 32-yard field goal as time expired, and the Wildcats defeated their primary in-state rival, Arizona State (ASU), 20–17 in Tempe to retain the Territorial Cup.
Arizona won after ASU's Kyle Williams – who had caught the tying touchdown pass minutes earlier – muffed a punt to give the Wildcats the ball at the ASU 22-yard line. Keola Antolin scored on a 67-yard run and Orlando Vargas blocked a punt and returned it for a touchdown for the Wildcats, who have beaten the Sun Devils in back-to-back seasons for the first time since 1997–98.
Dimitri Nance ran for 115 yards for the Sun Devils, who lost their last six games, matching the school record for consecutive losses in a season.
Down 14–0 at halftime, the Sun Devils rallied and appeared ready to force Arizona into their second overtime situation of the season with 2:02 remaining. On 4th-and-12 from Arizona's 14, ASU QB Danny Sullivan rolled out of the pocket and fired a strike to a diving Williams in the back of the end zone. It was their second scoring connection of the quarter, following a 44-yarder with 11:54 to go.
Arizona's defense was led by Ricky Elmore, who had two sacks, five tackles and one forced fumble.
Then came a critical ASU error. Williams muffed a Keenyn Crier punt, and the Wildcats' Mike Turner recovered at the Sun Devils' 22-yard line. Four plays later, Zendejas, a Phoenix-area native (some of his Glendale, Arizona family were in attendance), trotted on and nailed the game-winner.
The game ended with a scuffle at midfield. ASU linebacker Vontaze Burfict, a true freshman at 6 feet 3 inches and 245 pounds, took a swing at UA backup longsnapper Ricky Wolder but made no contact. Players from both sides had to be separated so they could clear the field.
This was Mike Stoops' third win over Arizona State, and arguably the most important one of his career.
USC
In the season finale, WR Juron Criner dived into the end zone with a 36-yard touchdown catch from Nick Foles with 3:14 to play, and Arizona secured a 21–17 victory over USC.
Foles passed for 239 yards and two touchdowns and ran for another score for the Wildcats, who finally beat USC for the first time in coach Mike Stoops' tenure by scoring the final touchdown in a defense-dominated game. Foles went 22 of 40 but made several clutch throws, including an early touchdown pass to WR Delashaun Dean.
The Wildcats ended a seven-game losing streak against USC and earned their first win over the Trojans during the Pete Carroll era. The Trojans, who came into the contest ranked #20 in the Associated Press poll, dropped out of the national rankings with the loss and fell to #24 in the BCS standings.
Arizona is also the first non-Stanford team in the Pac-10 to defeat USC in the Coliseum during the Carroll era (Stanford defeated Carroll's teams in the Coliseum in 2001, 2007, and 2009 while the Big 12's Kansas State defeated the Trojans in the Coliseum in 2001—the only other victory by a team over a USC home team coached by Carroll). With the Wildcats' win, Arizona State becomes the only remaining Pac-10 team that has never beaten the Trojans in the Carroll era.
With a final regular season overall record of 8–4, Arizona accepted an invitation to the 2009 Pacific Life Holiday Bowl after this game. The Trojans were invited to the 2009 Emerald Bowl in San Francisco, where they defeated Boston College 24–13.
Nebraska (2009 Holiday Bowl)
at Qualcomm Stadium, San Diego, California
The Cornhuskers, coming off a heartbreaking loss to Texas in the 2009 Big 12 Championship Game (thereby missing a chance at a BCS bowl game berth), defeated the Wildcats 33–0 for the first shutout in the history of the Holiday Bowl. This was a rematch of the two teams, who faced each other in the 1998 Holiday Bowl, where Arizona defeated Nebraska 23–20.
The Wildcats were held to just 109 total yards of offense and six first downs. The 'Huskers were led on offense by WR Niles Paul, who had 4 catches for 123 yards, including a 74-yard touchdown reception. Quarterback Zac Lee threw for 173 yards and the touchdown to Paul. Rex Burkhead of Nebraska led all rushers with 89 yards and a touchdown on 17 carries. This also marked the first shutout by Nebraska in its 46-game bowl history. It was, however, the third time in Arizona's bowl history that the Wildcats had been shut out, and the second time in a game in San Diego: the Wildcats lost the 1921 San Diego East-West Christmas Classic to Centre College 38–0 and the 1990 Aloha Bowl to Syracuse 28–0. Prior to the 2009 game, no team had scored fewer than 10 points in a Holiday Bowl. The game also marked Nebraska's first 10-win season since 2003.
Notes
Up until this season, Arizona was one of only two Pacific-10 Conference teams that had not beaten USC during the Pete Carroll era.
Awards and honors
All-Pacific-10 Conference Team:
Second Team: OL Colin Baxter; OL Adam Grant; DL Earl Mitchell; LB Xavier Kelley; DB Trevin Wade
Honorable mention: DE Ricky Elmore; QB Nick Foles; CB Devin Ross; MLB Vuna Tuihalamaka
References
Arizona
Arizona Wildcats football seasons
Arizona Wildcats football |
7878135 | https://en.wikipedia.org/wiki/Ira%20A.%20Fulton%20Schools%20of%20Engineering | Ira A. Fulton Schools of Engineering | The Ira A. Fulton Schools of Engineering (often abbreviated to the Fulton Schools) is the engineering college of Arizona State University. The Fulton Schools offers 25 undergraduate and 47 graduate programs in all major engineering disciplines, construction and computer science.
The Fulton Schools comprises seven engineering schools located on both ASU's Tempe and Polytechnic campuses: the School of Biological and Health Systems Engineering; the School of Computing and Augmented Intelligence; the School of Electrical, Computer and Energy Engineering; the School for Engineering of Matter, Transport and Energy; the School of Manufacturing Systems and Networks; the School of Sustainable Engineering and the Built Environment; and The Polytechnic School. The Global School, which is not an official Fulton School, refers to the Fulton Schools’ collective efforts, still in development, to engage a globally connected network of higher education initiatives and collaborations with government entities to broaden access to engineering education and build partnerships.
History
The Ira A. Fulton Schools of Engineering began in 1954 as the College of Applied Arts and Sciences. In 1956, the first bachelor's degree program in engineering was approved. The School of Engineering was created in 1958. In 1970, the Division of Construction was added.
In 1992, through a gift of the Del E. Webb Foundation, an endowment was set up to create the Del E. Webb School of Construction, which offers undergraduate and graduate construction and construction management programs. It is now a part of the School of Sustainable Engineering and the Built Environment.
A separate school was created for technology programs and, in 1996, the Schools of Technology and Agribusiness moved to the Polytechnic Campus.
In 2002, the Department of Bioengineering was renamed the Harrington Department of Bioengineering in honor of a $5 million gift from the Harrington Arthritis Research Center.
Also in 2002, the office of Global Outreach and Executive Education (GOEE) was established to provide anytime/anyplace learning environments for industry engineers to complete advanced degrees. In 2003, the program began offering engineering graduate degrees completely online. Currently, GOEE offers eight online undergraduate engineering/technology degree programs, 14 online master's degree programs, and two graduate-level academic certificate programs.
In 2003, Ira A. Fulton, founder and CEO of Fulton Homes, established an endowment of $50 million in support of ASU's College of Engineering and Applied Sciences, which was renamed in his honor. The new Ira A. Fulton Schools of Engineering was reconstructed to include five separate and interdisciplinary schools: The School of Biological and Health Systems Engineering; the School of Computing, Informatics and Decision Systems Engineering; the School of Electrical, Computer and Energy Engineering; the School for Engineering of Matter, Transport and Energy; and the School of Sustainable Engineering and the Built Environment.
Since receiving this transformational gift, the Ira A. Fulton Schools of Engineering have grown in enrollment, programs offered and research expenditures. Between 2015 and 2019, research expenditures rose from $89 million to $115 million.
In 2013, ASU Online launched an Online Bachelor of Science in engineering in Electrical Engineering program which was, and remains, fully accredited by ABET. GOEE now offers four online programs which are ABET accredited.
In 2014, the College of Technology and Innovation on ASU's Polytechnic campus was renamed The Polytechnic School and became the sixth school in the Fulton Schools.
In August 2021, the Ira A. Fulton Schools of Engineering introduced the seventh Fulton School, the School of Manufacturing Systems and Networks (MSN) on the Polytechnic campus. At the same time, the School of Computing, Informatics, and Decision Systems Engineering was renamed the School of Computing and Augmented Intelligence (SCAI).
Fall 2021 enrollment (21st day census) in the Fulton Schools was 26,999 students total (undergraduate and graduate).
The Fulton Schools employ 370 tenured/tenure-track faculty and have $119 million in research expenditures (FY 2021).
Location
The Fulton Schools administrative offices and some departments are located within The Brickyard building complex on Mill Avenue in downtown Tempe, Arizona. The Fulton Schools has more than 1,000,000 square feet of space in over a dozen buildings on ASU's Tempe and Polytechnic campuses.
In September 2014, The College Avenue Commons building was opened as the new home of the School of Sustainable Engineering and the Built Environment, including the Del E. Webb School of Construction (DEWSC). DEWSC students, faculty and alumni contributed to the design and construction of the building, which features some exposed construction elements which allow it to be used as a teaching tool. Like many ASU and Fulton Schools buildings, it is Leadership in Energy and Environmental Design (LEED) Gold certified.
In August 2017, The Fulton Schools opened Tooker House, a residential community “built for engineers.” Tooker House is a 1,600-person, co-ed residential community for Fulton Schools undergraduate students and features on-site digital classrooms and state-of-the-art makerspaces.
Notable faculty
National Academy of Sciences
Alexandra Navrotsky, Professor, School for Engineering of Matter, Transport and Energy
National Academy of Engineering
Ronald Adrian - Regents Professor, School for Engineering of Matter, Transport and Energy
Dimitri Bertsekas - Professor, School of Computing and Augmented Intelligence
Gerald T. Heydt - Regents Professor, School of Electrical, Computer and Energy Engineering
Edward Kavazanjian - Regents Professor, School of Sustainable Engineering and the Built Environment
Subhash Mahajan (emeritus) - Regents Professor, School for Engineering of Matter, Transport and Energy
Bruce Rittmann - Regents Professor, School of Sustainable Engineering and the Built Environment
John Undrill - Research Professor, School of Electrical, Computer and Energy Engineering
Vijay Vittal - Regents Professor, School of Electrical, Computer and Energy Engineering
National Academy of Inventors
James Abbas - Associate Professor, Biological and Health Systems Engineering
Cody Friesen - Associate Professor, School for Engineering of Matter, Transport and Energy
Michael Kozicki - Professor, School of Electrical, Computer and Energy Engineering
Deirdre Meldrum - Distinguished Professor of Biosignatures Discovery, School of Electrical, Computer and Energy Engineering
Nathan Newman - Lawrence Professor of Solid State Sciences, School for Engineering of Matter, Transport and Energy
Sethuraman Panchanathan - Regents Professor, School of Computing and Augmented Intelligence
Bruce Rittmann - Regents Professor, School of Sustainable Engineering and the Built Environment
Regents professors
The title “Regents Professor” is the highest faculty honor awarded at Arizona State University. It is conferred on ASU faculty who have made pioneering contributions in their areas of expertise, who have achieved a sustained level of distinction, and who enjoy national and international recognition for these accomplishments.
Ronald Adrian - Regents Professor, School for Engineering of Matter, Transport and Energy
Constantine A. Balanis - Regents Professor, Electrical Engineering
Aditi Chattopadhyay - Regents Professor, Mechanical & Aerospace Engineering
David K. Ferry - Regents Professor, School of Electrical, Computer and Energy Engineering
Gerald T. Heydt - Regents Professor, School of Electrical, Computer and Energy Engineering
Edward Kavazanjian - Regents Professor, School of Sustainable Engineering and the Built Environment
Ying-Cheng Lai - Regents Professor, School of Electrical, Computer and Energy Engineering
Jerry Lin - Regents Professor, School for Engineering of Matter, Transport and Energy
Subhash Mahajan (emeritus) - Regents Professor, School for Engineering of Matter, Transport and Energy
Douglas Montgomery - Regents Professor, School of Computing and Augmented Intelligence
Sethuraman Panchanathan - Regents Professor, School of Computing and Augmented Intelligence
Bruce Rittmann - Regents Professor, School of Sustainable Engineering and the Built Environment
Vijay Vittal - Regents Professor, School of Electrical, Computer and Energy Engineering
Paul Westerhoff - Regents Professor, School of Sustainable Engineering and the Built Environment
Schools
School of Biological and Health Systems Engineering
School of Computing and Augmented Intelligence (formerly the School of Computing, Informatics, and Decision Systems Engineering)
School of Electrical, Computer and Energy Engineering
School for Engineering of Matter, Transport and Energy
School of Manufacturing Systems and Networks
School of Sustainable Engineering and the Built Environment
The Polytechnic School
In addition, The Fulton Schools engage in a globally connected network of higher education initiatives and collaborations with government entities to provide greater access to engineering education. This set of initiatives is called The Global School.
Rankings
U.S. News & World Report Rankings
#36 Undergraduate Program [#21 among public institutions] 2022 edition, published September 2021
#41 Graduate Program [#23 among public institutions] 2022 edition, published March 2021
#12 Online Master's in Engineering Programs January 2022
#9 Online Master's in Engineering Programs for Veterans January 2022
U.S. News & World Report Graduate School Specialty Rankings
U.S. News & World Report Graduate School Specialty Rankings 2022 edition, published March 2021, unless indicated
#25 Aerospace
#53 Biomedical
#52 Chemical
#26 Civil
#5 Civil, Online Master's Program, January 2021
#33 Computer Engineering
#43 Computer Science‡ 2020 edition, published March 2019
#31 Electrical
#2 Electrical, Online Master's Program, January 2022
#2 Engineering Management, Online Master's Program, January 2022
#20 Environmental
#18 Industrial
#4 Industrial, Online Master's Program, January 2022
#40 Materials
#41 Mechanical
‡According to U.S. News & World Report the Sciences, including Computer Science, are not ranked every year.
U.S. News & World Report Undergraduate Engineering Program Rankings (for schools with doctorate programs)
U.S. News & World Report, 2022 edition, published September 2021
#23 Artificial Intelligence (computer science specialty)
#29 Bioengineering/Biomedical
#17 Civil
#20 Computer
#54 Computer science
#28 Cybersecurity (computer science specialty)
#22 Electrical
#21 Mechanical
U.S. News & World Report Undergraduate Computer Science Program Rankings
U.S. News & World Report, 2022 edition, published September 2021
#23 Artificial Intelligence
#28 Cybersecurity
American Society for Engineering Education (ASEE) Rankings
ASEE Engineering Statistics
#3 Bachelor's Degrees Awarded (429 schools included)
#11 Bachelor's Degrees Awarded to Women (429 schools included)
#6 Bachelor's Degrees Awarded to Hispanics by school (429 schools included)
#8 Master's Degrees awarded to Underrepresented Minorities (429 schools included)
#6 Master's Degrees Awarded by school (429 schools included)
#16 Doctoral Degrees Awarded by school (429 schools included)
#6 Graduate Enrollment by school (50 schools included)
#10 Tenured and Tenure-Track Faculty Members (310 schools included)
#11 Female Tenured/Tenure-Track Faculty (310 schools included)
#13 Hispanic Tenured/Tenure-Track Faculty (310 schools included)
ASEE Engineering Technology Statistics
#1 Engineering Technology Enrollment (120 schools reported)
#2 Engineering Technology Bachelor's Degrees Awarded by School (120 schools reported)
#2 Engineering Technology Degrees Awarded to Women by School (120 schools reported)
#3 Engineering Technology Degrees awarded to Underrepresented Minorities (120 schools included)
References
Arizona State University
Engineering schools and colleges in the United States
Engineering universities and colleges in Arizona
Educational institutions established in 1954
1954 establishments in Arizona |
28964238 | https://en.wikipedia.org/wiki/Kenzero | Kenzero | Kenzero is a computer trojan that is spread across peer-to-peer networks and is programmed to monitor the browsing history of victims.
History
The Kenzero trojan was first discovered on November 27, 2009, but researchers think it went undetected for a few months prior to the initial discovery.
Operations
Kenzero attacks computers that download files through peer-to-peer (P2P) networks. Once the downloaded file is opened, the trojan locates the victim's browsing history and publishes it online, where anyone can view it.
References
Windows trojans
Hacking in the 2010s |
442805 | https://en.wikipedia.org/wiki/Adobe%20Audition | Adobe Audition | Adobe Audition is a digital audio workstation developed by Adobe Inc. featuring both a multitrack, non-destructive mix/edit environment and a destructive-approach waveform editing view.
Origins
Syntrillium Software was founded in the early 1990s by Robert Ellison and David Johnston, both former Microsoft employees. Originally developed by Syntrillium as Cool Edit, the program was distributed as crippleware for Windows computers. The full version was useful and flexible, particularly for its time. Syntrillium later released Cool Edit Pro, which added the capability to work with multiple tracks as well as other features. Audio processing, however, was done in a destructive manner (at the time, most computers were not powerful enough in terms of processor performance and memory capacity to perform non-destructive operations in real-time). Cool Edit Pro v2 added support for real-time non-destructive processing, and v2.1 added support for surround sound mixing and unlimited simultaneous tracks (up to the limit imposed by the actual computer hardware). Cool Edit also included plugins such as noise reduction and FFT equalization.
Ever since the earliest versions, Cool Edit 2000 and Cool Edit Pro supported a large range of import/export codecs for various audio file formats. When MP3 became popular, Cool Edit licensed and integrated the original Fraunhofer MP3 encoder. The software had an SDK and supported codec plugins (FLT filters), and a wide range of import/export format plugins were written by the developer community to open and save in a number of audio compression formats. The popular audio formats and containers supported by Cool Edit with built-in codecs or plugins were Fraunhofer MP3, LAME MP3, Dolby AC3, DTS, ACM Waveform, PCM waveform, AIFF, AU, CDA, MPEG-1 Audio, MPEG-2 Audio, AAC, HE-AAC, Ogg Vorbis, FLAC, True Audio, WavPack, QuickTime MOV and MP4 (import only), ADPCM, RealMedia, WMA Standard, WMA Professional, WMA Lossless and WMA Multichannel.
Adobe purchased Cool Edit Pro from Syntrillium Software in May 2003 for $16.5 million, as well as a large loop library called "Loopology". Adobe then renamed Cool Edit Pro to "Adobe Audition".
Versions
Version 1
Adobe Audition was released on August 18, 2003. It had bug fixes but no new features and was essentially a more polished Cool Edit Pro 2.1 under a different name. Adobe then released Audition v1.5 in May 2004; major improvements over v1 included pitch correction, frequency space editing, a CD project view, basic video editing and integration with Adobe Premiere, as well as several other enhancements.
Version 2
Adobe Audition 2 was released on January 17, 2006. With this release, Audition (which the music recording industry had once seen as a value-oriented home studio application, although it has long been used for editing by radio stations) entered the professional digital audio workstation market. This version included two sections. Multitrack View supported up to 128 digital audio mono or stereo tracks at up to 32-bit resolution. In the track controls section one could select the input and output for each track (the program supported multiple multi-channel sound cards), select "record", "solo", and "mute", and access the effects rack. New features included Audio Stream Input/Output (ASIO) support, VST (Virtual Studio Technology) support, new mastering tools (many provided by iZotope), and a redesigned UI. Adobe also included Audition 2.0 as part of its Adobe Production Studio bundle.
Version 3
Adobe Audition 3 was released on November 8, 2007. New features included VSTi (virtual instrument) support, enhanced spectral editing, a redesigned multi-track interface, new effects, and a collection of royalty-free loops.
CS2 activation servers' shutdown: Adobe Audition 3, along with some other CS2 products, was released with an official serial number due to a technical glitch in Adobe's CS2 activation servers (see Creative Suite 1 & 2).
Version 4 (CS5.5)
Audition 4, also known as Audition CS5.5, was released on April 11, 2011, as part of Adobe Creative Suite. Audition 4 was shipped as part of the Adobe Creative Suite 5.5 Master Collection and Adobe Creative Suite 5.5 Production Premium, replacing the discontinued Adobe Soundbooth. Audition 4 was also made available as a standalone product. Enhanced integration with Adobe Premiere Pro allowed editing of multitrack Premiere projects, and users of third-party software were served by the introduction of OMF- and XML-based import-export functions. Other new features included improved 5.1 multichannel support, new effects (DeHummer, DeEsser, Speech Volume Leveler, and Surround Reverb), a history panel, faster and fully supported real-time FFT analysis, and a new audio engine (more reliable and faster) for non-ASIO devices.
According to Adobe, Audition CS5.5 was rewritten from the ground up to take advantage of parallel/batch processing for performance and make it a platform-agnostic product. Audition CS5.5 now works on Windows and Mac platforms. Over 15 years of C++ code was analyzed, and many features of the previous Audition 3 were ported or enhanced. Notable features that were present in Audition 3, but removed for CS5.5, include VSTi support and MIDI sequencing. Unlike all the previous versions, this is the first release to be available as a Mac version as well as a Windows version. Many other features from previous Windows versions of Adobe Audition, such as FLT filters, DirectX effects, clip grouping, many effects (Dynamic EQ, Stereo Expander, Echo Chamber, Convolution, Scientific filters, etc.) were removed as the product was rewritten to have identical cross-platform features for Windows and macOS. Some of the features were later restored in Audition CS6 but the wide range of audio codec compression/decoding filters for import/export of various audio file formats were discontinued.
Version 5 (CS6)
Adobe showed a sneak preview of Audition CS6 in March 2012 highlighting clip grouping and automatic speech alignment (which had its technology previewed in September 2011).
Audition CS6 was released on April 23, 2012, as part of both Creative Suite 6 Master Collection and Creative Suite 6 Production Premium. It included faster and more precise editing, real-time clip stretching, automatic speech alignment, EUCON and Mackie control surface support, parameter automation, more powerful pitch correction, HD video playback, new effects, and more features.
Version 6 (CC)
Adobe Audition 6, also more commonly known as Audition CC, was released on June 17, 2013. It is the first in the Audition line to be part of the Adobe Creative Cloud. Also, Audition CC is now the first 64-bit application in the Audition line. This can provide faster processing time when compared to Audition CS6. New features include sound remover, preview editor, and pitch bender.
Version 7 (CC 2014)
Adobe Audition 7 was released in June 2014 with the name Adobe Audition CC 2014. New with this release came support for Dolby Digital and Dolby Digital Plus formats, custom channel labels, a new UI skin, High DPI support, enhanced clip and track colors and navigation, minimize tracks, tools for splitting all clips at the playhead, and more.
Version 8 (CC 2015)
Adobe Audition 8 was released in June 2015 with the name Adobe Audition CC 2015. This release offered Dynamic Link video streaming, which enabled Audition to display a Premiere Pro project sequence as a video stream at full resolution and frame rate and with all effects, without needing to render to disk. Other features included support for displaying that video content on an external display, scheduled recording, customization of level meter crossover values, automatic session backup, automatic storage of assets alongside session files, import/export of markers, options to relink media, and the addition of Brazilian Portuguese language support. The 8.1 update in the fall of 2015 first unveiled Remix, which could analyze a music track and recompose it to a different duration; tools for generating speech based on the OS text-to-speech voice libraries; new options for ITU-based loudness conformation; and the ability to extend and create custom functionality and integration through Adobe Common Extensibility Platform (CEP) panel support. This update also removed "Upload to SoundCloud" support, as the API had been abandoned by SoundCloud and was no longer functional.
Version 9 (CC 2015.2)
Adobe Audition 9 was released in June 2016 with the name Adobe Audition CC 2015.2. Of most importance with this release was the new Essential Sound panel, which offered novice audio editors a highly organized and focused set of tools for mixing audio and would soon be introduced to Premiere Pro allowing non-destructive and lossless transfer of mixing efforts between the two applications. This release also supported exporting directly to Adobe Media Encoder, supporting all available video and audio formats and presets.
Version 10 (CC 2017)
Adobe Audition 10 was released in November 2016 with the name Adobe Audition CC 2017. A new, flat UI skin and the introduction of the Audition Learn panel, with interactive tutorials, spearheaded this release. This also marked the introduction of the Essential Sound panel and the sharing of all real-time Audition audio effects with Premiere Pro. The 10.1 update in spring 2017 offered deep channel separation and manipulation features when working with multichannel audio recordings in Multitrack view, significant improvements to interchange with Premiere Pro, sharing all effects and automation non-destructively when transferring a sequence to Audition for mixing, and added spectrum meters to many audio effects. This update also offered the visual keyboard shortcut editor common across other Adobe applications and offered native support for the PreSonus FaderPort control surface and mixer.
Version 11 (CC 2018)
Adobe Audition 11 was released on October 18, 2017, with the name Adobe Audition CC. (The year moniker was dropped from all Creative Cloud applications.) With this release, users were able to easily duck the volume of music behind dialogue and other content types with the new Auto-Ducking feature available in the Essential Sound panel. Multitrack clips were enhanced with fixed z-order, new fade features such as symmetrical fade in/out and fixed duration/curve adjustments. Performance of mixdowns and bounces improved up to 400%. Smart monitoring provides intelligent source monitoring when recording punch-ins and ADR. Video timecode overlay can display source or session timecode without burn-in, a new Dynamics effect with auto-gate, limiting, and expansion simplifies compression for many users, and support for any control surfaces and mixers which use Mackie HUI protocol for communication rounds out the release. Dolby Digital support was removed from this release, though import continues to be supported through the most recent operating systems.
Version 12 (CC 2019)
Adobe Audition 12 was released on October 17, 2018, with the main new features being DeNoise and DeReverb effects. Other new features include multitrack clip improvements, multitrack UI improvements, zoom to time, the ability to add or delete empty tracks, playback and recording improvements, and third-party effect migration.
See also
Comparison of multitrack recording software
References
External links
2003 software
Audition
Digital audio workstation software |