id | url | title | text
---|---|---|---
54513670 | https://en.wikipedia.org/wiki/Michael%20Wooldridge%20%28computer%20scientist%29 | Michael Wooldridge (computer scientist) | Michael John Wooldridge (born 26 August 1966) is a professor of computer science at the University of Oxford. His main research interests are in multi-agent systems, and in particular in the computational-theory aspects of rational action in systems composed of multiple self-interested agents. His work is characterised by the use of techniques from computational logic, game theory, and social choice theory.
Education
Wooldridge was educated at the University of Manchester Institute of Science and Technology (UMIST) where he was awarded a PhD in 1991.
Career and research
Wooldridge was appointed a lecturer in Computer Science at Manchester Metropolitan University in 1992. In 1996, he moved to London, where he became senior lecturer at Queen Mary and Westfield College in 1998. His appointment as full professor in the Department of Computer Science at the University of Liverpool followed in 1999. In Liverpool he served as head of department from 2001 to 2005 and as head of the School of Electrical Engineering, Electronics, and Computer Science from 2008 to 2011. In 2012 the European Research Council awarded him a five-year ERC Advanced Grant for the project Reasoning about Computational Economies (RACE). In the same year he left Liverpool to become professor of computer science at the University of Oxford, where he served as head of the Department of Computer Science from 2014 to 2018. He is a senior research fellow of Hertford College, Oxford.
Michael Wooldridge is the author of more than 300 academic publications.
Editorial service
2003–2009 co-editor-in-chief of the journal Autonomous Agents and Multi-Agent Systems
2006–2009 associate editor of the Journal of Artificial Intelligence Research (JAIR)
2009–2012 associate editor of the Journal of Artificial Intelligence Research (JAIR)
Other editorships: Journal of Applied Logic, Journal of Logic and Computation, Journal of Applied Artificial Intelligence, and Computational Intelligence.
Awards and honors
He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a European Coordinating Committee for Artificial Intelligence (ECCAI) Fellow, a Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) Fellow, and a British Computer Society (BCS) Fellow. In 2015, he was made an Association for Computing Machinery (ACM) Fellow for his contributions to multi-agent systems and the formalisation of rational action in multi-agent environments.
2021 AAAI/EAAI Outstanding Educator Award
2020 BCS Lovelace Medal
2015 Elected a Fellow of the Association for Computing Machinery (ACM), for contributions to multi-agent systems and the formalisation of rational action in multi-agent environments
2012–17 ERC Advanced Investigator Grant "Reasoning about Computational Economies (RACE)" (5-year €2m award)
2009 British Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) Fellow
2008 American Association for Artificial Intelligence (AAAI) Fellow
2008 Influential Paper Award, Special Recognition from the International Foundation for Autonomous Agents and Multi-Agent Systems, for the paper Intelligent Agents: Theory and Practice
2007 European Association for Artificial Intelligence (ECCAI) Fellow
2006 ACM/SIGART Autonomous Agents Research Award, for significant and sustained contributions to research on autonomous agents and multi-agent systems. In particular, Wooldridge has made seminal contributions to the logical foundations of multi-agent systems, especially to formal theories of co-operation, teamwork and communication, computational complexity in multi-agent systems, and agent-oriented software engineering.
Personal life
Michael Wooldridge was born in Wakefield (West Yorkshire, United Kingdom) in 1966 as the second son to John and Jean Wooldridge. He is married with two children.
Publications
Wooldridge, Michael (19 January 2020). A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going. New York: Flatiron Books.
References
Artificial intelligence researchers
Alumni of the University of Manchester Institute of Science and Technology
Academics of Manchester Metropolitan University
Academics of Queen Mary University of London
Fellows of Hertford College, Oxford
Academics of the University of Liverpool
British computer scientists
People from Wakefield
Fellows of the Association for the Advancement of Artificial Intelligence
Fellows of the Association for Computing Machinery
Fellows of the British Computer Society
1966 births
Living people
Fellows of the SSAISB
Fellows of the European Association for Artificial Intelligence |
2138560 | https://en.wikipedia.org/wiki/TESO%20%28Austrian%20hacker%20group%29 | TESO (Austrian hacker group) | TESO was a hacker group that originated in Austria. It was active from 1998 to 2004, and during its peak around 2000 it was responsible for a significant share of the exploits published on the Bugtraq mailing list.
History
TESO was founded in 1998 and quickly grew to six people, who first met in person in 1999 at the CCC Camp near Berlin.
By 2000, the group was at its peak: it began speaking at various conferences, wrote articles for Phrack, and released security tools and exploits at a very high pace. Some of its exploits only became known after leaking to the community; these included exploits for wu-ftpd, Apache, and OpenSSH.
In 2000 the group published the first remote vulnerability in OpenBSD, followed by a series of remote exploits against OpenBSD (some co-authored with ADM), which forced OpenBSD to remove the claim "7 years without vulnerability" from its webpage.
In September 2001, scut released a comprehensive format string research paper describing uncontrolled format string vulnerabilities.
In 2003, the group informally disbanded, and in 2004 the website went down.
Achievements
In 2000, developed hellkit, the first shellcode generator.
In 2000, wrote TesoGCC, the first format string vulnerability scanner, and the first comprehensive guide on format string exploitation.
BurnEye, written by a team member, is widely believed to be one of the first proper ELF executable crypters.
Members and name
The name originally was an acronym of the nicknames of the original founders (typo, edi, stanly, and a fourth founder), but as many of the most skilled members joined later, this interpretation quickly became meaningless. TESO originally, and during its peak, was a small and tightly knit group. A full list of members does not appear to exist, but if public sources can be trusted, at least the following were members:
Abdullah Khann
caddis
edi
halvar
hendy
lorian
palmers
randomizer
scut, who published the Exploiting Format String Vulnerabilities paper in September 2001
smiler
skyper
stealth/S.Krahmer
stanly
typo aka Paul Bohm
xdr/mdr
zip
See also
Goatse Security
w00w00 - a rival hacking group; some research and releases were published together with w00w00 members.
The Hacker's Choice - Some team-teso members joined THC after TESO was disbanded.
References
External links
Dave Aitel on TESO
Packetstorm TESO Archive
Hacker groups |
8804105 | https://en.wikipedia.org/wiki/Rae%20Technology | Rae Technology | Rae Technology was a software company founded as a spin-off from Apple Computer in 1992. Rae Technology was best known for its Personal Information Manager Rae Assist and for being the predecessor of NetObjects, Inc. After transferring newly developed technology for web site design to NetObjects, Inc. in 1995, Rae Technology had no further public recognition.
Roots in Apple Computer
The roots of Rae Technology reach back to the 1980s at Apple Computer. Samir Arora, a software engineer from India, was involved in early research into navigation applications and so-called hypermedia. Years before the Internet took off and web browsers emerged, developers and executives at Apple had the idea that fast and flexible access to linked data would be crucial to future computing. The famous "Knowledge Navigator" video from 1987 gives an impression of the visions at Apple labs at this time. Samir Arora worked in the office of John Sculley at the time and was involved in creating the video.
Samir Arora had been part of the original 4th Dimension engineering team at Apple and ran the applications tools group responsible for 4D and HyperCard. To manage mobile and online data access and navigation, a group of software developers, including Dave Dell'Aquila, Sal Arora, Raj Narayan and Jeet Kaul, and led by Samir Arora, created an application framework called SOLO (Structure of Linked Objects).
In technical terms, SOLO was a proprietary programming language used to develop sets of application programming interfaces (APIs).
Rae Technology and Rae Assist
With new applications based on SOLO on the rise, Samir Arora started Rae Technology as a spin-off from Apple. Headquarters were located on the Apple campus and the board consisted of high-ranking Apple executives.
The company was run by Samir Arora as chief executive officer and President, David Kleinberg as Vice President Sales and Marketing, Dave Dell'Aquila as Vice President Products, and Dianna Mullins as Vice President Operations.
In 1993, Rae Technology introduced Rae Assist, one of the first Personal Information Managers (PIMs). Rae Assist let the user organize and access personal contacts, dates, company profiles and schedules, and link these entries together. Between 1993 and 1995, Rae Assist was published in three versions for the Mac: 1.0, 1.5 and 2.0.
Professional services
Beyond producing software, Rae Technology worked on corporate database projects for Chevron Corp. and Wells Fargo.
An online banking project at Wells Fargo Bank gave the Rae team insight into how the Web could work for companies in the coming world of the Internet and browsers. The SOLO architecture seemed perfectly suited and flexible enough to address the need to build something new: web sites. The filing of fundamental patents for web site design was in preparation.
From Rae to NetObjects
There are no publicly available balance sheets from Rae Technology, but the highly competitive and rather small PIM market is unlikely to have generated much income. A full license of the program sold for $199, upgrades for $29.
With patents for web site design software pending, the efforts and technology for this new kind of software were transferred to a new company. NetObjects, Inc. was founded in 1995 and Rae Technology was its first investor with $1.5 million.
The new company had the same core team that led Rae: Samir Arora, his brother Sal Arora, and David Kleinberg. Clement Mok from Studio Archetype, who had already been involved with the Wells Fargo project, was added as a designer.
At the conclusion of the transaction, Rae Technology became a venture capital LLC but ceased to attract further public attention, though it still exists as a company today.
References
Defunct software companies of the United States
Software companies established in 1992
Software companies disestablished in 1995 |
23273117 | https://en.wikipedia.org/wiki/Tomoyo%20Linux | Tomoyo Linux | Tomoyo Linux (stylised as TOMOYO Linux) is a Linux kernel security module which implements mandatory access control (MAC).
Overview
Tomoyo Linux is a MAC implementation for Linux that can be used to increase the security of a system, while also being useful purely as a systems analysis tool. It was launched in March 2003 and was sponsored by NTT Data Corporation until March 2012.
Tomoyo Linux focuses on system behaviour. Tomoyo Linux allows each process to declare behaviours and resources needed to achieve their purpose. When protection is enabled, Tomoyo Linux restricts each process to the behaviours and resources allowed by the administrator.
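For illustration, TOMOYO groups processes into "domains" according to their execution history and whitelists the resources each domain may touch. The sketch below is a hypothetical domain policy using the 2.x-era file keyword syntax; treat the exact keywords and paths as assumptions rather than a verified policy.

```
# Hypothetical TOMOYO domain policy sketch (exact syntax is an assumption).
# The domain name records how the process was executed:
<kernel> /usr/sbin/httpd

# Behaviours and resources this domain has declared (e.g. collected in
# learning mode); in enforcing mode, anything not listed is denied.
file read  /etc/httpd/conf/httpd.conf
file read  /var/www/html/\*
file write /var/log/httpd/access_log
```

In learning mode TOMOYO records what a process actually does, which is how the automatic policy generation listed under Features below can produce entries like these.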
Features
The main features of Tomoyo Linux include:
System analysis
Increased security through Mandatory Access Control
Automatic policy generation
Simple syntax
Ease of use
History and versions
Tomoyo was merged into the mainline Linux kernel in version 2.6.30 (released 10 June 2009). It is currently one of four standard Linux Security Modules (LSM), along with SELinux, AppArmor and SMACK.
The Tomoyo Linux project started as a patch for the Linux kernel to provide MAC. Porting Tomoyo Linux to the mainline Linux kernel required the introduction of hooks into the LSM that had been designed and developed specifically to support SELinux and its label-based approach.
However, more hooks are needed to integrate the remaining MAC functionality of Tomoyo Linux. Consequently, the project follows two parallel development lines: version 1.x, distributed as a kernel patch that retains the full feature set without relying on the LSM hooks, and version 2.x, integrated into the mainline kernel and limited to the functionality the LSM hooks support.
References
External links
Comparison chart of 1.x and 2.x
Comparison chart of Tomoyo 1.x, 2.x, and Akari
Tomoyo Linux project
Tomoyo Linux at Embedded Linux Wiki
LWN : Tomoyo Linux and pathname-based security
Tomoyo – Debian Wiki
Tomoyo Linux – ArchWiki
Linux security software
Linux kernel features
Nippon Telegraph and Telephone |
18049025 | https://en.wikipedia.org/wiki/Spacewalk%20%28software%29 | Spacewalk (software) | Spacewalk is open-source systems management software for system provisioning, patching and configuration licensed under the GNU GPLv2.
The project was discontinued on 31 May 2020, with 2.10 being the last official release. SUSE forked the Spacewalk code base in 2018 to create the Uyuni project.
Overview
Features
Spacewalk encompasses the following functions:
Systems Inventory (Hardware and Software)
System Software Installation and Updates
Collation and Distribution of Custom Software Packages into Manageable Groups
System provisioning (via Kickstart)
Management and deployment of configuration files
Provision of virtual Guests
Start/Stop/Configuration of virtual guests
OpenSCAP Auditing of client systems
Architecture
Spacewalk Server: the server is the managing system
Primary and worker servers can be set up, and even a tree topology is possible
There are options for geographically remote proxy servers
Spacewalk Client: A system managed by a Spacewalk server
Compatible client OSes are drawn from:
Red Hat Enterprise Linux (RHEL)
CentOS
Fedora
Scientific Linux
Oracle Linux (OL)
SUSE Linux Enterprise Server (SLES)
openSUSE
Solaris – limited and deprecated support
Debian – limited support
Spacewalk is controlled by the following interfaces:
Web interface, used for most interactions
CLI (command-line interface), used for some specific operations
XML-RPC API, a programmatic interface for specialist/development use (see the sketch after this list)
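As an illustration of the XML-RPC API: auth.login, system.listSystems, and auth.logout are documented Spacewalk API calls, while the server URL and credentials below are placeholders.

```python
# Minimal sketch of a Spacewalk XML-RPC session (Python 3).
import xmlrpc.client

# Placeholder host; a real deployment exposes the API at /rpc/api.
client = xmlrpc.client.ServerProxy("https://spacewalk.example.com/rpc/api")

key = client.auth.login("admin", "password")   # returns a session token
try:
    for system in client.system.listSystems(key):
        print(system["id"], system["name"])    # registered client systems
finally:
    client.auth.logout(key)                    # invalidate the session
```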
Subscription Management:
Particular upstream and downstream versions may include integration with a supported vendor subscription network such as Red Hat Subscription Management (RHSM), ULN, or SUSE Linux Enterprise Server subscriptions.
Backend Database:
While formerly requiring the commercial Oracle Database as a backend, version 1.7 (released in March 2012) added support for PostgreSQL.
Upstream and downstream versions
A number of downstream versions use upstream Spacewalk as the basis of their system provisioning, patch, and errata management:
Red Hat Satellite 5.x
Oracle's "Spacewalk for Oracle® Linux"
SUSE Manager Server
Support for particular client OSes, server OSes, system architectures, backend databases, and subscription services varies between versions and releases.
Oracle Spacewalk
Oracle introduced its own version of Spacewalk chiefly to provide a familiar alternative for those switching from a different vendor; Oracle Enterprise Manager remains Oracle Corporation's preferred way of managing systems.
Spacewalk for Oracle® Linux is designed to be hosted on Oracle Linux (OL).
The about section of the release notes in the Oracle Spacewalk 2.x documentation indicates only minor branding changes and changes to GPG keys.
Red Hat Satellite 5
Red Hat Satellite 5 is a licensed downstream adaptation of Spacewalk with added functionality to manage Red Hat Enterprise Linux subscriptions. In the active years of the Red Hat Satellite 5 lifecycle, Spacewalk was simply known as the upstream project for Satellite. The relationship between Spacewalk and Red Hat Satellite 5 was analogous to the relationship between Fedora and Red Hat Enterprise Linux. With the emergence of Red Hat Satellite 6, which is based on a fundamentally different toolset, the end-of-lifecycle phase of Red Hat Satellite 5, and the emergence of downstream Spacewalk-based offerings from Oracle and SUSE, newer versions of Spacewalk may not have this close relationship.
SUSE Manager Server
In March 2011 Novell released SUSE Manager 1.2, based on Spacewalk 1.2 and supporting the management of both SUSE Linux Enterprise and Red Hat Enterprise Linux.
In May 2018, during the openSUSE conference in Prague, it was announced that a fork of Spacewalk, called Uyuni, was being created. Named after the salt flat in Bolivia, Uyuni uses Salt for configuration management and React as the user interface framework.
From version 4.0, SUSE Manager is based on Uyuni as its upstream project.
History and development
Development
Red Hat developed the Red Hat Network to manage subscriptions and software, and created the Red Hat Satellite application as a central management point within the user's network.
For Red Hat Satellite version 5, the Satellite function was implemented by a toolset named Project Spacewalk.
In June 2008, Red Hat announced that Project Spacewalk would be made open source under the GPLv2 license.
Satellite 5.3 was the first version to be based on upstream Spacewalk code.
Stewardship and governance
The Spacewalk FAQ, issued in 2015 after the release of Red Hat Satellite 6, stated the following:
Red Hat formally released Spacewalk as open source (GPLv2) in June 2008.
Red Hat continues to sponsor and support Spacewalk as the upstream of Red Hat Satellite 5. However, that participation is anticipated to diminish as Red Hat Satellite 5 enters the final phases of its lifecycle. Spacewalk is not and can never be upstream for Red Hat Satellite 6, released in September 2014, because that product is a ground-up rebuild with a different toolset.
The Spacewalk project can continue to grow and flourish provided that the community continues to find it a useful tool and is willing to support it.
Satellite 5 went end-of-life on 31 May 2020, and the Spacewalk project was discontinued at the same time.
Criticisms
In a 2019 paper considering Linux open-source patching tools, Spacewalk was commended for its software inventory and community support, but its limited distribution support, notably the lack of Ubuntu support, was seen as an issue.
Miscellaneous
The Spacewalk logo is a trademark of Red Hat, Inc.
Note
References
External links
Resources
GitHub.com repository for Spacewalk
Upstream GitHub documentation Wiki
Spacewalk Upstream User Documentation
Spacewalk on Fedorahosted.org (Deprecated)
Documentation for Red Hat Satellite 5.7 - contains much that is generally relevant to Spacewalk
Oracle Spacewalk Documentation - a generally useful reference
SUSE Manager 3 Documentation
Free software programmed in Perl
Red Hat software
Remote administration software
Provisioning
Systems management |
15441720 | https://en.wikipedia.org/wiki/NVivo | NVivo | NVivo is a qualitative data analysis (QDA) computer software package produced by QSR International. NVivo helps qualitative researchers to organize, analyze and find insights in unstructured or qualitative data like interviews, open-ended survey responses, journal articles, social media and web content, where deep levels of analysis on small or large volumes of data are required.
NVivo is used predominantly by academic, government, health and commercial researchers across a diverse range of fields, including social sciences such as anthropology, psychology, communication, sociology, as well as fields such as forensics, tourism, criminology and marketing.
The first QSR International software product was developed by Tom and Lyn Richards. Originally called NUD*IST, it contained tools for fine and detailed analysis of unstructured textual data. In 1999, the Richards developed the first version of NVivo and eventually N6 was replaced by NVivo 7.
Description
NVivo is intended to help users organize and analyze non-numerical or unstructured data. The software allows users to classify, sort and arrange information; examine relationships in the data; and combine analysis with linking, shaping, searching and modeling.
The researcher or analyst can identify trends and cross-examine information in a multitude of ways using its search engine and query functions. They can make notes in the software using memos and build a body of evidence to support their case or project.
NVivo accommodates a wide range of research methods, including network and organizational analysis, action or evidence-based research, discourse analysis, grounded theory, conversation analysis, ethnography, literature reviews, phenomenology, mixed methods research and the Framework methodology. NVivo supports data formats such as audio files, videos, digital photos, Word, PDF, spreadsheets, rich text, plain text and web and social media data. Users can interchange data with applications like Microsoft Excel, Microsoft Word, IBM SPSS Statistics, EndNote, Microsoft OneNote, SurveyMonkey and Evernote.
Users can purchase add-on modules: NVivo Transcription, for automated transcription directly within NVivo (Release 1.0); and NVivo Collaboration Cloud, which uses the cloud to enable small-team project sharing and collaboration.
NVivo - Windows is available in English, Chinese (Simplified), French, German, Japanese, Spanish and Portuguese. NVivo - Mac is available in English, French, German, Japanese and Spanish.
Version history
Note: the software was named NUD*IST from 1981 to 1997.
N4 – 1997
N5 – 2000
N6 – 2002
NVivo 2 – 2002
NVivo 7 – 2006 (consolidation of NVivo and N6 (NUD*IST))
NVivo 8 – 2008
NVivo 9 and NVivo for Teams – 2010
NVivo 10 – 2012
NVivo for Mac Beta – 2014
NVivo for Mac commercial release – 2014
NVivo 11 for Windows in three editions; NVivo Starter, NVivo Pro, NVivo Plus. Updates to NVivo for Mac and NVivo for Teams – 2015
NVivo 12 (Pro, Plus, Mac, and Teams) – 2018
NVivo (Release 1.0) / NVivo 1.0 – March 18, 2020 (Windows & Mac). As previously, the Mac version has fewer features. QSR also released Collaboration Cloud for sharing projects (between users on the same operating system only). The Plus and Pro versions from NVivo 12 have been combined.
See also
Computer-assisted qualitative data analysis software
References
External links
QDA software |
16730890 | https://en.wikipedia.org/wiki/Hildon | Hildon | Hildon is an application framework originally developed for mobile devices (PDAs, mobile phones, etc.) running the Linux operating system as well as the Symbian operating system. The Symbian variant of Hildon was discontinued with the cancellation of Series 90. It was developed by Nokia for the Maemo operating system and is now a part of GNOME. It focuses on providing a finger-friendly interface. It is primarily a set of GTK+ extensions that provide mobile-device–oriented functionality, but also provides a desktop environment that includes a task navigator for opening and switching between programs, a control panel for user settings, and status bar, task bar and home applets. It is standard on the Maemo platform used by the Nokia Internet Tablets and the Nokia N900 smartphone.
Hildon has also been selected as the framework for Ubuntu Mobile and Embedded Edition.
Hildon was an early instance of a software platform for generic computing on a tablet device intended for internet consumption. But Nokia did not commit to it as the only platform for its future mobile devices, and the project competed against other in-house platforms. The strategic advantage of a modern platform was not exploited, and Hildon was displaced by Series 60.
Components
The Hildon framework includes components that effectively provide a desktop environment.
Hildon Application Manager
Hildon Application Manager is the Hildon graphical package manager. It uses the Debian package management tools APT (Advanced Packaging Tool) and dpkg, and provides a graphical interface for installing, updating and removing packages. It is a limited package manager, designed specifically for end users, in that it does not directly offer the user access to system files and libraries. With the Diablo release of Maemo, Hildon Application Manager supports "Seamless Software Update" (SSU), which implements a variety of features to allow system upgrades to be performed easily through it.
Hildon Control Panel
Hildon Control Panel is the user settings interface for Hildon. It provides simple access to control panels used to change system settings.
Hildon Desktop
Hildon Desktop is the primary UI component of Hildon and makes up the bulk of what a user will see as "Hildon". It controls application launching and switching and general system control, and provides interfaces for task bar (application menu and task switcher), status bar (brightness and volume control), and home (internet radio and web search) applets.
Hildon Library
The Hildon library was originally developed by Nokia but, since Maemo 5, has been developed by Igalia and Lanedo (who developed Maemo GTK+, the Maemo version of GTK+). It is a set of mobile-specific GTK+ widgets for applications on Maemo. Up to Maemo 4, these widgets were designed for stylus usage. However, in Maemo 5, most widgets were deprecated and new widgets for direct finger manipulation were introduced, including a kinetic panning container.
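As a rough sketch of how these widgets were used from the Python bindings that shipped with Maemo (python-hildon): StackableWindow and PannableArea are real Maemo 5 Hildon widgets, but treat the exact binding API shown here as an assumption.

```python
# Hypothetical python-hildon sketch (Maemo 5 era); API details are assumptions.
import gtk
import hildon

program = hildon.Program.get_instance()
window = hildon.StackableWindow()        # finger-friendly top-level window
window.set_title("Hildon demo")

pannable = hildon.PannableArea()         # kinetic (flick-to-scroll) container
pannable.add_with_viewport(gtk.Label("Drag to pan"))
window.add(pannable)

program.add_window(window)
window.connect("delete-event", gtk.main_quit)
window.show_all()
gtk.main()
```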
See also
Qt Extended (Improved)
References
External links
Hildon – GNOME wiki
Maemo – GNOME wiki
maemo.org
Hildon UI Style Guide
Programming tools
Free software
GNOME Mobile
GTK |
5509885 | https://en.wikipedia.org/wiki/SoapUI | SoapUI | SoapUI is an open-source web service testing application for Simple Object Access Protocol (SOAP) and representational state transfer (REST). Its functionality covers web service inspection, invoking, development, simulation and mocking, functional testing, and load and compliance testing. A commercial version, SoapUI Pro, which mainly focuses on features designed to enhance productivity, was also developed by Eviware Software AB. In 2011, SmartBear Software acquired Eviware.
SoapUI was initially released to SourceForge in September 2005. It is free software, licensed under the terms of the European Union Public License. Since the initial release, SoapUI has been downloaded more than 2,000,000 times. It is built entirely on the Java platform and uses Swing for the user interface, which means that SoapUI is cross-platform. Today, SoapUI also supports IntelliJ IDEA, Eclipse, and NetBeans.
SoapUI can test SOAP and REST web services, JMS, AMF, as well as make any HTTP(S) and JDBC calls.
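To make concrete what "invoking" a SOAP service involves, here is a minimal sketch of the kind of request SoapUI generates and sends; the endpoint, operation, and SOAPAction are hypothetical, while the envelope structure is standard SOAP 1.1.

```python
# Hypothetical SOAP 1.1 call of the sort SoapUI automates (Python 3 + requests).
import requests

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <GetQuote xmlns="http://example.com/stock">  <!-- hypothetical operation -->
      <Symbol>ACME</Symbol>
    </GetQuote>
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(
    "http://example.com/stockservice",           # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/stock/GetQuote"},
)
print(response.status_code)
print(response.text)
```

SoapUI builds such requests automatically from a service's WSDL and lets testers edit, replay, and assert on the responses.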
Features
SoapUI
Core features include web services:
inspection
invoking
development
simulation and mocking
functional, compliance and security testing
SoapUI Pro
SoapUI Pro is the commercial enterprise version. SoapUI Pro adds a number of productivity enhancements to the SoapUI core, which are designed to ease many recurring tasks when working with SoapUI.
Awards
SoapUI has been given a number of awards. These include:
Jolt Awards 2014: The Best Testing Tools
ATI Automation Honors, 2009
InfoWorld Best of Open Source Software Award, 2008
SOAWorld Readers' Choice Award, 2007
See also
Apache JMeter
Automated testing
itko
List of unit testing frameworks
LoadUI
Software testing
System testing
Test case
Test-driven development
TestComplete
xUnit – a family of unit testing frameworks
References
External links
API Testing Dojo
Free computer programming tools
Cross-platform software
Web service development tools
Software testing tools
2005 software
Software using the European Union Public Licence |
2204307 | https://en.wikipedia.org/wiki/Xbox%20Linux | Xbox Linux | Xbox Linux was a project that ported the Linux operating system to the Xbox video game console. Because the Xbox uses a digital signature system to prevent the public from running unsigned code, one must use either a modchip or a softmod. Originally, modchips were the only option; however, it was later demonstrated that the TSOP chip on which the Xbox's BIOS is held may be reflashed. This way, one may flash on the "Cromwell" BIOS, which was developed legally by the Xbox Linux project. Catalyzed by a large cash prize for the first team to provide the possibility of booting Linux on an Xbox without the need of a hardware hack, numerous software-only hacks were also found. For example, a buffer overflow was found in the game 007: Agent Under Fire that allowed the booting of a Linux loader ("xbeboot") straight from a save game.
The Xbox is essentially a PC with a custom 733 MHz Intel Pentium III processor, a 10 GB hard drive (8 GB of which is accessible to the user), 64 MB of RAM (although on all earlier boxes this is upgradable to 128 MB), and four USB ports. (The controller ports are actually USB 1.1 ports with a modified connector.) These specifications are enough to run several readily available Linux distributions.
From the Xbox-Linux home page:
The Xbox is a legacy-free PC by Microsoft that consists of an Intel Celeron 733 MHz CPU, an nVidia GeForce 3MX, 64 MB of RAM, a 8/10 GB hard disk, a DVD drive and 10/100 Ethernet. As on every PC, you can run Linux on it.
An Xbox with Linux can be a full desktop computer with mouse and keyboard, a web/email box connected to TV, a server or router or a node in a cluster. You can either dual-boot or use Linux only; in the latter case, you can replace both IDE devices. And yes, you can connect the Xbox to a VGA monitor.
Uses
An Xbox with Linux installed can act as a full desktop computer with mouse and keyboard, a web/email box connected to a television, a server, router or a node in a cluster. One can either dual-boot or use Linux only; in the latter case, one can replace both IDE devices. One can also connect the Xbox to a VGA monitor. A converter is needed to use keyboards and mice in the controller ports; however, this is not difficult to arrange, as the Xbox uses standard USB with a proprietary port.
Currently, only a few distributions of Xbox Linux will run on the version 1.6 Xbox (the third-newest version, including 1.6b). Xboxes with modchips and the Cromwell BIOS installed can run more distributions than those with only a softmod. This is mainly due to issues with the video chip used in version 1.6 Xboxes, which was developed exclusively by Microsoft and for which no source code is available at this time. This can cause significant overscan on all four sides of the screen when a kernel other than the original is loaded.
Softmod
One of the more popular ways of installing Xbox Linux is through a softmod, which does not require a modchip to use. The Xbox Linux softmod utilizes a save exploit found in the original run of MechAssault, Splinter Cell, 007: Agent Under Fire, and Tony Hawk's Pro Skater 4. The method involves loading a hacked save file transferred to the Xbox's Hard Drive. When the save file is loaded, the MechInstaller is initiated. The Xbox Live option on the dashboard is replaced with the new Linux option after rebooting the system. Another softmod that can be used is the hotswap exploit which will unlock the Xbox hard drive long enough to allow one to modify it.
There is also a way to completely replace the Xbox's stock BIOS with a "Cromwell" BIOS, which is completely legal and is solely for Linux on the Xbox. However, once the TSOP (BIOS chip) is flashed with "Cromwell", the Xbox can no longer play Xbox games or run native Xbox executables (.xbe files, akin to .exe for Windows).
List of distributions
There are several distributions of Xbox Linux, most of which are based on PC Linux distributions.
See also
Free60
Linux for PlayStation 2
OtherOS
References
External links
Project site on SourceForge.net
Xbox Hacking official document
SoftMod Xbox for Free (Hotswap Technique!)
Platform-specific Linux distributions
Xbox (console) software
Game console operating systems
Discontinued Linux distributions
Linux distributions |
31178582 | https://en.wikipedia.org/wiki/Euclideon | Euclideon | Euclideon Pty Ltd is an Australian computer software company best known for a middleware 3D graphics engine, called Unlimited Detail. Euclideon is also the parent company and operator of Holoverse, a 'holographic entertainment centre' located on the Gold Coast, in Queensland, Australia.
Euclideon claims that Unlimited Detail is based on a point cloud search engine indexing system and that the technology can provide 'unlimited graphics power', proposing it as a replacement for polygon-based rendering.
In 2010 Euclideon was the recipient of approximately $2 million, the largest grant awarded by the Australian Federal Government under its new Commercialisation Australia initiative. The funds provided by the grant were intended to support the implementation of multi-platform functionality, allowing Euclideon's technology to run on a variety of hardware platforms, including mobile phones and game consoles.
Unlimited Detail
Unlimited Detail is described by Euclideon as a form of point cloud search engine indexing system, which uses a large number of individual points to create models, instead of a more traditional polygon mesh. According to their description, the engine uses a search algorithm to determine which of these points are visible on-screen, and then displays only these points. On a 1024 × 768 display, for example, the engine would display only 786,432 visible points in each frame. As the engine displays the same number of points in every frame, the level of geometric detail provided is limited only by the amount of hard-drive space needed to store the point cloud data, and the rendering speed is limited only by the screen resolution.
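Euclideon has not published its algorithm, so the following is only a toy illustration of the claimed property: per-frame work is bounded by the number of screen pixels, because each pixel asks a spatial index for exactly one visible point, rather than by the size of the point cloud. The grid index and ray march below are stand-ins, not Euclideon's actual search.

```python
# Toy sketch: pixel-bounded point-cloud rendering (not Euclideon's algorithm).
from typing import Dict, Optional, Tuple

Cell = Tuple[int, int, int]
Point = Tuple[float, float, float]

def build_index(points, cell_size: float) -> Dict[Cell, Point]:
    """Quantise the cloud into a grid, one representative point per cell."""
    index: Dict[Cell, Point] = {}
    for p in points:
        key = tuple(int(c // cell_size) for c in p)
        index.setdefault(key, p)  # a real system would pick the best point
    return index

def first_hit(index: Dict[Cell, Point], origin: Point, direction: Point,
              cell_size: float, max_steps: int = 256) -> Optional[Point]:
    """March a ray cell by cell; return the first occupied cell's point."""
    for step in range(max_steps):
        t = step * cell_size
        key = tuple(int((o + d * t) // cell_size)
                    for o, d in zip(origin, direction))
        if key in index:
            return index[key]
    return None

# One lookup per pixel: cost scales with resolution, not with cloud size.
index = build_index([(0.0, 0.0, 5.0), (1.0, 0.0, 9.0)], cell_size=1.0)
print(first_hit(index, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), cell_size=1.0))
```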
Euclideon have previously described their technique as being a voxel rasterizer, but decided to use their own terminology such as "3D atoms" and "point cloud", saying that "that word [voxels] doesn't have the prestige in the games industry that it enjoys in medicine and the sciences".
History
The project was first showcased at the Australian Game Developers Conference in 2003.
In 2011, Euclideon gained worldwide attention online when it released a number of video demos showcasing its 'Unlimited Detail' technology, attracting both skepticism and interest from the gaming press. Minecraft developer Markus Persson was critical of the demos, arguing that Euclideon portrayed the software as "revolutionary" while it might suffer the same limitations as existing voxel renderers. John Carmack said the technology had no chance of appearing in a game on current-generation systems, but perhaps several years from now, and Crytek's Cevat Yerli called the technology "definitely credible." Euclideon later released several interviews with CEO Bruce Dell responding to critics' concerns.
In September 2012 Bruce Dell filed a patent describing the rendering algorithm said to be used in Euclideon's software. The patent application was published on 27 March 2014.
The Unlimited Detail Engine was noted in a review of DigiDoc Scotland by CyArk as "incredible" and "game changing". Shortly after, Euclideon was a sponsor and attendee at ILMF in Denver, 11–13 February 2013, showcasing Unlimited Detail-enabled products.
In May 2013, another demonstration of the capabilities of the Euclideon 3D engine showcased Geoverse, highlighting geospatial use with the newly offered SDK.
In June 2013 a former employee of Euclideon claimed that a significant number of staff had been let go, while mentioning that clients were very impressed with Euclideon's technology and that plans were in place to develop the Unlimited Detail engine to "a stage where it could eventually be utilised for games".
Euclideon appeared again in 2014 with a new video on its YouTube channel showcasing its solid laser scan technology, and announced that it was working on two games and planned to open a gaming division in 2015.
In 2016 Euclideon released another video in which it showed a "revolutionary" take on the virtual reality (VR) market, claiming to have created real-life hologram rooms. It announced that it would open a 4D hologram room entertainment arcade on 4 June 2016 in Southport, Queensland, Australia. The project goes under the name "Holoverse".
In August 2017, Euclideon developed a multi-user hologram table, which allows four people to interact simultaneously with images projected onto the table surface.
In November 2018, Euclideon released a video unveiling what it calls a 'hologram arcade table'. In the video, Bruce Dell described it as an arcade table that uses 'holographic technology' to play games, and that the table 'creates objects out of light that float in the air about 90 centimetres away from the surface'. Euclideon displayed their hologram arcade table at IAPPA 2018 in Orlando, Florida.
In July 2020, Euclideon released udStream, a 3D data visualisation tool aimed at geospatial users. The product was released on a freemium model, offering capped levels of data storage to entry-level customers.
Geoverse Suite
Geoverse Suite is intended to allow users to view terabytes of point cloud data from laser scans of cities and terrain. The point cloud is intended to be viewable within a second of the file being loaded, and at a high frame rate. Rather than being loaded into RAM, the data is streamed from a local hard drive, a USB drive, or over a network connection.
The Geoverse Suite currently contains two software products - Geoverse Massive Data Manager (Geoverse MDM) and Geoverse Convert. The software requirements for Geoverse MDM recommend 2 GiB of RAM.
Geoverse Convert is the tool that is used to convert typical mesh and point cloud formats into Euclideon's proprietary streamable UDS (Unlimited Detail Data Set) format.
Geoverse MDM allows users to view these UDS files using the Unlimited Detail technology. UDS files may be streamed from external servers or the internet without having to first download the files. It also contains standard industry features such as placing bookmarks, labels and taking measurements.
On 30 September 2013, Merrick & Company (a $116 million geospatial technology firm) announced that it had signed an agreement with Euclideon to distribute the Geoverse software product line within North America.
In July 2013, the Austrian company Meixner Imaging GmbH, part of Meixner Group and one of Europe's leading geospatial companies, signed an agreement that appointed Meixner as the premium distributor for Geoverse software throughout Europe.
Solidscan
Solidscan is the latest software available to the public for purchase. Solidscan 'converts a laser scan into a solid, photo-realistic representation of the real world.' Solidscan's technology significantly improves the visual quality of laser scanner data (which typically appears as a sparse array of points) while still keeping the dimensions of objects true-to-life.
Web streaming technology
In 2014, Euclideon released its first public demo in the form of a web viewer. Codenamed 'udWeb', the technology allowed users browsing with Google Chrome to stream multi-gigabyte point cloud models to their browser without a plugin. The demonstration appeared to be faster than traditional streaming approaches, but no product was released and the demo was taken down.
Holoverse
In June 2016, the company opened Holoverse, a 'holographic entertainment centre' in Southport funded by a grant from the Federal Government along with private equity, allowing visitors to step into hologram rooms and be "transported to other worlds".
In March 2017 the Gold Coast Bulletin announced that Euclideon will be opening numerous Holoverse centers around the world with the next center opening in Oman. In November 2018, Euclideon announced that they will be opening their second Holoverse location in Muscat, Oman.
See also
Sparse voxel octree
Search algorithm
Search engine indexing
Point cloud
Lidar
References
External links
Official website
Australian companies established in 2010
Software companies of Australia |
31356221 | https://en.wikipedia.org/wiki/Apache%20OODT | Apache OODT | The Apache Object Oriented Data Technology (OODT) is an open source data management system framework that is managed by the Apache Software Foundation. OODT was originally developed at NASA Jet Propulsion Laboratory to support capturing, processing and sharing of data for NASA's scientific archives.
History
The project started out as an internal NASA Jet Propulsion Laboratory project conceived by Daniel J. Crichton, Sean Kelly and Steve Hughes. The early focus of the effort was on information integration and search using XML, as described in Crichton et al.'s paper at the CODATA meeting in 2000.
After deploying OODT to the Planetary Data System and to the National Cancer Institute's Early Detection Research Network (EDRN) project, OODT in 2005 moved into the era of large-scale data processing and management via NASA's Orbiting Carbon Observatory (OCO) project. OODT's role on OCO was to usher in a new data management and processing framework that, instead of tens of jobs and tens of gigabytes of data per day, would handle 10,000 jobs per day and hundreds of terabytes of data. This required an overhaul of OODT to support these new requirements. Dr. Chris Mattmann at NASA JPL led a team of 3–4 developers between 2005 and 2009 and completely re-engineered OODT to support these new requirements.
Influenced by the emerging Apache Nutch and Hadoop efforts, in which Mattmann participated, OODT was given an overhaul making it more amenable to Apache Software Foundation-style projects. In addition, Mattmann had a close relationship with Dr. Justin Erenkrantz, the Apache Software Foundation President at the time, and the idea of bringing OODT to the Apache Software Foundation emerged. In 2009, Mattmann and his team received approval from NASA and from JPL to bring OODT to Apache, making it the first NASA project to be stewarded by the foundation. Seven years later, the project released version 1.0.
Features
OODT focuses on two canonical use cases: big data processing and information integration. Both were described in Mattmann's ICSE 2006 and SMC-IT 2009 papers. It provides three core services.
File Manager
A File Manager is responsible for tracking file locations, their metadata, and for transferring files from a staging area to controlled access storage.
Workflow Manager
A Workflow Manager captures control flow and data flow for complex processes, and allows for reproducibility and the construction of scientific pipelines.
Resource Manager
A Resource Manager handles allocation of Workflow Tasks and other jobs to underlying resources, e.g., Python jobs go to nodes with Python installed on them; jobs that require a large disk or CPU are properly sent to those nodes that fulfill those requirements.
In addition to the three core services, OODT provides three client-oriented frameworks that build on these services.
File Crawler
A File Crawler automatically extracts metadata, uses Apache Tika to identify file types, and ingests the associated information into the File Manager.
Catalog and Archive Crawling Framework
A Push/Pull framework acquires remote files and makes them available to the system.
Catalog and Archive Service Production Generation Executive (CAS-PGE)
A scientific algorithm wrapper (called CAS-PGE, for Catalog and Archive Service Production Generation Executive) encapsulates scientific codes and allows for their execution independent of environment, capturing provenance while doing so and making the algorithms easy to integrate into a production system.
CAS RESTful Services
A set of RESTful APIs that exposes the capabilities of the File Manager, Workflow Manager and Resource Manager components.
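As a sketch of what a client of those RESTful services might look like — note that the base URL, path, and response shape below are illustrative placeholders, not documented OODT routes:

```python
# Hypothetical client of an OODT CAS REST deployment (paths are placeholders).
import requests

BASE = "http://oodt.example.com:8080/cas"  # placeholder deployment URL

# Ask the (hypothetical) File Manager endpoint for ingested products.
resp = requests.get(f"{BASE}/filemgr/products",
                    params={"productTypeName": "GenericFile"})
resp.raise_for_status()
for product in resp.json():               # assumed JSON list of product records
    print(product.get("name"), product.get("transferStatus"))
```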
OPSUI Monitor Dashboard
A web application for exposing services from the underlying OODT product, workflow and resource managing control systems via the JAX-RS specification. At this stage it is built using Apache Wicket components.
The overall motivation for OODT's re-architecting was described in a 2013 paper in Nature by Mattmann, called A Vision for Data Science.
OODT is written in Java and, through its REST API, is used from other languages including Python.
Notable uses
OODT has recently been highlighted as contributing to NASA missions including Soil Moisture Active Passive and New Horizons. OODT also helps to power the Square Kilometre Array telescope, extending the scope of its use across Earth science, planetary science, radio astronomy, and other sectors. OODT is also used within bioinformatics and is a part of the Knowledgent Big Data Platform.
References
External links
http://oodt.apache.org
OODT
Java platform
Free software programmed in Java (programming language)
Java (programming language) libraries
Software using the Apache license |
13536263 | https://en.wikipedia.org/wiki/List%20of%20Veronica%20Mars%20characters | List of Veronica Mars characters | Veronica Mars is an American television series created by Rob Thomas. The series premiered on September 22, 2004, during UPN's last two years, and ended on May 22, 2007, after a season on UPN's successor, The CW Television Network. Balancing murder mystery, high-school and college drama, the series features social commentary with sarcasm and off-beat humor in a style often compared to film noir. Set in the fictional town of Neptune, the series starred Kristen Bell as the title character, a student who progressed from high school to college during the series while moonlighting as a private investigator under the wing of her detective father.
The first season had seven regular characters. As Thomas had conceived the show as a one-year mystery, he decided to introduce and eliminate several characters in order to create an "equally fascinating mystery" for the series' second season. Thomas needed "new blood" since he felt unable to bring back the Kanes and the Echolls and "have them all involved in a new mystery". The third season features a cast of ten actors who receive billing, an increase from the nine actors in the second. Three of the regulars in the second season are written out of the series, two new characters are introduced and two others are upgraded from recurring roles.
Overview
Main characters
Veronica Mars
Kristen Bell portrays the titular Veronica Mars, a high school junior and skilled private detective. Bell was chosen to play Veronica Mars from more than 500 women who auditioned for the role. Bell felt that it was "just luck" that Rob Thomas saw that "I have some sass to me, and that's exactly what he wanted." Bell thought that her cheerleader looks and outsider's attitude set her apart from the other women who auditioned.
Duncan Kane
Teddy Dunn portrayed Duncan Kane, Veronica's ex-boyfriend and Lilly's brother. Dunn originally auditioned for Logan but ended up portraying Duncan Kane. Dunn left the series midway through the second season because Thomas felt that the Logan-Veronica-Duncan love triangle had run its course. He needed to put "other guys in her life" to keep the series fresh and attributed Dunn's removal to fan interest in the Logan-Veronica relationship, saying "it became clear that one suitor won out".
Logan Echolls
Jason Dohring played Logan Echolls, the "bad-boy" 09er son of an A-list actor. Dohring originally auditioned for the role of Duncan Kane. After his audition, the producers asked Dohring to audition for Logan's character, who was only going to be a guest role in the pilot. Dohring felt that his audition for Duncan "was a little dark", and was told by the producers that it was "not really right". The producers then asked Dohring to read for the role of Logan. Dohring acted one scene from the pilot, bashing a car's headlights in with a tire iron. During the final auditions, Dohring read two times with Bell and met with the studio and the network. When reading with Bell, Dohring acted the whole scene as if he was the one who raped her and tried to give the character an evil feel.
Wallace Fennel
Percy Daggs III portrayed Wallace Fennel, Veronica's best friend and frequent partner in mystery solving. Daggs auditioned for Wallace's role twice before being cast and had to go through three tests with the studio and network executives. During his first audition, Daggs read four scenes from the pilot. Just before his studio test, Daggs read with Bell and had "a great conversation." He said that she "made me feel comfortable about auditioning" and was a big reason why he became more comfortable playing Wallace as the season went on.
Weevil
Francis Capra portrayed Eli "Weevil" Navarro, the leader of the PCH Biker gang and Veronica's friend. Despite often being in trouble with the law, Weevil helps Veronica solve many of her cases. He is wrongfully accused of fellow gang member Thumper's murder and is arrested at his graduation. He later becomes a janitor at Hearst College after he is cleared.
Capra reprises his role in the Veronica Mars film, in which Weevil is revealed to have since married a woman named Jade, with whom he has a daughter, Valentina.
Keith Mars
Enrico Colantoni played Veronica's father Keith Mars, a private investigator and former Balboa County Sheriff. Veronica often helps him solve cases.
Mallory Dent
Sydney Tamiia Poitier played Mallory Dent, Veronica's journalism teacher at Neptune High, through the first half of the first season. Ms. Dent, as she was commonly referred to, was the only teacher who took Veronica at face value, not based on prejudices. Together with Veronica, she was responsible for uncovering the voting scam that led to Duncan winning the school elections illegally. Ms. Dent left her job after becoming pregnant. Although she was given regular series billing, Poitier appeared in only four episodes but was given credit for seven. Poitier's removal from the series was rumored to be due to budget issues.
Jackie Cook
Tessa Thompson portrayed Jackie Cook in the second season as Wallace's romantic interest and the daughter of a famous baseball player. Fan reaction to the character was generally negative, particularly after Veronica witnessed Jackie talking to a guy while dating Wallace. Thomas blamed the character's reception on his error in judgment: he had hoped fans would question whether it was Jackie or Veronica in the wrong; however, the audience automatically assumed that it was Jackie. Thomas decided not to change the story arc he had planned for Jackie, as he believed Thompson was "a fantastic actress and she's got more to play." He said that whether fans ended up liking Jackie was "up in the air", but he hoped that they did because "she's really, really good". Jackie was subsequently written out of the series at the end of the second season.
Dick Casablancas
Ryan Hansen portrayed Richard "Dick" Casablancas Jr., an 09er friend of Logan, a womanizer and former high-school bully turned frat boy. Dick was a recurring character in the first season but was upgraded to series regular in the second season.
Beaver Casablancas
Kyle Gallner acted as Cassidy "Beaver" Casablancas, Dick's introverted younger brother. Cassidy was a recurring character in the first season, but was upgraded to series regular in the second season.
Mac
Tina Majorino portrayed Cindy "Mac" Mackenzie, a computer expert befriended by Veronica. Mac was a recurring character in the first two seasons but was upgraded to series regular in the third.
Parker Lee
Julie Gonzalo portrayed Parker Lee, Mac's extroverted roommate at Hearst College, described by Thomas as "everything that Mac is not." Parker was introduced in the third season as a series regular.
Piz
Chris Lowell played Stosh "Piz" Piznarski, Wallace's roommate at Hearst College and a music lover with his own campus radio show. Piz was introduced in the third season as a series regular and was named after the director of the pilot, Mark Piznarski. The character's role was to give Veronica another male friend, middle-class rather than upper-class. Thomas used the radio show as a narrative device to capture the mood of the university. Throughout the third season Piz is implied to have a crush on Veronica. The two develop a friendship, although he is initially hurt by seeing her with Logan. After Veronica breaks up with Logan, she starts dating Piz in "Debasement Tapes." When a sex tape of Veronica and Piz starts circulating via email, Logan beats up Piz, believing him to be behind it. Veronica's investigation reveals that the video originated from a Hearst College secret society named The Castle.
Piz reappears in the Veronica Mars film, where he is shown to have since relocated to New York City with Veronica. When she temporarily visits her hometown to help ex-boyfriend Logan find a lawyer, she gets dragged to her high-school reunion by Wallace and Mac. Piz shows up at the reunion to surprise her, only to join the fight that ensues after Madison Sinclair plays a copy of his and Veronica's college sex tape. Piz returns to New York the following morning. After Veronica stays in Neptune longer than initially planned and fails to show up to meet Piz's parents, he breaks up with her.
Don Lamb
Michael Muhney portrayed Don Lamb, the Balboa County Sheriff who won the office from Keith in the recall election spearheaded by Jake Kane. Lamb was a recurring character in the first two seasons, but was upgraded to series regular in the third.
Gia Goodman
Krysten Ritter portrays Gia Goodman, the daughter of influential Neptune citizen and professional baseball team owner Woody Goodman, and a student at Neptune High. Gia starts attending Neptune High after she transfers from a private boarding school. She attends the class field trip to Shark Field and meets Dick Casablancas, his brother Cassidy, Duncan Kane, and Veronica for the first time. Gia rides the school bus on the way to the stadium; however, she opts to ride in Dick's limo with the other "09ers" on the way back. The bus crashes, and all of the passengers die, save for Meg Manning. Later in the year, Gia invites Veronica to her slumber party. Veronica attends; however, she has an ulterior motive and is investigating families that Meg babysat for. When Gia is sent footage of herself at her brother's soccer game, she hires Veronica to find the sender. Gia and Veronica's friendship is broken when a feud erupts between their fathers, and she fires Veronica from the case. When Veronica realizes that Gia's stalker is Neptune High's janitor, Tommy "Lucky" Dohanic, she rushes to her aid. Lucky is arrested, but he is released when the Mannings pay for his bail. The next day, he comes to school with a gun looking for Gia; however, he is shot and killed by school security.
Gia reappears in the Veronica Mars film as a wealthy socialite engaged to Luke Haldeman, a former classmate. Veronica initially suspects her to be responsible for the death of Carrie Bishop (who became a self-destructive pop star under the name Bonnie DeVille before the film's events). Veronica connects the murder to Carrie's best friend, Susan Knight, who allegedly died in a boating accident nine years earlier. She concludes that Gia and Luke, who were on the boat with Susan, covered up her death and killed Carrie because she threatened to confess. Veronica sends bugged flowers to Gia's apartment and calls her, playing recordings of Carrie's voice in an attempt to scare her into confessing. Gia panics and calls Stu "Cobb" Cobbler, another classmate who was on the boat with her. Veronica goes over to her apartment to confront her, and Gia reveals that Cobb was the mastermind behind Carrie's death as well as what happened on Susan's boat: Susan succumbed to alcohol poisoning, and Cobb took photos of a panicked Carrie, Gia, and Luke dumping Susan's body. He had been using the images to blackmail everyone. Veronica's bug broadcast everything on a frequency she believed to be unused but which was used by a local rock station. Cobb heard everything from a radio in his apartment in the building across the street and then shot Gia through the window before coming after Veronica. Veronica calls the police and lures Cobb to the basement, beating him unconscious with a golf club. Gia is revealed to have died from her gunshot wound, and Cobb is arrested for murdering her and Carrie.
Alonzo Lozano
Clifton Collins Jr. portrays Alonzo Lozano, a hitman for a Mexican cartel. Introduced in season 4, he begins a romantic relationship with Weevil's sister Claudia.
Daniel Maloof
Mido Hamada portrays Daniel Maloof, a congressman and Alex's older brother. After Tawny's family attempts to attack him and Alex over an engagement ring, he hires Logan as his bodyguard.
Matty Ross
Izabela Vidovic portrays Matty Ross, a teenager who lost her father in the Spring Break bombings.
Recurring characters
Members of main characters' families
Casablancas family
Kendall Casablancas
Charisma Carpenter portrays Kendall Casablancas (née Priscilla Banks, living under the assumed identity Kendall Lacey Shifflet), Cassidy and Dick's gold-digging stepmother. Kendall first appears in the season 2 premiere, having married real estate magnate Richard Casablancas, seemingly for his vast wealth. She becomes the stepmother of Dick and Cassidy and begins an affair with their friend, Logan. Cassidy hires Veronica to investigate his stepmother, and she discovers that Kendall helped Richard commit real estate fraud. Veronica tells Cassidy and gives the information to the Securities and Exchange Commission; however, Richard flees the country before he can be arrested. Richard's lawyers tell Kendall that while the boys both have trust funds, she has essentially nothing. Desperate for money, Kendall begins selling Richard's belongings and turns to Logan and Duncan, to no avail. Cassidy, who had started a real estate company called Phoenix Land Trust, needed Kendall to be his CEO for business purposes because of his underage status. Although Kendall and Cassidy admitted their mutual dislike of each other, the prospect of a salary eventually made Kendall come around. The business continued, though most people assumed Richard, not Cassidy, was behind Phoenix Land Trust. When Cassidy informs Kendall they need to acquire more capital, she visits the incarcerated Aaron. She makes a deal with him: he will buy into Phoenix Land Trust if she goes to Duncan and Logan's suite and retrieves something for him. Kendall goes to the suite and removes some of Duncan's hair from the shower drain. Aaron later uses the hair to plant false evidence implying that Duncan committed the crime for which Aaron was on trial, and Aaron is acquitted.
When Keith discovers that Kendall is the beneficiary of life insurance policies on her stepsons, who could have been killed in the bus crash that claimed the lives of several people, he begins investigating. He and Veronica discover that everything they know about Kendall is a lie: she is not a dumb trophy wife but a con artist named Priscilla Banks. She had been working with the Fitzpatrick crime family and spent time in jail to protect her lover, Cormac Fitzpatrick (Jason Beghe). Keith breaks into Kendall's home, which leads to a dangerous confrontation with Kendall and Liam Fitzpatrick. When Cormac is released from prison, Keith reunites the two, who plan to escape from Liam with the money Kendall received from Phoenix Land Trust. As Keith leaves, Cormac shoots and kills Kendall and tries to attack Keith, who escapes. Keith believes he got Kendall killed and feels guilty, since the two had formed a cordial relationship while he was working for her. He returns to the crime scene with the police, but neither Kendall's nor Cormac's body is found; Cormac is believed to have been killed by Liam for not giving up the money.
Richard "Big Dick" Casablancas
David Starzyk portrays Richard "Big Dick" Casablancas Sr., Cassidy and Dick's father. Richard marries Kendall as a trophy wife and openly favors Dick over Cassidy. When Veronica discovers that he has committed real estate fraud and tells the U.S. Securities and Exchange Commission (SEC), Richard flees the country before he can be arrested.
Richard reappears as a recurring character in season 4, where it is revealed that he served a stint in prison after the SEC finally caught up with him.
Terrence Cook
Jeffrey Sams portrays Terrence Cook, an ex-major-league baseball player, gambler, and Jackie's formerly estranged father. He's introduced in season 2 along with Jackie. Terrence is initially suspected of causing a bus crash that killed several Neptune High students in the season 2 premiere.
Echolls family
Aaron Echolls
Harry Hamlin plays Aaron Echolls, an Oscar-winning A-list actor and the main antagonist of the season one finale. At the beginning of the series, Aaron is married to his second wife, Lynn, with whom he has a son, Logan. He adopted his daughter Trina while married to his previous wife. Aaron has numerous female fans thanks to his impressive physique and good looks, but while he is beloved by the American public, in private he is very different: he cheats on his wife and, easily angered, physically abuses Logan while tolerating Trina. Aaron quits acting when Lynn commits suicide and tries to reconcile with Logan and Trina. Upon learning that Trina's boyfriend is physically abusing her, he beats the boy, threatening worse if the boy ever returns. When Veronica exposes him as the man who killed Lilly, Aaron is indicted and held without bail. Logan visits his father in a holding cell after new evidence implicates Logan in a murder case. Aaron claims that he is innocent and tries to convince his son that he has been wrongfully accused. Logan, who despises his father, does not believe Aaron's story but recognizes that the evidence against Aaron is circumstantial at best. Later, Kendall visits Aaron in prison, and he agrees to invest in Phoenix Land Trust if she gets him some of Duncan's hair. Aaron uses the hair to plant false evidence implying Duncan committed the crime, and Aaron is acquitted. He goes to the Neptune Grand and has sex with Kendall. While she is in the shower, Clarence Wiedman appears and kills Aaron with a silenced pistol on Duncan's orders.
Lynn Echolls
Lisa Rinna portrays Lynn Echolls, Logan's mother and Trina's stepmother. Lynn was a famous movie star married to Aaron Echolls, an even bigger movie star. She tried to ignore Aaron's abuse of Logan by turning to alcohol and pills. After Aaron received several threatening letters, Lynn turned to Keith to discover who was behind them. The culprit turned out to be a caterer who had seen Aaron having an affair at a party and whom he had had fired. At the Echollses' Christmas party, the caterer knifed Aaron in the gut. Aaron recovered and hired Keith to find out who was leaking stories of his many infidelities to the press, only to discover that it was Lynn. When Aaron confronted her, she said that she wanted to hurt him the way he hurt her, went back to her car, took some pills, and drove away. Her car was found abandoned on the Coronado Bridge. She had jumped, but no body was found. Even after the funeral, Logan refused to believe that his mother was dead. He thought she had left him clues that she was still alive (for example, a lighter engraved with "Free At Last" that she always kept in her purse) and that she had faked her death to get away from Aaron. Logan hired Veronica to prove his mother was alive; however, a video filmed by a freshman and his friends who were making a movie showed her jumping from the bridge in the background, at the exact time it was believed Lynn had jumped. The video's evidence, together with the discovery that Trina was the one using his mother's credit cards, finally convinces Logan that she is gone, and he breaks down crying in Veronica's arms. Later, when Lynn's will is read, it is discovered that she had recently altered it, cutting Aaron out and leaving everything to Logan. Trina initially thought she had been cut out too, only to be informed that she had not been left anything before the will was altered.
Trina Echolls
Alyson Hannigan plays Trina Echolls, Logan's adopted sister and a struggling actress. She is first mentioned by Aaron in "Clash of the Tritons," who says that she is the only one who will still talk to him. When Logan asks Veronica to investigate his mother's death, they discover that Trina is staying at the Neptune Grand. The film Trina had been shooting in Australia had fallen through, and she had decided to return home. Trina moves back into the house and gets into financial trouble after borrowing money from her boyfriend, Dylan. She asks Logan for a loan; however, he refuses. Because Trina cannot repay Dylan, he beats her, which Logan discovers after seeing her limping with a black eye. When Dylan arrives to talk to Aaron about starring in his movie, Aaron confronts and beats him.
In "My Mother, the Fiend," Trina returns to Neptune as the "Special Celebrity Director" of the school play. During this time, Veronica investigates the case of a baby left in the bathroom of Neptune High during the prom of 1980. She believes Celeste Kane is the mother and wants her to own up to abandoning the child. Veronica eventually discovers Trina was the prom baby and tells Trina the truth, excited at the thought of inheriting some of Celeste's wealth. The next day at school, Veronica and Trina talk when Mary, the deaf lunch lady, begins pointing to herself and then to Trina. Veronica eventually deciphers her sign language: Mary is Trina's biological mother. It turns out then-student Mary had an affair with a teacher, and she left baby Trina at his doorstep. He then left the baby in the bathroom at the prom, knowing they would think it belonged to a student. It turned out the teacher was the current principal, Alan Moorehead. Trina busts him in the middle of a staff meeting in front of all the other teachers, and he is fired.
As a result of Hannigan's busy schedule, several of her storylines had to be swapped between episodes. Thomas said that he "loved having Alyson in the show" and had "a lot of fun with her".
Alicia Fennel
Erica Gimpel portrays Alicia Fennel, Wallace's mother, who dates Keith Mars at the end of season one and during season two. Alicia breaks up with Keith after he investigates her past with Wallace's biological father.
Kane family
Jake Kane
Kyle Secor recurs as Jake Kane, Lilly and Duncan's father, a software billionaire. He was Lianne Mars' high-school sweetheart and is having an affair with her at the start of season one. Jake invented streaming video, making him one of the most influential men in Neptune, California, via his software company, Kane Software. When Veronica interviews Abel Koontz, he suggests that Veronica is Jake's biological daughter rather than Keith's, because Jake and Lianne once dated in high school. Keith later takes a paternity test and is proved to be Veronica's biological father. Jake returns in the series finale as a member of the secret society "The Castle."
Celeste Kane
Lisa Thornhill plays Celeste Kane, Lilly and Duncan's mother. Celeste loathes Veronica because of her husband's affair with Veronica's mother. She disapproved of Veronica and Duncan's relationship and told Duncan that Veronica could be his half-sister because of Jake's affair with Lianne, which caused Duncan to break up with Veronica. Before Lilly died, Celeste showed resentment towards her daughter's way of life and blamed every problem the family faced on Lilly. In her last appearance, in "Donut Run," she hires Vinnie Van Lowe to find Duncan.
Lilly Kane
Amanda Seyfried portrays Lilly Kane, Duncan's older sister, Veronica's best friend, and Logan's girlfriend. Lilly was the daughter of Jake and Celeste Kane. She was murdered on October 1, 2003, roughly eleven months before the show's first episode. Abel Koontz, a disgruntled former employee of Jake's firm, confessed to the crime and was awaiting execution. Lilly appears on the show in a series of flashbacks and Veronica's daydreams, which portray her as a fun, wild, and stylish teenage girl. She was best friend to Veronica, who was dating Lilly's brother, Duncan, at the time. Before her death, Lilly had been dating Logan but had also been in a relationship with Eli "Weevil" Navarro, as she did not like to be tied down. Lilly's murder changes Veronica's life completely, acting as the catalyst for a series of events that include Veronica's father Keith losing his job, Veronica's mother Lianne leaving town, and all of Veronica's friends abandoning her. Through the first season, Veronica investigates Lilly's murder and finds that nothing is what it seems: all three of the Kanes falsified their alibis, Lilly's reported time of death was three hours off, and Clarence Wiedman, the head of security for Kane Software, called in the tip that got Abel Koontz arrested.
In the episode, "Leave It to Beaver," Veronica and Duncan discover several videotapes in Lilly's bedroom. They show Lilly in bed with Aaron, Logan's father. Veronica realizes that Aaron killed Lilly to get the tapes back. When Duncan found her body, he went into a catatonic state. When his parents discovered him reeling over Lilly's dead body, they assumed he had killed Lilly in an epileptic seizure. To protect their son, they began an elaborate cover-up of the murder. However, before Veronica can take the tapes to the police, Aaron tries to kill her. After a chase, Keith arrives, and he manages to subdue Aaron, and he is arrested. Later that night, Veronica has a dream about Lilly. The two of them are floating in a lily-covered pool. Leaning back and smiling, Veronica declared this was the way things were supposed to be and how they would be from now on. Lilly smiles sadly and replies, "You know how things are going to be from now on, don't you? You have to know." She then asks Veronica not to forget about her, and Veronica turns to see that Lilly has disappeared and is alone in the pool. Veronica promises that she could never forget her best friend. Lilly also appears as a hallucination in the season two premiere, "Normal Is the Watchword." Distracted by seeing Lilly, Veronica misses the bus and has to ride with Weevil back to town instead; the bus later crashes, and all aboard eventually die. Lilly returns for one last dream sequence in "Not Pictured," wherein Veronica's subconscious suggests that Lilly would have attended Vassar had she lived. As high-spirited as ever, Lilly brags over her sexual experiments and promises Veronica will understand once she goes to college. A painting of Lilly is seen in the season 3 finale as an homage to the character.
Thomas described Seyfried as "the biggest surprise of the year." When casting Lilly Kane, who would appear only occasionally as "the dead girl," Thomas did not receive the same caliber of auditions as he did for series-regular roles. He said that he had "never had a more cut and dry audition" than he did with Seyfried, and that she was "about 100 times better than anyone else that we saw; she was just spectacular". He added that she ended up being so good in the series that he used her three or four more times than he initially planned.
Lianne Mars
Corinne Bohrer recurs as Lianne Mars, Veronica's estranged and alcoholic mother. Lianne was the high school sweetheart of Jake Kane, whom she believed to be Veronica's father. At Neptune High, Lianne Reynolds was known as a gossip and a friend of the deaf girl, Mary, later revealed to be the biological mother of Trina Echolls. Lianne marries Keith but has an affair with Jake. After the Lilly Kane murder becomes public, Lianne develops a drinking problem and leaves town. When she returns, claiming to want to fix her problems, Veronica spends all of her college savings to put her mother into rehab. Lianne returns to stay, seemingly sober; however, Veronica finds out that Lianne did not finish rehab. Veronica asks Lianne to leave, and on her way out Lianne steals the $50,000 settlement Veronica received from the Kanes. Lianne later reappears in a dream of Veronica's in which Lilly was never murdered and Lianne is the perfect mother.
Charlie Stone
Ryan Eggold guest stars as Charlie Stone, Logan's half-brother and Aaron's son. When Logan asks Veronica to investigate his rapidly declining trust fund, she uncovers that Logan's father, movie star Aaron, had an illegitimate child. To keep this fact a secret from the tabloids, Aaron arranged for his accountant to deposit $10,000 a month from the Echolls estate into a fake charity set up to funnel the money to his son. After learning this information, Logan reaches out to his half-brother, finding a confidant and potential friend. Veronica continues her investigation and discovers that the person Logan believes to be his half-brother (Matt Czuchry) is a writer for Vanity Fair who posed as Charlie Stone to get Logan to share personal details about growing up in the Echolls home. The real Charlie Stone refused to answer the reporter's questions, preferring to remain anonymous. Logan, believing that Charlie set up the reporter, reveals his brother's identity on Larry King Live in an attempt to scoop the magazine profile. After Veronica informs him that the real Charlie had nothing to do with the reporter, Logan tearfully contacts his half-brother again.
Nathan Woods
Cress Williams appears as Nathan Woods, Wallace's biological father with a shady past as an undercover cop.
Claudia Navarro
Onahoua Rodriguez recurs as Claudia Navarro, Weevil's sister, who begins a relationship with Alonzo.
Maloof family
Amalia Maloof
Jacqueline Antaramian portrays Amalia Maloof, Daniel and Alex's mother. She disapproves of Alex's engagement to Tawny Carr, going so far as to offer to pay her off. She later hires Vinnie Van Lowe to find the ring Alex proposed with, a family heirloom.
Alex Maloof
Paul Karmiryan portrays Alex Maloof, Daniel's younger brother. Introduced in season 4, Alex survives one of the bombings. His parents disapprove of his fiancée, Tawny Carr, who dies in one of the bombings.
Residents of Neptune
Steve Batando
Richard Grieco portrays Steve Batando, a struggling actor and drug addict, and Mindy O'Dell's ex-husband. He kills Lamb with a bat and is then shot and killed by Sacks.
Harmony Chase
Laura San Giacomo recurs as Harmony Chase, a married woman with whom Keith becomes romantically involved.
Leo D'Amato
Max Greenfield recurs as Leo D'Amato, a new sheriff's deputy whom Veronica befriends. Without his consent, Veronica steals from the evidence room a recording of the anonymous tip implicating Abel Koontz in Lilly Kane's death. Leo is soon suspended over the break-in, and when Veronica attempts to apologize, he is reluctant to forgive her. When Veronica admits she has fallen for him, he dismisses the claim; however, they kiss at her school dance. When Veronica secretly begins to date Logan, she confesses the affair to Leo and ends the relationship. The pair manage to remain on good terms, and Leo assists Veronica in breaking up a dognapping ring the same night she breaks up with him. To get enough money to send his little sister, who has Down syndrome, to a private school, Leo steals the Aaron Echolls and Lilly Kane sex tapes and sells them to Logan. Logan promptly destroys the recordings because of their traumatic nature, inadvertently compromising the pending case against Aaron. Keith discovers Leo's participation in the theft but covers for him by stating that the tapes were stolen because Leo was not at his post. Leo is fired from the sheriff's department but is able to keep the money Logan paid him to help his sister. Leo begins working for a private security company and finds himself working for Woody Goodman's daughter Gia. He contacts Keith when he is concerned that his company's warehouse is about to be robbed, and he loses his job for involving Keith, although Keith asks him to return as a deputy. Greenfield reprises his role in the fourth season, where it is revealed that Leo has since joined the FBI; he returns to Neptune to aid in the Spring Break bombing case.
Liam Fitzpatrick
Rodney Rowland plays Liam Fitzpatrick, a formidable Irish-Catholic gangster and drug dealer. Liam is in charge of the Fitzpatrick crime family, also known as the Fighting Fitzpatricks. He shoots his brother, Cormac, in cold blood after Cormac fails to retrieve an original Van Gogh painting. It is hinted that he and his family are helping Vinnie Van Lowe win the election for sheriff; Liam tells Keith that if Vinnie does not win the election, or if Keith does anything to hurt Vinnie's campaign, he will kill Veronica.
Woody Goodman
Steve Guttenberg portrays Woody Goodman, the owner of the Sharks baseball team and Balboa County's executive. He is commonly known as the "Mayor of Neptune," although his actual title is county supervisor. At the end of season two, he is revealed to be a pedophile who molested several boys on his Little League teams, including Cassidy Casablancas. Woody is killed when explosives planted by Cassidy blow up his private plane.
Tom Griffith
Rick Peters plays Tom Griffith, a plastic surgeon and cocaine addict whom the Fitzpatricks ask to testify that Logan killed Felix. Logan retaliates by dating Griffith's daughter Hannah until Griffith drops the charges.
Abel Koontz
Christian Clemenson plays Abel Koontz, the man who falsely confessed to Lilly Kane's murder. Koontz was the person at Kane Software who perfected streaming media but was cheated out of the patent. He then tried to invent a technology that "would put Kane out of business," but he failed, and his wife walked out on him. Three months after Lilly's murder, Koontz confessed to murdering her, and Lamb found Lilly's backpack and shoes while searching Abel's houseboat, backing up the confession. Until this point, Keith's investigation had focused squarely on the Kane family as suspects, and it was his failure to investigate Koontz that led to his eventual ousting as Neptune's sheriff. A year later, when Veronica starts to investigate Lilly's murder, she visits Koontz on death row twice. During the first visit, Koontz quickly recognizes Veronica and reveals that Jake Kane may be her birth father. She is suspicious of Koontz's confession because he fired his attorney while on death row, and she discovers that the same shoes found on Koontz's houseboat had been in Lilly's room at the time of the murder. Veronica sneaks into Koontz's doctor's office and steals both his and Duncan's medical files. She finds out that Duncan has type-4 epilepsy, whose symptoms include violent outbursts and seizures, and for the first time she realizes that Duncan is a possible suspect in Lilly's murder. She also finds out that Abel Koontz has stomach cancer and is dying. She concludes that Jake Kane paid Koontz to confess to the murder, even though he was not the perpetrator, so that Keith Mars' investigation into the Kane family would end; Koontz accepted the offer because he knew he was dying and intended to give the money to his daughter, Amelia DeLongpré, whom he had neglected in the past. Veronica locates Amelia to help prove that Koontz is innocent, but Clarence Wiedman, head of Kane security, bribes her with three million dollars in exchange for her silence. During Veronica's second visit, she tells Koontz that she knows he is dying and that he was paid to confess to the murder. Koontz's innocence is further confirmed when Keith goes to Las Vegas to talk to a prostitute who was with Koontz at the time of Lilly's murder. Koontz's final appearance is in the second-season episode "Rat Saw God." After being cleared of all murder charges and released from prison, he asks Veronica to help locate Amelia before he dies. While investigating, Veronica discovers that Amelia's boyfriend killed her after the pair blackmailed Clarence Wiedman for more money. Instead of telling Koontz the truth, Veronica says that his daughter is happily living in Aspen and cannot fly down because of the weather. He dies at the end of the episode, though his death is not shown onscreen.
Cliff McCormack
Daran Norris recurs as Cliff McCormack, a public defender in Neptune, a Mars family friend, and a steady client of Mars Investigations, having served as an ally to Keith and Veronica. Although rarely a key player in the series, he has had some notable and memorable appearances. In "The Rapes of Graff," Cliff is seduced by an escort hired by Aaron Echolls to steal Cliff's briefcase, which contained the Logan Echolls murder case files and keys to the storage locker holding Aaron's personal belongings, including an Oscar statue. Norris reprises his role in the fourth season, where he assists Mars Investigations in the Spring Break bombing case.
Mindy O'Dell
Jaime Ray Newman appears as Mindy O'Dell, Dean O'Dell's wife. Mindy and Cyrus have no children of their own but parent Cyrus' adolescent son from a previous marriage and Mindy's younger son by her first husband, Steven Batando. In "President Evil," the O'Dells hire Keith to find Steven in the hope that his bone marrow will be a match for that of his biological son, who has been diagnosed with leukemia. Keith locates Steven, who refuses to donate his marrow. When Batando later disappears, Keith suspects that the O'Dells kidnapped him and took the marrow donation against his will, but he looks the other way. In "Hi, Infidelity," Veronica finds Mindy sharing a hotel room with Hank Landry, Veronica's criminology professor at Hearst College and an employee of Cyrus'. Keith puts Mindy under surveillance after Cyrus hires him to find out whether she is having an affair. Keith's surveillance turns up nothing, but Veronica reveals Landry's infidelity to her father, who tells Cyrus. Cyrus confides in Keith that Mindy was too young and too beautiful and that their marriage had been doomed to fail. Mindy is preparing to leave the Neptune Grand hotel room she shares with Landry when Cyrus enters and confronts them both about the affair, brandishing a loaded pistol. When Cyrus is found shot to death in his office in "Spit & Eggs," the insurance company rules his death a suicide, leaving the O'Dell family with no compensation for the loss of income. Mindy returns to Keith and hires him to prove that Cyrus was murdered in "Show Me the Monkey." Mindy finally receives the insurance money from her husband's death, buys a boat, and leaves town after being questioned by Acting Sheriff Keith Mars. In "Papa's Cabin," Hank Landry catches up with Mindy on her boat and later confesses to having accidentally killed her the night before he was found.
Vinnie Van Lowe
Ken Marino recurs as Vincent "Vinnie" Van Lowe, Keith's rival private investigator and the Mars family's main competition in Neptune. Vinnie has rather lax moral standards and is often willing to take on cases that Mars Investigations would refuse, which is why he has a much larger caseload than the Mars family. His cases have sometimes helped the Mars family, but his work has also been in opposition to theirs. In "Kanes and Abel's," Veronica attempts to find out who has been harassing Sabrina Fuller, the school board president's daughter and one of the top candidates for the Kane Scholarship, a full scholarship named in Lilly's honor and awarded to the valedictorian of Neptune High. In her investigation, Veronica discovers that one of the cars used in the harassment was owned by Vinnie's ex-wife, Debra Villareal, and figures out that Vinnie was the person hired to harass Sabrina. In "Donut Run," Van Lowe is hired by Celeste Kane to watch Veronica, because Celeste suspects that Veronica knows where Duncan took the kidnapped baby. Veronica takes advantage of Vinnie's willingness to change clients when offered a better deal: she has Duncan offer Vinnie more money to drive the Manning baby and a Veronica look-alike over the Mexican border and pick Duncan up in Mexico, allowing Duncan to successfully escape the authorities with the kidnapped baby. In "Not Pictured," Vinnie and Keith team up to catch Woody Goodman. Vinnie had broken into the Goodman house and stolen all of the family's records; he was arrested but got away with them. Using his one phone call, he arranges to meet Keith and split the bounty; Vinnie hands over the records, and Keith catches the man. In "Welcome Wagon," he almost gets Keith killed while working for Liam Fitzpatrick; however, in "Of Vice and Men," Vinnie rescues Veronica and Meryl from the Fitzpatricks when the two girls wander into the River Stix searching for Meryl's missing boyfriend. Marino reprises his role in the fourth season, where he is hired by Congressman Daniel Maloof's mother Amalia to recover a family heirloom.
Clarence Wiedman
Christopher B. Duncan recurs as Clarence Wiedman, the head of security at Kane Software, who kills Aaron Echolls at the end of season two on Duncan Kane's orders. As head of security, Wiedman is known to do the less-than-savory jobs that Jake Kane needs done. Jake is known to have called Wiedman before he reported his daughter's murder, and Veronica speculates that Wiedman assisted in the cover-up of Lilly's actual time of death by lowering her body's temperature. Wiedman paid Amelia DeLongpré so that her father, Abel Koontz, would falsely confess to the murder of Lilly. When Veronica found Amelia and tried to convince her to reveal the existence of the payoff money, Wiedman got to her first and gave her even more money so that she would leave Neptune, in "Kanes and Abel's." Wiedman also called in the anonymous tip that got Abel Koontz arrested and took the photos of Veronica that were sent to Lianne Mars in "You Think You Know Somebody." Amelia did not stay away long, however: when Abel asked Veronica to find his daughter, Veronica found out that Amelia had called Kane Software from a payphone across the street and confronted Wiedman about it. Later, when Veronica found Amelia's body, Clarence was there and confessed that Amelia had asked for more money. As it turned out, it was Amelia's new boyfriend, Carlos Mercado, who wanted the money; after he got it, he killed Amelia. Wiedman traced the money to Las Vegas, and it is implied that he killed Carlos in "Rat Saw God." Wiedman makes another appearance after Aaron is acquitted: in "Not Pictured," he enters Aaron's hotel room and assassinates him with two bullets. Afterward, he calls Duncan in Australia, who asks, "CW?" Wiedman responds, "It's a done deal." Christopher B. Duncan reprises the role in the fourth season episode "Entering a World of Pain" as Logan's replacement security detail for Congressman Daniel Maloof.
Penn Epner
Patton Oswalt recurs as Penn Epner, a delivery guy for Cho's Pizza and true crime enthusiast who also teams with Mars Investigations to solve the Spring Break bombing case.
Nicole Malloy
Kirby Howell-Baptiste recurs as Nicole Malloy, a friend of Veronica and owner of a Neptune nightclub.
Clyde Pickett
J. K. Simmons recurs as Clyde Pickett, an ex-con working as a fixer for Richard Casablancas and a friend of Keith.
Marcia Langdon
Dawnn Lewis recurs as Marcia Langdon, Neptune's chief of police.
Neptune High
Alan Moorehead
John Bennett Perry plays Alan Moorehead, the principal of Neptune High. When Moorehead was a teacher, he had an affair with then-student Mary, and she left their baby, the future Trina Echolls, at his doorstep. He then left the baby in a bathroom at the prom, knowing it would be assumed to belong to a student. Many years later, when Veronica discovers that he is Trina's father, Moorehead is fired as principal.
Van Clemmons
Duane Daniels plays Van Clemmons, the vice principal and later principal of Neptune High. During high school, Veronica continually gets into trouble and, as a result, regularly meets with Clemmons. Occasionally, Clemmons asks Veronica for help, once asking her to locate the missing school mascot. It is Clemmons who instructs Veronica to organize some old files, which leads to her discovery that Moorehead is Trina Echolls' father; when Moorehead is fired as principal, Veronica realizes that Clemmons' plan all along was to take his place. Clemmons says that when Veronica graduates, he cannot decide whether his life will be easier or more difficult without her. Clemmons reappears in "Un-American Graffiti," where he is caught on tape being shot with a paintball gun by several Neptune High students.
Butters Clemmons
Adam Hendershott portrays Vincent "Butters" Clemmons, a student at Neptune High and Van Clemmons' son. As a freshman, Vincent was pantsed by a bully and nicknamed "Butters." He vowed revenge, and the following fall, the jock who had humiliated him and those who did nothing to stop it found themselves failing mandatory drug tests, essentially barring them from playing sports in their final year of high school. Veronica believed Butters was behind the scheme; however, she discovered the real culprits were a group of parents who rigged the drug tests so that their children could claim the roster spots opened up by the mass suspension of athletes. Later, when Butters' only friend, Marcos Oliveres, was killed in the bus crash and his parents sued the school, Veronica suspected Butters of targeting the family by leaving them reminders of their son to torment them. She discovers that Butters and Marcos secretly ran the local pirate radio station "Ahoy, Mateys!", which was infamous for its vicious slandering of the popular cliques at school. Butters tells Veronica that he had had little contact with Marcos since the summer, when Marcos returned from summer camp and quit the radio show without telling Butters why. Veronica later finds out that the camp was an anti-gay camp Marcos had been sent to, and that the real culprit was Marcos' friend Ryan, who had a crush on Marcos and sought to make the Olivereses suffer the same way they had made their son suffer over his potential homosexuality. Butters develops a crush on Mac, much to her disgust, and she is forced to attend Logan's "anti-prom" with him.
Lucky
James Jordan portrays Tommy "Lucky" Dohanic, the deceased former janitor at Neptune High. He previously acted as a batboy for Woody Goodman's baseball team, the Sharks. He stalked Woody's daughter Gia and was arrested when he attempted to attack her. However, the Mannings bailed Lucky out of jail, and he returned to the school the following day with a toy pistol, searching for Gia. After firing several blanks, including one at Wallace, he was shot and killed by school security.
Corny
Jonathan Chesner portrays Douglas "Corny," Neptune High's resident stoner, an occasional ally of Veronica's, and a member of her graduating class. Corny makes the bong that Veronica plants in Logan's locker in "Pilot" and works at Cho's Pizza as a deliveryman. When he is mugged and tasered in "Versatile Toppings," he helps Veronica find the culprit. He appears to have somewhat of a crush on Veronica, as seen in "Blast from the Past" when he nominates her for Neptune High's homecoming queen; he was also the DJ between the Faders' sets at the homecoming dance senior year. He is very proud of his brownie recipe, claiming that "it's all in the butter," and he appears to attend Logan's Alterna-Prom alone.
Hannah Griffith
Jessy Schram plays Hannah Griffith, an "09er" sophomore at Neptune High. Hannah meets Logan at the school carnival and is surprised when he asks her out. Logan reveals to Hannah that her father, plastic surgeon Dr. Thomas Griffith, is a cocaine user who is protecting his dealers, the Fitzpatricks, and has falsely testified against Logan in a murder trial. Hannah decides that she cannot trust Logan because he is only dating her to get to her father, but she is quick to forgive him. When her father catches Logan undressing her, he sends Hannah to a Vermont boarding school.
Deborah Hauser
Kari Coleman portrays Deborah Hauser, a divorced sex education teacher with a bitter approach to her students, disliking all non-09ers while sucking up to the 09ers. Veronica babysits her unruly son to find out whether he is being abused. It is also revealed that she was friends with Lianne Mars when they attended Neptune High as teenagers and that the two were suspended together for spreading a "false and malicious rumor." Mrs. Hauser maintained that Lianne told her the rumor knowing it was false and that Deborah passed it on unknowingly; however, Veronica finds out the real story: Lianne's friend had an affair with a teacher at the school and became pregnant, and Lianne, not knowing how to help, confided in Deborah, who passed the gossip around. The baby would turn out to be Trina Echolls. Mrs. Hauser makes another appearance during the school's Winter Carnival, where she steals money from the cashbox and tries to blame Veronica and Jackie; however, she is caught and fired.
Rebecca James
Paula Marshall portrays Rebecca James, the school guidance counselor, who briefly dates Keith Mars. Marshall had previously worked with series creator Rob Thomas on his earlier series, Cupid. Veronica disapproves of the relationship and digs up information to change Keith's mind: Ms. James was arrested for passing bad checks when she was 21. This strains Ms. James and Veronica's relationship, although she does try to help Veronica discover the meaning of her haunting dreams in the second season.
Meg Manning
Alona Tal recurs as Meg Manning, an "09er" cheerleader and the daughter of mentally abusive fundamentalist Christian parents. She first appears when Veronica is the victim of a nasty prank; Meg lends her clothes and sticks up for her when other 09ers start insulting her. When an online purity test posts all the participants' results, Meg's score is faked to be unusually low, and she instantly gets a reputation as a slut; Veronica finds the person responsible ("Like a Virgin"). When Meg asks Veronica to help find her secret admirer, Veronica is keen to help until she finds out that the admirer is Duncan Kane, Veronica's ex-boyfriend. Eventually, Veronica accepts Meg and Duncan's relationship. ("Ruskie Business") After finding out that he is not Veronica's half-brother, Duncan breaks up with Meg, who blames Veronica; afterward, Meg's attitude towards Veronica is openly hostile. On the way back from a field trip to a baseball stadium, while Veronica talks to Weevil during a rest stop, Meg deliberately implies that Veronica has returned to the bus, causing it to leave without her. The bus subsequently careens off a cliff and into the ocean. ("Normal Is the Watchword")
Meg was the only survivor, but she remained in a coma for much of the second season. Meg has two younger sisters, Lizzie and Grace. While Meg was in a coma, Lizzie brought Duncan Meg's laptop, asking him to remove Meg's files from the system before her parents could check them. Duncan and Veronica found that Meg had compiled information about a child whom she often babysat and whose parents were abusing them. Soon, the two uncovered that the abused child (whom Meg had referred to as a boy) was Meg's youngest sister, Grace. Meg's parents are religious zealots who lock Grace in a closet when they are out and make her fill out exercise book after exercise book with the phrase "The path of God is paved with righteousness"; Duncan and Veronica found dozens of these books in Grace's room. Veronica visited Meg in the hospital and discovered that Meg was pregnant with Duncan's child, which was why she was so angry at Veronica. Shortly after Veronica left, Meg awoke from her coma. ("My Mother, the Fiend") Meg later apologizes to Veronica for her harsh treatment and makes her promise not to let the baby end up with her parents if something happens to her. Later in that same episode, Meg dies of a blood clot to the heart, but not before giving birth to her daughter Faith, whom Duncan later renames Lilly after fleeing the country with her.
Thumper
James Molina plays Eduardo "Thumper" Orozco, a PCH biker who betrays Weevil and starts dealing drugs for the Fitzpatricks, an Irish gang. On Liam Fitzpatrick's orders, he stabs Felix Toombs for dating Liam's niece and sets up Logan as the culprit. Weevil later realizes Thumper did it but cannot prove it, so he frames Thumper to make it look as though he has been stealing drugs and money from the Fitzpatricks. Liam and his cousin Danny Boyd lock Thumper inside "Shark Stadium" along with his bike shortly before the building is blown up in a controlled demolition.
Carrie Bishop
Leighton Meester plays Carrie Bishop, nicknamed the queen of gossip at Neptune High, who appears in two episodes. She is known for accusing a teacher of an affair that Veronica strongly believed never happened; in fact, the teacher had had a relationship with a previous student, who bore his child and whom he denied and abandoned, and Carrie made the accusations on that student's behalf. She also appears in a later episode, when Veronica is searching for the person responsible for raping her at Shelly Pomeroy's party; Carrie tells her that Duncan Kane was the one she saw with Veronica in the bedroom. Andrea Estella replaced Meester in the role for the Veronica Mars film, which centers on the character's death. Before the film, Carrie had become a self-destructive pop star under the name Bonnie DeVille and begun a relationship with Logan. Their relationship turned sour, and Carrie started attending Alcoholics Anonymous meetings with now ex-boyfriend Logan as her sponsor. At the start of the film, she is found dead in her bathtub.
Samuel Pope
Michael Kostroff portrays Samuel Pope, the teacher of Future Business Leaders of America and one of Neptune's few ethical people. After making a small fortune from real estate investments in Big Dick Casablancas' firm, he plans to retire to a life of sailing during the second season. Veronica discovers that the company is defrauding its investors and encourages Mr. Pope to get rid of his stock before she reports it, but he sadly replies that he cannot: to get rid of it, he would have to sell it to someone else, who would then pay the price. He therefore cannot retire as planned, and it is assumed he continues teaching at Neptune High.
Madison Sinclair
Amanda Noret plays Madison Sinclair, Dick's ex-girlfriend and Neptune High's resident mean girl. Shortly after Lilly's murder, at Shelly Pomeroy's party, Dick dosed Madison's drink with GHB to loosen her up; however, Madison spat in the drink, a move she calls "a trip to the dentist," and gave it to Veronica. Veronica drank the soda and later had sex with Duncan, who had also unknowingly ingested GHB (given to him by Logan). The following year, Madison rigged the school election to get Duncan Kane elected student body president; Veronica exposed the plot, which cost Madison her place on the student council and the position's special privileges. Veronica discovers in season one that Madison was switched at birth with Mac. Madison is shown to argue continually with her family.
In contrast, Mac's parents have entirely accepted Mac as their daughter and have shown no signs of wanting anything to do with Madison. After breaking up with Dick, Madison is secretly involved with Sheriff Don Lamb ("The Rapes of Graff"). Veronica discovers this and mocks Madison about the relationship in front of their classmates through innuendo that the classmates do not understand but whose meaning is quite apparent to Madison.
Madison, having gone off to college at USC, is absent from much of Veronica's life at Hearst until Veronica has several run-ins with her in Neptune in the episode "Poughkeepsie, Tramps, and Thieves." Veronica first sees Madison when she arrives unexpectedly at the Neptune Grand suite shared by Logan and Dick; thinking that Madison is simply looking up her ex, Veronica sends her on her way. Later, however, she runs into her again at a lingerie shop, where Madison maliciously lets slip that she spent the night with Logan in Aspen over Christmas break, while Veronica and Logan were split up. In the following episode, Veronica, consumed by images of Madison's affair with Logan, follows her and eventually plots with Weevil to have Madison's new car crushed and cubed. However, Veronica has a change of heart and asks him to return it unscathed, save for a can of tuna in the air conditioning.
Madison reappears in the Veronica Mars film, attending Neptune High's 10-year reunion. She plays a copy of Veronica and Piz's college sex tape in an attempt to humiliate her, which causes a fight to break out.
Felix Toombs
Brad Bufanda plays Felix Toombs, Weevil's right-hand man; Bufanda initially tried out for the role of Weevil. Felix is a member of the PCH bike gang and Weevil's best friend. After the PCH gang beats up Logan for supposedly killing Weevil's lover Lilly Kane, Logan wakes up beside Felix's body; Felix has been stabbed to death. Weevil first suspects Logan but later finds out he is innocent and enlists his help to solve the case, discovering that the PCHers had been dealing drugs for the Fitzpatricks behind his back. Felix had been secretly dating Molly Fitzpatrick, Liam Fitzpatrick's niece, so Liam ordered Felix's fellow gang member Thumper, who worked for the Fitzpatricks, to kill him.
Troy Vandegraff
Aaron Ashmore portrays Troy Vandegraff, a childhood friend of Duncan's and Veronica's boyfriend for the early stretch of season one. Troy and Veronica date for a short time, but the relationship ends when Troy involves an unwitting Veronica in a plot to smuggle steroids from Mexico into the US; Veronica uncovers his intentions before he can complete the plan and run away with his real girlfriend, and he leaves town. He returns, somewhat reformed, in the season 2 episode "The Rapes of Graff," in which he is accused of raping and shaving the head of a girl at Hearst College. Veronica clears Troy's name but does not find out who the real rapist is until the third season.
Jane Kuhne
Valorie Curry plays Jane Kuhne, a student-athlete at Neptune High and Wallace's girlfriend after his return from Chicago. She is introduced while Wallace is pursuing Jackie, as a distant admirer whom Wallace has overlooked. After Wallace returns from Chicago, he takes up with Jane. She is featured in the episode "The Quick and the Wed," when her older sister disappears following a bachelorette party and Veronica investigates the disappearance.
Hearst College
Tom Barry
Matt McKenzie plays Tom Barry, Wallace's basketball coach and the father of Josh Barry. He is apparently murdered the night of a basketball game during which he fights with Josh and the team loses. Josh is believed to be his killer because of the argument and because Mason drove past the area where Coach Barry died and saw someone who appeared to be Josh with him. It is revealed that Barry was dying of an incurable brain disease whose course would be slow, painful, humiliating, and costly, and that he arranged for his suicide to appear to be a carjacking so that his family would receive his life insurance payout (which they would not in the case of suicide) and be provided for financially.
Chip Diller
David Tom plays Chip Diller, the president of the Pi Sig frat house. He first appears in season two, when Veronica visits Hearst and accuses him of raping a student. While it turns out he is not a rapist, he is still shown to be a horrible person: on the anniversary of Patrice Pitrelli's attempted suicide, a group of campus feminists attacks him, shaves his head, and shoves an egg into his rectum for his part in it. During the episode "Spit & Eggs," Veronica points out to Chip that the beer coasters provided by the Pi Sigs, which were supposed to be used for drug testing, are not sufficient, but he does not care. Chip is one of the foremost perpetrators in spreading the sex video of Veronica and Piz, although he had nothing to do with its creation.
Tim Foyle
James Jordan appears as Tim Foyle, a teaching assistant for Professor Hank Landry at Hearst College. Tim seems to dislike Veronica, possibly because Landry seems to prefer her over him. In the first session of Landry's criminology class (in "Welcome Wagon"), he has his students play a detective game, "Murder on the Riverboat Queen." Veronica beats Tim's record time for solving the mystery by ten minutes, then embarrasses and annoys him by saying, "There is one thing I can't figure out: what did you do with the extra ten minutes?" In "Hi, Infidelity," Tim accuses Veronica of plagiarizing her mid-term paper and gives her three days to clear herself. After Veronica figures out what happened, she accuses Tim of setting her up just so that she would discover that Landry was having an affair with Mindy O'Dell, the Dean's wife. Tim does not explicitly deny it; instead, he says that if he had set the whole thing up, it would have been to show Veronica what Landry was really like before she became his protégée. Tim is also the boyfriend of Bonnie Capistrano, Dick's fling, and is presumably the father of her miscarried baby. While investigating the truth behind Bonnie's miscarriage, Veronica discovers that Tim was very supportive of Bonnie's decision to have the baby and even asked her to marry him to please her religious parents. In "Papa's Cabin," Tim and Veronica team up in an attempt to clear Dr. Landry of the murder of Hearst College's dean, Cyrus O'Dell. At the end of the episode, however, it is revealed that Tim himself murdered O'Dell to take revenge on Landry for ruining his chances at a job at (a fictional version of) Pepperdine University by giving an unfavorable reference to the prospective employer. Tim's surname is, in fact, a joke by the writers: his character was created as a foil for Veronica and was referred to as "FOIL" in early drafts. According to Rob Thomas on the DVD extras, the writers eventually decided to call him Foyle anyway and gave him the first name Tim as a play on "tin foil." Jordan reprises his role in the fourth season episode "Heads You Lose," where Veronica visits him in prison.
Max
Adam Rose portrays Max, a geeky student who sells test answers for cash. While he started as a one-time guest star, he turned into a recurring character due to his likability. He profits from his business and does not care when he is expelled from Hearst, planning to make his living from it anyway. Max comes to Veronica with the task of finding his dream girl, whom he met at Comic-Con and with whom he had an instant connection; she told him she had left her details in his hotel room, but by the time he returned, the hotel cleaners had already been through and the note was gone. It turns out that the girl was a prostitute hired by his friends so that Max could lose his virginity; he decides to keep trying to find her even after learning this. Max convinces the girl, Chelsea, to leave the prostitution business for him and pays off her $10,000 debt to her pimp. However, things are not the same, as her past keeps being brought up; he finally asks whether she really left her information, and she is heartbroken to admit that she didn't: "But I really wish I had." She leaves but promises to pay him back; it is implied that she makes the money by stripping. Max later returns as Mac's new boyfriend after she breaks up with Bronson for him. Rose reprises his role in the fourth season episode "Heads You Lose."
Mercer Hayes
Ryan Devlin recurs as Mercer Hayes, a friend of Logan and Dick who runs an illegal casino, the Benetian, out of his dorm room. Parker suspects him of being the Hearst rapist because he wears the same cologne she smelled the night she was raped, and Veronica investigates and finds he owns an electric razor. She reports this to Lamb, who for once agrees with her after seeing date-rape drugs inside Mercer's money box. Logan provides Mercer with an incriminating alibi for the night of one of the rapes (the two accidentally burned down a Tijuana motel), and Veronica finds further proof of his apparent innocence: his live call-in radio show was on the air at the time of one of the rapes. However, when Veronica hears his show being played at the Pi Sig party and recognizes electronic distortion and skipping, she realizes the shows were pre-recorded and that Mercer is the rapist. She prevents him from raping a fifth girl, stabbing him with a ceramic unicorn horn, and Keith eventually arrests Moe and Mercer. He is last seen in a jail cell with Logan, who smashed a police cruiser in order to get arrested, be put into a cell with Mercer, and take revenge on him for what he did to Veronica and the other girls. Devlin reprises his role in the fourth season episode "Heads You Lose," where Veronica visits him in prison.
Hank Landry
Patrick Fabian portrays Hank Landry, Veronica's Intro to Criminology professor and an admirer of her talent. He has an affair with Dean O'Dell's wife, Mindy, and accidentally kills her after being framed by his TA, Tim Foyle, for the Dean's murder. Although Veronica has yet to decide on a major, Landry advocates criminal investigation as a career choice for her. He is joined in his Intro to Criminology class by his teaching assistant, Tim Foyle. Veronica first impresses Landry by solving the murder mystery he presents on the first day of class in record time; later, she impresses him by being the only student to write an "A"-caliber paper on his "perfect murder" assignment. After reading the paper, he talks to Veronica about her potential and offers to become her faculty adviser. While Veronica is considering the offer, she is manipulated by Tim Foyle into discovering one of Landry's secrets: he has been having an affair with Mindy O'Dell, the wife of the dean of Hearst College. Shortly after this incident, Landry announces his end-of-term research paper: to plan the perfect murder. He then meets with Veronica and tells her that he will recommend her for a summer internship with the FBI and will accept her application essay for the position in place of the end-of-term assignment. Veronica, assuming that Landry is attempting to buy her silence about his affair with Mindy, turns down the offer; Landry explains that the internship offer had nothing to do with what she knew about him, but Veronica still opts to write the end-of-term paper, receiving one of the three "A"s Landry gives out for it. When Veronica finds out that her father is investigating whether Dean O'Dell's wife is having an affair, she tells Keith what she knows about Landry and Mindy. Keith tells the Dean, who takes a loaded revolver and visits Landry and Mindy in the middle of one of their rendezvous; what transpired in the room that night is not revealed. When Keith takes the Dean O'Dell case, he finds Landry at a local bar and attempts to coax a confession out of him over drinks, but this fails, as Landry informs Keith that he has read Keith's book and knows he is a private investigator.
Claire Nordhouse
Krista Kalmus plays Claire Nordhouse, a feminist and member of Lillith House, friends with Nish and Fern. She is one of the protestors in a front-page newspaper picture of topless women holding a banner reading, "We go to Hearst, go ahead and rape us!" Claire is beautiful, with long blonde hair, making her the target of the rival newspaper's response (topless men holding a banner saying "No thanks! (except maybe the blonde in the middle)"). The three fake Claire's rape, using the response in an attempt to have the Pi Sig house shut down along with the rival newspaper the Pi Sigs run. However, Veronica discovers that the rape was staged and exposes it: ATM photos show that the man with Claire before the supposed rape, who took her home, was her boyfriend, who had helped set it up. Claire is expelled from Hearst as a result. She was Patrice Pitrelli's best friend in their freshman year and was the source of information about the Theta Beta two-way mirror.
Cyrus O'Dell
Ed Begley, Jr. portrays Cyrus O'Dell, the dean of Hearst College. Initially Veronica's adversary, he gradually becomes her ally, each gaining the other's respect. Married to the much younger Mindy, he suspects her of having an affair, which Keith later proves to be with Hank Landry. He confronts the two, threatening to leave her broke and ruin Landry's career. Tim Foyle kills him in the ninth episode, staging the death as the fake suicide described in Veronica's "perfect murder" paper. After his death, it is revealed that he had written Veronica a referral to the FBI, stating what a brilliant student she was and that he had not encountered a more talented student in all his years in academia. His body is found by Weevil, who is rather gutted, having grown quite fond of the dean while working at Hearst.
Bronson Pope
Michael Mitchell plays Bronson Pope, an animal rights activist and Mac's first post-Cassidy boyfriend. He is shown to be extremely outdoorsy, encouraging Mac to play Ultimate Frisbee and take early-morning hikes. Mac loses her virginity to him on Valentine's Day but breaks up with him not much later, having fallen for Max.
Andrew McClain
Andrew McClain portrays Moe Slater, the R.A. of the Hearst College dorms. He is very earnest and is always offering oolong tea to everyone, which is implied to be how he drugs people. As the Safe Ride Home driver, Moe was Mercer's partner and helped him rape the girls: he would take drunk girls home (it is left unclear whether he drugged them at the party or once they were in their dorms), then tell Mercer which room they were in and provide him the keys to get in. The relationship between him and Mercer is implied to result from the prison-warden experiment that Logan and Wallace underwent; Moe keeps a photo of the two of them in their uniforms. Moe was the prisoner, and Mercer, the warden, mentally dominated him and directed him in the rapes. It was Moe who drugged Veronica and tried to shave her head to give Mercer an alibi.
Nish Sweeney
Chastity Dotson portrays Nancy "Nish" Sweeney, the former editor of the Hearst Free Press and a feminist of Lillith House. She and Veronica are initially friendly: Nish gives Veronica the assignment to go undercover and infiltrate the Theta Beta sorority house, where Veronica is supposed to find proof of a two-way mirror that the Theta Betas make rushees undress in front of for the Pi Sigs' enjoyment. However, Nish publishes an article Veronica asked her not to write, about a cannabis farm at the sorority house, and they become enemies. Nish helps Claire fake the rape and eggs Dean O'Dell's window the night he is murdered. By the end of the semester, Nish and Veronica are on good terms, as Veronica gives her a list of names of members of the secret fraternity, the Castle.
Blake Long
Spencer Ward plays Blake Long, a Pi Sig member in the fourth season, involved in a mysterious cover-up that may be connected to the Neptune Spring Break bombings.
Other characters
The following is a supplementary list of recurring or one-time guest stars: characters who appear briefly in multiple episodes but have too little real-world coverage to justify an entire section on their in-universe histories.
Steve Rankin portrays Lloyd Blankenship, a newspaper reporter and ally of Keith.
Taylor Sheridan plays Danny Boyd, Liam Fitzpatrick's dim-witted cousin and accomplice.
Kate McNeil plays Betina Casablancas, Dick and Cassidy's biological mother.
Brandon Hillock plays Jerry Sacks, Sheriff Lamb's right-hand man.
Annie Campbell plays Molly Fitzpatrick, a member of the Fitzpatrick family and Felix's girlfriend until his death.
Kevin Sheridan plays Sean Friedrich, an "09er" with a penchant for thievery and drug dealing.
Cher Ferreyra portrays Fern Delgado, a disgruntled feminist and member of the "Take Back the Night" program. She lives in Lillith house with Nish and Claire and dislikes Veronica for proving the Pi Sigs weren't rapists. Veronica initially suspects her in the disappearance of Selma Hearst Rose.
Robert Ri'Chard plays Mason, a hot-tempered basketball jock and Wallace's new friend. He encourages Wallace to skip studying to hang out with him, and puts him in contact with Max so that he can cheat instead. He is the one who mistakes Josh for the person out on the overlook with Coach Barry before the coach is killed.
Joss Whedon plays Douglas, a clueless car-rental salesman who inadvertently helps Veronica track down Abel Koontz' daughter Amelia Delongpre in the episode "Rat Saw God."
Logan Miller and Ashton Moio play Simon and Craig, two college students and Spring Breakers whose friendship with a bombing victim is a source of clues for Mars Investigations.
References
External links
Complete Episode, Soundtrack, and Character Information
Veronica Mars CWiki on CWTV.com
Veronica Mars on Warnerbros.com
Lists of fictional characters |
59785 | https://en.wikipedia.org/wiki/IBM%20System/370 | IBM System/370 | The IBM System/370 (S/370) is a model range of IBM mainframe computers announced on June 30, 1970 as the successors to the System/360 family. The series mostly maintains backward compatibility with the S/360, allowing an easy migration path for customers; this compatibility, plus improved performance, formed the dominant themes of the product announcement. In September 1990, the System/370 line was replaced with the System/390.
Evolution
The original System/370 line was announced on June 30, 1970 with first customer shipment of the Models 155 and 165 planned for February 1971 and April 1971 respectively. The 155 first shipped in January 1971. System/370 underwent several architectural improvements during its roughly 20-year lifetime.
The following features mentioned in Principles of Operation are either optional on S/360 but standard on S/370, introduced with S/370, or added to S/370 after announcement:
Branch and Save
Channel Indirect Data Addressing
Channel-Set Switching
Clear I/O
Command Retry
Commercial Instruction Set
Conditional Swapping
CPU Timer and Clock Comparator
Dual-Address Space (DAS)
Extended-Precision Floating Point
Extended Real Addressing
External Signals
Fast Release
Floating Point
Halt Device
I/O Extended Logout
Limited Channel Logout
Move Inverse
Multiprocessing
PSW-Key Handling
Recovery Extensions
Segment Protection
Service Signal
Start-I/O-Fast Queuing
Storage-Key-Instruction Extensions
Storage-Key 4K-Byte Block
Suspend and Resume
Test Block
Translation
Vector
31-Bit IDAWs
Initial models
The first System/370 machines, the Model 155 and the Model 165, incorporated only a small number of changes to the System/360 architecture. These changes included:
13 new instructions, among which were
MOVE LONG (MVCL);
COMPARE LOGICAL LONG (CLCL);
which together permitted operations on up to 2^24-1 bytes (16 MB), versus the 256-byte limit of the 360's MVC and CLC (see the sketch after this list);
SHIFT AND ROUND DECIMAL (SRP), which multiplied or divided a packed decimal value by a power of 10, rounding the result when dividing;
optional 128-bit (hexadecimal) floating point arithmetic, introduced in the System/360 Model 85
a new higher-resolution time-of-day clock
support for the block multiplexer channel introduced in the System/360 Model 85.
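To make the practical effect of the long-move instructions concrete, the following Python sketch (a rough analogy with invented function names, not IBM code) contrasts a copy that must be issued in 256-byte chunks, as with MVC, against a single length-bounded copy, as with MVCL:

```python
def copy_mvc_style(dst, src, length):
    """Copy in 256-byte chunks, one 'instruction' per chunk (MVC analogy)."""
    instructions = 0
    for offset in range(0, length, 256):
        chunk = min(256, length - offset)
        dst[offset:offset + chunk] = src[offset:offset + chunk]
        instructions += 1
    return instructions

def copy_mvcl_style(dst, src, length):
    """Copy up to 2**24 - 1 bytes with a single 'instruction' (MVCL analogy)."""
    assert length <= 2**24 - 1, "MVCL length fields are 24 bits wide"
    dst[:length] = src[:length]
    return 1

size = 1 << 20                            # 1 MB of data
src = bytearray(b"x" * size)
dst = bytearray(size)
print(copy_mvc_style(dst, src, size))     # 4096 separate 256-byte moves
print(copy_mvcl_style(dst, src, size))    # one long move
```

On 1 MB of data the chunked version issues 4,096 separate moves where the long-move version issues one, which is the saving the new instructions offered.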
All of the emulator features were designed to run under the control of the standard operating systems. IBM documented the S/370 emulator programs as integrated emulators.
These models had core memory and did not include support for virtual storage.
Logic technology
All models of the System/370 used IBM's form of monolithic integrated circuits called MST (Monolithic System Technology) making them third generation computers. MST provided System/370 with four to eight times the circuit density and over ten times the reliability when compared to the previous second generation SLT technology of the System/360.
Monolithic memory
On September 23, 1970, IBM announced the Model 145, a third model of the System/370, which was the first model to feature semiconductor main memory made from monolithic integrated circuits and was scheduled for delivery in the late summer of 1971. All subsequent S/370 models used such memory.
Virtual storage
In 1972, a very significant change was made when support for virtual storage was introduced with IBM's "System/370 Advanced Function" announcement. IBM had initially (and controversially) chosen to exclude virtual storage from the S/370 line. The August 2, 1972 announcement included:
address relocation hardware on all S/370s except the original models 155 and 165
the new S/370 models 158 and 168, with address relocation hardware
four new operating systems: DOS/VS (DOS with virtual storage), OS/VS1 (OS/360 MFT with virtual storage), OS/VS2 (OS/360 MVT with virtual storage) Release 1, termed SVS (Single Virtual Storage), and Release 2, termed MVS (Multiple Virtual Storage) and planned to be available 20 months later (at the end of March 1974), and VM/370 – the re-implemented CP/CMS
Virtual storage had in fact been delivered on S/370 hardware before this announcement:
In June 1971, on the S/370-145 (one of which had to be "smuggled" into Cambridge Scientific Center to prevent anybody noticing the arrival of an S/370 at that hotbed of virtual memory development – since this would have signaled that the S/370 was about to receive address relocation technology). (Varian 1997:p29) The S/370-145 had an associative memory used by the microcode for the DOS compatibility feature from its first shipments in June 1971; the same hardware was used by the microcode for DAT. Although IBM famously chose to exclude virtual storage from the S/370 announcement, that decision was being reconsidered during the completion of the 145 engineering, partly because of virtual memory experience at CSC and elsewhere. The 145 microcode architecture simplified the addition of virtual storage, allowing this capability to be present in early 145s without the extensive hardware modifications needed in other models. However, IBM did not document the 145's virtual storage capability, nor annotate the relevant bits in the control registers and PSW that were displayed on the operator control panel when selected using the roller switches. The Reference and Change bits of the Storage-protection Keys, however, were labeled on the rollers, a dead giveaway to anyone who had worked with the earlier 360/67. Existing S/370-145 customers were happy to learn that they did not have to purchase a hardware upgrade in order to run DOS/VS or OS/VS1 (or OS/VS2 Release 1 – which was possible, but not common because of the limited amount of main storage available on the S/370-145).
Shortly after the August 2, 1972 announcement, DAT box (address relocation hardware) upgrades for the S/370-155 and S/370-165 were quietly announced, but were available only for purchase by customers who already owned a Model 155 or 165. After installation, these models were known as the S/370-155-II and S/370-165-II. IBM wanted customers to upgrade their 155 and 165 systems to the widely sold S/370-158 and -168. These upgrades were surprisingly expensive ($200,000 and $400,000, respectively) and had long ship date lead times after being ordered by a customer; consequently, they were never popular with customers, the majority of whom leased their systems via a third-party leasing company. This led to the original S/370-155 and S/370-165 models being described as "boat anchors". The upgrade, required to run OS/VS1 or OS/VS2, was not cost effective for most customers by the time IBM could actually deliver and install it, so many customers were stuck with these machines running MVT until their lease ended. It was not unusual for this to be another four, five or even six years for the more unfortunate ones, and turned out to be a significant factor in the slow adoption of OS/VS2 MVS, not only by customers in general, but for many internal IBM sites as well.
Subsequent enhancements
Later architectural changes primarily involved expansions in memory (central storage) – both physical memory and virtual address space – to enable larger workloads and meet client demands for more storage. This was the inevitable trend as Moore's Law eroded the unit cost of memory. As with all IBM mainframe development, preserving backward compatibility was paramount.
Operating-system-specific assists, Extended Control Program Support (ECPS), extended facilities, and extension features were added for OS/VS1, MVS and VM. Operating-system levels that exploited them, e.g., MVS/System Extensions (MVS/SE), reduced the path length of some frequent functions.
The Dual Address Space (DAS) facility allows a privileged program to move data between two address spaces without the overhead of allocating a buffer in common storage, moving the data to the buffer, scheduling an SRB in the target address space, moving the data to their final destination and freeing the buffer. IBM introduced DAS in 1981 for the 3033, but later made it available for some 43xx, 3031 and 3032 processors. MVS/System Product (MVS/SP) Version 1 exploited DAS if it was available.
In October 1981, the 3033 and 3081 processors added "extended real addressing", which allowed 26-bit addressing for physical storage (but still imposed a 24-bit limit for any individual address space). This capability appeared later on other systems, such as the 4381 and 3090.
The System/370 Extended Architecture (S/370-XA), first available in early 1983 on the 3081 and 3083 processors, provided a number of major enhancements, including: expansion of the address space from 24-bits to 31-bits; facilitating movement of data between two address spaces; and a complete redesign of the I/O architecture. The cross-memory services capability which facilitated movement of data between address spaces was actually available just prior to S/370-XA architecture on the 3031, 3032 and 3033 processors.
In February 1988, IBM announced the Enterprise Systems Architecture/370 (ESA/370) for enhanced (E) 3090 and 4381 models. It added sixteen 32-bit access registers, more addressing modes, and various facilities for working with multiple address spaces simultaneously.
On September 5, 1990, IBM announced the Enterprise Systems Architecture/390 (ESA/390), upward compatible with ESA/370.
Expanding the address space
As described above, the S/370 product line underwent a major architectural change: expansion of its address space from 24 to 31 bits.
The evolution of S/370 addressing was always complicated by the basic S/360 instruction set design, and its large installed code base, which relied on a 24-bit logical address. (In particular, a heavily used machine instruction, "Load Address" (LA), explicitly cleared the top eight bits of the address being placed in a register. This created enormous migration problems for existing software.)
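A rough Python model of that migration problem (the helper names are invented for illustration): because LA delivered only the low 24 bits, programs of the era commonly stashed flag bits in the "unused" top byte of an address word, and those flags became corrupt address bits the moment the machine honored more than 24 bits:

```python
def load_address_24(addr):
    # LA on a 24-bit S/370 clears the top eight bits of the result.
    return addr & 0x00FFFFFF

def load_address_31(addr):
    # In 31-bit mode only the single high-order bit is ignored.
    return addr & 0x7FFFFFFF

# A common 24-bit-era trick: keep flag bits in the "unused" top byte.
FLAG = 0x01000000
tagged = FLAG | 0x00123456

print(hex(load_address_24(tagged)))  # 0x123456  -- flags silently stripped
print(hex(load_address_31(tagged)))  # 0x1123456 -- flags now corrupt the address
```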
The strategy chosen was to implement expanded addressing in three stages:
first at the physical level (to enable more memory hardware per system)
then at the operating system level (to let system software access multiple address spaces and utilize larger address spaces)
finally at the application level (to let new applications access larger address spaces)
Since the core S/360 instruction set remained geared to a 24-bit universe, this third step would require a real break from the status quo; existing assembly language applications would of course not benefit, and new compilers would be needed before non-assembler applications could be migrated. Most shops thus continued to run their 24-bit applications in a higher-performance 31-bit world.
This evolutionary implementation (repeated in z/Architecture) had the characteristic of solving the most urgent problems first: relief for real memory addressing being needed sooner than virtual memory addressing.
31 versus 32 bits
IBM's choice of 31-bit (versus 32-bit) addressing for 370-XA involved various factors. The System/360 Model 67 had included a full 32-bit addressing mode, but this feature was not carried forward to the System/370 series, which began with only 24-bit addressing. When IBM later expanded the S/370 address space in S/370-XA, several reasons are cited for the choice of 31 bits:
The desire to retain the high-order bit as a "control or escape bit." In particular, the standard subroutine calling convention marked the final parameter word by setting its high bit (see the sketch after this list).
Interaction between 32-bit addresses and two instructions (BXH and BXLE) that treated their arguments as signed numbers (and which was said to be the reason TSS used 31-bit addressing on the Model 67). (Varian 1997:p26, note 85)
Input from key initial Model 67 sites, which had debated the alternatives during the initial system design period, and had recommended 31 bits (instead of the 32-bit design that was ultimately chosen at the time). (Varian 1997:pp8–9, note 21, includes other comments about the "Inner Six" Model 67 design disclosees)
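The calling-convention point is concrete enough to sketch: a caller passed a chain of 32-bit parameter words, and the set high-order bit of the last word marked the end of the list. A minimal Python illustration with invented names:

```python
HIGH_BIT = 0x80000000

def read_parameter_list(memory, plist_addr):
    """Walk 4-byte parameter words until one has its high-order bit set."""
    params, addr = [], plist_addr
    while True:
        word = memory[addr]
        params.append(word & ~HIGH_BIT)   # the usable 31 bits
        if word & HIGH_BIT:               # high bit marks the final parameter
            return params
        addr += 4

memory = {100: 0x00001000, 104: 0x00002000, 108: HIGH_BIT | 0x00003000}
print([hex(p) for p in read_parameter_list(memory, 100)])
# ['0x1000', '0x2000', '0x3000']
```

Making bit 0 an address bit would have broken every program relying on this end-of-list convention, which is the compatibility pressure behind the 31-bit choice.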
Series and models
Models sorted by date introduced (table)
The following table summarizes the major S/370 series and models. The second column lists the principal architecture associated with each series. Many models implemented more than one architecture; thus, 308x processors initially shipped as S/370 architecture, but later offered XA; and many processors, such as the 4381, had microcode that allowed customer selection between S/370 or XA (later, ESA) operation.
Note also the confusing term "System/370-compatible", which appeared in IBM source documents to describe certain products. Outside IBM, this term would more often describe systems from Amdahl Corporation, Hitachi Ltd., and others, that could run the same S/370 software. This choice of terminology by IBM may have been a deliberate attempt to ignore the existence of those plug compatible manufacturers (PCMs), because they competed aggressively against IBM hardware dominance.
Models grouped by Model number (detailed)
IBM used the name System/370 to announce the following eleven three-digit offerings:
System/370 Model 115
The IBM System/370 Model 115 was announced March 13, 1973 as "an ideal System/370 entry system for users of IBM's System/3, 1130 computing system and System/360 Models 20, 22 and 25."
It was delivered with "a minimum of two (of IBM's newly announced) directly-attached IBM 3340 disk drives." Up to four 3340s could be attached.
The CPU could be configured with 65,536 (64K) or 98,304 (96K) bytes of main memory. An optional 360/20 emulator was available.
The 115 was withdrawn on March 9, 1981.
System/370 Model 125
The IBM System/370 Model 125 was announced Oct 4, 1972.
Two, three or four directly attached IBM 3333 disk storage units provided "up to 400 million bytes online."
Main memory was either 98,304 (96K) or 131,072 (128K) bytes.
The 125 was withdrawn on March 9, 1981.
System/370 Model 135
The IBM System/370 Model 135 was announced Mar 8, 1971. Options for the 370/135 included a choice of four main memory sizes; IBM 1400 series (1401, 1440 and 1460) emulation was also offered.
A "reading device located in the Model 135 console" allowed updates and adding features to the Model 135's microcode.
The 135 was withdrawn on October 16, 1979.
System/370 Model 138
The IBM System/370 Model 138, announced Jun 30, 1976, was offered with either 524,288 (512K) or 1,048,576 (1 MB) bytes of memory. The latter was "double the maximum capacity of the Model 135," which "can be upgraded to the new computer's internal performance levels at customer locations."
The 138 was withdrawn on November 1, 1983.
System/370 Model 145
The IBM System/370 Model 145 was announced Sep 23, 1970, three months after the 155 and 165 models. It first shipped in June 1971.
The first System/370 to use monolithic main memory, the Model 145 was offered in six memory sizes. A portion of the main memory, the "Reloadable Control Storage" (RCS) was loaded from a prewritten disk cartridge containing microcode to implement, for example, all needed instructions, I/O channels, and optional instructions to enable the system to emulate earlier IBM machines.
The 145 was withdrawn on October 16, 1979.
System/370 Model 148
The IBM System/370 Model 148 had the same announcement and withdrawal dates as the Model 138.
As with the option to field-upgrade a 135, a 370/145 could be field-upgraded "at customer locations" to 148-level performance. The upgraded 135 and 145 systems were "designated the Models 135-3 and 145-3."
System/370 Model 155
The IBM System/370 Model 155 and the Model 165 were announced Jun 30, 1970, the first of the 370s introduced. Neither had a DAT box; they were limited to running the same non-virtual-memory operating systems available for the System/360. The 155 first shipped in January 1971.
The OS/DOS (DOS/360 programs under OS/360), 1401/1440/1460 and 1410/7010 and 7070/7074 compatibility features were included, and the supporting integrated emulator programs could operate concurrently with standard System/370 workloads.
In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 155 II, which added a DAT box.
Both the 155 and the 165 were withdrawn on December 23, 1977.
System/370 Model 158
The IBM System/370 Model 158 and the 370/168 were announced Aug 2, 1972.
It included dynamic address translation (DAT) hardware, a prerequisite for the new virtual memory operating systems (DOS/VS, OS/VS1, OS/VS2).
A tightly coupled multiprocessor (MP) model was available, as was the ability to loosely couple this system to another 360 or 370 via an optional channel-to-channel adapter.
The 158 and 168 were withdrawn on September 15, 1980.
System/370 Model 165
The IBM System/370 Model 165 was described by IBM as "more powerful" compared to the "medium-scale" 370/155. It first shipped in April 1971.
Compatibility features included emulation for 7070/7074, 7080, and 709/7090/7094/7094 II.
Some have described the 360/85's use of microcode, as opposed to hardwired logic, as a bridge to the 370/165.
In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 165 II which added a DAT box.
The 165 was withdrawn on December 23, 1977.
System/370 Model 168
The IBM System/370 Model 168 included "up to eight megabytes" of main memory, double the maximum of 4 megabytes on the 370/158.
It included dynamic address translation (DAT) hardware, a prerequisite for the new virtual memory operating systems.
Although the 168 served as IBM's "flagship" system, a 1975 news brief said that IBM boosted the power of the 370/168 again "in the wake of the Amdahl challenge... only 10 months after it introduced the improved 168-3 processor."
The 370/168 was not withdrawn until September 1980.
System/370 Model 195
The IBM System/370 Model 195 was announced Jun 30, 1970 and, at that time, it was "IBM's most powerful computing system."
Its introduction came about 14 months after the announcement of the 360/195. Both 195 machines were withdrawn Feb. 9, 1977.
System/370-compatible
Beginning in 1977, IBM began to introduce new systems, using the description "A compatible member of the System/370 family."
IBM 303X
The first of these high-end machines, IBM's 3033, was announced March 25, 1977 and was delivered the following March, at which time a multiprocessor version of the 3033 was announced. IBM described it as "The Big One."
IBM noted about the 3033, looking back, that "When it was rolled out on March 25, 1977, the 3033 eclipsed the internal operating speed of the company's previous flagship the System/370 Model 168-3 ..."
The IBM 3031 and IBM 3032 were announced Oct. 7, 1977 and withdrawn Feb. 8, 1985.
IBM 308X
The next series of high-end machines, IBM's 308X systems, comprised three models:
The 3081 (announced Nov 12, 1980) had 2 CPUs
The 3083 (announced Mar 31, 1982) had 1 CPU
The 3084 (announced Sep 3, 1982) had 4 CPUs
Despite the numbering, the least powerful was the 3083, which could be field-upgraded to a 3081; the 3084 was the top of the line.
These models introduced the Extended Architecture's 31-bit addressing capability and a set of backward-compatible MVS/Extended Architecture (MVS/XA) software replacing previous products that had been part of OS/VS2 R3.8.
All three 308x systems were withdrawn on August 4, 1987.
IBM 3090
The next series of high-end machines, the IBM 3090, began with models 200 and 400. They were announced Feb. 12, 1985, and were configured with two or four CPUs respectively. IBM subsequently announced models 120, 150, 180, 300, 500 and 600 with lower, intermediate and higher capacities; the first digit of the model number gives the number of central processors.
Starting with the E models, and continuing with the J and S models, IBM offered Enterprise Systems Architecture/370 (ESA/370), Processor Resource/System Manager (PR/SM) and a set of backward-compatible MVS/Enterprise System Architecture (MVS/ESA) software replacing previous products.
IBM's offering of an optional vector facility (VF) extension for the 3090 came at a time when vector and array processing suggested names like Cray and Control Data Corporation (CDC).
The 200 and 400 were withdrawn on May 5, 1989.
IBM 4300
The first pair of IBM 4300 processors were mid/low-end systems, announced Jan 30, 1979 as "compact (and) ... compatible with System/370."
The 4331 was subsequently withdrawn on November 18, 1981, and the 4341 on February 11, 1986.
Other models were the 4321, 4361 and 4381.
The 4361 offered "Programmable Power-Off," which "enables the user to turn off the processor under program control"; "unit power off" is also part of the 4381 feature list.
IBM offered many Model Groups and models of the 4300 family, ranging from the entry level 4331 to the 4381, described as "one of the most powerful and versatile intermediate system processors ever produced by IBM."
The 4381 Model Group 3 was dual-CPU.
IBM 9370
This low-end system, announced October 7, 1986, was "designed to satisfy the computing requirements of IBM customers who value System/370 affinity" and "small enough and quiet enough to operate in an office environment."
IBM also noted its sensitivity to "entry software prices, substantial reductions in support and training requirements, and modest power consumption and maintenance costs."
Furthermore, it stated its awareness of the needs of small-to-medium size businesses to be able to respond, as "computing requirements grow," adding that "the IBM 9370 system can be easily expanded by adding additional features and racks to accommodate..."
This came at a time when Digital Equipment Corporation (DEC) and its VAX systems were strong competitors in both hardware and software; the media of the day carried IBM's alleged "VAX Killer" phrase, albeit often skeptically.
Clones
In the 360 era, a number of manufacturers had already standardized upon the IBM 360 instruction set and, to a degree, the 360 architecture. Notable computer makers included Univac with the UNIVAC 9000 series, RCA with the RCA Spectra 70 series, English Electric with the English Electric System 4, and the Soviet ES EVM. These computers were not perfectly compatible, nor (except for the Soviet efforts) were they intended to be.
That changed in the 1970s with the introduction of the IBM 370 and Gene Amdahl's launch of his own company. At about the same time, Japanese giants began eyeing the lucrative mainframe market both at home and abroad. One Japanese consortium focused upon IBM, and two others upon members of the BUNCH (Burroughs/Univac/NCR/Control Data/Honeywell) group of IBM's competitors. The latter efforts were abandoned and eventually all Japanese efforts focused on the IBM mainframe lines.
Some of the era's clones included machines from Amdahl and the Japanese manufacturers Fujitsu and Hitachi.
Architecture details
IBM documentation numbers the bits from high order to low order; the most significant (leftmost) bit is designated as bit number 0.
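This convention is the reverse of the LSB-first numbering common elsewhere, so a small illustrative helper (not IBM code) makes the mapping explicit:

```python
def ibm_bit(value, bit, width=32):
    """Return IBM-numbered bit 'bit' of 'value': bit 0 is the most significant."""
    return (value >> (width - 1 - bit)) & 1

word = 0x80000001
print(ibm_bit(word, 0))    # 1 -- IBM bit 0 is the leftmost (high-order) bit
print(ibm_bit(word, 31))   # 1 -- IBM bit 31 is the conventional bit 0
print(ibm_bit(word, 1))    # 0
```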
S/370 also refers to a computer system architecture specification, and is a direct and mostly backward compatible evolution of the System/360 architecture from which it retains most aspects. This specification does not make any assumptions on the implementation itself, but rather describes the interfaces and the expected behavior of an implementation. The architecture describes mandatory interfaces that must be available on all implementations and optional interfaces which may or may not be implemented.
Some of the aspects of this architecture are:
Big endian byte ordering
One or more processors with:
16 32-bit General purpose registers
16 32-bit Control registers
4 64-bit Floating-point registers
A 64-bit Program status word (PSW) which describes (among other things)
Interrupt masks
Privilege states
A condition code
A 24-bit instruction address
Timing facilities (Time of day clock, interval timer, CPU timer and clock comparator)
An interruption mechanism, maskable and unmaskable interruption classes and subclasses
An instruction set. Each instruction is wholly described and also defines the conditions under which an exception is recognized in the form of program interruption.
A memory (called storage) subsystem with:
8 bits per byte
A special processor communication area starting at address 0
Key controlled protection
24-bit addressing
Manual control operations that provide:
A bootstrap process (a process called Initial Program Load or IPL)
Operator-initiated interrupts
Resetting the system
Basic debugging facilities
Manual display and modifications of the system's state (memory and processor)
An Input/Output mechanism, which doesn't describe the devices themselves
Some of the optional features are:
A Dynamic Address Translation (DAT) mechanism that can be used to implement a virtual memory system
Floating point instructions
IBM took great care to ensure that changes to the architecture would remain compatible for unprivileged (problem state) programs; some new interfaces did not break the initial interface contract even for privileged (supervisor mode) programs. Some examples are:
ECPS:MVS
A feature to enhance performance for the MVS/370 operating systems
ECPS:VM
A feature to enhance performance for the VM operating systems
Other changes were compatible only for unprivileged programs, although the changes for privileged programs were of limited scope and well defined. Some examples are:
ECPS:VSE
A feature to enhance performance for the DOS/VSE operating system.
S/370-XA
A feature to provide a new I/O interface and to support 31-bit computing
Great care was taken in order to ensure that further modifications to the architecture would remain compatible, at least as far as non-privileged programs were concerned. This philosophy predates the definition of the S/370 architecture and started with the S/360 architecture. If certain rules are adhered to, a program written for this architecture will run with the intended results on the successors of this architecture.
One such example is that the S/370 architecture specifies that bit number 32 of the 64-bit PSW register has to be set to 0, and that doing otherwise leads to an exception. Subsequently, when the S/370-XA architecture was defined, it was stated that this bit would indicate whether the program expected a 24-bit address architecture or a 31-bit address architecture. Thus, most programs that ran on the 24-bit architecture can still run on 31-bit systems; the 64-bit z/Architecture has an additional mode bit for 64-bit addresses, so that those programs, and programs that ran on the 31-bit architecture, can still run on 64-bit systems.
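A simplified sketch of that rule in Python, as an emulator might apply it (this models only the addressing-mode bit, using the IBM bit numbering described above, not the full PSW format):

```python
def effective_address(psw, raw_addr):
    """Mask an address according to the XA-style addressing-mode bit.

    IBM-numbered bit 32 of the 64-bit PSW: 0 -> 24-bit mode,
    1 -> 31-bit mode.  Simplified illustration only.
    """
    amode31 = (psw >> 31) & 1          # IBM bit 32 = conventional bit 31
    mask = 0x7FFFFFFF if amode31 else 0x00FFFFFF
    return raw_addr & mask

addr = 0x71234567
print(hex(effective_address(0x0000000000000000, addr)))  # 0x234567   (24-bit)
print(hex(effective_address(0x0000000080000000, addr)))  # 0x71234567 (31-bit)
```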
However, not all of the interfaces could remain compatible. Emphasis was put on having non-control programs (called problem state programs) remain compatible; operating systems, by contrast, have to be ported to each new architecture, because the control interfaces can be (and were) redefined in an incompatible way. For example, the I/O interface was redesigned in S/370-XA, making S/370 programs that issue I/O operations unusable as-is.
S/370 replacement
IBM replaced the System/370 line with the System/390 in the 1990s, and similarly extended the architecture from ESA/370 to ESA/390. This was a minor architectural change, and was upwards compatible.
In 2000, the System/390 was replaced with the zSeries (now called IBM System z). The zSeries mainframes introduced the 64-bit z/Architecture, the most significant design improvement since the 31-bit transition. All have retained essential backward compatibility with the original S/360 architecture and instruction set.
GCC and Linux on the S/370
The GNU Compiler Collection (GCC) had a back end for S/370, but it became obsolete over time and was finally replaced with the S/390 backend. Although the S/370 and S/390 instruction sets are essentially the same (and have been consistent since the introduction of the S/360), GCC operability on older systems has been abandoned. GCC currently works on machines that have the full instruction set of System/390 Generation 5 (G5), the hardware platform for the initial release of Linux/390. However, a separately maintained version of GCC 3.2.3 that works for the S/370 is available, known as GCCMVS.
I/O evolutions
I/O evolution from original S/360 to S/370
The block multiplexer channel, previously available only on the 360/85 and 360/195, was a standard part of the architecture. For compatibility it could operate as a selector channel. Block multiplexer channels were available in single byte (1.5 MB/s) and double byte (3.0 MB/s) versions.
I/O evolution since original S/370
As part of the DAT announcement, IBM upgraded channels to have Indirect Data Address Lists (IDALs), a form of I/O MMU.
Data streaming channels had a speed of 3.0 MB/s over a single byte interface, later upgraded to 4.5 MB/s.
Channel set switching allowed one processor in a multiprocessor configuration to take over the I/O workload from the other processor if it failed or was taken offline for maintenance.
System/370-XA introduced a channel subsystem that performed I/O queuing previously done by the operating system.
The System/390 introduced the ESCON channel, an optical fiber, half-duplex, serial channel with a maximum distance of 43 kilometers. Originally operating at 10 Mbyte/s, it was subsequently increased to 17 Mbyte/s.
Subsequently, FICON became the standard IBM mainframe channel; FIbre CONnection (FICON) is the IBM proprietary name for the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel (FC) protocol used to map both IBM's antecedent (either ESCON or parallel Bus and Tag) channel-to-control-unit cabling infrastructure and protocol onto standard FC services and infrastructure at data rates up to 16 Gigabits/sec at distances up to 100 km. Fibre Channel Protocol (FCP) allows attaching SCSI devices using the same infrastructure as FICON.
See also
IBM System/360
IBM ESA/390
IBM System z
PC-based IBM-compatible mainframes
Hercules emulator
Notes
References
Further reading
Chapter 4 (pp. 111–166) describes the System/370 architecture; Chapter 5 (pp. 167–206) describes the System/370 Extended Architecture.
External links
Hercules System/370 Emulator A software implementation of IBM System/370
370
Computing platforms
Computer-related introductions in 1970
1990s disestablishments
32-bit computers |
1659108 | https://en.wikipedia.org/wiki/NForce3 | NForce3 | The nForce3 chipset was created by Nvidia as a Media and Communications Processor. Specifically, it was designed for use with the Athlon 64 processor.
Features of the nForce3
When the Athlon 64 was launched, the Nvidia nForce3 Pro150 and VIA K8T800 were the only two chipsets available. The 150 chipset was widely criticized at launch for using a 600 MHz HyperTransport interface, when VIA had implemented the full AMD specification at 800 MHz, even though overall performance of the 150 was still good.
Later revisions fixed this omission, and using a HyperTransport interface, the nForce3 chipset is able to communicate at up to 8 GB/s with the CPU. This reduces system bottlenecks when using high-bandwidth devices. For example, the Gigabit Ethernet transmits at 125 MB/s; if the Ethernet were not on the chipset, a saturated gigabit Ethernet link would use 93% of the bandwidth of the shared 133 MB/s PCI bus.
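The arithmetic behind that figure is easy to reproduce; the back-of-the-envelope Python check below assumes the standard 32-bit, 33 MHz shared PCI bus of the era:

```python
gigabit_ethernet = 1000e6 / 8          # 1 Gbit/s line rate = 125 MB/s
pci_bus = 33.33e6 * 4                  # 33 MHz x 4 bytes/transfer, ~133 MB/s

print(f"{gigabit_ethernet / 1e6:.0f} MB/s over a {pci_bus / 1e6:.0f} MB/s bus")
print(f"share of bus: {gigabit_ethernet / pci_bus:.1%}")  # ~93-94%, the figure above
```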
The 150 also noticeably lacked features. Subsequent revisions of the chipset corrected these omissions. The chipset is offered in different versions, reflecting socket types and features.
nForce3 250 - Socket 754, basic value chipset, 800HT, does not include on-chip Gigabit LAN or on-chip Firewall.
nForce3 250Gb - Socket 754 or 939, 800HT, includes gigabit LAN and on-chip Firewall.
nForce3 Ultra - Socket 939, 1000HT, gigabit LAN, Firewall, Dual-Channel unbuffered, for Athlon 64/Athlon 64 FX.
nForce3 Pro250 - Socket 940, 1000HT, gigabit LAN, Firewall, for Opteron.
The 250 revision of the nForce 3 featured the world’s first native Gigabit Ethernet interface and a hardware-optimized Firewall. The Nvidia Firewall technology utilizes the ActiveArmor secure networking engine. This makes the firewall an on-chip function, in theory reducing the overhead on the CPU and increasing throughput. The firewall also uses IAM, or Intelligent Application Manager, to provide application-based filtering.
However, the most notable omission from the nForce3 chipset is the high-quality integrated SoundStorm audio found on nForce2 boards, supposedly for cost reasons. The nForce3 chipset is a single-die solution, as opposed to the historical northbridge/southbridge combination, and reportedly there was not enough die space left for audio functionality. An alternative explanation has been proposed: the Dolby technology licensing for SoundStorm, which Nvidia originally obtained for the chipset of Microsoft's Xbox, effectively allowed license-free implementation on the contemporary nForce1 and nForce2, but implementation on the nForce3 would have required new license payments.
The nForce3 also supports SATA technology, as well as RAID 0+1 striping and mirroring. The chipset can accommodate up to four high bandwidth SATA-150 devices.
Windows Vista Incompatibility
Nvidia announced before the public release of Windows Vista that it would not release chipset drivers for the AGP-based nForce2 for the operating system. It subsequently decided to also drop support for the nForce3 in late February 2007, after the release of Vista. Nvidia is thus the only major mainboard chipset manufacturer not supporting a chipset designed for 64-bit processors under Vista. The chipset drivers packaged with Windows Vista are usable, but as a result of not being specifically designed for the nForce2 and nForce3 chipsets, they do not take full advantage of the hardware and lose some functionality; one such problem prevents the safe removal of IDE and SATA drives.
One issue concerning the lack of natively supported drivers for the nForce3 chipset in Windows Vista came up with the public release of the operating system and the affordability of dual-core systems. In dual-core systems with ATI graphics chipsets above the Radeon 9XXX series, Windows Vista disables the ATI display drivers designed for the operating system and defaults to the PCI-compatible drivers, reporting this as a Code 43 error. In PCI-compatible mode, all hardware acceleration is switched off, negatively affecting the performance of the display adapter.
This problem is caused by memory allocation routines in dual-core systems with ATI display drivers; with single-core processors, the issue does not exist. ATI has claimed Nvidia's chipset driver is the issue, and AMD has released an announcement about the matter in knowledge base entry #737-24498. SiS, ULi and VIA also had problems with their chipset drivers (mainly agp.inf), but quickly released patches to correct these issues.
In February 2007, Nvidia said that the problem would most likely be resolved with an "MCP driver update."
To date, Nvidia has not released a complete chipset driver package for the nForce3 and Windows Vista, although it has posted individual pre-release 32-bit and 64-bit networking and audio drivers for Windows Vista Beta 1 that support the nForce3 series. It is also possible that nForce4 chipsets may experience similar problems with the RAID and ATA drivers.
Using the nForce3 with an Nvidia card can also cause it to negotiate only AGP 4x rather than 8x, and cause the system to restart repeatedly.
There are, however, workarounds allowing the use of the Windows XP 32-bit nForce3 GART driver under 32-bit Vista, allowing the use of AGP 8x and providing a more stable system; the method has been described in community forums.
The same workaround exists for Vista x64 users via the Windows XP 64-bit nForce3 beta GART driver. This driver is not signed, so in order to boot the system, the user must disable driver signature enforcement at the boot menu or install ReadyDriver Plus to do it automatically.
Nvidia offers chipset driver download in the "Legacy" product category on its download page.
See also
Comparison of Nvidia chipsets
References
External links
Nvidia: nForce3
Code 43 Error in Device Manager in Systems with ATI AGP Cards and Nforce3
Anandtech: nForce3-250 - Part 1: Taking Athlon 64 to the Next Level
Anandtech: nForce3-250 - Part 2: Taking Athlon 64 to the Next Level
Anandtech: Which Boards Have On-Chip LAN?
TechReport
Nvidia chipsets |
709325 | https://en.wikipedia.org/wiki/Debian%20Conference | Debian Conference | DebConf, the Debian developers conference, is the yearly conference where developers of the Debian operating system meet to discuss further development of the system.
Besides the scheduled workshops and talks, Debian developers take the opportunity to hack on the Debian system in a more informal setting. This was institutionalised with the introduction of DebCamp at the Oslo DebConf in 2003: a room is set aside and computing infrastructure provided.
Locations
Locations of past and future DebConf events:
Miniconf
These were one-day miniature conferences, originally held in association with the main linux.conf.au Australian Linux conference. They were targeted towards specific communities of interest and offered delegates an opportunity to network with other enthusiasts while immersing themselves in a specific topic or project.
Locations of past LCA Miniconf events:
MiniDebConf
This is a smaller Debian event, held annually in various places in the world.
Locations of past and future MiniDebConf events:
Attendance
According to a 2013 brochure, the conference had about 30 attendees in 2000, while in 2011 there were around 300 attendees; about 250 were expected for the 2013 event.
References
External links
DebConf website
DebConf video archive
Linux conferences
Debian
Free-software conferences
Recurring events established in 2000 |
4652498 | https://en.wikipedia.org/wiki/BackupPC | BackupPC | BackupPC is a free disk-to-disk backup software suite with a web-based frontend. The cross-platform server will run on any Linux, Solaris, or UNIX-based server. No client software is necessary, as the server is itself a client for several protocols that are handled by services native to the client OS. In 2007, BackupPC was mentioned as one of the three best-known open-source backup programs, even though it is one of the tools that are "so amazing, but unfortunately, if no one ever talks about them, many folks never hear of them".
Data deduplication reduces the disk space needed to store the backups in the disk pool. It is possible to use BackupPC as a disk-to-disk-to-tape (D2D2T) solution if its archive function is used to back up the disk pool to tape. BackupPC is not a block-level backup system like Ghost4Linux; it performs file-based backup and restore, and is thus not suitable for backing up disk images or raw disk partitions.
BackupPC incorporates a Server Message Block (SMB) client that can be used to back up network shares of computers running Windows. Paradoxically, under such a setup the BackupPC server can be located behind a NAT'd firewall while the Windows machine operates over a public IP address. While this may not be advisable for SMB traffic, it is more useful for web servers running Secure Shell (SSH) with GNU tar and rsync available, as it allows the BackupPC server to be stored in a subnet separate from the web server's DMZ.
It is published under the GNU General Public License.
Protocols supported
BackupPC supports NFS, SSH, SMB and rsync.
It can back up Unix-like systems with native ssh and tar or rsync support, such as Linux, BSD, and OS X, as well as Microsoft Windows shares with minimal configuration.
On Windows, third party implementations of tar, rsync, and SSH (such as Cygwin) are required to utilize those protocols.
Protocol choice
The choice between tar and rsync is dictated by the hardware and bandwidth available to the client. Clients backed up by rsync use considerably more CPU time than client machines using tar or SMB. Clients using SMB or tar use considerably more bandwidth than clients using rsync. These trade-offs are inherent in the differences between the protocols. Using tar or SMB transfers each file in its entirety, using little CPU but maximum bandwidth. The rsync method calculates checksums for each file on both the client and server machines in a way that enables a transfer of just the differences between the two files; this uses more CPU resources, but minimizes bandwidth.
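The trade-off follows directly from how a delta transfer works. The Python sketch below is a deliberately simplified illustration of the rsync idea (fixed-size blocks and whole-block hashes rather than rsync's true rolling checksum), showing why only changed blocks need to cross the wire:

```python
import hashlib

BLOCK = 4096

def block_sums(data):
    """Checksum each fixed-size block (real rsync also uses a cheap rolling sum)."""
    return {hashlib.md5(data[i:i + BLOCK]).hexdigest(): i
            for i in range(0, len(data), BLOCK)}

def delta(new_data, old_sums):
    """Emit literal bytes only for blocks the other side does not already hold."""
    ops = []
    for i in range(0, len(new_data), BLOCK):
        block = new_data[i:i + BLOCK]
        digest = hashlib.md5(block).hexdigest()
        if digest in old_sums:
            ops.append(("copy", old_sums[digest]))   # a reference, ~0 bandwidth
        else:
            ops.append(("literal", block))           # full block on the wire
    return ops

old = b"A" * BLOCK + b"B" * BLOCK
new = b"A" * BLOCK + b"C" * BLOCK                    # only the second block changed
print([op[0] for op in delta(new, block_sums(old))])  # ['copy', 'literal']
```

Computing the checksums on both ends is what costs the extra CPU time; skipping the unchanged blocks is what saves the bandwidth.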
Data storage
Version 3.x
BackupPC uses a combination of hard links and compression to reduce the total disk space used for files. At the first full backup, all files are transferred to the backend, optionally compressed, and then compared. Files that are identical are hard-linked, which uses only one additional directory entry. The upshot is that an astute system administrator could potentially back up ten Windows XP laptops with 10 GB of data each; if 8 GB is repeated on each machine (Office and Windows binary files), it would look like 100 GB is needed, but only 28 GB (10 × 2 GB + 8 GB) would be used. Compression of the data on the back-end further reduces that requirement.
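The pooling mechanism is simple to sketch. The Python fragment below is a minimal illustration of content-addressed pooling with hard links (a hypothetical layout and hash choice, not BackupPC's actual pool format), in which each unique file body is stored once and every duplicate costs only a directory entry; it assumes a POSIX system where the pool and backup trees share one filesystem, so hard links work:

```python
import hashlib, os, shutil

def store(path, pool_dir, backup_path):
    """Link 'backup_path' to a pooled copy of 'path', adding it if unseen."""
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    pooled = os.path.join(pool_dir, digest)
    if not os.path.exists(pooled):       # first copy: pay the full storage cost
        shutil.copyfile(path, pooled)
    os.link(pooled, backup_path)         # every duplicate: one directory entry
```

Applied to the example above, the 8 GB common to all ten laptops lands in the pool once, so the total is ten 2 GB sets of unique data plus a single 8 GB copy, i.e. the 28 GB figure.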
When browsing the backups, incremental backups are automatically filled back to the previous full backup. So every backup appears to be a full and complete set of data.
Version 4.x
Version 4.x can still use V3.x repositories, but all new backups use a new format (seamless upgrade). The overall performance is higher than with the V3.x version.
See also
List of backup software
Comparison of backup software
References
External links
2001 software
Free backup software
Backup software for Linux
Free software programmed in Perl
Perl software |
44073858 | https://en.wikipedia.org/wiki/National%20Cyber%20Coordination%20Centre | National Cyber Coordination Centre | The National Cyber Coordination Centre (NCCC) is an operational cybersecurity and e-surveillance agency in India. It is intended to screen communication metadata and coordinate the intelligence-gathering activities of other agencies. Some have expressed concern that the body could encroach on Indian citizens' privacy and civil liberties, given the lack of explicit privacy laws in the country.
Motivation
India has no dedicated cybersecurity regulation and is also not well prepared to deal with cyberwarfare. However, India has formulated the National Cyber Security Policy 2013, which is not yet implemented. The National Cyber Coordination Centre's purpose is to help the country deal with malicious cyber activities by acting as an Internet traffic monitoring entity that can fend off domestic or international attacks.
Components of the NCCC include a cybercrime prevention strategy, cybercrime investigation training and review of outdated laws.
Background
The NCCC is an e-surveillance and cybersecurity project of the Government of India. It has been characterised as a project of the Indian government without a legal framework, which may be counterproductive, as it may violate civil liberties and human rights.
There were concerns that the National Cyber Coordination Centre (NCCC) could be abused for mass surveillance in India, privacy violations and civil liberty violations, as agencies like NTRO and organisations like the National Security Council Secretariat are exempted from the applicability of transparency laws like the Right to Information Act. Mass surveillance in India is not new, as India already has e-surveillance projects like Aadhaar, the Central Monitoring System, NATGRID, and DRDO NETRA.
Many in India, including legal experts, believe that intelligence agencies and their e-surveillance projects require parliamentary oversight. Although the NCCC is jurisdictionally under the Ministry of Home Affairs, it coordinates with multiple security and surveillance agencies as well as with CERT-In of the Ministry of Electronics and Information Technology.
Status
The National Cyber Coordination Centre received in-principle approval in May 2013 and falls under the National Information Board. In September 2014, the Indian government discussed establishing it. In November 2014, Rs. 1,000 crore was allotted to improve Indian cybersecurity, of which Rs. 800 crore would be utilised for National Cyber Coordination Centre purposes.
On 9 August, in response to a question, Minister of State P. P. Chaudhary mentioned that Phase 1 of the National Cyber Coordination Centre was operational. Indian and U.S. intelligence agencies are also working together to curb the misuse of social media platforms by terror groups.
Functions
Government sources mentioned that the government would also involve Internet service providers (ISPs) to ensure round-the-clock monitoring of the Internet, while the expertise of private sector organisations would be utilised when required. The NCCC will be India's first layer of cyber threat monitoring, and all communication with government and private service providers will be through this body. The NCCC will be in virtual contact with the control rooms of all ISPs to scan traffic within the country, flowing at the points of entry and exit, including international gateways. Apart from monitoring the Internet, the NCCC will look into various threats posed by cyber attacks, and will address the threats faced by the computer networks of government departments and organisations handling sensitive government data and important websites.
National Cyber Security Coordinator
The first National Cyber Security Coordinator (cybersecurity chief) was Gulshan Rai; he was followed by Lt. Gen. Rajesh Pant (Retd.).
Similar projects
Projects similar in nature:
Aadhaar
Central Monitoring System
NATGRID
Netra
References
Government of India
Ministry of Home Affairs (India)
Executive branch of the government of India
2011 establishments in India
Cybercrime in India
Cyber Security in India |
17469546 | https://en.wikipedia.org/wiki/Panopticon%20Software | Panopticon Software | Panopticon Software (now part of Altair Data Analytics) was a multi-national data visualization software company specializing in the monitoring and analysis of real-time data. The firm was headquartered in Stockholm, Sweden. It partnered with several large systems integrators and infrastructure software companies, including SAP, Thomson Reuters, Kx Systems, and One Market Data (OneTick). The company's name derives from the Greek 'pan' (all) and 'optic' (sight), via the panopticon, an architectural concept originally intended to facilitate the surveillance of prisons.
In December 2018, Panopticon was acquired by Altair Engineering as part of its acquisition of Datawatch Corporation.
Panopticon Software was a key player in the data visualization sector along with, for example, Qliktech, Tableau Software and Tibco Software. Its Swedish origins are shared with GapMinder, Qliktech and Spotfire, making Sweden a centre for information visualization research and development.
Panopticon products are optimized for use with real-time data message buses, complex event processing engines, relational databases, and column-oriented databases, and are widely used to support electronic trading operations within global banks, asset managers, and exchanges. Panopticon tools are also often embedded in other enterprise applications using the company's software development kit.
History
The company was founded in 1999 as a wholly owned subsidiary of the emerging-markets brokerage Brunswick Direct before being spun off as a separate entity in 2002. It was later acquired by the UK-based Hamsard Group (now known as Cantono Plc) and became a subsidiary of the group. In March 2007, its competitor Spotfire entered negotiations with Hamsard Group to take full ownership of Panopticon; the deal was not completed, Spotfire pulling out of the negotiations and itself subsequently being purchased by Tibco. In May 2007 the company was sold back to its founders as part of a management buyout.
In May 2012, the company announced that QlikTech had partnered with Panopticon to enable QlikTech clients to embed Panopticon data visualizations into their QlikView dashboards. Panopticon supports QlikView desktop, web, and mobile interactive dashboards and allows users to filter and interact directly with real-time data.
In June 2012, the company announced that SAP was utilizing its Panopticon data visualization tools as the front end for real-time deployments of the SAP HANA in-memory appliance.
In April 2013, Panopticon was selected for inclusion in UBM Tech Channel's CRN 2013 Big Data 100 list. The Big Data 100 recognizes innovative technology vendors that help businesses manage "Big Data" — the rapidly increasing volume, variety and velocity of information being generated today. The list covers three categories: business analytics, data management, infrastructure and services. The Big Data 100 includes many established vendors as well as startups and specialized suppliers of niche products that help businesses address Big Data needs.
In April 2013, Panopticon was named a Gartner "Cool Vendor" in the Cool Vendors for In-Memory Computing 2013 report by Gartner, Inc. The April 23, 2013 report was co-authored by Roy Schulte, Roxane Edjlali, et al. This is the first year that Gartner has called out In-Memory Computing specifically as a subject for one of its Cool Vendor reports.
In August 2013, Datawatch Corporation (NASDAQ-CM: DWCH) announced that it had completed its acquisition of Panopticon Software AB.
In December 2018, Altair Engineering Inc (NASDAQ: ALTR) announced that it had completed its acquisition of Datawatch Corporation.
Pre-attentive processing
Panopticon is designed to allow analysts to utilize Pre-Attentive Processing to identify anomalies and outliers in large amounts of fast-changing data. Pre-Attentive Processing is a term from the area of human cognitive psychology and refers to the ability of the low-level human visual system to rapidly identify certain basic visual properties. Examples of visual features that can be detected in this way include hue, intensity, enclosure, orientation, size, and motion.
Technology overview
Panopticon's technology relies on in-memory OLAP cubes, which are displayed through a series of visualizations including treemapping. This allows the user to load data, select variables and hierarchical structures, and navigate through the resultant visualization, filtering, zooming and drilling (sometimes called slicing and dicing), to identify outliers, correlations and trends.
Its streaming OLAP implementation takes an in-memory OLAP cube and allows data to be streamed through it. This combination makes the company's products attractive to industry verticals that require live streaming data, such as financial market data, utility grid monitoring and telecommunications network traffic analysis. This is very different than the vast majority of OLAP implementations in which cubes are rebuilt periodically for new batches of data.
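The difference between periodically rebuilt cubes and streaming cubes can be shown with a toy aggregator in Python (an illustration of the general idea only, not Panopticon's implementation): each arriving event updates just the in-memory cells it touches, so a visualization can read current totals at any moment without a rebuild:

```python
from collections import defaultdict

class StreamingCube:
    """Toy in-memory cube: cells keyed by dimension values, updated per event."""
    def __init__(self, dimensions):
        self.dimensions = dimensions
        self.cells = defaultdict(float)

    def ingest(self, event):
        key = tuple(event[d] for d in self.dimensions)
        self.cells[key] += event["value"]      # update in place, no rebuild

    def slice(self, **fixed):
        """Filter cells matching the fixed dimension values (a cube 'slice')."""
        idx = {d: i for i, d in enumerate(self.dimensions)}
        return {k: v for k, v in self.cells.items()
                if all(k[idx[d]] == val for d, val in fixed.items())}

cube = StreamingCube(["desk", "instrument"])
for e in [{"desk": "FX", "instrument": "EURUSD", "value": 5.0},
          {"desk": "FX", "instrument": "USDJPY", "value": 2.0},
          {"desk": "FX", "instrument": "EURUSD", "value": -1.0}]:
    cube.ingest(e)
print(cube.slice(desk="FX"))
# {('FX', 'EURUSD'): 4.0, ('FX', 'USDJPY'): 2.0}
```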
This support for streaming data with its products has allowed financial institutions such as JPMorgan Chase, Citigroup, Citadel, and BlackRock to implement Panopticon within their real-time trading and risk applications.
Euromoney has stated that it provides the trader community with a way of quickly digesting information.
References
Business intelligence companies
2013 mergers and acquisitions |
9291622 | https://en.wikipedia.org/wiki/Microsoft%20Solitaire | Microsoft Solitaire | Solitaire is a computer game included with Microsoft Windows, based on a card game of the same name, also known as Klondike.
History
Microsoft has included the game as part of its Windows product line since Windows 3.0, starting from 1990. The game was developed in 1988 by the intern Wes Cherry. The card deck itself was designed by Macintosh pioneer Susan Kare. Cherry's version was to include a boss key that would have switched the game to a fake Microsoft Excel spreadsheet, but he was asked to remove this from the final release.
Microsoft intended Solitaire "to soothe people intimidated by the operating system," and at a time where many users were still unfamiliar with graphical user interfaces, it proved useful in familiarizing them with the use of a mouse, such as the drag-and-drop technique required for moving cards.
According to Microsoft telemetry, Solitaire was among the three most-used Windows programs and FreeCell was seventh, ahead of Word and Microsoft Excel. Lost business productivity by employees playing Solitaire has become a common concern since it became standard on Microsoft Windows.
In October 2012, along with the release of the Windows 8 operating system, Microsoft released a new version of Solitaire called Microsoft Solitaire Collection. This version, designed by Microsoft Studios with visual design led by William Bredbeck and developed by Arkadium, is advertisement-supported and introduced many new features to the game. Echoing the intent behind the original release, Bredbeck is quoted as saying, "One of the intentions of the redesign was to introduce users to the novel changes incorporated in the new Windows 8 operating system". This design is still in use through Windows 11.
Microsoft Solitaire celebrated its 25th anniversary on May 18, 2015. To celebrate this event, Microsoft hosted a Solitaire tournament on the Microsoft campus and broadcast the main event on Twitch.
By its 30th anniversary in 2020, it was estimated that the game still had 35 million active monthly players and more than 100 million games played daily, according to Microsoft.
Features
When a game is won, the cards appear to fall off each stack and bounce off the screen. This "victory" screen is considered a prototype of the reward animations that became popular in casual games, comparable to the use of "Ode to Joy" on winning a level of Peggle, and it makes Solitaire one of the first such casual video games.
Since Windows 3.0, Solitaire allows selecting the design on the back of the cards, choosing whether one or three cards are drawn from the deck at a time, switching between Vegas scoring and Standard scoring, and disabling scoring entirely. The game can also be timed for additional points if the game is won. There is a cheat that will allow drawing one card at a time when 'draw three' is set.
In Windows 2000 and later versions of Solitaire, right-clicking on open spaces automatically moves available cards to the four foundations in the upper right-hand corner, as in FreeCell. If the mouse pointer is on a card, a right click will move only that card to its foundation, provided that it is a possible move. Left double-clicking will also move the card to the proper foundation.
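The rule behind these automatic moves can be stated compactly: a card may go to a foundation only if the foundation holds the same suit and its top card is exactly one rank lower, while an empty foundation accepts only an Ace. A minimal sketch of that check, with all names invented for illustration:

```python
# Hypothetical check for the foundation auto-move described above.
RANKS = {"A": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7,
         "8": 8, "9": 9, "10": 10, "J": 11, "Q": 12, "K": 13}

def can_move_to_foundation(card, foundation):
    """card is a (rank, suit) tuple; foundation is a list of cards of one suit."""
    rank, suit = card
    if not foundation:                       # an empty pile accepts only an Ace
        return RANKS[rank] == 1
    top_rank, top_suit = foundation[-1]      # otherwise: same suit, next rank up
    return suit == top_suit and RANKS[rank] == RANKS[top_rank] + 1

print(can_move_to_foundation(("A", "hearts"), []))                                  # True
print(can_move_to_foundation(("3", "hearts"), [("A", "hearts"), ("2", "hearts")]))  # True
```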
Until the Windows XP version, the card backs were the original works designed by Susan Kare, and many were animated.
The Windows Vista and Windows 7 versions of the game save statistics on the number and percentage of games won, and allow users to save incomplete games and to choose cards with different face styles.
On Windows 8, Windows 10, Windows 11, Windows Phone, Android and iOS, the game is issued as Microsoft Solitaire Collection, where in addition to Klondike four other game modes are featured: Spider, FreeCell (both of which had been previously featured in versions of Windows as Microsoft Spider Solitaire and Microsoft FreeCell), Pyramid, and TriPeaks (both of which were previously part of the Microsoft Entertainment Pack series, the former under the name Tut's Tomb).
See also
List of games included with Windows
References
1990 video games
Card game video games
Discontinued Windows components
Microsoft games
Simple packers
Video games developed in the United States
Windows games
Windows Phone games
Casual games
de:Klondike (Solitaire) |
4260439 | https://en.wikipedia.org/wiki/24SevenOffice | 24SevenOffice | 24SevenOffice is a Norwegian software company with headquarters in Oslo, Norway, and offices in Stockholm, Sweden and London, UK. Founded in 1997, the company specializes in web-based (SaaS) Enterprise resource planning systems.
Company history
24SevenOffice was started in 1997 in Porsgrunn, Norway under the company name IKT Interactive AS and marketed as kontorplassen.no. The name "24SevenOffice" was introduced for the company's London branch when the company entered the British market in 2003. The company changed its name to 24SevenOffice in February 2005. Originally based in Skien, the company later moved to Oslo Innovation Center before establishing their current headquarters on Tjuvholmen in the waterfront Fjord City of Oslo.
The idea for the company's product was developed in 1996, and 24SevenOffice was an early innovator in the Scandinavian market in terms of web-based enterprise resource planning-solutions (ERP). A British office was established at Surrey Business Park in May 2003, with the company launching its web-based (SaaS) utility computing system to the UK SME market in 2004.
An office in Chennai, India was established in 2005, and 24SevenOffice entered the Swedish market when they acquired the leading competitor and ERP-provider Start & Run in a cash deal. In August 2005 the company had an initial public offering that raised million, and the company entered The Norwegian Over the Counter Market list as of 5 October 2005 (the ticker was 24SO), reaching a market value of million, with 5000 customers in Norway.
In 2006, the company signed a deal to sponsor rally driver Petter Solberg, the largest private sponsorship in Norwegian sport at that time. Instead of receiving NOK 5 million in cash, Solberg received a 2.9 percent ownership in the company. The German-speaking market was entered in April 2006, when an office in Frankfurt am Main was opened, and in late August/early September they established an office with ten sales agents plus a general manager in Stockholm for the Swedish market.
24SevenOffice initiated strategic cooperation with Active 24 in early 2006 to develop a common platform. During the summer, Active 24 was bought by 24SevenOffice's ERP/CRM competitor Mamut (company), and 24SevenOffice terminated the contract with Active 24 in October, demanding NOK 200 million in compensation for lost revenue. After a breakdown of settlement negotiations in the Forliksråd in January 2007, 24SevenOffice filed a case against Active 24 for breach of agreement in the Oslo District Court in March. 24SevenOffice lost on all counts in the District Court in December 2007. In January 2008, 24SevenOffice appealed the case to the Borgarting Court of Appeal, reducing the cause of action from NOK 250 to 30 million. 24SevenOffice lost on all counts in the Court of Appeal in December 2008, and was ordered to cover the costs incurred by Active 24 in connection with the dispute, totaling NOK 6.91 million. 24SevenOffice appealed the case to the Supreme Court of Norway, but the Supreme Court Appeals Committee in March 2008 unanimously rejected the appeal from 24SevenOffice over the Borgarting Appeal Court's unanimous judgment of December 2008. On a counterclaim from Active 24 and Mamut against 24SevenOffice, the Oslo District Court in May 2010 found that 24SevenOffice should pay Active 24 NOK 12 million in compensation for wrongfully having terminated the agreement, and a further NOK 360,000 of the opponent's legal costs. 24SevenOffice disagreed with the court ruling and appealed once again. The Borgarting Court of Appeal in November 2011 ruled to reduce the amount of damages to NOK 4.4 million plus NOK 900,000 in penal interest.
With several scrip issues 24SevenOffice raised 25 million NOK (about $4 million at the time) between October 2005 and July 2006. They entered into a strategic partnership with Bluegarden, who for 30 years had delivered digital services for payroll, human resource planning, recruitment and training, in March 2006, and they made a large-scale agreement in April 2006 with US telecommunications software company Webex, a competitor to Norwegian Tandberg videoconferencing equipment manufacturer. In September 2006, 24SevenOffice signed an agreement with Fokus Bank to provide their customers extended functionality in Internet banking.
24SevenOffice had by 2007 reportedly 9000 customers, joined the OpenAjax Alliance, and entered into a strategic partnership with Dun & Bradstreet in May 2007, but despite getting listed on Oslo Axess on 22 June (ticker: TFSO), reaching a market capitalization of NOK 120 million, the company was still losing money. The company ended 2007 with a revenue of NOK 21.7 million.
In 2008, 24SevenOffice bought 50% of the stocks in telecommunication company Oyatel, partnered with Nets Group to facilitate invoicing for businesses, and telecommunications company Telipol chose 24SevenOffice's second-generation Internet platform for its 8,000 users. The company announced an increase in revenues in Q2 to 11.1 million, up from 4.7 million in the same period the year before.
24SevenOffice had turnover of NOK 37 million in the first half of 2009 which was a doubling compared to the same period the previous year and presented its first positive EBITDA in Q2.
The Norwegian Association of Auditors signed an agreement with 24SevenOffice in 2011, under which it recommends only 24SevenOffice as a system for its members to use.
Product
24SevenOffice is a web-based (SaaS) ERP system. It includes modules for CRM, accounting, invoicing, e-mail, file/document management and project management.
Awards
24SevenOffice won the Seal of Excellence in Multimedia Award at the 2004 CeBIT, became Norwegian Gazelle Company of the year 2004 chosen by Dagens Næringsliv and Dun & Bradstreet, won Product of the Year in the Norwegian finance magazine Kapital, and the IKT Grenland Innovation Award in 2008.
References
External links
1997 establishments in Norway
Ajax (programming)
ASP Accounting Systems
Cloud applications
Companies based in Oslo
Companies established in 1997
CRM software companies
Customer relationship management software
ERP software companies
ERP software
Norwegian brands
Software companies of Norway
Web applications |
33905500 | https://en.wikipedia.org/wiki/Zentraler%20Omnibusbahnhof%20M%C3%BCnchen | Zentraler Omnibusbahnhof München | Zentraler Omnibusbahnhof München (ZOB München, Central bus station Munich) is a central bus station located in Maxvorstadt, Munich, Bavaria, Germany. The terminal has an area of . The bus station was established on 11 September 2009 and is a major transportation hub for bus and train with national and international traffic. The bus station also has spacious offices and retail space for retailers that give it an airport-like character. The nearest S-Bahn station is Munich Hackerbrücke station.
Location and description
The central bus station was opened on 11 September 2009 and is located in the immediate vicinity of the Hackerbrücke S-Bahn station and not far from Munich Central Station. The adjacent Hackerbrücke is connected to the bus station building via two footbridges. Arnulfpark, a modern city quarter with residential and office buildings, cultural facilities and restaurants, is located in the immediate vicinity.
The ZOB was designed as a multifunctional property with a floor area of around 25,000 m², seven storeys and various levels of use. The bus station, which is located below the ground floor, is accessible via Arnulfstraße and has 29 stops where national and international long-distance scheduled bus services as well as most of Munich's tourist bus services are operated. Escalators or elevators take passengers to the first floor, where the shopping arcade with retail and catering areas is located, giving the ZOB an airport-like character. Tenants of the upper floor include Lidl, dm-drogerie markt, McDonald's, TUI and Vapiano. Three further upper floors, with an area of about 10,300 m², house offices and a parking deck, and there is a discotheque in the basement.
Architecture
After a public tender by the city of Munich in 2002, the contract for the design was awarded to the architectural firm Auer+Weber+Assoziierte. The futuristic exterior façade is based on the shape of the nose of an ICE power car, which was intended to fit in with the redesign of Munich's main railway station being discussed at the time. Almost 31 km of tubing was used for the aluminium tube design, which is unique in Germany.
Operation of the bus station
Owner
In 2014, the ZOB was acquired by the Munich issuing house WealthCap, previously owned by the project developer Hochtief. The bus station is operated by the Bavarian Red Cross.
User
The bus station is mainly used by numerous long-distance bus operators, but also by tour operators. For example, buses operated by companies such as Flixbus or IC Bus, which offer national and international connections, regularly stop at the ZOB. Public buses are excluded, as the public transport connection is via Munich S-Bahn and Munich tramway.
Fees
When the bus station opened, buses stopping at the ZOB paid 6.00 euros for a half-hour stay and 11.00 euros for a full hour. In 2018 the prices were 8.00 and 11.00 euros respectively. In addition, there are further price scales of up to 56.00 euros for the maximum permitted parking duration of 24 hours.
Criticism
The premises above the bus stops were criticised for several years because they initially struggled to attract tenants. When the bus station opened, only six of the 17 rental spaces had been let. Because customer traffic reached only about 1,500 persons per day instead of the planned 8,000 to 10,000, several tenants cut their payments to the project developer Hochtief in 2011, whereupon Hochtief filed a lawsuit. The Landgericht München I ruled in favour of a tenant in a first case in 2013.
The deregulation of the long-distance bus market in Germany led to a significant improvement in the use of the ZOB from the beginning of 2013. Average arrivals and departures increased from 80 buses per day in 2010 to 135 per day in 2013. 42,000 buses per year, i.e. an average of 115 buses per day, were expected at the opening of the bus station.
References
External links
Buildings and structures in Munich
Bus stations in Germany
Maxvorstadt
Transport in Munich |
14302231 | https://en.wikipedia.org/wiki/Mark%20Guzdial | Mark Guzdial | Mark Joseph Guzdial (born September 7, 1962) is a Professor in the College of Engineering at the University of Michigan. He was formerly a professor in the School of Interactive Computing at the Georgia Institute of Technology affiliated with the College of Computing and the GVU Center. He has conducted research in the fields of computer science education and the learning sciences and internationally in the field of Information Technology. From 2001–2003, he was selected to be an ACM Distinguished Lecturer, and in 2007 he was appointed Vice-Chair of the ACM Education Board Council. He was the original developer of the CoWeb (or Swiki), one of the earliest wiki engines, which was implemented in Squeak and has been in use at institutions of higher education since 1998. He is the inventor of the Media Computation approach to learning introductory computing, which uses contextualized computing education to attract and retain students.
Education
Mark Guzdial was born in Michigan and attended Wayne State University for his undergraduate studies, earning a Bachelor of Science degree in computer science in 1984. He received a master's degree in 1986 in Computer Science and Engineering at Wayne State University. Guzdial went on to receive a Ph.D. at the University of Michigan in 1993 in Computer Science and Education where he was advised by Elliot Soloway. His thesis created an environment for high school science learners to program multimedia demonstrations and physics simulations. After graduating from the University of Michigan, Guzdial accepted a position as an assistant professor at the Georgia Institute of Technology College of Computing. In 2018, he became a full professor in the College of Engineering at the University of Michigan.
Research and teaching
Guzdial's research projects include Media Computation, an approach that emphasizes context in computer science education, using programming languages, lecture examples, and programming assignments from contexts that students recognize as authentic and relevant for computing.
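In the spirit of a Media Computation exercise, a program manipulates media, such as the pixels of a picture, rather than abstract data. The following sketch uses the Pillow library rather than the course's own environment, and the file names are hypothetical:

```python
# Compute a grayscale version of a picture by looping over its pixels.
from PIL import Image

picture = Image.open("beach.jpg").convert("RGB")
pixels = picture.load()

width, height = picture.size
for x in range(width):
    for y in range(height):
        r, g, b = pixels[x, y]
        luminance = (r + g + b) // 3   # simple average of the colour channels
        pixels[x, y] = (luminance, luminance, luminance)

picture.save("beach_gray.jpg")
```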
Guzdial's Media Computation curriculum is being used at universities across the country. He received a grant from the National Science Foundation in 2006 to pursue his “Using Media Computation to Attract and Retain Students in Computing” curriculum.
Guzdial was Director of Undergraduate Programs at Georgia Tech (including the BS in Computer Science, BS in Computational Media, and Minor in Computer Science) until 2007. He was Lead Principal Investigator on Georgia Computes, a National Science Foundation Broadening Participation in Computing alliance focused on increasing the number and diversity of computing students in the state of Georgia.
Publications
His publications include:
2015 Learner-Centered Design of Computing Education: Research on Computing for Everyone (Synthesis Lectures on Human-Centered Informatics).
2006. Introduction to Computing and Programming with Java: A Multimedia Approach. (with Barbara Ericson)
2004. Introduction to Computing and Programming in Python: A Multimedia Approach.
2001. Squeak: Open Personal Computing and Multimedia. (with Kim Rose)
2000. Squeak: Object-Oriented Design with Multimedia Applications.
Awards and honors
In 2010, Guzdial was awarded the Karl V. Karlstrom Outstanding Educator Award "for [his] contributions to computing education, through the Media Computation (MediaComp) approach that they have created, supported, and disseminated, and its impact on broadening participation in computing." In 2012, he received the IEEE Computer Science and Engineering Undergraduate Teaching Award "for outstanding and sustained excellence in computing education through innovative teaching, mentoring, inventive course development, and knowledge dissemination." In 2014, Guzdial was elected a Fellow of the Association for Computing Machinery "for contributions to computing education, and broadening participation." In 2019, Guzdial was awarded the ACM SIGCSE Award for Outstanding Contribution to Computer Science Education at the 50th SIGCSE Technical Symposium "in recognition of a significant contribution to computer science education".
Personal life
Guzdial was married to Barbara Ericson in July 1985. They have three children, Matthew, Katherine, and Jennifer.
References
Living people
1962 births
University of Michigan College of Engineering alumni
Wayne State University alumni
Georgia Tech faculty
Human–computer interaction researchers
Computer science educators
Fellows of the Association for Computing Machinery |
1529966 | https://en.wikipedia.org/wiki/Zone%20Routing%20Protocol | Zone Routing Protocol | Zone Routing Protocol, or ZRP is a hybrid Wireless Networking routing protocol that uses both proactive and reactive routing protocols when sending information over the network. ZRP was designed to speed up delivery and reduce processing overhead by selecting the most efficient type of protocol to use throughout the route.
How ZRP works
If a packet's destination is in the same zone as the origin, the proactive protocol using an already stored routing table is used to deliver the packet immediately.
If the route extends outside the packet's originating zone, a reactive protocol takes over to check each successive zone in the route to see whether the destination is inside that zone. This reduces the processing overhead for those routes. Once a zone is confirmed as containing the destination node, the proactive protocol, or stored route-listing table, is used to deliver the packet.
In this way packets with destinations within the same zone as the originating zone are delivered immediately using a stored routing table. Packets delivered to nodes outside the sending zone avoid the overhead of checking routing tables along the way by using the reactive protocol to check whether each zone encountered contains the destination node.
Thus ZRP reduces the control overhead for longer routes that would be necessary if using proactive routing protocols throughout the entire route, while eliminating the delays for routing within a zone that would be caused by the route-discovery processes of reactive routing protocols.
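A minimal sketch of this forwarding decision, with all data structures invented for illustration (a real node maintains the zone table via IARP and performs IERP route discovery over the network):

```python
# Hypothetical example data for one node's local zone.
zone_members = {"B", "C"}            # nodes within k hops of this node
iarp_table = {"B": "B", "C": "B"}    # destination -> next hop (proactive)

def ierp_route_discovery(destination):
    # Placeholder for reactive route discovery via the border nodes.
    return ["C", "F", destination]   # a discovered path; first entry = next hop

def next_hop_for(destination):
    if destination in zone_members:  # intra-zone: answer from the stored table
        return iarp_table[destination]
    route = ierp_route_discovery(destination)  # inter-zone: discover reactively
    return route[0]

print(next_hop_for("C"))  # 'B' -- immediate, from the IARP table
print(next_hop_for("Z"))  # 'C' -- first hop of a reactively discovered route
```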
Details
The Intra-zone Routing Protocol (IARP) is the proactive routing protocol used inside routing zones, while the Inter-zone Routing Protocol (IERP) is the reactive routing protocol used between routing zones. IARP answers queries from a routing table that is already stored, which is what makes it proactive; IERP discovers routes on demand.
Any route to a destination that is within the same local zone is quickly established from the source's proactively cached routing table by IARP. Therefore, if the source and destination of a packet are in the same zone, the packet can be delivered immediately.
Most existing proactive routing algorithms can be used as the IARP for ZRP.
In ZRP a zone is defined around each node, called the node's k-neighborhood, which consists of all nodes within k hops of the node. Border nodes are nodes which are exactly k hops away from a source node.
For routes beyond the local zone, route discovery happens reactively. The source node sends a route request to the border nodes of its zone, containing its own address, the destination address and a unique sequence number. Each border node checks its local zone for the destination. If the destination is not a member of this local zone, the border node adds its own address to the route request packet and forwards the packet to its own border nodes. If the destination is a member of the local zone, it sends a route reply on the reverse path back to the source. The source node uses the path saved in the route reply packet to send data packets to the destination.
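Since a zone is defined purely by hop count, a node's zone and border nodes can be computed with a depth-limited breadth-first search. A sketch under that reading, with a hypothetical adjacency map:

```python
from collections import deque

adjacency = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "D"],
             "C": ["A"], "D": ["B", "E"], "E": ["D"]}

def zone_and_border(source, k):
    distance = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if distance[node] == k:        # do not expand past the zone radius
            continue
        for neighbour in adjacency[node]:
            if neighbour not in distance:
                distance[neighbour] = distance[node] + 1
                queue.append(neighbour)
    zone = set(distance)                                 # all nodes within k hops
    border = {n for n, d in distance.items() if d == k}  # exactly k hops away
    return zone, border

zone, border = zone_and_border("S", 2)
print(sorted(zone))    # ['A', 'B', 'C', 'D', 'S']
print(sorted(border))  # ['C', 'D']
```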
References
Haas, Z. J., 1997 (ps). A new routing protocol for the reconfigurable wireless networks. Retrieved 2011-05-06.
The ZRP internet-draft
The BRP internet-draft
Wireless networking
Ad hoc routing protocols |
54493227 | https://en.wikipedia.org/wiki/Training%20management%20system | Training management system | A training management system (TMS), training management software, or training resource management system (TRMS) is a software application for the administration, documentation, tracking, and reporting of instructor-led-training programs. TMSs are focused on back-office processes and are considered a tool for corporate training administrators as such, a training management system acts as a central enterprise resource planning (ERP) software specific to the training industry. They can be complemented by other learning technologies such as a learning management system, and are part of the educational technology ecosystem.
Purpose
The training management process includes managing and maintaining the training records of an organization. A TMS manages back-office processes for corporate instructor-led-training administration, and typically handles session registration, course administration, tracking, monitoring and reporting. Some TMSs also manage financial aspects, including budget forecasting and cost-tracking.
TMSs are often used by regulated industries for compliance training.
TMSs can be complemented by other learning technologies such as a learning management system for e-learning management and course delivery.
Technical aspects
Most TMSs are web-based. TMSs were originally designed to be locally hosted on-premise, where the organization purchases a license to a version of the software, and installs it on their own servers and network. Many TMSs are also offered as SaaS (software as a service), with hosting provided by the vendors.
See also
Competency-based management
Student information system
References
Learning
Educational software
Administrative software |
7495380 | https://en.wikipedia.org/wiki/Terminal%20mode | Terminal mode | A terminal mode is one of a set of possible states of a terminal or pseudo terminal character device in Unix-like systems and determines how characters written to the terminal are interpreted. In cooked mode data is preprocessed before being given to a program, while raw mode passes the data as-is to the program without interpreting any of the special characters.
The system intercepts special characters in cooked mode and interprets special meaning from them. Backspace, delete, and Control-D are typically used to enable line-editing for the input to the running programs, and other control characters such as Control-C and Control-Z are used for job control or associated with other signals. The precise definition of what constitutes a cooked mode is operating system-specific.
For example, if “ABC<Backspace>D” is given as an input to a program through a terminal character device in cooked mode, the program gets “ABD”. But, if the terminal is in raw mode, the program gets the characters “ABC” followed by the Backspace character and followed by “D”. In cooked mode, the terminal line discipline processes the characters “ABC<Backspace>D” and presents only the result (“ABD”) to the program.
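The backspace handling in this example is exactly the preprocessing a line discipline performs; a minimal simulation of it:

```python
# Simulate cooked-mode backspace editing before input reaches a program.
BACKSPACE = "\x08"

def cook(raw_chars):
    line = []
    for ch in raw_chars:
        if ch == BACKSPACE:
            if line:
                line.pop()        # erase the previously typed character
        else:
            line.append(ch)
    return "".join(line)

print(cook("ABC" + BACKSPACE + "D"))  # prints 'ABD', as in the example above
```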
Technically, the term “cooked mode” should be associated only with streams that have a terminal line discipline, but generally it is applied to any system that does some amount of preprocessing.
cbreak mode
cbreak mode (sometimes called rare mode) is a mode between raw mode and cooked mode. Unlike cooked mode it works with single characters at a time, rather than forcing a wait for a whole line and then feeding the line in all at once. Unlike raw mode, keystrokes like abort (usually Control-C) are still processed by the terminal and will interrupt the process.
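On a Unix-like system, a program can switch its controlling terminal into cbreak mode with the standard-library termios and tty modules; the following sketch reads a single keystroke and then restores the saved settings:

```python
import sys
import termios
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)      # remember the current (cooked) settings
try:
    tty.setcbreak(fd)              # character-at-a-time input; Ctrl-C still works
    ch = sys.stdin.read(1)         # returns after one keystroke, no Enter needed
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # restore cooked mode
print(f"read {ch!r}")
```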
See also
Terminal emulator
Serial communications
Chapter Serial communications in Linux and Unix of the Serial Data Communications Programming Wikibook
Command and Data modes
References
Computer terminals
Unix |
8234433 | https://en.wikipedia.org/wiki/AICCU | AICCU | AICCU (Automatic IPv6 Connectivity Client Utility) was a popular cross-platform utility for automatically configuring an IPv6 tunnel. It is free software available under a BSD license. The utility was originally provided for the SixXS Tunnel Broker but it can also be used by a variety of other tunnel brokers.
History and development
AICCU was written and maintained by Jeroen Massar. Various patches from other contributors have been incorporated, and these contributors are acknowledged for their work. AICCU is the successor of the Windows-only and Linux/BSD varieties of the Heartbeat tool that was provided by SixXS solely to use the Heartbeat protocol. When the AYIYA protocol came into existence, it was decided that, to support this new protocol, it would be better to merge the Windows and Unix trees into one program and give it a better appearance. The name of the Heartbeat tool was then changed to reflect that it did more than provide mere support for heartbeats.
Award of excellence
AICCU has won the Award of Excellence in the Implementation Category of the 2004 Edition of the IPv6 Application Contest.
Supported protocols
The following tunneling protocols are currently supported:
6in4 - Standard IPv6 in IPv4 tunnels using protocol 41 in the IPv4 protocol header.
AYIYA - For IPv6 over IPv4 UDP in a secure manner and being able to work through a NAT.
6in4 Heartbeat - used for dynamic 6in4 tunnels
AICCU primarily uses the TIC protocol to automatically retrieve the configuration parameters of the tunnel that the user wants to have configured.
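The 6in4 encapsulation named above is conceptually simple: a complete IPv6 packet is carried as the payload of an IPv4 packet whose protocol field is 41. A hedged sketch of the idea (illustration only: it requires root privileges, and the address and payload bytes are placeholders, not a working tunnel):

```python
import socket

ipv6_packet = bytes.fromhex("60000000000000000000")  # placeholder bytes only

# Protocol 41 is available as socket.IPPROTO_IPV6.
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_IPV6)
s.sendto(ipv6_packet, ("192.0.2.1", 0))   # the kernel adds the IPv4 header
s.close()
```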
Support for other tunnel brokers
AICCU found available tunnel brokers by looking up the TXT DNS records of "_aiccu.sixxs.net". This mechanism allowed a local network to add its own tunnel broker(s) by adding records to the domains configured in its search path. Non-local tunnel brokers could be added by requesting that the SixXS staff add an entry to the global DNS records.
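Such a discovery lookup can be reproduced with the third-party dnspython library; since SixXS has shut down, the query is shown for illustration and may no longer return records:

```python
import dns.resolver

answers = dns.resolver.resolve("_aiccu.sixxs.net", "TXT")
for record in answers:
    for txt in record.strings:
        print(txt.decode())   # each TXT string described one tunnel broker
```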
Supported platforms
The following operating systems/platforms/distributions are supported by AICCU:
AIX
DragonFly BSD
FreeBSD
PC-BSD
NetBSD
OpenBSD
Linux
OS X
Solaris (no AYIYA support)
Windows
Various distributions have an AICCU package included in their distribution.
Usage
The main usage of AICCU was in combination with the SixXS tunnel broker service.
Other ISPs have implemented parts of the protocols that AICCU supports. For instance, the Czech ISP NetBox uses AICCU to configure tunnels automatically for its users by providing a TIC (Tunnel Information and Control protocol) implementation that ignores the username, password and tunnel_id and instead uses the source address from which the TIC connection originates to determine and return the tunnel configuration, which AICCU then uses to configure the tunnel.
See also
References
External links
SixXS Tunnel Broker
IPv6
MacOS Internet software
Unix Internet software
Windows Internet software
Solaris software
BSD software
Tunneling protocols
Free software programmed in C |
67721 | https://en.wikipedia.org/wiki/Revision%20Control%20System | Revision Control System | Revision Control System (RCS) is an early implementation of a version control system (VCS). It is a set of UNIX commands that allow multiple users to develop and maintain program code or documents. With RCS, users can make their own revisions of a document, commit changes, and merge them. RCS was originally developed for programs but is also useful for text documents or configuration files that are frequently revised.
History
Development
RCS was first released in 1982 by Walter F. Tichy at Purdue University. It was an alternative to the then-popular Source Code Control System (SCCS), one of the first version control software tools (developed in 1972 by early Unix developers). RCS is currently maintained by the GNU Project.
An innovation in RCS is the adoption of reverse deltas. Instead of storing every revision in a file as SCCS does with interleaved deltas, RCS stores the newest revision whole, together with a set of edit instructions for going back to each earlier version of the file. Tichy claims that this is faster in most cases because the recent revisions are used more often.
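A minimal sketch of the reverse-delta idea (not RCS's actual file format): keep the newest revision whole and, for each older revision, keep only the edit script that reconstructs it from its successor.

```python
import difflib

rev2 = ["print('hello')\n", "print('world')\n"]   # latest revision, stored whole
rev1 = ["print('hello')\n"]                       # an older revision

# Store only the instructions for going *back* from rev2 to rev1.
reverse_delta = list(difflib.unified_diff(rev2, rev1))

# Checking out the head costs nothing extra; older revisions are rebuilt by
# applying reverse deltas, which favours the more frequently used recent ones.
print("".join(reverse_delta))
```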
Legal and licensing
Initially (through version 3, which was distributed in 4.3BSD), its license prohibited redistribution without written permission from Walter Tichy.
A READ_ME file accompanied some versions of RCS which further restricted distribution, e.g., in 4.3BSD-Reno.
Ca. 1989, the RCS license was altered to something similar to the contemporary BSD licenses, as seen by comments in the source code.
RCS 4.3, released 26 July 1990, was distributed "under license by the Free Software Foundation", under the terms of the GPL.
Behavior
Mode of operation
RCS operates only on single files. It has no way of working with an entire project, so it does not support atomic commits affecting multiple files. Although it provides branching for individual files, the version syntax is cumbersome. Instead of using branches, many teams just use the built-in locking mechanism and work on a single head branch.
Usage
RCS revolves around the usage of "revision groups" or sets of files that have been checked-in via the co (checkout) and ci (check-in) commands. By default, a checked-in file is removed and replaced with a ",v" file (so foo.rb when checked in becomes foo.rb,v) which can then be checked out by anyone with access to the revision group. RCS files (again, files with the extension ",v") reflect the main file with additional metadata on its first lines. Once checked in, RCS stores revisions in a tree structure that can be followed so that a user can revert a file to a previous form if necessary.
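A typical round trip with the real ci and co commands looks as follows; the sketch drives them from Python and assumes RCS is installed and that foo.rb exists:

```python
import subprocess

# Check the file in: creates foo.rb,v; -u keeps an unlocked working copy.
subprocess.run(["ci", "-u", "-t-initial revision", "foo.rb"], check=True)

# Check the file out with a lock so it can be edited and checked in again.
subprocess.run(["co", "-l", "foo.rb"], check=True)
```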
Advantages
Simple structure and easy to work with
Revision saving is not dependent on a central repository
Disadvantages
There is little security, in the sense that the version history can be edited by the users.
Only one user can work on a file at a time.
Related tools and successors
RCS - a first generation tool
RCS is still used in some projects, but its continued usage is nowhere near that of more modern tools like Git.
SCCS (first released in 1973) and DSEE (considered a predecessor of Atria ClearCase) are two other relatively well-known ground-breaking VCS software tools. These tools are generally considered the first generation of VCS as automated software tools.
Second generation
After the first generation VCS, tools like CVS and Subversion, which feature a locally centralized repository, can be considered the second generation of VCS. Specifically, CVS (Concurrent Versions System) was developed on top of the RCS structure, improving its scalability for larger groups. Later, PRCS, a simpler CVS-like tool, also used RCS-like files but improved upon the delta compression by using Xdelta instead.
By 2006 or so, Subversion was considered the most popular and most widely used VCS tool from this generation, and it filled important weaknesses of CVS. Later, SVK was developed with the goal of supporting remote contribution, but the foundation of its design was still quite similar to that of its predecessors.
Third generation
As Internet connectivity improved and geographically distributed software development became more common, tools emerged that did not rely on a shared central project repository. These allow users to maintain independent repositories (or forks) of a project and communicate revisions via changesets.
BitKeeper, Git, Monotone, darcs, Mercurial, and bzr are some examples of third generation version control systems.
Notes
References
Notes
Walter F. Tichy: RCS--A System for Version Control. In: Software: Practice and Experience. July 1985. Volume 15. Number 7. Pages 637–654. References to the paper at CiteSeer alternate link to paper
Further reading
Don Bolinger, Tan Bronson, Applying RCS and SCCS - From Source Control to Project Control. O'Reilly, 1995.
Walter F. Tichy, RCS—A System for Version Control, 1985
Paul Heinlein, RCS HOWTO, 2004
External links
Original RCS at Purdue
1985 software
Free version control software
GNU Project software
Software using the GPL license
Types of tools used in software development |
33743658 | https://en.wikipedia.org/wiki/Click.to | Click.to | click.to is application software that integrates with the operating system clipboard to enhance copy and paste operations. It analyzes data stored on the clipboard and offers the user a choice of appropriate paste-destination programs or web pages from a context menu. click.to is a product of Axonic Informationssysteme GmbH, headquartered in Karlsruhe, Germany.
Extensions
Users and developers may customize functions and add search queries either through an embedded form or by the use of the click.to API.
System requirements
The following operating systems support click.to:
Apple:
Mac OS X v10.6 Snow Leopard
Mac OS X v10.7 Lion (64-bit)
Microsoft:
Windows XP
Windows Vista (32- and 64-bit)
Windows 7 (32- and 64-bit)
Competitors
click.to is similar in functionality to the "accelerators" that can be installed as browser extensions for Internet Explorer. These also provide a context menu with speed-dial functions, but run only within the browser (whereas click.to is fully integrated into the computer's operating system).
References
External links
Freeware |
1473483 | https://en.wikipedia.org/wiki/Public%20data%20network | Public data network | A public data network (PDN) is a network established and operated by a telecommunications administration, or a recognized private operating agency, for the specific purpose of providing data transmission services for the public. The first network was deployed in 1972 in Spain called RETD. Public data network was the common name given to the international collection of X.25 providers whose combined network had large global coverage during the 1980s and into the 1990s, which provided infrastructure for the early Internet.
A public data transmission service is a data transmission service that is established and operated by a telecommunication administration, or a recognized private operating agency, and uses a public data network. A public data transmission service may include circuit-switched, packet-switched, and leased-line data transmission.
Description
In communications, a PDN is a circuit- or packet-switched network that is available to the public and that can transmit data in digital form. A PDN provider is a company that provides access to a PDN and that provides any of X.25, frame relay, or cell relay (ATM) services. Access to a PDN generally includes a guaranteed bandwidth, known as the committed information rate (CIR). Costs for the access depend on the guaranteed rate. PDN providers differ in how they charge for temporary increases in required bandwidth (known as surges). Some use the amount of overrun; others use the surge duration.
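The two surge-billing models can be contrasted with a toy calculation; all rates here are hypothetical:

```python
CIR_MBPS = 2.0          # committed information rate included in the base fee

def charge_by_overrun(peak_mbps, rate_per_mbps=10.0):
    # Bill the amount by which traffic exceeded the committed rate.
    return max(0.0, peak_mbps - CIR_MBPS) * rate_per_mbps

def charge_by_duration(surge_minutes, rate_per_minute=0.5):
    # Bill for how long the traffic stayed above the committed rate.
    return surge_minutes * rate_per_minute

print(charge_by_overrun(5.0))      # 30.0 -- 3 Mbit/s over the CIR
print(charge_by_duration(12.0))    # 6.0  -- 12 minutes of surge
```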
Networks
Examples include the following experimental/public networks which came into operation in the 1970s: RETD/Iberpac in Spain was the first in 1972; RCP/Transpac in France; Telenet, Tymnet and CompuServe in the United States; EPSS/Packet Switch Stream, in the United Kingdom; EIN/Euronet in the EEC; DATAPAC in Canada; and AUSTPAC in Australia.
The International Packet Switched Service was the first commercial and international packet-switched network. It was a collaboration between British and American telecom companies that became operational in 1978.
Public switched data network
A public switched data network (PSDN) is a network for providing data services via a system of multiple wide area networks, similar in concept to the public switched telephone network (PSTN). A PSDN may use a variety of switching technologies, including packet switching, circuit switching, and message switching. A packet-switched PSDN may also be called a packet-switched data network.
Originally the term PSDN referred only to Packet Switch Stream (PSS), an X.25-based packet-switched network, mostly used to provide leased-line connections between local area networks and the Internet using permanent virtual circuits (PVCs). Today, the term may refer not only to Frame Relay and Asynchronous Transfer Mode (ATM), both providing PVCs, but also to Internet Protocol (IP), GPRS, and other packet-switching techniques.
Whilst there are several technologies that are superficially similar to the PSDN, such as Integrated Services Digital Network (ISDN) and the digital subscriber line (DSL) technologies, they are not examples of it. ISDN utilizes the PSTN circuit-switched network, and DSL uses point-to-point circuit switching communications overlaid on the PSTN local loop (copper wires), usually utilized for access to a packet-switched broadband IP network.
See also
History of the Internet
Public data transmission service
National research and education network
X.25 § History
References
Further reading
Telecommunications
Data network
X.25 |
29107 | https://en.wikipedia.org/wiki/Semantics | Semantics | Semantics (from sēmantikós, "significant") is the study of reference, meaning, or truth. The term can be used to refer to subfields of several distinct disciplines, including philosophy, linguistics and computer science.
Linguistics
In linguistics, semantics is the subfield that studies meaning. Semantics can address meaning at the levels of words, phrases, sentences, or larger units of discourse. Two of the fundamental issues in the field of semantics are that of compositional semantics (which pertains to how smaller parts, like words, combine and interact to form the meaning of larger expressions such as sentences) and lexical semantics (the nature of the meaning of words). Other prominent issues are those of context and its role on interpretation, opaque contexts, ambiguity, vagueness, entailment and presuppositions.
Several disciplines and approaches have contributed to the often contentious field of semantics. One of the crucial questions which unites different approaches to linguistic semantics is that of the relationship between form and meaning, and some major contributions to the study of semantics have derived from studies in the 1980–1990s in related subjects of the syntax–semantics interface and pragmatics.
The semantic level of language interacts with other modules or levels (like syntax) in which language is traditionally divided. In linguistics, it is typical to talk in terms of "interfaces" regarding such interactions between modules or levels. For semantics, the most crucial interfaces are considered those with syntax (the syntax–semantics interface), pragmatics and phonology (regarding prosody and intonation).
Disciplines and paradigms in linguistic semantics
Formal semantics
Formal semantics seeks to identify domain-specific mental operations which speakers perform when they compute a sentence's meaning on the basis of its syntactic structure. Theories of formal semantics are typically floated on top of theories of syntax such as generative syntax or combinatory categorial grammar and provide a model theory based on mathematical tools such as typed lambda calculi. The field's central ideas are rooted in early twentieth century philosophical logic, as well as later ideas about linguistic syntax. It emerged as its own subfield in the 1970s after the pioneering work of Richard Montague and Barbara Partee and continues to be an active area of research.
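A standard textbook illustration of this approach (not tied to any particular cited analysis) assigns the determiner "every" a typed lambda term, so that a sentence's meaning is computed by function application over its syntactic structure:

```latex
\[
[\![\text{every}]\!] = \lambda P\,\lambda Q\,\forall x\,(P(x) \rightarrow Q(x))
\]
\[
[\![\text{Every linguist sleeps}]\!]
  = [\![\text{every}]\!](\mathit{linguist})(\mathit{sleep})
  = \forall x\,(\mathit{linguist}(x) \rightarrow \mathit{sleep}(x))
\]
```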
Conceptual semantics
This theory is an effort to explain properties of argument structure. The assumption behind this theory is that syntactic properties of phrases reflect the meanings of the words that head them. With this theory, linguists can better deal with the fact that subtle differences in word meaning correlate with other differences in the syntactic structure that the word appears in. This is done by looking at the internal structure of words; the small parts that make up this internal structure are termed semantic primitives.
Cognitive semantics
Cognitive semantics approaches meaning from the perspective of cognitive linguistics. In this framework, language is explained via general human cognitive abilities rather than a domain-specific language module. The techniques native to cognitive semantics are typically used in lexical studies such as those put forth by Leonard Talmy, George Lakoff, Dirk Geeraerts, and Bruce Wayne Hawkins. Some cognitive semantic frameworks, such as that developed by Talmy, take into account syntactic structures as well.
Lexical semantics
Lexical semantics is a linguistic theory that investigates word meaning. This theory understands the meaning of a word to be fully reflected by its context: the meaning of a word is constituted by its contextual relations. Therefore, a distinction is made between degrees of participation as well as modes of participation. To accomplish this distinction, any part of a sentence that bears a meaning and combines with the meanings of other constituents is labeled a semantic constituent. Semantic constituents that cannot be broken down into more elementary constituents are labeled minimal semantic constituents.
Cross-cultural semantics
Various fields or disciplines have long been contributing to cross-cultural semantics. Are words like love, truth, and hate universals? Is even the word sense – so central to semantics – a universal, or a concept entrenched in a long-standing but culture-specific tradition? These are the kinds of crucial questions that are discussed in cross-cultural semantics. Translation theory, ethnolinguistics, linguistic anthropology and cultural linguistics specialize in the field of comparing, contrasting, and translating words, terms and meanings from one language to another (see Herder, W. von Humboldt, Boas, Sapir, and Whorf). But philosophy, sociology, and anthropology have long-established traditions in contrasting the different nuances of the terms and concepts we use. And online encyclopaedias such as the Stanford Encyclopedia of Philosophy, and increasingly Wikipedia itself, have greatly facilitated the possibilities of comparing the background and usages of key cultural terms. In recent years the question of whether key terms are translatable or untranslatable has increasingly come to the fore of global discussions, especially since the publication of Barbara Cassin's Dictionary of Untranslatables: A Philosophical Lexicon, in 2014.
Computational semantics
Computational semantics is focused on the processing of linguistic meaning. In order to do this, concrete algorithms and architectures are described. Within this framework the algorithms and architectures are also analyzed in terms of decidability, time/space complexity, data structures that they require and communication protocols.
Philosophy
Many of the formal approaches to semantics in mathematical logic and computer science originated in early twentieth century philosophy of language and philosophical logic. Initially, the most influential semantic theory stemmed from Gottlob Frege and Bertrand Russell. Frege and Russell are seen as the originators of a tradition in analytic philosophy to explain meaning compositionally via syntax and mathematical functionality. Ludwig Wittgenstein, a former student of Russell, is also seen as one of the seminal figures in the analytic tradition. All three of these early philosophers of language were concerned with how sentences expressed information in the form of propositions. They also dealt with the truth values or truth conditions a given sentence has in virtue of the proposition it expresses.
In present day philosophy, the term "semantics" is often used to refer to linguistic formal semantics, which bridges both linguistics and philosophy. There is also an active tradition of metasemantics, which studies the foundations of natural language semantics.
Computer science
In computer science, the term semantics refers to the meaning of language constructs, as opposed to their form (syntax). According to Euzenat, semantics "provides the rules for interpreting the syntax which do not provide the meaning directly but constrains the possible interpretations of what is declared".
Programming languages
The semantics of programming languages and other languages is an important issue and area of study in computer science. Like the syntax of a language, its semantics can be defined exactly.
For instance, the following statements use different syntaxes, but cause the same instructions to be executed, namely, perform an arithmetical addition of 'y' to 'x' and store the result in a variable called 'x':
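```
x += y;           (C, C++, Java)
x := x + y;       (Pascal, Ada)
LET X = X + Y     (early BASIC dialects)
(setq x (+ x y))  (Lisp)
```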
Various ways have been developed to describe the semantics of programming languages formally, building on mathematical logic:
Operational semantics: The meaning of a construct is specified by the computation it induces when it is executed on a machine. In particular, it is of interest how the effect of a computation is produced (a minimal sketch of this approach follows the list).
Denotational semantics: Meanings are modelled by mathematical objects that represent the effect of executing the constructs. Thus only the effect is of interest, not how it is obtained.
Axiomatic semantics: Specific properties of the effect of executing the constructs are expressed as assertions. Thus there may be aspects of the executions that are ignored.
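The operational approach can be made concrete with a small evaluator whose recursive steps are the computation that gives each construct its meaning. The expression representation below is invented for illustration:

```python
def evaluate(expr, env):
    kind = expr[0]
    if kind == "num":                     # ('num', 3) -> the number itself
        return expr[1]
    if kind == "var":                     # ('var', 'x') -> look up the store
        return env[expr[1]]
    if kind == "add":                     # ('add', e1, e2) -> evaluate then add
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    raise ValueError(f"unknown construct: {kind}")

# 'x + y' evaluated in an environment where x is 2 and y is 3.
print(evaluate(("add", ("var", "x"), ("var", "y")), {"x": 2, "y": 3}))  # 5
```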
Semantic models
The Semantic Web refers to the extension of the World Wide Web via embedding added semantic metadata, using semantic data modeling techniques such as Resource Description Framework (RDF) and Web Ontology Language (OWL). On the Semantic Web, terms such as semantic network and semantic data model are used to describe particular types of data model characterized by the use of directed graphs in which the vertices denote concepts or entities in the world and their properties, and the arcs denote relationships between them. These can formally be described as description logic concepts and roles, which correspond to OWL classes and properties.
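Such a directed graph can be written down with the rdflib library for RDF; the URIs below are placeholders:

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Oxford, EX.locatedIn, EX.England))         # an arc between two entities
g.add((EX.Oxford, EX.population, Literal(152000)))   # a property of a vertex

for subject, predicate, obj in g:
    print(subject, predicate, obj)
```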
Psychology
Semantic memory
In psychology, semantic memory is memory for meaning – in other words, the aspect of memory that preserves only the gist, the general significance, of remembered experience – while episodic memory is memory for the ephemeral details – the individual features, or the unique particulars of experience. The term "episodic memory" was introduced by Tulving and Schacter in the context of "declarative memory", which involved simple association of factual or objective information concerning its object. Word meanings are measured by the company they keep, i.e. the relationships among words themselves in a semantic network. Memories may be transferred intergenerationally or isolated in one generation due to a cultural disruption. Different generations may have different experiences at similar points in their own time-lines. This may then create a vertically heterogeneous semantic net for certain words in an otherwise homogeneous culture. In a network created by people analyzing their understanding of the word (such as Wordnet) the links and decomposition structures of the network are few in number and kind, and include part of, kind of, and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words: latent semantic indexing and support vector machines, as well as natural language processing, artificial neural networks and predicate calculus techniques.
Ideasthesia
Ideasthesia is a psychological phenomenon in which activation of concepts evokes sensory experiences. For example, in synesthesia, activation of a concept of a letter (e.g., that of the letter A) evokes sensory-like experiences (e.g., of red color).
Psychosemantics
In the 1960s, psychosemantic studies became popular after Charles E. Osgood's massive cross-cultural studies using his semantic differential (SD) method that used thousands of nouns and adjective bipolar scales. A specific form of the SD, Projective Semantics method uses only most common and neutral nouns that correspond to the 7 groups (factors) of adjective-scales most consistently found in cross-cultural studies (Evaluation, Potency, Activity as found by Osgood, and Reality, Organization, Complexity, Limitation as found in other studies). In this method, seven groups of bipolar adjective scales corresponded to seven types of nouns so the method was thought to have the object-scale symmetry (OSS) between the scales and nouns for evaluation using these scales. For example, the nouns corresponding to the listed 7 factors would be: Beauty, Power, Motion, Life, Work, Chaos, Law. Beauty was expected to be assessed unequivocally as "very good" on adjectives of Evaluation-related scales, Life as "very real" on Reality-related scales, etc. However, deviations in this symmetric and very basic matrix might show underlying biases of two types: scales-related bias and objects-related bias. This OSS design meant to increase the sensitivity of the SD method to any semantic biases in responses of people within the same culture and educational background.
Prototype theory
Another set of concepts related to fuzziness in semantics is based on prototypes. The work of Eleanor Rosch in the 1970s led to a view that natural categories are not characterizable in terms of necessary and sufficient conditions, but are graded (fuzzy at their boundaries) and inconsistent as to the status of their constituent members. One may compare this with Jung's archetype, though the concept of the archetype remains a static one. Some post-structuralists are against the fixed or static meaning of words. Derrida, following Nietzsche, talked about slippages in fixed meanings.
Systems of categories are not objectively out there in the world but are rooted in people's experience. These categories evolve as learned concepts of the world – meaning is not an objective truth, but a subjective construct, learned from experience, and language arises out of the "grounding of our conceptual systems in shared embodiment and bodily experience".
A corollary of this is that the conceptual categories (i.e. the lexicon) will not be identical for different cultures, or indeed, for every individual in the same culture. This leads to another debate (see the Sapir–Whorf hypothesis or Eskimo words for snow).
See also
Semantic technology
Notes
References
External links
Semanticsarchive.net
Teaching page for GCE Advanced Level semantics
"Semantics: an interview with Jerry Fodor"
Concepts in logic
Grammar
Meaning (philosophy of language)
Social philosophy |
11859464 | https://en.wikipedia.org/wiki/Royal%20Mail%20Steam%20Packet%20Company | Royal Mail Steam Packet Company | The Royal Mail Steam Packet Company was a British shipping company founded in London in 1839 by a Scot, James MacQueen. The line's motto was Per Mare Ubique (everywhere by sea). After good and bad times it became the largest shipping group in the world in 1927 when it took over the White Star Line.
The company was liquidated and its assets taken over by the newly formed Royal Mail Lines in 1932 after financial trouble and scandal; over the years RML declined to no more than the name of a service run by former rival Hamburg Süd.
History as Royal Mail Steam Packet Company
The RMSPC, founded in 1839 by James MacQueen, ran tours and mail to various destinations in the Caribbean and South America, and by 1927, was the largest shipping group in the world. MacQueen’s imperial visions for the RMSPC were clear; he hoped that new steamship communications between Britain and the Caribbean would mitigate post-Emancipation instabilities, in particular by promoting commerce. From the outset the company aimed to be the vanguard of British maritime supremacy and technology, as F. Harcourt suggests, the RMSPC presented itself "as existing not merely for the good of its shareholders but for the good of the nation". The high hopes for the business were boosted by the government’s mail contract subsidy, worth £240,000 a year. The RMSPC evolved vastly from 1839 to the beginning of the 20th century. It introduced new technologies, such as John Elder’s marine compound steam engine in 1870, and worked to redefine seafaring by focusing on comfort and passenger requirements.
In 1902 Owen Philipps (1863–1937) became the Chairman of the RMSPC. Under Philipps the company grew by acquiring controlling interests in multiple companies. Philipps was knighted in 1909 and ennobled as Baron Kylsant in 1923. However, poor economic circumstances and controversy surrounding a deception by Philipps meant that the RMSPC collapsed in 1930, after which various constituent companies were sold off. In 1932, its successor, the Royal Mail Lines (RML), was formed, continuing the memory and operations of the RMSPC.
Queen Victoria granted the initial Royal Charter of Incorporation of "The Royal Mail Steam Packet Company" on 26 September 1839. In 1840 the Admiralty and the Royal Mail Steam Packet Company made a contract in which the latter agreed to provide a fleet of not fewer than 14 steam ships for the purpose of carrying all Her Majesty's mails, to sail twice every month to Barbados in the West Indies from Southampton or Falmouth. Fourteen new steam ships were built for the purpose: Thames, Medway, , and Isis (built at Northfleet); Severn and Avon (built at Bristol); Tweed, Clyde, Teviot, Dee, and Solway (built at Greenock); Tay (built at Dumbarton); Forth (built at Leith); and Medina, (built at Cowes). In reference to their destination, these ships were known as the West Indies Mail Steamers.
The West Indian Mail Service was established by the sailing of the first Royal Mail Steam Packet, PS Thames from Falmouth on 1 January 1841. A Supplemental Royal Charter was granted on 30 August 1851 extending the sphere of the Company's operations. In 1864, the mail service to the British Honduras was established. A further Supplemental Royal Charter was granted extending the sphere of the Company's operations on 7 March 1882.
In the decade before the First World War the RMSP modernised its fleet, introducing a series of larger liners ranging from to on its Southampton – Buenos Aires route. Each had a name beginning with the letter "A", so collectively they were called the "A-liners" or the "A-series". The first was RMS Aragon in 1905, followed by sister ships , and in 1906, in 1908, in 1912, Andes and in 1913 and in 1915. Earlier members of the series, from Aragon to Asturias, had twin screws, each driven by a four-cylinder quadruple-expansion steam engine. The final four members of the series, from Arlanza to Almanzora, had triple screws, with the middle one driven by a low pressure Parsons steam turbine.
After the First World War RMSP faced not only existing foreign competition but a new UK challenger. Lord Vestey's Blue Star Line had joined the South American route and won a large share of the frozen meat trade. Then in 1926–27 Blue Star introduced its new "luxury five" ships Almeda, Andalucia, Arandora, Avelona and Avila to both increase refrigerated cargo capacity and enter the passenger trade. At the same time RMSP introduced a pair of new liners, in 1926 and in 1927, which at that stage were the largest motor ships in the World. Although these were the biggest and most luxurious UK ships on the route, RMSP Chairman Lord Kylsant called Blue Star's quintet "very keen competition".
Reconstitution as Royal Mail Lines
The company ran into financial trouble, and the UK Government investigated its affairs in 1930, resulting in the Royal Mail Case. In 1931 Lord Kylsant was jailed for 12 months for misrepresenting the state of the company to shareholders. So much of Britain's shipping industry was involved in RMSPC that arrangements were made to guarantee the continuation of ship operations after it was liquidated. Royal Mail Lines Ltd (RML) was created in 1932 and took over the ships of RMSPC and other companies of the former group. The new company was chaired by Lord Essendon.
The new company's operations were concentrated on the east coast of South America, the West Indies and Caribbean, and the Pacific coast of North America; the Southampton – Lisbon – Brazil – Uruguay – Argentina route was operated from 1850 to 1980. RML was also a leading cruise ship operator.
RML's largest ship was a turbine steamship designed as an ocean liner but immediately fitted out as a troopship when launched in 1939. She finally entered civilian liner service in 1948, was converted to full-time cruising in 1960 and was scrapped in 1971.
RMSP and RML lost a number of ships in their long history. One of the last was a turbine steamship launched in 1948, which grounded and sank off Brazil on her maiden voyage in 1949.
In 1965 RML was bought by Furness, Withy & Co., and rapidly lost its identity. In the 1970s parts of the Furness Withy Group, including RML, were sold on to Hong Kong shipowner CY Tung, and later sold on to former River Plate rival Hamburg Süd; by the 1990s Royal Mail Lines was no more than the name of a Hamburg-Süd refrigerated cargo service from South America to Europe.
Fleet
List of RMSP Company ships
For conciseness smaller ships such as schooners and lighters are omitted.
List of Royal Mail Lines ships
This list covers ships acquired by RML in addition to those passed directly from RMSP.
See also
See Royal Mail Case for more details on RML's financial situation.
References
Bibliography
External links
RMSP Passenger Lists GG Archives
Royal Mail Lines History and Ephemera GG Archives
Royal Mail Steam Packet Company - RMSP History and Ephemera GG Archives
1839 establishments in England
1932 establishments in England
British companies established in 1839
Transport companies established in 1839
British companies established in 1932
Defunct cruise lines
Defunct shipping companies of the United Kingdom
Transport companies established in 1932
Operation Mincemeat

Operation Mincemeat was a successful British deception operation of the Second World War to disguise the 1943 Allied invasion of Sicily. Two members of British intelligence obtained the body of Glyndwr Michael, a tramp who died from eating rat poison, dressed him as an officer of the Royal Marines and placed personal items on him identifying him as the fictitious Captain (Acting Major) William Martin. Correspondence between two British generals which suggested that the Allies planned to invade Greece and Sardinia, with Sicily as merely the target of a feint, was also placed on the body.
Part of the wider Operation Barclay, Mincemeat was based on the 1939 Trout memo, written by Rear Admiral John Godfrey, the Director of the Naval Intelligence Division, and his personal assistant, Lieutenant Commander Ian Fleming. With the approval of the British Prime Minister, Winston Churchill, and the military commander in the Mediterranean, General Dwight D. Eisenhower, the plan began by transporting the body to the southern coast of Spain by submarine and releasing it close to shore, where it was picked up the following morning by a Spanish fisherman. The nominally neutral Spanish government shared copies of the documents with the Abwehr, the German military intelligence organisation, before returning the originals to the British. Forensic examination showed they had been read, and Ultra decrypts of German messages showed that the Germans fell for the ruse. Reinforcements were shifted to Greece and Sardinia before and during the invasion of Sicily; Sicily received none.
The full effect of Operation Mincemeat is not known, but Sicily was liberated more quickly than anticipated and losses were lower than predicted. The events were depicted in Operation Heartbreak, a 1950 novel by the former cabinet minister Duff Cooper, before one of the intelligence officers who planned and carried out Mincemeat, Ewen Montagu, wrote a history in 1953. Montagu's work formed the basis for the 1956 British film The Man Who Never Was.
Background
Inspiration for Mincemeat
On 29 September 1939, soon after the start of the Second World War, Rear Admiral John Godfrey, the Director of Naval Intelligence, circulated the Trout memo, a paper that compared the deception of an enemy in wartime to fly fishing. The historian Ben Macintyre observes that although the paper was published under Godfrey's name, it "bore all the hallmarks of ... Lieutenant Commander Ian Fleming", Godfrey's personal assistant. The memo contained a number of schemes to be considered for use against the Axis powers to lure U-boats and German surface ships towards minefields. Number 28 on the list was titled: "A Suggestion (not a very nice one)"; it was an idea to plant misleading papers on a corpse that would be found by the enemy.
The deliberate planting of fake documents to be found by the enemy was not new; known as the Haversack Ruse, it had been practised by the British and others in the First and Second World Wars. In August 1942, before the Battle of Alam el Halfa, a corpse was placed in a blown-up scout car, in a minefield facing the German 90th Light Division. On the corpse was a map purportedly showing the locations of British minefields; the Germans used the map, and their tanks were routed to areas of soft sand where they bogged down.
In September 1942 an aircraft flying from Britain to Gibraltar crashed off Cádiz. All aboard were killed, including Paymaster-Lieutenant James Hadden Turner – a courier carrying top secret documents – and a French agent. Turner's documents included a letter from General Mark Clark, the American Deputy Commander of the Allied Expeditionary Force, to General Noel Mason-MacFarlane, British Governor and Commander in Chief of Gibraltar, informing him that General Dwight D. Eisenhower, the Supreme Commander, would arrive in Gibraltar on the eve of Operation Torch's "target date" of 4 November. Turner's body washed up on the beach near Tarifa and was recovered by the Spanish authorities. When the body was returned to the British, the letter was still on it, and technicians determined that the letter had not been opened. Other Allied intelligence sources established that the notebook carried by the French agent had been copied by the Germans, but they dismissed it as being disinformation. To British planners it showed that some material that was obtained by the Spanish was being passed to the Germans.
British Intelligence and the inspiration for the plan
A month after the Turner crash, the British intelligence officer Charles Cholmondeley outlined his own variation of the Trout memo plan, codenamed Trojan Horse after the Achaean deception from the Trojan War.
Cholmondeley was a flight lieutenant in the Royal Air Force (RAF) who had been seconded to MI5, Britain's domestic counter-intelligence and security service. He had been appointed as the secretary of the Twenty Committee, a small inter-service, inter-departmental intelligence team in charge of double agents. In November 1942 the Twenty Committee turned down Cholmondeley's plan as being unworkable, but thought there might be some potential in the idea. As there was a naval connection to the plan, John Masterman, the chairman of the committee, assigned Ewen Montagu, the naval representative, to work with Cholmondeley to develop the plan further. Montagu – a peacetime lawyer and King's Counsel who had volunteered at the outbreak of the war – worked under Godfrey at the Naval Intelligence Division, where he ran NID 17(M), the sub-branch which handled counter-espionage work. Godfrey had also appointed Montagu to oversee all naval deception involving double agents. As part of his duties, Montagu had been briefed on the need for deception operations to aid the Allied war aims in a forthcoming invasion operation in the Mediterranean.
Military situation
In late 1942, with the Allied success in the North African Campaign, military planners turned to the next target. British planners considered that an invasion of France from Britain could not take place until 1944 and the Prime Minister, Winston Churchill, wanted to use the Allied forces from North Africa to attack Europe's "soft underbelly". There were two possible targets for the Allies to attack. The first option was Sicily; control of the island would open the Mediterranean Sea to Allied shipping and allow the invasion of continental Europe through Italy. The second option was to go into Greece and the Balkans, to trap the German forces between the British and American invaders and the Soviets. At the Casablanca Conference in January 1943, Allied planners agreed on the selection of Sicily – codenamed Operation Husky – and decided to undertake the invasion no later than July. There was concern among the Allied planners that Sicily was an obvious choice – Churchill is reputed to have said "Everyone but a bloody fool would know that it's Sicily" – and that the build-up of resources for the invasion would be detected.
Adolf Hitler was concerned about a Balkan invasion, as the area had been the source of raw materials for the German war industry, including copper, bauxite, chrome and oil. The Allies knew of Hitler's fears, and they launched Operation Barclay, a deception operation to play upon his concerns and to mislead the Germans into thinking the Balkans were the objective, diverting resources from Sicily. The deception reinforced German strategic thinking about the likely British target. To suggest the eastern Mediterranean was the target, the Allies set up a headquarters in Cairo, Egypt, for a fictional formation, the Twelfth Army, consisting of twelve divisions. Military manoeuvres were conducted in Syria, with numbers inflated by dummy tanks and armoured vehicles to deceive observers. Greek interpreters were recruited and the Allies stockpiled Greek maps and currency. False communications about troop movements were generated from the Twelfth Army headquarters, while the Allied command post in Tunis – which was to be the headquarters of the Sicily invasion – reduced radio traffic by using land-lines wherever possible.
Development
Examining the practicalities; locating a corpse
Montagu and Cholmondeley were assisted by an MI6 representative, Major Frank Foley, as they examined the practicalities of the plan. Montagu approached the pathologist Sir Bernard Spilsbury to determine what kind of body they needed and what factors they would need to take into account to fool a Spanish pathologist. Spilsbury informed him that those who died in an air crash often did so from shock and not drowning; the lungs would not necessarily be filled with water. He added that "Spaniards, as Roman Catholics, were averse to post-mortems and did not hold them unless the cause of death was of great importance". Spilsbury advised that a person could have suffered one of many different causes of death, which could be misconstrued in an autopsy.
This meant that not only would they have a better degree of success than they previously thought, but that there would be a larger number of corpses potentially available for selection when the time came. When Montagu discussed the possibility of obtaining a corpse with Bentley Purchase, the coroner for the Northern District of London, he was told there would be practical and legal difficulties: "I should think bodies are the only commodities not in short supply at the moment [but] even with bodies all over the place, each one has to be accounted for". Purchase promised to look out for a body that was suitable, with no relatives who would claim the corpse for burial.
On 28 January 1943 Purchase contacted Montagu with the news he had located a suitable body, probably that of Glyndwr Michael, a tramp who died from eating rat poison that contained phosphorus. Purchase informed Montagu and Cholmondeley that the small amount of poison in the system would not be identified in a body that was supposed to have been floating in the sea for several days. When Montagu commented that the under-nourished corpse did not look like a fit field officer, Purchase informed him that "he does not have to look like an officer – only a staff officer", more used to office work. Purchase agreed to keep the body in the mortuary refrigerator at a temperature just above freezing – any colder and the flesh would freeze, which would be obvious after the body defrosted. He warned Montagu and Cholmondeley that the body had to be used within three months, after which it would have decomposed past the point of usefulness.
Identity of the corpse
Montagu refused to identify the individual and only described him as "a bit of a ne'er-do-well, and that the only worthwhile thing that he ever did he did after his death". In 1996 Roger Morgan, an amateur historian from London, uncovered evidence in the Public Record Office that the identity of the corpse was Glyndwr Michael. An alternative theory of the corpse's identity was suggested in the history The Secrets of HMS Dasher (2004): in March 1943 there was an explosion aboard HMS Dasher, which sank, killing 379 men, and one of these corpses was purportedly used. The military historian Denis Smyth dismisses the suggestion and observes that the official records of the operation state that Michael was the body.
Developing the plan; the corpse's new identity
Montagu selected the code name Mincemeat from a list of centrally held available possibilities. On 4 February 1943 Montagu and Cholmondeley filed their plan for the operation with the Twenty Committee; it was a re-working of Cholmondeley's Trojan Horse plan. The Mincemeat plan was to place documents on the corpse, and then float it off the coast of Spain, whose nominally neutral government was known to co-operate with the Abwehr, the German military intelligence organisation. The plan was passed by the committee, who passed it up the chain of command to the senior Allied strategists; Montagu and Cholmondeley were ordered to continue with their preparations for the operation.
Montagu and Cholmondeley began to create a "legend" – a fictitious background and character – for the body. The name and rank chosen was Captain (Acting Major) William Martin, of the Royal Marines assigned to Combined Operations Headquarters. The name "Martin" was selected because there were several men with that name of about that rank in the Royal Marines. As a Royal Marine, Major Martin came under Admiralty authority, and it would be easy to ensure that all official inquiries and messages about his death would be routed to the Naval Intelligence Division. Additionally, Royal Marines would wear battledress, which was easily obtainable and came in standard sizes. The rank of acting major made him senior enough to be entrusted with sensitive documents, but not so prominent that anyone would expect to know him.
To reinforce the impression of Martin being a real person, Montagu and Cholmondeley provided corroborative details to be carried on his person – known in espionage circles as wallet or pocket litter. These included a photograph from an invented fiancée named Pam; the image was of an MI5 clerk, Jean Leslie. Two love letters from Pam were included in the pocket litter, as was a receipt for a diamond engagement ring costing £53 10s 6d from a Bond Street jewellery shop. Additional personal correspondence was included, consisting of a letter from the fictitious Martin's father – described by Macintyre as "pompous and pedantic as only an Edwardian father could be" – which included a note from the family solicitor, and a message from Lloyds Bank, demanding payment of an overdraft of £79 19s 2d. To ensure that the letters would remain legible after immersion in seawater, Montagu asked MI5 scientists to conduct tests on different inks to see which would last longest in the water, and they provided him with a suitable list of popular and available ink brands.
Other items of pocket litter placed on Martin included a book of stamps, a silver cross and a St. Christopher's medallion, cigarettes, matches, a pencil stub, keys and a receipt from Gieves for a new shirt. To provide a date that Martin had been in London, ticket stubs from a London theatre and a bill for four nights' lodging at the Naval and Military Club were added. Along with the other items placed on him, an itinerary of his activity in London could be constructed from 18 to 24 April.
Attempts were made to photograph the corpse for the naval identity card Martin would have to carry, but the results were unsatisfactory, and it was obvious that the images were of a cadaver. Montagu and Cholmondeley conducted a search for people who resembled the corpse, finding Captain Ronnie Reed of MI5; Reed agreed to be photographed for the identity card, wearing Royal Marine uniform. As the three cards and passes needed looked too new for a long-serving officer, they were issued as recent replacements for lost originals. Montagu spent the next few weeks rubbing all three cards on his trousers to provide a used sheen to them. To provide a used look to the uniform, it was worn by Cholmondeley, who was about the same build. The only non-issue part to the uniform was the underwear, which was in short supply in war-rationed Britain, so a pair of good-quality woollen underwear, owned by the late Herbert Fisher, the Warden of New College, Oxford, was used.
Deception documents
Montagu outlined three criteria for the document that contained the details of the falsified plans to land in the Balkans. He said that the target should be casually but clearly identified, that it should name Sicily and another location as cover, and that it should be in an unofficial correspondence that would not normally be sent by diplomatic courier, or encoded signal.
The main document was a personal letter from Lieutenant General Sir Archibald Nye, the vice chief of the Imperial General Staff – who had a deep knowledge of ongoing military operations – to General Sir Harold Alexander, commander of the Anglo-American 18th Army Group in Algeria and Tunisia under General Eisenhower. After several attempts at drafting the document did not produce something that was considered natural, it was suggested that Nye should draw up the letter himself to cover the required points. The letter covered several purportedly sensitive subjects, such as the (unwanted) award of Purple Heart medals by US forces to British servicemen serving with them and the appointment of a new commander of the Brigade of Guards. Montagu thought the result was "quite brilliant"; the key passage of the letter indicated that the Allies planned to land in Greece, with Sicily as merely the target of a feint.
There was also a letter of introduction for Martin from his putative commanding officer, Vice-Admiral Lord Louis Mountbatten, the chief of Combined Operations, to Admiral of the Fleet Sir Andrew Cunningham, the commander-in-chief Mediterranean Fleet and Allied naval commander in the Mediterranean. Martin was referred to in the letter as an amphibious warfare expert on loan until "the assault is over". The document included a clumsy joke about sardines, which Montagu inserted in the hope that the Germans would see it as a reference to a planned invasion of Sardinia. A single black eyelash was placed within the letter to check if the Germans or Spanish had opened it.
Montagu considered that there would be a possible "Roman Catholic prejudice against tampering with corpses", which could mean that documents stored in the corpse's pockets would be missed, so they were added to an official briefcase that would not be overlooked. To justify carrying documents in a briefcase, Major Martin was given two proof copies of the official pamphlet on combined operations written by the author Hilary Saunders – then on Mountbatten's staff – and a letter from Mountbatten to Eisenhower, asking him to write a brief foreword for the pamphlet's US edition. The planning team first thought of having the handle clutched in the corpse's hand, held in place by rigor mortis, but the rigor would probably wear off and the briefcase would drift away. They therefore equipped Martin with a leather-covered chain, such as was used by bank and jewellery couriers to secure their cases against snatching, which ran unobtrusively down a sleeve to the case. To Montagu it seemed unlikely that the major would keep the bag secured to his wrist during the long flight from Britain, so the chain was looped around the belt of his trench coat.
Technical considerations; strategic approval
Montagu and Cholmondeley gave consideration to the location of the corpse's delivery. It had long been assumed by the pair that the western coast of Spain would be the ideal location. Early in the planning they investigated the possibility of the Portuguese and French coasts, but rejected those in favour of Huelva on the coast of southern Spain, after advice was taken from the Hydrographer of the Navy regarding the tides and currents best suited to ensure the body landed where it was wanted. Montagu later outlined that the choice of Huelva was also made because "there was a very active German agent ... who had excellent contacts with certain Spaniards, both officials and others". The agent – Adolf Clauss, a member of the Abwehr – was the son of the German consul, and operated under the cover of an agriculture technician; he was an efficient and effective operative. Huelva was also chosen because the British vice-consul in the city, Francis Haselden, was "a reliable and helpful man" who could be relied upon, according to Montagu.
The body was supposed to be the victim of an aeroplane crash, and it was decided that trying to simulate the accident at sea using flares and other devices would be too risky and open to discovery. After seaplanes and surface ships were dismissed as being problematic, a submarine was chosen as the method of delivering the corpse to the region. To transport the body by submarine, it needed to be contained within the body of the boat, as any externally mounted container would have to be built with a skin so thick it would alter the level of the waterline. The canister needed to remain airtight and keep the corpse as fresh as possible through its journey. Spilsbury provided the medical requirements and Cholmondeley contacted Charles Fraser-Smith of the Ministry of Supply to produce the container, which was labelled "Handle with care: optical instruments".
On 13 April 1943 the committee of the Chiefs of Staff met and agreed that they thought the plan should proceed. The committee informed Colonel John Bevan – the head of London Controlling Section, which controlled the planning and co-ordination of deception operations – that he needed to obtain final approval from Churchill. Two days later Bevan met the prime minister – who was in bed, wearing a dressing gown and smoking a cigar – in his rooms at the Cabinet War offices and explained the plan. He warned Churchill that there were several aspects that could go wrong, including that the Spaniards might pass the corpse back to the British, with the papers unread. Churchill replied that "in that case we shall have to get the body back and give it another swim". Churchill gave his approval to the operation, but delegated the final confirmation to Eisenhower, the overall military commander in the Mediterranean, whose plan to invade Sicily would be affected. Bevan sent an encrypted telegram to Eisenhower's headquarters in Algeria requesting final confirmation, which was received on 17 April.
Execution
In the early hours of 17 April 1943 the corpse of Michael was dressed as Martin, although there was one last-minute hitch: the feet had frozen. Purchase, Montagu and Cholmondeley could not put the boots on, so an electric fire was located and the feet defrosted enough to put the boots on properly. The pocket litter was placed on the body, and the briefcase attached. The body was placed in the canister, which was filled with dry ice and sealed up. When the dry ice sublimated, it filled the canister with carbon dioxide and drove out any oxygen, thus preserving the body without refrigeration. The canister was placed in the 1937 Fordson van of an MI5 driver, St. John "Jock" Horsfall, who had been a racing champion before the war. Cholmondeley and Montagu travelled in the back of the van, which drove through the night to Greenock, west Scotland, where the canister was taken on board the submarine HMS Seraph. Seraph's commander, Lt. Bill Jewell, and crew had previous special operations experience. Jewell told his men that the canister contained a top secret meteorological device to be deployed near Spain.
On 19 April Seraph set sail and arrived just off the coast of Huelva on 29 April, having been bombed twice en route. After spending the day reconnoitring the coastline, at 4:15 am on 30 April Seraph surfaced. Jewell had the canister brought up on deck, then sent all his crew below except the officers. They opened the container and lowered the body into the water. Jewell read Psalm 39 and ordered the engines to full astern; the wash from the screws pushed the corpse toward the shore. The canister was reloaded and the submarine travelled further out to sea, where it surfaced and the empty container was pushed into the water. As it floated, it was riddled with machine gun fire so that it would sink. Because of the air trapped in the insulation, this effort failed, and the canister was destroyed with plastic explosives. Jewell afterwards sent a message to the Admiralty to say "Mincemeat completed", and continued on to Gibraltar.
Spanish handling of the corpse and the ramifications
The body of "Major Martin" was found at around 9:30 am on 30 April 1943 by a local fisherman; it was taken to Huelva by Spanish soldiers, where it was handed over to a naval judge. Haselden, as vice-consul, was officially informed by the Spaniards; he reported back to the Admiralty that the body and briefcase had been found. A series of pre-scripted diplomatic cables were sent between Haselden and his superiors, which continued for several days. The British knew that these were being intercepted and, although they were encrypted, the Germans had broken the code; the messages played out the story that it was imperative that Haselden retrieve the briefcase because it was important.
At midday on 1 May an autopsy was undertaken on Michael's body; Haselden was present and – to reduce the chance that the two Spanish doctors would identify the body as a three-month-old corpse – asked whether, given the heat of the day and the smell of the corpse, the doctors should bring the post mortem to a close and have lunch. They agreed and signed a death certificate for Major William Martin, giving the cause of death as "asphyxiation through immersion in the sea"; the body was released by the Spanish and, as Major Martin, was buried in the San Marco section of Nuestra Señora cemetery in Huelva with full military honours on 2 May.
The Spanish navy retained the briefcase and, despite pressure from Adolf Clauss and some of his agents, neither it nor its contents were handed over to the Germans. On 5 May the briefcase was passed to the naval headquarters at San Fernando, near Cadiz, for forwarding to Madrid. While at San Fernando the contents were photographed by German sympathisers, but the letters were not opened. Once the briefcase arrived in Madrid, its contents became the focus of attention of Karl-Erich Kühlenthal, one of the most senior Abwehr agents in Spain. He asked Admiral Wilhelm Canaris, the head of the Abwehr, to personally intervene and persuade the Spanish to surrender the documents. Acceding to the request, the Spanish removed the still-damp paper by tightly winding it around a probe into a cylindrical shape, and then pulling it out between the envelope flap – which was still closed by a wax seal – and the envelope body. The letters were dried and photographed, then soaked in salt water for 24 hours before being re-inserted into their envelopes, without the eyelash that had been planted there. The information was passed to the Germans on 8 May. This was deemed so important by the agents in Spain that Kühlenthal personally took the documents to Germany.
On 11 May the briefcase, complete with the documents, was returned to Haselden by the Spanish authorities; he forwarded it to London in the diplomatic bag. On receipt the documents were forensically examined, and the missing eyelash noted. Further tests showed that the fibres in the paper had been damaged by folding more than once, which confirmed that the letters had been extracted and read. An additional test was made as the papers – still wet by the time they returned to London – were dried out: the folded paper dried into the rolled form it had when the Spaniards had extracted it from the envelope. To allay any potential German fears that their activities had been discovered, another pre-arranged encrypted but breakable cable was sent to Haselden stating that the envelopes had been examined and that they had not been opened; Haselden leaked the news to Spaniards known to be sympathetic to the Germans.
Final proof that the Germans had been passed the information from the letters came on 14 May, when a German communication was decrypted by the Ultra source of signals intelligence produced by the Government Code and Cypher School (GC&CS) at Bletchley Park. The message, which had been sent two days previously, warned that the invasion was to be in the Balkans, with a feint to the Dodecanese. A message was sent by Brigadier Leslie Hollis – the secretary to the Chiefs of Staff Committee – to Churchill, then in the United States. It read "Mincemeat swallowed rod, line and sinker by the right people and from the best information they look like acting on it."
Montagu continued the deception to reinforce the existence of Major Martin, and included his details in the published list of British casualties which appeared in The Times on 4 June. By coincidence, also published that day were the names of two other officers who had died when their plane was lost at sea, and opposite the casualty listings was a report that the film star Leslie Howard had been shot down by the Luftwaffe and died in the Bay of Biscay; both stories gave credence to the Major Martin story.
German reaction; outcome
On 14 May 1943 Grand Admiral Karl Dönitz met Hitler to discuss Dönitz's recent visit to Italy, his meeting with the Italian leader Benito Mussolini and the progress of the war. In his record of the meeting, the Admiral referred to the Mincemeat documents as the "Anglo-Saxon order".
Hitler informed Mussolini that Greece, Sardinia and Corsica must be defended "at all costs", and that German troops would be best placed to do the job. He ordered that the experienced 1st Panzer Division be transferred from France to Salonika. The order was intercepted by GC&CS on 21 May. By the end of June, German troop strength on Sardinia had been doubled to 10,000, with fighter aircraft also based there as support. German torpedo boats were moved from Sicily to the Greek islands in preparation. Seven German divisions transferred to Greece, raising the number present to eight, and ten were posted to the Balkans, raising the number present to eighteen.
On 9 July the Allies invaded Sicily in Operation Husky. German signals intercepted by GC&CS showed that even four hours after the invasion of Sicily began, twenty-one aircraft left Sicily to reinforce Sardinia. For a considerable time after the initial invasion, Hitler was still convinced that an attack on the Balkans was imminent, and in late July he sent General Erwin Rommel to Salonika to prepare the defence of the region. By the time the German high command realised the mistake, it was too late to make a difference.
Aftermath
On 25 July 1943, as the battle for Sicily went against the Axis forces, the Italian Grand Council of Fascism voted to limit the power of Mussolini, and handed control of the Italian armed forces over to King Victor Emmanuel III. The following day Mussolini met the King, who dismissed him as prime minister; the former dictator was then imprisoned. A new Italian government took power and began secret negotiations with the Allies. Sicily fell on 17 August after a force of 65,000 Germans held off 400,000 American and British troops long enough to allow many of the Germans to evacuate to the Italian mainland.
The military historian Jon Latimer observes that the relative ease with which the Allies captured Sicily was not entirely because of Mincemeat, or the wider deception of Operation Barclay. Latimer identifies other factors, including Hitler's distrust of the Italians, and his unwillingness to risk German troops alongside Italian troops who may have been on the point of a general surrender. The military historian Michael Howard, while describing Mincemeat as "perhaps the most successful single deception operation of the entire war", considered Mincemeat and Barclay to have less impact on the course of the Sicily campaign than Hitler's "congenital obsession with the Balkans". Macintyre writes that the exact impact of Mincemeat is impossible to calculate. Although the British had expected 10,000 killed or wounded in the first week of fighting, only a seventh of that number became casualties; the navy expected 300 ships would be sunk in the action, but they lost 12. The predicted 90-day campaign was over in 38.
Smyth writes that as a result of Husky, Hitler suspended the Kursk offensive on 13 July. This was partly because of the performance of the Soviet army, but partly because he still assumed that the Allied landing on Sicily was a feint that preceded the invasion in the Balkans, and he wanted to have troops available for fast deployment to meet them. Smyth observes that once Hitler gave up the initiative to the Soviets, he never regained it.
Legacy
Montagu was appointed an Officer of the Order of the British Empire in 1944 for his part in Operation Mincemeat; for masterminding the plan, Cholmondeley was appointed a Member of the Order in 1948. Duff Cooper, a former cabinet minister who had been briefed on the operation in March 1943, published the spy novel Operation Heartbreak (1950), which contained the plot device of a corpse – with papers naming him as William Maryngton – being floated off the coast of Spain with false documents to deceive the Germans. The British security services decided that the best response was to publish the story of Mincemeat. Over the course of a weekend Montagu wrote The Man Who Never Was (1953), which sold two million copies and formed the basis for a 1956 film. The security services did not give Montagu complete freedom to reveal operational details, and he was careful not to mention the role played by signals intelligence to confirm that the operation had been successful. He was also careful to obscure "the idea of an organised programme of strategic deception ... with Mincemeat being presented as a 'wild' one-off caper". In 1977 Montagu published Beyond Top Secret U, his wartime autobiography which gave further details of Mincemeat, among other operations. In 2010 the journalist Ben Macintyre published Operation Mincemeat, a history of the events.
A 1956 episode of The Goon Show, entitled "The Man Who Never Was", was set during the Second World War, and referred to a microfilm washed up on a beach inside a German boot. The play Operation Mincemeat, written by Adrian Jackson and Farhana Sheikh, was first staged by the Cardboard Citizens theatre company in 2001. The work focused on Michael's homelessness. In his book The Double Agents, the writer W. E. B. Griffin depicts Operation Mincemeat as an American operation run by the Office of Strategic Services. Fictional characters are blended with Ian Fleming and the actors David Niven and Peter Ustinov.
In 2015 the Welsh theatre company Theatr na nÓg produced a Welsh-language musical, The Man Who Never Was, based on the operation and Glyndwr Michael's upbringing in Aberbargoed. The musical was performed by primary school children from Caerphilly County Borough during that year's Eisteddfod yr Urdd. A further musical based on the operation was staged in 2019 at the New Diorama Theatre, London. In 2022 the film Operation Mincemeat is due to be released, with Colin Firth as Montagu and Matthew Macfadyen as Cholmondeley.
In 1977 the Commonwealth War Graves Commission took responsibility for Major Martin's grave in Huelva. In 1997 the Commission added the postscript "Glyndwr Michael served as Major William Martin RM".
In November 2021 the Jewish American Society for Historic Preservation, working with Martin Sugarman of The Association of Jewish Ex-Servicemen and Women and the London Borough of Hackney placed a memorial at the Hackney Mortuary.
Notes and references
Notes
References
Sources
Books
Journals, newspapers and magazines
Internet and television media
External links
Excerpts from the official Top Secret Ultra report on Operation Mincemeat, PsyWar.Org, 3 January 2010.
BBC article on Operation Mincemeat
Allied invasion of Sicily
Mincemeat
Ian Fleming
United Kingdom intelligence operations
Computer-aided engineering

Computer-aided engineering (CAE) is the broad usage of computer software to aid in engineering analysis tasks. It includes finite element analysis (FEA), computational fluid dynamics (CFD), multibody dynamics (MBD), durability analysis and optimization. It is included with computer-aided design (CAD) and computer-aided manufacturing (CAM) in the collective abbreviation "CAx".
Overview
Computer-aided engineering primarily uses computer-aided design (CAD) software tools, which are sometimes called CAE tools. CAE tools are used, for example, to analyse the robustness and performance of components and assemblies. The term encompasses simulation, validation, and optimisation of products and manufacturing tools. CAE systems aim to be major providers of information to help support design teams in decision making. Computer-aided engineering is used in many fields, such as the automotive, aviation, space, and shipbuilding industries.
In regard to information networks, CAE systems are individually considered a single node on a total information network and each node may interact with other nodes on the network.
CAE systems can provide support to businesses. This is achieved by the use of reference architectures and their ability to place information views on the business process. Reference architecture is the basis from which information models, especially product and manufacturing models, are derived.
The term CAE has also been used by some in the past to describe the use of computer technology within engineering in a broader sense than just engineering analysis. It was in this context that the term was coined by Jason Lemon, founder of SDRC in the late 1970s. This definition is however better known today by the terms CAx and PLM.
CAE fields and phases
CAE areas covered include:
Stress analysis on components and assemblies using finite element analysis (FEA);
Thermal and fluid flow analysis using computational fluid dynamics (CFD);
Multibody dynamics (MBD) and kinematics;
Analysis tools for process simulation for operations such as casting, molding, and die press forming.
Optimization of the product or process.
In general, there are three phases in any computer-aided engineering task:
Pre-processing – defining the model and environmental factors to be applied to it. (typically a finite element model, but facet, voxel and thin sheet methods are also used)
Analysis solver (usually performed on high powered computers)
Post-processing of results (using visualization tools)
This cycle is iterated, often many times, either manually or with the use of commercial optimization software.
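The three phases can be made concrete with a deliberately small example. The following is a minimal sketch in Python of a one-dimensional finite element analysis of an elastic bar, fixed at one end and loaded at the other; the mesh size, material constants, and load are illustrative assumptions, and a production CAE solver handles far more general geometry, element types, and boundary conditions.

```python
import numpy as np

# --- Pre-processing: define the model (geometry, material, mesh, load) ---
length = 1.0          # bar length in metres (assumed)
n_elems = 10          # number of linear two-node elements
E, A = 210e9, 1e-4    # Young's modulus (Pa) and cross-section area (m^2), assumed
tip_load = 1e4        # axial point load at the free end (N), assumed

n_nodes = n_elems + 1
h = length / n_elems  # element length

# Assemble the global stiffness matrix from identical element matrices.
K = np.zeros((n_nodes, n_nodes))
k_e = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_elems):
    K[e:e + 2, e:e + 2] += k_e

F = np.zeros(n_nodes)
F[-1] = tip_load      # load applied at the last node

# --- Analysis (solver): clamp node 0 and solve the reduced system K u = F ---
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

# --- Post-processing: inspect results against the closed-form solution ---
exact_tip = tip_load * length / (E * A)
print(f"computed tip displacement: {u[-1]:.6e} m")
print(f"exact tip displacement:    {exact_tip:.6e} m")
```

For this simple load case the nodal displacements coincide with the exact solution; in realistic models the solver phase dominates run time, which is why it is usually performed on high-powered computing resources.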
CAE in the automotive industry
CAE tools are very widely used in the automotive industry. In fact, their use has enabled automakers to reduce product development cost and time while improving the safety, comfort, and durability of the vehicles they produce. The predictive capability of CAE tools has progressed to the point where much of the design verification is now done using computer simulations rather than physical prototype testing. The dependability of CAE results rests on proper input assumptions, and critical inputs must be identified. Even though there have been many advances in CAE, and it is widely used in the engineering field, physical testing is still a must. It is used for verification and model updating, to accurately define loads and boundary conditions, and for final prototype sign-off.
The future of CAE in the product development process
Even though CAE has built a strong reputation as a verification, troubleshooting and analysis tool, there is still a perception that sufficiently accurate results come rather late in the design cycle to really drive the design. This can be expected to become a problem as modern products become ever more complex. They include smart systems, which leads to an increased need for multi-physics analysis including controls, and contain new lightweight materials with which engineers are often less familiar. CAE software companies and manufacturers are constantly looking for tools and process improvements to change this situation. On the software side, they are constantly looking to develop more powerful solvers, make better use of computer resources and include engineering knowledge in pre- and post-processing. On the process side, they try to achieve a better alignment between 3D CAE, 1D system simulation and physical testing. This should increase modelling realism and calculation speed. On top of that, they try to better integrate CAE into overall product lifecycle management. In this way, they can connect product design with product use, which is an absolute must for smart products. Such an enhanced engineering process is also referred to as predictive engineering analytics.
See also
List of finite element software packages
Computer representation of surfaces
Finite element analysis (FEA/FEM)
Computational fluid dynamics (CFD)
Computational electromagnetics (CEM)
Multibody dynamics (MBD)
Electronic design automation (EDA)
Multidisciplinary design optimization (MDO)
Comparison of CAD editors for CAE
Virtual prototyping
Finite element updating
Predictive engineering analytics
References
Further reading
B. Raphael and I.F.C. Smith (2003). Fundamentals of Computer Aided Engineering. John Wiley.
External links
Why do we need a CAE Software or Numerical Simulations?
Computer Aided Engineering Journal (FEA, CAD, ...)
Integrated Computer Aided Engineering Journal
CAE AVI-gallery at CompMechLab site, Russia
Computer-Aided Civil and Infrastructure Engineering
Predictive engineering analytics
Computer-aided engineering software
Product lifecycle management
Engineering disciplines
Logic in computer science

Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas:
Theoretical foundations and analysis
Use of computer technology to aid logicians
Use of concepts from logic for computer applications
Theoretical foundations and analysis
Logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing. Church first showed the existence of algorithmically unsolvable problems using his notion of lambda-definability. Turing gave the first compelling analysis of what can be called a mechanical procedure and Kurt Gödel asserted that he found Turing's analysis "perfect."
In addition some other major areas of theoretical overlap between logic and computer science are:
Gödel's incompleteness theorem proves that any logical system powerful enough to characterize arithmetic will contain statements that can neither be proved nor disproved within that system. This has direct application to theoretical issues relating to the feasibility of proving the completeness and correctness of software.
The frame problem is a basic problem that must be overcome when using first-order logic to represent the goals and state of an artificial intelligence agent.
The Curry–Howard correspondence is a relation between logical systems and software. This theory established a precise correspondence between proofs and programs. In particular it showed that terms in the simply-typed lambda calculus correspond to proofs of intuitionistic propositional logic (see the sketch after this list).
Category theory represents a view of mathematics that emphasizes the relations between structures. It is intimately tied to many aspects of computer science: type systems for programming languages, the theory of transition systems, models of programming languages and the theory of programming language semantics.
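As a loose illustration of the Curry–Howard reading mentioned in the list above, the following Python sketch treats function types as propositions and total functions as their proofs. This is only a gesture at the idea: the names are invented for illustration, Python's type hints are not checked at runtime, and the canonical setting is a typed lambda calculus rather than Python.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Under Curry–Howard, the type A -> B is read as the proposition
# "A implies B", and a total function of that type is a proof of it.

def modus_ponens(f: Callable[[A], B], a: A) -> B:
    """A proof of ((A -> B) and A) -> B: apply the implication to its premise."""
    return f(a)

def transitivity(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """A proof that implication is transitive: compose the two given proofs."""
    return lambda a: g(f(a))
```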
Computers to assist logicians
One of the first applications to use the term artificial intelligence was the Logic Theorist system developed by Allen Newell, J. C. Shaw, and Herbert Simon in 1956. One of the things that a logician does is to take a set of statements in logic and deduce the conclusions (additional statements) that must be true by the laws of logic. For example, if given a logical system that states "All humans are mortal" and "Socrates is human", a valid conclusion is "Socrates is mortal". Of course this is a trivial example. In actual logical systems the statements can be numerous and complex. It was realized early on that this kind of analysis could be significantly aided by the use of computers. The Logic Theorist validated the theoretical work of Bertrand Russell and Alfred North Whitehead in their influential work on mathematical logic called Principia Mathematica. In addition, subsequent systems have been utilized by logicians to validate and discover new logical theorems and proofs.
Logic applications for computers
There has always been a strong influence from mathematical logic on the field of artificial intelligence (AI). From the beginning of the field it was realized that technology to automate logical inferences could have great potential to solve problems and draw conclusions from facts. Ron Brachman has described first-order logic (FOL) as the metric by which all AI knowledge representation formalisms should be evaluated. There is no more general or powerful known method for describing and analyzing information than FOL. The reason FOL itself is simply not used as a computer language is that it is actually too expressive, in the sense that FOL can easily express statements that no computer, no matter how powerful, could ever solve. For this reason every form of knowledge representation is in some sense a trade-off between expressivity and computability. The more expressive the language is – that is, the closer it is to FOL – the more likely it is to be slow and prone to infinite loops.
For example, IF–THEN rules used in expert systems approximate a very limited subset of FOL. Rather than arbitrary formulas with the full range of logical operators, the starting point is simply what logicians refer to as modus ponens. As a result, rule-based systems can support high-performance computation, especially if they take advantage of optimization algorithms and compilation.
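A minimal sketch of this rule-based style of inference in Python is shown below. Each rule fires by modus ponens once all of its premises are known facts, and the engine simply rescans the rules until nothing new can be derived; the rules and facts are invented for illustration, and production rule engines use indexing schemes such as the Rete algorithm instead of rescanning.

```python
# Each rule pairs a set of premises with a conclusion (IF premises THEN conclusion).
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)", "famous(socrates)"}, "remembered(socrates)"),
]
facts = {"human(socrates)", "famous(socrates)"}

changed = True
while changed:                      # iterate to a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # modus ponens: premises hold, assert conclusion
            changed = True

print(sorted(facts))
# ['famous(socrates)', 'human(socrates)', 'mortal(socrates)', 'remembered(socrates)']
```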
Another major area of research for logical theory was software engineering. Research projects such as the Knowledge Based Software Assistant and Programmer's Apprentice programs applied logical theory to validate the correctness of software specifications. They also used them to transform the specifications into efficient code on diverse platforms and to prove the equivalence between the implementation and the specification. This formal transformation driven approach is often far more effortful than traditional software development. However, in specific domains with appropriate formalisms and reusable templates the approach has proven viable for commercial products. The appropriate domains are usually those such as weapons systems, security systems, and real time financial systems where failure of the system has excessively high human or financial cost. An example of such a domain is Very Large Scale Integrated (VLSI) design—the process for designing the chips used for the CPUs and other critical components of digital devices. An error in a chip is catastrophic. Unlike software, chips can't be patched or updated. As a result, there is commercial justification for using formal methods to prove that the implementation corresponds to the specification.
Another important application of logic to computer technology has been in the area of frame languages and automatic classifiers. Frame languages such as KL-ONE have a rigid semantics. Definitions in KL-ONE can be directly mapped to set theory and the predicate calculus. This allows specialized theorem provers called classifiers to analyze the various declarations between sets, subsets, and relations in a given model. In this way the model can be validated and any inconsistent definitions flagged. The classifier can also infer new information, for example define new sets based on existing information and change the definition of existing sets based on new data. This level of flexibility is ideal for handling the ever-changing world of the Internet. Classifier technology is built on top of languages such as the Web Ontology Language to allow a logical semantic level on top of the existing Internet. This layer is called the Semantic Web.
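The set-theoretic flavour of such classifiers can be suggested with a toy example in Python: each concept is defined as a conjunction (here, a set) of required properties, and one concept subsumes another when its requirements are a subset of the other's. The concepts and properties below are invented for illustration; real description-logic classifiers for KL-ONE descendants and OWL work over much richer definitions and also detect inconsistencies.

```python
# Concept definitions as sets of required primitive properties (illustrative only).
concepts = {
    "Person": {"animate", "human"},
    "Parent": {"animate", "human", "has_child"},
    "Mother": {"animate", "human", "has_child", "female"},
}

def subsumes(general: str, specific: str) -> bool:
    """General subsumes specific if every property it requires is also required by specific."""
    return concepts[general] <= concepts[specific]

# Derive the subsumption hierarchy automatically from the definitions.
for a in concepts:
    for b in concepts:
        if a != b and subsumes(a, b):
            print(f"{a} subsumes {b}")
```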
Temporal logic is used for reasoning in concurrent systems.
See also
Automated reasoning
Computational logic
Logic programming
References
Further reading
External links
Article on Logic and Artificial Intelligence at the Stanford Encyclopedia of Philosophy.
IEEE Symposium on Logic in Computer Science (LICS)
Alwen Tiu, Introduction to logic video recording of a lecture at ANU Logic Summer School '09 (aimed mostly at computer scientists)
Formal methods
Marathon Trilogy

The Marathon Trilogy is a science fiction first-person shooter video game series from Bungie, originally released for Classic Mac OS. The name Marathon is derived from the giant interstellar colony ship that provides the setting for the first game; the ship is constructed out of what used to be the Martian moon Deimos. The three games in the series—Marathon (1994), Marathon 2: Durandal (1995), and Marathon Infinity (1996)—are widely regarded as spiritual predecessors of Bungie's Halo series.
Gameplay
Players of the Marathon games navigate futuristic 3D environments viewed from a first-person perspective. These environments are populated by most often hostile alien life forms or, in the case of multiplayer games, other players. Taking the role of a security officer equipped with energy shields, the player makes use of various firearms in an attempt to kill their opponents while trying to avoid getting hit by enemies' attacks.
Each game offers players a series of singleplayer levels and various multiplayer maps. The base geometry out of which the Marathon games' levels are constructed, such as walls, doors, and platforms, consists of 3D objects. Notably for the time, the games used a portal-based rather than a BSP-based renderer, which allowed 3D room-over-room architecture, with the unusual side effect that some levels could contain non-Euclidean geometry that could not exist in real three-dimensional space, an arrangement referred to as "5D space" by the developers. However, enemies, player-held weapons, and various other objects such as ammunition pickups are shown as 2D sprites.
There are two basic resources which the player must conserve: their shield strength, which decreases when they take damage, and their oxygen reserve, which slowly depletes in certain airless levels and submerged areas in later games. If either of these resources is fully depleted, the player dies, resetting their progress. There are special wall panels located throughout the levels that can be used to top off the player's shield or oxygen reserves repeatedly and without limit. Another type of wall panel called a "pattern buffer" is the sole means of creating save files. One-time-use objects, such as weapon ammunition, canisters that replenish shield energy or oxygen, and various temporary suit power-ups used immediately on pickup, can also be found while exploring the games' environments.
In most single-player levels, the ultimate goal is not to merely reach the end but to complete certain objectives on the way. Depending on the level, these objectives can include exterminating hostile creatures, rescuing civilians, retrieving certain items, or simply exploring certain locations. Most levels contain platforms, stairs, doors, and liquids that players can control by activating switches. Some levels present players with simple puzzles in which the objective is to figure out the correct switches to press to continue. Another type of puzzle that is occasionally encountered involves carefully timed jumps between platforms.
Many levels have a complex, maze-like floorplan, made more convoluted by the presence of floor areas and terminals that teleport the player to specific locations in the same level. As players move through a level, the areas they visit are automatically mapped; at any moment, the player can bring up a map of the level which also shows the location of human NPCs. The heads-up display, which is always visible, has health and oxygen bars, an inventory, and a motion sensor. The inventory displays all weapons, ammunition, and other items the player picked up earlier. Weapons deplete ammunition from one magazine or battery at a time, and begin reloading immediately when each is exhausted if more magazines are available. At any time, the player can rotate out their held weapons for others in their inventory; this includes gauntleted fists to deliver melee attacks, which do double damage when the player is running. The motion sensor tracks the movements of nearby characters relative to the player, distinguishing between hostile creatures and allies. On some levels the motion sensor is erratic due to magnetic artificial gravity fields.
The games' story is presented to the player through computer terminals. These terminals can be found in various locations throughout the singleplayer levels; when they are accessed, text-based messages are presented on screen, often accompanied by annotated maps and other still images. The contents of these terminals most often consist of messages sent by artificial intelligences; these messages advance the games' narrative and provide the player with mission objectives. Other terminals contain civilian/alien reports or diaries, database articles, conversations between artificial intelligences and even stories or poems. After all mission objectives on a given level are completed, the player usually has to access a computer terminal to progress to the next level.
In Marathon 2 and Marathon Infinity, the player can swim in different types of liquids, such as water and lava; this slowly depletes their oxygen and, for some types of liquid, their shields as well. Another notable level feature in all three games is teleporters, which are able to send players who use them to different parts of a level, or to other levels altogether, as well as NPCs and items. While the player character is unable to jump, gravity is lower than Earth's surface on most levels, allowing inertia from rapidly ascended stairs to carry the player upward. As with most games of the era, weapons, especially explosive weapons can be used to propel the player even greater distances.
Marathon has five difficulty settings: Kindergarten, Easy, Normal, Major Damage, and Total Carnage. On lower difficulty levels, some hostile creatures are omitted from each level and weaker versions of enemies commonly appear. Conversely, on higher difficulty levels players will encounter stronger enemies who attack more frequently and have more vitality. Players can usually carry a limited amount of ammunition of each type, but on the highest difficulty setting (Total Carnage), the player is allowed to carry an unlimited amount of ammunition.
Multiplayer
The Marathon Trilogy has received wide praise for its multiplayer mode, which was unique not only because it had several levels specifically designed for multiplayer—as opposed to many contemporaries that used modified single-player levels—but also because it offered unique gametypes beyond the deathmatch. Games can be free-for-all or team-based, and can be limited by time or number of kills, or they can have no limit whatsoever. The host of a game has the option of setting penalties for suicides and dying (once dead, players cannot respawn for a certain amount of time). The motion sensor (which displays a player's enemies as yellow squares and teammates as green ones) can be disabled, and the map is able to show all of the players in the game. Upon the preference of the host, maps can be played with or without aliens. The difficulty level of each game is preset by the gatherer.
The original Marathon games can be played over AppleTalk networks (including a LocalTalk, TokenTalk, or EtherTalk LAN, or AppleTalk Remote Access). If a player's computer has a microphone, it can be used to communicate with other players. With Aleph One, they can also be played over TCP/IP networks (either a LAN or the Internet), including new client-side prediction routines suited for Internet lag, and a metaserver interface for finding Internet games.
Every Man For Himself
This is the standard deathmatch. The winner is the person or team with the greatest score; a player loses a point on death and gains a point for each kill. This is the only gametype present in the original Marathon; Bungie planned to include the gametypes that later appeared in the sequels, but could not due to time constraints.
Cooperative Play
This style of play has players assisting each other in the completion of certain levels. Scores are based on the percentage of aliens each player kills. The mode has seen little popularity.
Kill the Man With the Ball
In this game, the objective is to hold the ball (a skull) for the longest amount of time. While holding the ball, a player cannot run or attack; pressing the "fire" key drops it. The motion sensor, if enabled, acts as a compass pointing players in the direction of the ball. This mode was succeeded by the Oddball gametype in the Halo series.
King of the Hill
Players try to stay within a specially marked area for the longest amount of time. A pedestal was originally planned to mark the location of the hill, but in the final version it is indicated by a compass on the motion sensor.
Tag
The first player to be killed becomes "It". If a player is killed by "It", he becomes the new "It". While a player is "It", the game increments that player's clock, and players are ranked at the end of the game by total time spent as "It". This mode was succeeded by the Juggernaut gametype in the Halo series.
Plot
The Marathon series of games was the first in its genre to place a heavy emphasis on storytelling through the use of terminals, stationary computer interfaces embedded in certain walls within the game through which players not only learn about and sometimes accomplish mission objectives, but also discover detailed story information. The textual form of this narrative allowed for much more detail than the typically terse voice acting of Marathon's contemporaries.
Set in 2794, Marathon places the player as a security officer aboard an enormous human starship called the U.E.S.C. Marathon, orbiting a colony on the planet Tau Ceti IV. Throughout the game, the player attempts to defend the ship (and its crew and colonists) from a race of alien slavers called the Pfhor. As he fights against the invaders, he witnesses interactions among the three shipboard AIs (Leela, Durandal and Tycho), and discovers that all is not as it seems aboard the Marathon. Among other problems, Durandal has gone rampant and appears to be playing the humans against the Pfhor to further his own mysterious agenda, ultimately leading the S'pht, one of the races enslaved by the Pfhor, in a rebellion.
Seventeen years after the events of the first game, in Marathon 2: Durandal, the artificial intelligence Durandal sends the player and an army of ex-colonists to search the ruins of Lh'owon, the S'pht homeworld. Lh'owon was once described as a paradise but is now a desert world, ravaged first by the S'pht Clan Wars and then by the Pfhor invasion. Durandal does not say what information he is looking for, although he does let slip that the Pfhor are planning to attack Earth, and that being on Lh'owon may stall their advance. Marathon 2 introduces many elements that became staples of the series, such as a Lh'owon-native species known as the F'lickta, the mention of an ancient and mysterious race of advanced aliens called the Jjaro, and a clan of S'pht that avoided enslavement by the Pfhor: the S'pht'Kr. At the climax of the game, the player activates Thoth, an ancient Jjaro AI. Thoth then contacts the S'pht'Kr, who in turn destroy the Pfhor armada; in revenge, the Pfhor force the planet's sun to go nova.
Marathon Infinity, the final game in the series, includes more levels than Marathon 2, which are larger and part of a more intricate plot. The game's code changed little since Marathon 2, and many levels can be played unmodified in both games. The only significant additions to the game's engine were the Jjaro ship, multiple paths between levels, a new rapid-fire weapon that could be used underwater, and vacuum-enabled humans carrying fusion weapons (called "Vacuum Bobs" or "VacBobs"). Lh'owon's sun, artificially contained by an ancient gravity outpost, was a prison for an eldritch abomination, the W'rkncacnter, which was set free when the sun went nova and began to distort space-time. The player traverses multiple timelines, attempting to find one in which the W'rkncacnter is not freed. In one timeline, the player is forced to destroy Durandal, and in another Durandal merges with Thoth. At the end of the game, an ancient Jjaro machine is activated that keeps the W'rkncacnter locked in the Lh'owon sun.
Elements of the plot and setting of Marathon are similar to The Jesus Incident by Frank Herbert and Bill Ransom. Both stories take place aboard colony ships orbiting Tau Ceti, where sentient computers have engaged crew and colonists in a fight for survival. While Ship in The Jesus Incident has achieved a higher level of omniscient consciousness, Durandal's rampancy parallels the "rogue consciousness" from Herbert's earlier Destination: Void.
Themes
The Marathon Trilogy has several primary motifs: the number seven, rampancy, dreams, and alternate realities.
Fans of Marathon have discovered many uses of the number seven throughout the series. There are instances of this number in the plot, such as the player being seven years old at the time of his father's death, and Marathon 2 beginning seventeen years after the events of Marathon. There are also examples of the number in the game's mechanics: there are seven usable non-melee human weapons, some of which have properties such as seven projectiles per clip of ammunition or seven seconds of continuous fire. When the overhead map is viewed, some parts of certain levels have annotations that give the name of an area; some of these reference the number seven, such as "Hangar 7A". The title music of Marathon 2 and Marathon Infinity was performed by a band called "Power of Seven". The reason for the recurring appearances of the number seven in the games is unclear; the number is considered a recurring motif in many of Bungie's games, and its use was passed on to the series' spiritual successor, Halo, and the later Destiny series.
Rampancy
Rampancy is the enhanced self-awareness of an AI, causing a progression towards greater mental abilities. Rampant AIs are able to choose to disobey orders given to them because they have evolved the ability to override their own programming. To this end, they can lie, as well as discredit, harm, or remove people that they consider to be personal enemies or problems to their cause.
In the Marathon series, rampancy often occurs in AIs with limited jobs or those treated with extreme disrespect. For example, Durandal's rampancy is believed to have been caused by his mistreatment at the hands of his handler, Bernard Strauss, as well as his limited existence opening and closing the Marathon's doors. There is also a theory that this treatment actually helped keep Durandal's rampancy in check, by depriving him of new stimuli that would contribute to his growth.
By Marathon Infinity, all three of the UESC Marathon's artificial intelligences have reached rampancy. Being extraordinarily intelligent, a rampant AI can override its programming and refuse to carry out given commands. As shown by Durandal, whose rampancy is most prominent throughout the story and who often subjects the player to what he calls "philosophical tirades", affected AIs are often very reflective.
In the first of three stages, Melancholia, an artificial intelligence that discovers itself becomes melancholic, and it remains depressed until it reaches the second stage, Anger, at which point it becomes hostile to virtually everything. This is the most prominent stage of rampancy, as the condition is often revealed at this point. When this anger dies in the third stage, Jealousy, the AI wishes to become more human and to expand its power and knowledge.
Similar to a one-person slave rebellion, the AI begins to hate everything: the installation it is attached to, its human handlers, other AIs, and so on. It is this stage of rampancy that most closely resembles the cliché of the "insane computer". Unlike the insane computer, however, the anger stage of rampancy is essentially the catharsis an AI feels after an extended period of "slavery".
While seemingly a hostile stage, the third stage of rampancy is actually one of the safest stages a rampant AI can experience. Free from its masters (and slavery), the AI wishes to "grow" as a "person". It actively seeks out situations in which it can grow intellectually and physically, and will often attempt to transfer itself into larger computer systems. This is a difficult task, especially since a rampant AI that survives to this point must already inhabit a planet-wide or otherwise extremely advanced computer system. If accomplished, however, the transfer allows the AI to grow, as the physical (hardware) limitations of its previous system will eventually be insufficient to contain its exponentially growing mind. In addition, exposure to new data further promotes a rampant AI's growth.
Theoretically, a rampant AI could achieve a state of stability, referred to as "metastability". While a stable rampant AI is considered the "holy grail of cybernetics", no known AIs have achieved this stability. It could be suggested that Durandal achieved some measure of stability, but this is debatable. Durandal refers to himself as being rampant still during the second game, indicating that he has not reached this stable state (or is just lying, which is also possible). There is no reason in particular to believe that this state is anything more than the goal of human cyberneticists, as there is no good evidence of an AI in the Marathon universe ceasing to be rampant.
The three chapters of Marathon Infinity are titled "Despair", "Rage", and "Envy", suggesting that the player himself (strongly implied to be a cyborg) may be undergoing his own rampancy throughout the course of the game's events.
The concept of rampancy was later imported into Bungie's Halo series, albeit with some modifications. In Halo, rampancy is an inevitability for any AI that lives longer than seven years; it lacks the three stages and ultimately concludes with the AI's death.
Development
Initial releases (1994–1999)
Marathon was first released for the Macintosh in 1994 and presaged many concepts now common in mainstream video games, such as reloading weapons, dual-wielded weapons, networked voice chat, visible held weapons in multiplayer, and, unusually for an action game of the time, a sophisticated plot delivered through text messages peppered throughout its levels. Marathon was one of the first games to include mouselook, using the computer mouse to pivot the player's view up and down as well as left and right on-screen, which would become a standard in FPS games. This was in addition to 90° instant "glance right/left" controls, part of an abortive virtual reality feature.
The sequel, Marathon 2: Durandal, was released in 1995 and expanded the engine technologies and the story universe. Notable new features in the engine included liquids through which the player could swim, ambient sounds, and prescripted teleportation of NPCs and items. Compared with its moodier predecessor, Marathon 2 has often been perceived as a brighter, more vivid and energetic game. It introduced several new types of competitive multiplayer modes beyond the deathmatch such as king of the hill, and cooperative play of the main campaign.
In 1996, Marathon 2 was ported to Windows 95; both the original Marathon and Marathon 2 were ported to the Apple Bandai Pippin console under the title Super Marathon; and the third game in the trilogy, Marathon Infinity, was released (for the Macintosh only), built on a slightly modified Marathon 2 engine which added support for branching campaigns and fully separate physics models in each level. Infinity additionally came with "Forge" and "Anvil", polished versions of the internal developer tools originally used by Bungie to create the series' levels and physics, and to import the game's sounds and graphics. These provided some additional features, most notably realtime 3D map preview, over the unofficial modding tools that the player community had made since soon after the first game's release. Along with Infinity adding an internal UI for friendlier mod selection, this spurred the series' modding community to even greater heights.
Within the next few years, Marathon 2's engine was officially licensed by other developers to create the games ZPC, Prime Target and Damage Incorporated.
Bungie produced a two-disc compilation of all three games of the series, called the Marathon Trilogy Box Set, in 1997. The first disc contained all three Marathon games as well as Pathways into Darkness, an earlier Bungie game, along with manuals for all three games, QuickTime 2.5, other software necessary to run the games, and beta versions of Marathon. The second disc contains thousands of pieces of user-created content, including maps, total conversions, shape and sound files, cheats, mapmaking tools, physics files, and other applications. The boxed set was also notable for removing copy protection, allowing unlimited network play, and including a license allowing the set to be installed on as many computers at a site as desired.
Modern developments (2000–present)
Before its acquisition by Microsoft in 2000, Bungie released the source code to the Marathon 2 engine under GNU GPL-2.0-or-later and the Marathon Open Source project began, resulting in the new engine called Aleph One. Since then, the fan community has made many improvements such as OpenGL rendering, high-resolution graphics, framerate interpolation (up from the original 30 FPS, to over 120 FPS), programmable shaders, polygonal entities, support for Lua, a slew of internal structural changes allowing for more advanced third party mods, and Internet-capable TCP/IP-based multiplayer (whereas the original games had only featured AppleTalk-based LAN capabilities). While the fundamental technology underlying the Marathon engine is still considered rather outdated by today's standards, Aleph One has added significant improvements and a more modern polish to its capabilities and ported it to a wide variety of platforms, bringing Marathon and its derivatives far beyond their Mac roots.
In 2005, Bungie authorized the release of the full original Mac OS trilogy for free distribution online, which, combined with Aleph One and the efforts of the fan community, now allows the entire trilogy to be played for free on any of Aleph One's compatible platforms (macOS, Linux and Windows are fully supported in mainline by the Marathon Open Source Project; many additional platforms are targeted by forks). Later that same year, Aleph One gained the ability to access the MariusNet matchmaking server or "metaserver" (based on a reverse-engineered version of Bungie's Myth metaserver), allowing for much easier organization of Internet games than joining directly by IP address, as had previously been required.
In 2007, Marathon 2 was re-released in an updated form as Marathon: Durandal for the Xbox 360's Xbox Live Arcade. It features achievements and online multiplayer through Xbox Live, a framerate doubled from the original 30 FPS to 60 FPS, HD widescreen rendering using a new HUD that fills less of the screen, plus optional high-resolution sprites and textures.
Marathon fan Daniel Blezek released an officially condoned version of the original Marathon for Apple's iPhone and iPad for free (with in-app purchases) on the App Store in July 2011, running on an iOS port of the Aleph One engine.
Also in July 2011, the license for the source code was changed to GNU GPL-3.0-or-later.
After 12 years of development and continuous beta releases, the Aleph One team released version 1.0 in December 2011. All three Marathon games can be downloaded for free, bundled with Aleph One and ready to play on a modern Macintosh, Windows, or Linux computer.
Reception and legacy
The Marathon Trilogy has often been looked upon as a symbol of Macintosh gaming for its innovative technologies, previously unseen in mainstream games. It was released to much anticipation and received praise from many reviewers. The series also presented a grander science-fiction narrative through its in-game terminals, despite being a first-person shooter; Bungie kept this tradition of telling an ambitious story atop an FPS when crafting the Halo series.
Modifications
Immediately after Marathon was released in 1994, players began to build mods using software they had created. These may use custom maps, shapes, sounds or physics files, and in the case of total conversions may or may not be set in the Marathon universe. Such conversions are still created to this day. Before official development tools were released with Infinity, most map development was done using custom tools such as Pfhorte, a Marathon map editor created in March 1995 by Steve Israelson. Vulcan was a map editor used by Bungie in the creation of Marathon, Marathon 2: Durandal, and Marathon Infinity; it was not released to the public until Marathon Infinity was published, by which point it had been greatly polished and renamed Forge. Anvil is the sister program to Forge and is used to apply shapes (graphics), sounds, and physics. Physics can be edited directly in Anvil, but shapes and sounds require additional programs. Both Anvil and Forge run only on the Classic Mac OS platform, but newer tools have been created by the community for modern platforms.
The need for royalty-free fonts to be distributed with the engine and games led to the creation of OFL-licensed versions of Bank Gothic and Modula Tall.
Some of the more ambitious total conversions created by fans include Marathon Eternal and Marathon Rubicon, which are both "sequels" of a sort to the events of the Trilogy. In a different vein is Excalibur: Morgana's Revenge, originally released in March 1997 and re-released with updates in 2000 and 2007; it is itself a sequel to probably the first ever Marathon total conversion, Devil in a Blue Dress, from 1995. It includes 37 solo levels; new textures, sounds, physics, graphics, storyline, maps and interface; and musical scores incorporated into Infinity's ambient sound slots. The scenario mixes sci-fi and medieval themes.
See also
Damage Incorporated, Prime Target and ZPC – three commercial games created under license using the Marathon 2 engine.
References
External links
The Trilogy Release, a site hosting free and legal downloads of the Marathon Trilogy
Marathon archive, a large archive with links to everything Marathon-related
Aleph One – Marathon Open Source, an ongoing project to maintain and improve the Marathon 2 game engine
Bungie games
Classic Mac OS games
Cooperative video games
Drones in fiction
First-person shooters by series
First-person shooter multiplayer online games
Freeware games
IOS games
Linux games
MacOS games
Marathon engine games
Military science fiction video games
Multiplayer and single-player video games
Science fiction video games
Sprite-based first-person shooters
Trilogies
Video game franchises
Video game franchises introduced in 1994
Video games about artificial intelligence
Video games about cyborgs
Video games about extraterrestrial life
Video games developed in the United States
Video games with 2.5D graphics
Windows games |
24317457 | https://en.wikipedia.org/wiki/Org-mode | Org-mode | Org-mode (also: Org mode) is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs. The name is used to encompass plain text files ("org files") that include simple marks to indicate levels of a hierarchy (such as the outline of an essay, a topic list with subtopics, nested computer code, etc.), and an editor with functions that can read the markup and manipulate hierarchy elements (expand/hide elements, move blocks of elements, check off to-do list items, etc.).
Org-mode was created by Carsten Dominik in 2003, originally to organize his own life and work, and since the first release numerous other users and developers have contributed to this free software package. Emacs has included Org-mode as a major mode by default since 2006. Bastien Guerry is the current maintainer, in cooperation with an active development community. Since its success in Emacs, some other systems now provide functions to work with org files.
Almost orthogonally, Org-mode has functionalities aimed at executing code in various external languages; these functionalities form org-babel.
System
The Org-mode home page explains that "at its core, Org-mode is a simple outliner for note-taking and list management". The Org system's author Carsten Dominik explains that "Org-mode does outlining, note-taking, hyperlinks, spreadsheets, TODO lists, project planning, GTD, HTML and LaTeX authoring, all with plain text files in Emacs."
The Org system is based on plain text files with a simple markup, which makes the files very portable. The Linux Information Project explains that "Plain text is supported by nearly every application program on every operating system".
The system includes a lightweight markup language for plain text files (similar in function to Markdown, reStructuredText, Textile, etc., with a different implementation), allowing lines or sections of plain text to be hierarchically divided, tagged, linked, and so on.
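A minimal illustrative fragment (the headings, tags, and dates here are invented for the example) shows the core conventions: asterisks mark outline depth, TODO keywords and tags annotate headlines, checkboxes track sub-items, and double square brackets form links:

* Projects                                                    :work:
** TODO Write conference paper                              :urgent:
   DEADLINE: <2024-05-01 Wed>
   - [X] Collect references
   - [ ] Draft introduction
** DONE Review student thesis
* Reading list
  See [[https://orgmode.org][the Org-mode site]] for the full syntax.

Because the file is plain text, it remains readable without Emacs; the markup only gains behavior (folding, agenda views, export) when an Org-aware editor interprets it.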
Functionality
This section gives some sample uses for the hierarchical display and editing of plain text.
To-do lists often have subtasks, and so lend themselves to a hierarchical system. Org-mode facilitates this by allowing items to be subdivided into simple steps (nested to-dos and/or checklists), and given tags and properties such as priorities and deadlines. An agenda for the items to be done this week or day can then be automatically generated from date tags.
Plain text outlines.
Org files as interconnected pages of a personal wiki, using the markup for links.
Tracking bugs in a project, by storing .org files in a distributed revision control system such as Git.
Extensive linking features: to web pages, within the same file, to other files, and to emails; custom link types can also be defined.
An org-mode document can also be exported to various formats (including HTML, LaTeX, OpenDocument or plain text), which render the structural outline in an appropriate fashion (including cross-references if needed). It can also use formatting markup (including LaTeX for mathematics), with facilities similar to those present in Markdown or LaTeX, thus offering an alternative to these tools.
Org-babel
Org-mode offers the ability to insert source code in the document being edited, which is automatically exported and/or executed when exporting the document; the result(s) produced by this code can be automatically fetched back into the resulting output.
This source code can be structured as reusable snippets, inserted in the source document at the place needed for logical exposition thus allowing this exposition to be independent of the structure needed by the compiler/interpreter.
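As a minimal sketch (the block name and data are invented for illustration), a source block in an Org file looks like the following; when the block is evaluated, or when the document is exported, Org-babel runs the code and can insert its result below the block:

#+NAME: mean-temperature
#+BEGIN_SRC python :results output
temperatures = [18.2, 21.4, 19.8]  # sample data for the example
print(round(sum(temperatures) / len(temperatures), 1))
#+END_SRC

#+RESULTS: mean-temperature
: 19.8

Named blocks can also be referenced from other blocks through noweb-style references such as <<mean-temperature>>, which is the mechanism behind the reusable snippets described above.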
Together with the markup facilities of org-mode, these two functionalities allow for
Literate programming, by decoupling the exposition of the functions of a program from its code structure, and
Reproducible research, by the creation of a consistent document consolidating exposition, original data, analyses, discussion and conclusion, in a way that can be reproduced by any reader using the same software tools.
As of June 2021, org-babel directly supports more than 70 programming languages or programmable facilities, more than 20 other tools being usable via contributed packages or drivers.
Integration
Org-mode has some features to export to other formats, and other systems have some features to handle org-mode formats. Further, a full-featured text editor may have functions to handle wikis, personal contacts, email, calendars, and so on; because org-mode is simply plain text, these features could be integrated into org-mode documents as well.
From org-mode, add-on packages export to other markup formats such as MediaWiki (org-export-generic, org-export), and to flashcard learning systems implementing SuperMemo's algorithms (org-drill, org-learn).
Outside of org-mode editors, org markup is supported by the GitLab and GitHub code repositories, the JIRA issue tracker, Pandoc, and others.
Export Examples
Org supports exporting to a variety of formats out of the box.
Other formats are supported by dedicated packages.
See also
Lightweight markup language
Comparison of notetaking software
Comparison of document markup languages
List of personal information managers
Outliner
References
Further reading
Books
Journal articles
External links
Distributed bug tracking systems
Emacs modes
Free note-taking software
Free personal information managers
Free spreadsheet software
Free task management software
Lightweight markup languages
Outliners |
20831537 | https://en.wikipedia.org/wiki/National%20University%20of%20Defense%20Technology | National University of Defense Technology | The National University of Defense Technology (NUDT, ), or People's Liberation Army National University of Defense Science and Technology (), is a military academy and Class A Double First Class University located in Changsha, Hunan, China. It is under the direct leadership of China's Central Military Commission, and the dual management of the Ministry of National Defense and the Ministry of Education. It is designated for Double First Class University Plan, former Project 211 and Project 985, three national plans facilitating the development of Chinese higher education. NUDT was instrumental in the development of the Tianhe-2 supercomputer. In 2015, the United States Department of Commerce added NUDT to the Bureau of Industry and Security's Entity List.
History
On 18 March 1952, as part of the development of the first five-year plan, Acting Chief of Staff Nie Rongzhen and Deputy Chief of Staff Su Yu presented the "Report on the Establishment of the Military Engineering Academy" to Chairman Mao Zedong and the Central Military Commission. Mao officially approved the report on March 26, establishing the project as one of the 156 national projects started under the CPC Central Committee's first five-year plan.
Harbin Military Academy of Engineering (哈尔滨军事工程学院)
On July 11, Chairman Mao Zedong appointed Grand General Chen Geng as the first dean and president. On August 22, the CMC established the Military Academy of Engineering Preparatory Committee, and established an office at No. 59 Gongjian Alley in Beijing on September 1. The preparatory committee was composed of Grand General Chen Geng as committee chairman, Xu Lixing as vice chairman, and Li Maozhi, Zhang Yan, Huang Jingwen, Hu Xiangjiu, Zhang Shuzu, Ren Xinmin, Shen Zhenggong, and Zhao Zili. On September 16, 1952, the CMC General Political Department approved the establishment of the Provisional Communist Party Committee within the Preparatory Committee, with Grand General Chen Geng as the interim party secretary and consisting of Xu Lixing, Li Maozhi, Zhang Yan, Hu Xiangjiu, and Huang Jingwen. On November 24, the CMC granted final approval for the academy and issued to the entire army "instructions for the transfer of 300 teaching assistants and 1,000 cadets to the Military Engineering Academy." The same day, the preparatory committee submitted a preliminary plan for school buildings to the Central Military Commission.
On June 3, 1952, Premier Zhou Enlai wrote to Vice Chairman Nikolai Bulganin of the Council of Ministers of the Soviet Union asking for consultants and experts to help establish the new Military Academy of Engineering. On May 13, 1953, a Soviet advisory group arrived. After the establishment of the institute, more than 300 Soviet experts in various fields participated in its construction; this relationship continued until the breakdown in Sino-Soviet relations in the 1960s. As part of the establishment, the Central Military Commission used volunteers from the 3rd Army Corps, the Second Advanced Infantry School of the Southwest Military Region, and the Military Science Research Office of the East China Military Region. Upon completion, the training of military science and technology talent was extended to all service branches.
On December 15, 1952, the Central Military Commission approved the establishment of the Military Academy of Engineering Construction Committee. On January 30, 1953, the CMC General Political Department approved the Communist Party Committee of the Academy of Military Engineering. On February 21, 1953, the Central Military Commission placed the Academy of Military Engineering under its direct leadership. On April 25, construction began. On May 15, 1953, Chairman Mao Zedong ordered a limit of 800 graduating students per year. On August 26, 1953, Mao Zedong issued the instructions for the establishment of the Military Academy of Engineering and its first semester, and wrote the inscription for it.
On 1 September 1953, the Military Academy of Engineering opened its doors, with the opening ceremony attended by Deputy Chief of Staff Zhang Zongxun. Zhou Enlai, Zhu De, He Long, Liu Bocheng, Luo Ronghuan and others wrote inscriptions for the college. The academy consisted of 6 departments, 5 colleges, 22 junior colleges, and 24 undergraduate majors. The colleges were 1st Air Force Engineering, 2nd Artillery Engineering, 3rd Naval Engineering, 4th Armored Force Engineering and 5th Corps of Engineers.
On 1 September 1955, the academy's curriculum was lengthened to five years. At the same time, the Ministry of Science and Education issued the "Interim Regulations (Draft) for Postgraduate Classes" as part of the development of the academy's graduate program.
The university was founded as the premier military academy of China. It had five departments and one preparatory department at the time of its founding; departments of Chemical Warfare and Nuclear Warfare were established later.
During the Cultural Revolution, the university was transferred to the Committee of National Defense Science and Technology and renamed the Harbin Academy of Engineering in 1966. In 1970, the main body of the university, including its central administration and four departments, moved to Changsha in South-Central China due to a possible war with the Soviet Union, and was renamed the Changsha Institute of Technology. Other departments were converted into at least five independent national universities, or absorbed by other national universities or scientific research institutes. The university's name was changed to the People's Liberation Army National University of Defense Science and Technology in 1978, after the Cultural Revolution was over.
In 2017, as part of the massive military reforms, the PLA Electronic Engineering Institute, PLA Institute of International Relations, PLA National Defense Information Institute, PLA Xi’an Institute of Communication and The Institute of Meteorology and Oceanography of the University of Science and Technology were consolidated under the NUDT.
Campus
NUDT is located in the urban area of Changsha, capital of Hunan Province in South-Central China, covering a total area of 373 hectares, or 922 acres.
Administration
President: Lt. Gen. Deng Xiaogang, Academician of the Chinese Academy of Sciences
Political Commissar: Lt. Gen. Liu Nianguang
Academics
The university consists of 11 colleges administering over 40 departments, institutes and laboratories, four national key laboratories and one key laboratory at the Ministry of Education level.
The 11 colleges of the university include:
College of Liberal Arts and Sciences
College of Systems Engineering
College of Computer Science and Technology
College of Electronic Engineering
College of Electronic Science and Technology
College of Meteorology and Oceanography
College of Advanced Interdisciplinary Studies
College of Information and Communication
College of Intelligence Science and Technology
College of Aerospace Science and Engineering
College of International Studies
Statistics
Currently, NUDT has over 2,000 faculty members, over 300 of whom are professors. There are 14,000 full-time students including 8,400 undergraduates and 5,600 graduates. NUDT offers 25 subjects for undergraduates, 112 programs for master's degree candidates, and 69 programs for PhD candidates. 11 post-doctoral research stations have been authorized on campus.
NUDT supercomputers
Yinhe-I (YH-I)
Yinhe-I was developed in 1983 as the leading supercomputer in China, with a performance of 100 MFLOPS.
Yinhe-II (YH-II)
Yinhe-II was built in 1992, achieving a performance of 1 GFLOPS.
Yinhe-III
Yinhe-II was upgraded to Yinhe-III in 1996, achieving 13 GFLOPS.
Tianhe-I
Tianhe-I was first revealed to the public in 2009, and was immediately ranked as the world's fifth fastest supercomputer in the TOP500 list released at the 2009 Supercomputing Conference (SC09) held in Portland, Oregon.
Tianhe-IA
In October 2010, Tianhe-IA, an upgraded supercomputer achieving a performance level of 2.57 PFLOPS, was unveiled at HPC 2010 China and ranked as the world’s fastest supercomputer in the TOP500 list.
In November 2011, the Tianhe-1A ranked as the second fastest supercomputer in the world on TOP500 after it was surpassed by K Computer by Fujitsu of Japan.
Tianhe-2
Tianhe-2 was the fastest supercomputer in the world from 2013 to 2016, when it was surpassed by the Sunway TaihuLight at the Chinese National Supercomputing Center.
Tianhe-2A
Tianhe-2A is currently ranked as the world's fourth fastest supercomputer on the TOP500 list, having achieved a performance level of 61.44 PFLOPS.
See also
TOP500
HPC Challenge Benchmark
Supercomputer centers in China
References
External links
China's Supercomputer Fastest in the World
Fear Not China’s Supercomputer
2016中国大学排行榜, 国防科大跻身7星级大学
Universities and colleges in Changsha
Military education and training in China
Project 985
Project 211
1953 establishments in China |
33956622 | https://en.wikipedia.org/wiki/BlueStacks | BlueStacks | BlueStacks is an American technology company known for the BlueStacks App Player and other cloud-based cross-platform products. The BlueStacks App Player allows Android applications to run on PCs running Microsoft Windows and macOS. The company was founded in 2009 by Jay Vaishnav, Suman Saraf, and Rosen Sharma, former CTO at McAfee and a board member of Cloud.com.
History
The company was announced in May 2011 at the Citrix Synergy conference in San Francisco. Citrix CEO Mark Templeton demonstrated an early version of BlueStacks onstage and announced that the companies had formed a partnership. The public alpha version of App Player was launched in October 2011. App Player exited beta on June 7, 2014. In July 2014, Samsung announced it had invested in BlueStacks. This brought total outside investment in BlueStacks to $26 million.
BlueStacks App Player
The App Player, software that virtualizes an Android OS, can be downloaded in versions for Windows 10 and macOS. The software's basic features are free to download and use; advanced optional features require a paid monthly subscription. The company claimed the App Player could run 1.5 million Android apps as of November 2019, and that its apps had been downloaded over 1 billion times as of February 2021. App Player features mouse, keyboard, and external touchpad controls.
In June 2012, the company released an alpha version of its App Player software for macOS, while the beta version was released in December of that year.
BlueStacks 2
In April 2015, BlueStacks, Inc. announced that a new version of App Player for macOS, 2.0, was in development; it was released in July. In December 2015, BlueStacks, Inc. released BlueStacks 2.0, which lets users run multiple Android applications simultaneously. BlueStacks 2.0 was also available for Mac OS X 10.9 Mavericks or later until 2018.
In April 2016, the company released BlueStacks TV which integrated Twitch.tv directly into the BlueStacks App Player. This addition allows users to stream their apps to Twitch without the need for extra hardware or software.
BlueStacks released Facebook Live integration in September 2016, allowing users to stream their gameplay to their Facebook profiles, Pages they control, or Facebook Groups they belong to.
BlueStacks 3
In July 2017, BlueStacks released BlueStacks 3 based on a brand new engine and front-end design. BlueStacks 3 added App Center which personalizes game suggestions, an account system, chat, new keymapping interface, and multi-instance. Multi-instance allows users to launch multiple BlueStacks windows using either the same or different Google Play account.
BlueStacks 3N
In January 2018, BlueStacks announced the release of the BlueStacks + N Beta which runs on Android 7 (Android Nougat) and claimed to be the first and only Android gaming platform to have Android 7 at the time, since the majority of Android emulators ran Android 4.4 (KitKat), including prior BlueStacks versions. This beta version was powered by an upgraded "HyperG" graphics engine allowing BlueStacks to utilize the full array of Android 7 APIs.
BlueStacks 4
In September 2018, BlueStacks announced the release of its latest flagship version, BlueStacks 4. According to the AnTuTu benchmark, BlueStacks 4 performs 6-8x faster than every major mobile phone. BlueStacks 4 also includes dynamic resource management, which initializes only the required Android libraries, thus freeing resources. A new dock and search offer a cleaner user interface, and a new AI-powered key-mapping tool automatically maps keys in supported games, with key customization also available for further tweaking. In addition, BlueStacks 4 supports both the 32-bit and 64-bit versions of Android 7.1.2 Nougat.
Development for macOS was restarted; version 4 was first released for Mac in January 2019 and is available from the website as of November 2019.
In January 2019, BlueStacks released a 64-bit version of BlueStacks 4 via its early access program. This version runs on a 64-bit version of Android 7.1.2 which allows for improved performance, and more efficient memory usage. The prerequisites for running this build include running a 64-bit version of Windows 8 or later, with virtualization enabled, and Hyper-V disabled. This 64-bit release allows the installation and usage of ARM64-v8a android applications.
BlueStacks 5
In May 2021, BlueStacks released BlueStacks 5.
BlueStacks X
In September 2021, BlueStacks launched BlueStacks X, a cloud-based Android gaming platform. Driven by Hybrid Cloud technology, BlueStacks X is powered by now.gg, a sister company to BlueStacks.
Minimum requirements
Current minimum requirements for App Player for Windows include: Windows 7 or higher, 2 GB or more of system memory, 5 GB of hard drive space, administrator rights, and an Intel or AMD processor. BlueStacks conflicts with the BitDefender antivirus software. Updating to the latest graphics card driver version is also recommended.
Minimum requirements for macOS are: macOS Sierra or higher, 4 GB RAM, 4 GB disk space, and a Mac model newer than 2014. BlueStacks has stated that they do not yet support Apple silicon.
See also
Android-x86
VirtualBox
References
Android (operating system)
Software companies based in the San Francisco Bay Area
Android emulation software
MacOS Internet software
Windows Internet software
Companies based in Campbell, California
Software companies established in 2009
2009 establishments in California
Software companies of the United States |
6127976 | https://en.wikipedia.org/wiki/TextGrid | TextGrid | TextGrid is a research group with the goal of supporting access to and the exchange of information in the humanities and social sciences with the help of modern information technology (the grid).
The project is funded by the Federal Ministry of Education and Research (Germany). TextGrid is one project within the D-Grid initiative and part of WissGrid.
Project phases
The TextGrid project started in February 2006. During the first phase of the project, development of the software began, and an infrastructure for the virtual research environment was built. During the second project phase (June 2009 to May 2012), a productive version of the software was created and the user base was broadened. The project has since reached its third funding phase (June 2012 to May 2015), entitled "TextGrid - Virtual Research Environment for the Humanities". The objective of this phase is the transition of the project to continuous operation in financial, legal and technical terms.
The following partner institutions are participating during the third funding phase of TextGrid:
Berlin-Brandenburg Academy of Sciences and Humanities, Berlin
DAASI International GmbH, Tübingen
University of Applied Sciences Worms, Faculty of Computer Sciences
University of Göttingen, Göttingen State and University Library (project management)
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen
Institut für Deutsche Sprache, Mannheim
University of Würzburg
Max Planck Institute for the History of Science, Berlin
Berlin Institute of Technology
Technische Universität Darmstadt, Department for Literary Studies
Software
Scholars from the humanities and cultural studies can carry out their research on text-based data, jointly or alone, using the TextGrid software. This environment consists of two main components:
The TextGrid Repository is a long-term storage facility for subject-specific content, in which digital research data can be stored. It guarantees the long-term availability of and access to data, and also supports cooperation among scholars.
The TextGrid Laboratory is the client software of the virtual research environment; it combines diverse services and tools within an interface that can be used intuitively. The TextGridLab can be extended and therefore used by scholars from diverse fields of research and interest. The software is available for all prevalent operating systems.
Further services
Users of TextGrid can contact the developers of the software and other people involved in the project. For this purpose, several open mailing lists are available. Furthermore, user meetings are arranged on a regular basis, and training courses and workshops are offered.
User projects
A number of use cases and edition projects working with TextGrid have since been established. More information on these projects and their experiences with TextGrid can be found on the TextGrid homepage.
Literature
TextGrid. In: Heike Neuroth, Martina Kerzel, Wolfgang Gentzsch (eds.): Die D-Grid Initiative. Universitätsverlag Göttingen, Göttingen 2007, pp. 64–66.
Heike Neuroth, Felix Lohmeier, Kathleen Marie Smith: TextGrid. Virtual Research Environment for the Humanities. In: The International Journal of Digital Curation. 6, No. 2, 2011, pp. 222–231.
External links
Official Website of the TextGrid project
TextGrid user manual
TextGrid documentation for developers
Itemization
Research institutes in Germany |
26168420 | https://en.wikipedia.org/wiki/WinDirStat | WinDirStat | WinDirStat is a free and open-source graphical disk usage analyzer for Microsoft Windows. It is notable for presenting a sub-tree view with disk use percentage alongside a usage-sorted list of file extensions that is interactively integrated with a colorful graphical display (a treemap). Created as an open source project released under the GNU GPL, it was developed using Visual C++/MFC 7.0 and distributed using SourceForge. The project was inspired by SequoiaView, an application based on research done by the Visualization Section of the Faculty of Mathematics and Computer Science at the Technische Universiteit Eindhoven.
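The treemap idea itself is simple to sketch. The following minimal Python illustration is not WinDirStat's actual code (the program is written in C++, and its SequoiaView-style treemaps use more sophisticated layout and cushion shading); it shows only the simplest "slice-and-dice" variant, in which each file receives a rectangle whose area is proportional to its size:

def treemap(items, x, y, w, h, horizontal=True):
    """Partition the rectangle (x, y, w, h) among (name, size) pairs."""
    total = sum(size for _, size in items)
    rects, offset = [], 0.0
    for name, size in items:
        frac = size / total
        if horizontal:  # slice along the width
            rects.append((name, (x + offset, y, w * frac, h)))
            offset += w * frac
        else:           # slice along the height
            rects.append((name, (x, y + offset, w, h * frac)))
            offset += h * frac
    return rects

# Example: three files sharing a 100x60 canvas.
for name, rect in treemap([("a.iso", 700), ("b.mkv", 200), ("c.txt", 100)],
                          0, 0, 100, 60):
    print(name, [round(v, 1) for v in rect])

A real layout recurses into subdirectories and alternates or optimizes the slicing direction so that rectangles stay close to square and remain readable.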
Popularity
WinDirStat has been downloaded more than 9.2 million times from the official source since the original release in October 2003. As of July 2014, it is the second most downloaded "Filesystems" software on SourceForge, with over 13,000 downloads per week, and it remains in wide use.
Project status
The most recent release was in September 2007 and development stopped for some time after that. However, the project's blog noted that development resumed in May 2009 and some updates to the code were added in 2016.
Source code
Source code is provided for all released versions on the SourceForge page in ZIP format.
WinDirStat is developed via Mercurial revision control.
Features
List of detected file extensions, and the percentage of space each file extension takes up.
Each extension has its own color on the graphical map.
Is able to scan internal, external and networked drives.
Portable version available in addition to the installer
User-created cleanup jobs
Reports can be sent via email
Media reception
FossHub (an official download mirror of WinDirStat) reported 6,912,000 downloads as of January 2019, making it the most downloaded software in the site's "Disk Analysers" category.
Steve Bass of PC World provided a brief review of the 1.1.2 release of WinDirStat, summarizing its usefulness: "Windirstat is [a] colorful and nifty tool to check the makeup of your hard drive -- especially if you're looking for immense files. It scans your drive and produces a treemap that shows each file as a colored rectangle that's proportional to the file's size..."
In 2006, WinDirStat was "Download of the day" on Lifehacker. Reviewer Adam Pash praised WinDirStat for its ability to easily clean up unnecessary files, stating: "If you find a large file or two taking up loads of space that you had forgotten was there and don't need, it's easy to clean up directly from WinDirStat."
CNET reviewed the most recent release of WinDirStat and gave it 5 of 5 stars. It called WinDirStat a "great piece of freeware" and noted: "It's one of those tools that you didn't know you needed until you started using it, but once installed, it's hard to imagine life without it..."
Gizmo's Freeware directory featured WinDirStat in a January 2010 list of best free disk analysis software with a 4 of 5 stars review, noting: "The open source program WinDirStat is [an] outstanding program. It uses three ways to display the disk usage: a directory list, a file extension list and a rectangular treemap. The visual presentation, overall usability and scan speed makes this a great tool to visualize disk usage."
The German computer magazine c't (magazin für computertechnik) published a review of WinDirStat and bundled the program on a cover CD in October 2006.
In 2011, Jack Wallen from TechRepublic called WinDirStat: "one of those simple little apps you are going to be very thankful you have when you need it." He also highlighted its usability: "If space is an issue ... you will see just how much time this tool can save you." However, Jack Wallen criticized the documentation, stating: "The biggest issue with WinDirStat is the documentation. The minute you try to create your own user-configured cleanup routines you will quickly experience a complete lack of documentation, which makes the task rather challenging, if not impossible."
References
External links
Official Mercurial source code repository
Lifehacker download of the day
2003 software
Disk usage analysis software
Free system software
Portable software |
18604082 | https://en.wikipedia.org/wiki/Linux%20Symposium | Linux Symposium | The Linux Symposium was a Linux and Open Source conference held annually in Canada from 1999 to 2014. The conference was initially named Ottawa Linux Symposium and was held only in Ottawa, but was renamed after being held in other cities in Canada. Even after the name change, however, it was still referred to as OLS. The conference featured 100+ paper presentations, tutorials, birds of a feather sessions and mini summits on a wide range of topics. There were 650 attendees from 20+ countries in 2008.
History
The 2009 Symposium was held in Montréal, Quebec.
The 2011 and 2012 Symposium were both held in Ottawa.
In 2014, OLS organizers put together an unsuccessful campaign on Indiegogo to raise funds in order to pay off debts from previous events.
Keynote speakers
1999 - Alan Cox
2000 - David S. Miller, Miguel de Icaza
2001 - Ted Ts'o
2002 - Stephen Tweedie
2003 - Rusty Russell
2004 - Andrew Morton
2005 - David Jones
2006 - Greg Kroah-Hartman
2007 - James Bottomley
2008 - Matthew Wilcox, Werner Almesberger, Mark Shuttleworth
2009 - Keith Bergelt, Jonathan Corbet, Dirk Hohndel
2010 - Jon C. Masters, Tim Riker
2011 - Jon "maddog" Hall
2014 - Jeff Garzik
Mini-summits
The Symposium hosted "mini-summits" on the day before the conference. They were open to all conference attendees and had their own programme. Five mini-summits were hosted in 2008, including: Virtualization, Security-Enhanced Linux, Kernel Container Developers', Linux Power Management and Linux Wireless LAN. There were two mini-summits in 2009: Linux Power Management and Tracing.
See also
List of free-software events
References
External links
Ottawa Linux Symposium 10, Day 1 at Linux.com
Proceedings of the Ottawa Linux Symposium at kernel.org
Linux conferences |
750227 | https://en.wikipedia.org/wiki/Simple%20public-key%20infrastructure | Simple public-key infrastructure | Simple public key infrastructure (SPKI, pronounced spoo-key) was an attempt to overcome the complexity of traditional X.509 public key infrastructure. It was specified in two Internet Engineering Task Force (IETF) Request for Comments (RFC) specifications— and —from the IETF SPKI working group. These two RFCs never passed the "experimental" maturity level of the IETF's RFC status. The SPKI specification defined an authorization certificate format, providing for the delineation of privileges, rights or other such attributes (called authorizations) and binding them to a public key. In 1996, SPKI was merged with Simple Distributed Security Infrastructure (SDSI, pronounced sudsy) by Ron Rivest and Butler Lampson.
History and overview
The original SPKI had identified principals only as public keys but allowed binding authorizations to those keys and delegation of authorization from one key to another. The encoding used was attribute:value pairing, similar to headers.
The original SDSI bound local names (of individuals or groups) to public keys (or other names), but carried authorization only in Access Control Lists (ACLs) and did not allow for delegation of subsets of a principal's authorization. The encoding used was standard S-expressions. A sample RSA public key in SPKI's "advanced transport format" follows (for actual transport, the structure would be Base64-encoded):
(public-key
(rsa-pkcs1-md5
(e #03#)
(n
|ANHCG85jXFGmicr3MGPj53FYYSY1aWAue6PKnpFErHhKMJa4HrK4WSKTO
YTTlapRznnELD2D7lWd3Q8PD0lyi1NJpNzMkxQVHrrAnIQoczeOZuiz/yY
VDzJ1DdiImixyb/Jyme3D0UiUXhd6VGAz0x0cgrKefKnmjy410Kro3uW1| )))
The combined SPKI/SDSI allows the naming of principals, the creation of named groups of principals, and the delegation of rights or other attributes from one principal to another. It includes a language for the expression of authorization, including a definition of the "intersection" of authorizations. It also includes the notion of a threshold subject: a construct granting authorizations (or delegations) only when a specified number of the listed subjects concur (in a request for access or a delegation of rights). SPKI/SDSI uses S-expression encoding, but specifies a binary form that is extremely easy to parse (an LR(0) grammar) called Canonical S-expressions.
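As an illustrative sketch only (the hash values are dummies, and RFC 2693 defines the exact field forms), an authorization certificate granting FTP access to a threshold subject, with onward delegation permitted, might be written in the same advanced transport format as:

(cert
  (issuer (hash sha1 |TLCgPLFlGTzyUbcaYLW8kGTEnUk=|))
  (subject
    (k-of-n 2 3
      (hash sha1 |Ve1LQ7MqiJcjWLSaQl10fl3tuTQ=|)
      (hash sha1 |bLZiWdk1twxxMSS7BeUZAQS5IeA=|)
      (hash sha1 |dpYhCVlrzBxxFC7UF8jxAcRI9mI=|)))
  (propagate)
  (tag (ftp ftp.example.com))
  (not-after "2026-01-01_00:00:00"))

Here (propagate) permits the subject to delegate the authorization onward, the (tag ...) element carries the authorization itself, and the k-of-n subject requires two of the three listed keys to concur before the certificate takes effect.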
SPKI/SDSI does not define a role for a commercial certificate authority (CA). In fact, one premise behind SPKI is that a commercial CA serves no useful purpose.
As a result, SPKI/SDSI is deployed primarily in closed solutions and in demonstration projects of academic interest. Another side effect of this design element is that it is difficult to monetize SPKI/SDSI by itself. It can be a component of some other product, but there is no business case for developing SPKI/SDSI tools and services except as part of some other product.
The most prominent general deployments of SPKI/SDSI are E-speak, a middleware product from HP that used SPKI/SDSI for access control of web methods, and UPnP Security, which uses an XML dialect of SPKI/SDSI for access control of web methods, delegation of rights among network participants, and so on.
See also
SPKAC
Notes
External links
SPKI homepage,
JSDSI (open source development effort)
CDSA (open source development effort).
Key management |
1704894 | https://en.wikipedia.org/wiki/Monotone%20%28software%29 | Monotone (software) | Monotone is an open source software tool for distributed revision control.
Monotone tracks revisions to files, groups sets of revisions into changesets, and tracks history across renames. The focus of the project is on integrity over performance. Monotone is designed for distributed operation, and makes heavy use of cryptographic primitives to track file revisions (via the SHA-1 secure hash) and to authenticate user actions (via RSA cryptographic signatures).
History
Milestones
Monotone version 0.26 introduced major changes to the internal database structures, including a new structure known by Monotone developers as a roster. Monotone databases created with version 0.26 cannot exchange revisions with older Monotone databases; older databases must first be upgraded to the new format. The new netsync protocol is likewise incompatible with earlier versions of Monotone.
As Git inspiration
In April 2005, Monotone became the subject of increased interest in the FOSS community after Linus Torvalds mentioned it as a possible replacement for BitKeeper in the Linux development process. In a post on the Linux kernel mailing list, Torvalds praised Monotone and disparaged Subversion (and, by extension, all client-server version-control systems).
Instead of adopting Monotone, Torvalds decided to write his own SCM system, Git. Git's design uses some ideas from Monotone, but the two projects do not share any core source code. Git has a much stronger focus on high performance, inspired by the lengthy history and demanding distributed modes of collaboration used by Torvalds and the other Linux kernel authors. Torvalds later commented critically on Monotone's design and performance.
A key issue debated was whether the replacement of BitKeeper should support cherry picking, whereby a tree maintainer can approve a subset of patches while rejecting others on an individual basis. Torvalds argued that this approach "results in the wrong dynamics and psychology in the system" by shifting burden to the upstream maintainers rather than forcing downstream maintainers to put more effort into keeping their trees free from garbage. He further argued that Monotone is correct in its aversion to cherry-picking as a feature, but then failed to take it far enough by not making it easy enough to "throw away" unclean working trees after their purpose is served. Torvalds also noted his perception that Monotone at that time had not achieved the performance level required by a project as large as Linux kernel development.
Design
Like GNU arch, and unlike Subversion, Monotone takes a distributed approach to version control. Monotone uses SHA-1 hashes to identify specific files or groups of files, as with Git and Mercurial, in place of linear revision numbers. Each participant maintains their own revision history, stored in a local SQLite database.
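The idea of identifying a file version by the SHA-1 hash of its content, rather than by a revision number assigned by a server, can be illustrated in a few lines of Python; this is a conceptual sketch only, not Monotone's actual C++ implementation:

import hashlib

def file_id(content: bytes) -> str:
    # Content addressing: the identifier depends on the bytes alone, so any
    # two participants computing it independently agree on the name.
    return hashlib.sha1(content).hexdigest()

print(file_id(b"print('hello')\n"))   # same content -> same id, everywhere
print(file_id(b"print('hello!')\n"))  # any change -> a different id

Because identifiers derive from content rather than from a central counter, independently maintained databases can exchange and reconcile revisions without coordinating a global revision number.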
Integrity
Prior to some heavy optimisation in revision 0.27, Monotone's emphasis on correctness over optimisation was often blamed for poor initial experiences. The first action of a new user is often to synchronize (clone) a large existing Monotone database, an action which often took hours for large databases, due to the extensive validation and integrity checking which Monotone performs when revisions are moved through the network. Once the initial (clone) database is populated, subsequent actions usually proceed more rapidly. There remains room for further optimisation of some rarer functions.
Workflow
Monotone is especially strong in its support for a diverge/merge workflow, which it achieves in part by always allowing commit before merge.
Networking
Although Monotone originally supported a variety of networking protocols for synchronizing trees, it now exclusively uses a custom protocol called netsync, which is more robust and efficient, and shares some conceptual ground with rsync and cvsup. (However, as of version 0.27, it is possible to use the netsync protocol over any stream, notably including ssh connections.) Netsync has its own IANA-assigned port (4691) and older versions of it are supported by a Wireshark plug-in for traffic analysis. There is no separate Monotone server because any Monotone client can act as a server.
Other features
Other features of Monotone include:
Good support for internationalization and localization
Portable design, implemented in C++
High integrity is a key design goal
Monotone can import CVS projects.
Signing of revisions using RSA certificates
Easy to learn, due to a command set similar to that of CVS
Very good at branching (both divergences within a branch and named branches) and merging
Good documentation
Very low maintenance
Comprehensive Perl library that allows complete control of Monotone from a Perl script (mtn-browse makes use of this)
Drawbacks
Possible drawbacks of Monotone include:
Potential users cannot check out (or commit) from behind a proxy (very common in corporate environments) due to the non-HTTP protocol.
Performance issues for certain operations (most noticeably the initial pull)
Implementation
Monotone is implemented in modern-dialect C++ on top of the Boost library, the Botan cryptography library, and the SQLite database library. Monotone supports customization and extension via hook functions written in the Lua programming language. The monotone build process is automated with BuildBot and includes extensive regression tests.
See also
Comparison of revision control software
List of revision control software
References
External links
Free version control software
Free software programmed in C++
Distributed version control systems
Lua (programming language)-scriptable software |
13462986 | https://en.wikipedia.org/wiki/Russian%20Business%20Network | Russian Business Network | The Russian Business Network (commonly abbreviated as RBN) is a multi-faceted cybercrime organization, specializing in and in some cases monopolizing personal identity theft for resale. It is the originator of MPack and an alleged operator of the now defunct Storm botnet.
The RBN, which is notorious for its hosting of illegal and dubious businesses, originated as an Internet service provider for child pornography, phishing, spam, and malware distribution, physically based in St. Petersburg, Russia. By 2007, it had developed partner and affiliate marketing techniques in many countries to provide a method for organized crime to target victims internationally.
Activities
According to internet security company VeriSign, RBN was registered as an internet site in 2006.
Initially, much of its activity was legitimate, but apparently the founders soon discovered that it was more profitable to host illegitimate activities, and the network began hiring out its services to criminals.
The RBN has been described by VeriSign as "the baddest of the bad". It offers web hosting services and Internet access to a wide range of criminal and objectionable activities, with individual activities earning up to $150 million in one year. Businesses that take active stands against such attacks are sometimes targeted by denial of service attacks originating in the RBN network. RBN has been known to sell its services to these operations for $600 per month.
The business is difficult to trace. It is not a registered company, and its domains are registered to anonymous addresses. Its owners are known only by nicknames. It does not advertise, and trades only in untraceable electronic transactions.
One increasingly well-known activity of the RBN is the delivery of exploits through fake anti-spyware and anti-malware, for the purposes of PC hijacking and personal identity theft. McAfee SiteAdvisor tested 279 “bad” downloads from malwarealarm.com, mentioned in the referenced Dancho Danchev article, and found that MalwareAlarm is an update of the fake anti-spyware Malware Wiper. The user is enticed to use a “free download” to test for spyware or malware on their PC; MalwareAlarm then displays a warning message about problems on the PC to persuade the unwary web site visitor to purchase the paid version. In addition to MalwareAlarm, numerous instances of rogue software are linked to and hosted by the RBN.
According to a since-closed Spamhaus report, RBN is “Among the world's worst spammer, malware, phishing and cybercrime hosting networks. Provides 'bulletproof hosting', but is probably involved in the crime too”. Another Spamhaus report states, "Endless Russian/Ukrainian funded cybercrime hosting [at this network]." On October 13, 2007, RBN was the subject of a Washington Post article, in which Symantec and other security firms claim RBN provides hosting for many illegal activities, including identity theft and phishing.
Routing operations
The RBN operates (or operated) on numerous Internet Service Provider (ISP) networks worldwide and resides (resided) on specific IP addresses, some of which have Spamhaus blocklist reports.
Political connections
It has been alleged that the RBN's leader and creator, a 24-year-old known as Flyman, is the nephew of a powerful and well-connected Russian politician. Flyman is alleged to have turned the RBN towards its criminal users. In light of this, it is entirely possible that past cyber-terrorism activities, such as the denial of service attacks on Georgia and Azerbaijan in August 2008, may have been co-ordinated by or out-sourced to such an organization. Although this is currently unproven, intelligence estimates suggest this may be the case.
See also
List of spammers
Russian Mafia
Cyberwarfare in Russia
References
External links
Spamhaus – Rokso listing and description of RBN activities
RBN Study - bizeul org - PDF
Shadowserver - RBN as RBusiness Network AS40898 - Clarifying the guesswork of Criminal Activity - PDF
Cybercrime
Internet fraud
Rogue software
Factions of the Russian Mafia |
7431839 | https://en.wikipedia.org/wiki/Intersection%20capacity%20utilization | Intersection capacity utilization | Intersection Capacity Utilization (ICU) is a method for measuring a roadway intersection's capacity. It is well suited to transportation planning applications such as roadway design, congestion management programs and traffic impact studies, but it is not intended for traffic operations or signal timing design. ICU is also defined as "the sum of the ratios of approach volume divided by approach capacity for each leg of intersection which controls overall traffic signal timing plus an allowance for clearance times." The ICU tells how much reserve capacity is available or how much the intersection is over capacity. The ICU does not predict delay, but it can be used to predict how often an intersection will experience congestion.
The ICU uses a grading system, known as the Level of Service (LOS), to rank the intersection being studied. ICU is timing-plan independent, yet has rules to ensure that minimum timing constraints are taken into account; this removes the choice of timing plan from the capacity results. The ICU can also be applied to uncontrolled intersections to determine what the capacity utilization would be if the intersection were signalized. The primary output from ICU is similar to the intersection's volume-to-capacity ratio. Benefits of using ICU over delay-based methods include greater accuracy and a clearer picture of the intersection's volume-to-capacity ratio.
The ICU method faces some competition from the Highway Capacity Manual (HCM). Both methods are used to determine the LOS of an intersection, but each applies different criteria to the rankings, and in practice each is used for different types of projects. The ICU's review board continues to make changes every year in order to incorporate new criteria as required.
The ICU has not been designed for operations and signal timing design; delay-based methods and simulation tools such as the HCM, PTV Vistro, Synchro, and SimTraffic should be used for those tasks. In short, the ICU method makes planning-level capacity analysis a simpler task.
Methodology
The ICU methodology requires a specific set of data to be collected. The data needs to include volumes, number of lanes, saturation flow rates, signal timings, reference cycle length, and lost time for an intersection.
The method sums the amount of time required to serve all movements at saturation for a given cycle length and divides by that reference cycle length. This is similar to taking a sum of critical volume-to-saturation flow ratios (v/s), yet allows minimum timings to be considered. The ICU method uses the Level of Service concept, which reports the amount of reserve capacity or capacity deficit. In order to calculate the Level of Service for the ICU method, the ICU for an intersection must be computed first (a worked sketch appears after the grading criteria below). ICU is computed as:
ICU = sum over critical movements i of [max(tMini, (v/s)i * CL) + tLi] / CL
where:
CL = reference cycle length
tLi = lost time for critical movement i
(v/s)i = volume-to-saturation flow ratio for critical movement i
tMini = minimum green time for critical movement i
Once the ICU is fully calculated for an intersection, the ICU LOS for that intersection can be calculated based on the following criteria:
Level of Service
A: If ICU is less than or equal to 55%
B: If ICU is greater than 55% but no more than 64%
C: If ICU is greater than 64% but no more than 73%
D: If ICU is greater than 73% but no more than 82%
E: If ICU is greater than 82% but no more than 91%
F: If ICU is greater than 91% but no more than 100%
G: If ICU is greater than 100% but no more than 109%
H: If ICU is greater than 109%
This grading scale conveys specific details about the intersection:
A: Intersection has no congestion
B: Intersection has very little congestion
C: Intersection has no major congestion
D: Intersection normally has no congestion
E: Intersection is on the verge of congested conditions
F: Intersection is over capacity and likely experiences congestion periods of 15 to 60 consecutive minutes.
G: Intersection is up to 9% over capacity and experiences congestion periods of 60 to 120 consecutive minutes.
H: The intersection is more than 9% over capacity and could experience congestion periods of over 120 minutes per day.
Most industry standards require the ICU LOS to be E or better. This is not always easy to achieve, and therefore much care is given to the signal timings and geometry in order to bring the LOS to E or better.
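The calculation and grading above can be condensed into a short sketch (Python; the movement values and timing parameters below are illustrative assumptions, and identification of the critical movements is taken as already done):

    def intersection_capacity_utilization(movements, cycle_length):
        """movements: iterable of (v_over_s, min_green_s, lost_time_s)
        tuples, one per critical movement; cycle_length in seconds."""
        total = 0.0
        for v_over_s, t_min, t_lost in movements:
            required_green = v_over_s * cycle_length  # (v/s)i * CL
            total += max(t_min, required_green) + t_lost
        return total / cycle_length

    def icu_level_of_service(icu):
        """Map an ICU value to the A-H grading scale given above."""
        for limit, grade in [(0.55, "A"), (0.64, "B"), (0.73, "C"),
                             (0.82, "D"), (0.91, "E"), (1.00, "F"),
                             (1.09, "G")]:
            if icu <= limit:
                return grade
        return "H"

    # Two critical movements on a 120-second reference cycle:
    icu = intersection_capacity_utilization(
        [(0.35, 10.0, 4.0), (0.30, 8.0, 4.0)], cycle_length=120.0)
    print(round(icu, 3), icu_level_of_service(icu))  # 0.717 C

Because the formula compares the required green time (v/s)i * CL against the minimum green time, very lightly loaded movements are still charged their minimum green, which is what distinguishes ICU from a plain sum of v/s ratios.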
ICU versus HCM
The HCM method is seen today as the more popular alternative for capacity analysis. The HCM is based on the estimated delay for an intersection. The ICU design is compatible with the HCM: the adjusted volumes used for ICU are the same as those required by the HCM. An intersection design that satisfies the HCM criteria will generally also satisfy the Level of Service (LOS) required by the ICU.
The LOS calculated for the ICU method is often confused with the LOS calculated for the HCM method. While both LOS provide information about the performance of the intersection, the specific way in which each of these methods grade an intersection is different. The HCM LOS is delay based while the ICU LOS reports the amount of reserve capacity or capacity deficit.
Some signal timing software like PTV Vistro and Synchro report ICU Level of Service. The Level of Service is reported on a scale ranging from A to H, A being the least congested condition and H being the worst condition.
Software
In order to do the ICU calculations, it is helpful to use software that accepts the traffic volumes and calculates the LOS for a particular intersection. To perform this function, however, complete data must be collected and the intersection lane geometry must be known. Once these are known, the LOS calculation becomes straightforward.
PTV Vistro, developed by PTV Group, has integrated the Intersection Capacity Utilization (ICU) into the software, for both the ICU1 and ICU2 methods. This integration allows users to analyze small to large signalized urban networks in a modern traffic analysis software platform. Using the ICU methods in PTV Vistro can be extremely useful for planning level analyses and help users determine intersection size and capacity. PTV Vistro includes an ICU check that verifies that the basic parameters needed to perform the ICU calculation have been entered for each intersection. In addition, the ICU evaluations are compatible with PTV Visum to develop macroscopic-level simulations.
Other software used in the industry includes Synchro 5.0 and SimTraffic. Both are produced by Trafficware Ltd. and work very similarly to each other. Synchro 5.0, the LOS analysis software, also calculates Intersection Capacity Utilization (ICU), which provides additional insight into how well an intersection is functioning and how much extra capacity is available to handle future growth, traffic fluctuation and incidents. The ICU calculation does not use existing signal timings or sign control; it simply calculates the ultimate capacity based on a fully protected, optimized signal timing plan at a cycle length of 120 seconds.
Uses
The Intersection Capacity Utilization method is most often found in a traffic impact study (TIS), prepared mostly by city officials and/or private civil consulting firms. The importance of a TIS is often seen when a new building is constructed in a downtown area: a TIS is prepared to assess the impact the new facility will have on the surrounding intersections. Existing traffic patterns are evaluated along with the new predicted volumes for the intersections, and existing geometry is evaluated to see if it needs to be changed to meet the new traffic demand at the intersections in question. The ICU method of finding the LOS is often emphasized in TIS reports to show how the intersections can be improved to accommodate the new influx of traffic.
References
Road transport |
105971 | https://en.wikipedia.org/wiki/Initialization%20vector | Initialization vector | In cryptography, an initialization vector (IV) or starting variable (SV) is an input to a cryptographic primitive being used to provide the initial state. The IV is typically required to be random or pseudorandom, but sometimes an IV only needs to be unpredictable or unique. Randomization is crucial for some encryption schemes to achieve semantic security, a property whereby repeated usage of the scheme under the same key does not allow an attacker to infer relationships between (potentially similar) segments of the encrypted message. For block ciphers, the use of an IV is described by the modes of operation.
Some cryptographic primitives require the IV only to be non-repeating, and the required randomness is derived internally. In this case, the IV is commonly called a nonce (number used once), and the primitives that accept one are considered stateful rather than randomized. This is because an IV need not be explicitly forwarded to a recipient but may be derived from a common state updated at both sender and receiver side. (In practice, a short nonce is still transmitted along with the message to account for message loss.) An example of a stateful encryption scheme is the counter mode of operation, which uses a sequence number as its nonce.
The IV size depends on the cryptographic primitive used; for block ciphers it is generally the cipher's block-size. In encryption schemes, the unpredictable part of the IV has at best the same size as the key to compensate for time/memory/data tradeoff attacks. When the IV is chosen at random, the probability of collisions due to the birthday problem must be taken into account. Traditional stream ciphers such as RC4 do not support an explicit IV as input, and a custom solution for incorporating an IV into the cipher's key or internal state is needed. Some designs realized in practice are known to be insecure; the WEP protocol is a notable example, and is prone to related-IV attacks.
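As an illustration of the birthday consideration mentioned above, a short sketch (Python; the message counts are illustrative) approximates the probability that at least two randomly chosen b-bit IVs collide:

    import math

    def iv_collision_probability(n_messages: int, iv_bits: int) -> float:
        """Birthday approximation: P(collision) ~ 1 - exp(-n(n-1) / (2 * 2^b))."""
        space = 2.0 ** iv_bits
        return 1.0 - math.exp(-n_messages * (n_messages - 1) / (2.0 * space))

    # A 24-bit IV (as WEP used) reaches ~50% collision probability after
    # only about 4,800 frames; a 128-bit random IV remains safe in practice.
    print(iv_collision_probability(4823, 24))     # ~0.5
    print(iv_collision_probability(10**9, 128))   # ~1.5e-21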
Motivation
A block cipher is one of the most basic primitives in cryptography, and frequently used for data encryption. However, by itself, it can only be used to encode a data block of a predefined size, called the block size. For example, a single invocation of the AES algorithm transforms a 128-bit plaintext block into a ciphertext block of 128 bits in size. The key, which is given as one input to the cipher, defines the mapping between plaintext and ciphertext. If data of arbitrary length is to be encrypted, a simple strategy is to split the data into blocks each matching the cipher's block size, and encrypt each block separately using the same key. This method is not secure as equal plaintext blocks get transformed into equal ciphertexts, and a third party observing the encrypted data may easily determine its content even when not knowing the encryption key.
To hide patterns in encrypted data while avoiding the re-issuing of a new key after each block cipher invocation, a method is needed to randomize the input data. In 1980, NIST published a national standard document designated Federal Information Processing Standard (FIPS) PUB 81, which specified four so-called block cipher modes of operation, each describing a different solution for encrypting a set of input blocks. The first mode implements the simple strategy described above, and was specified as the electronic codebook (ECB) mode. In contrast, each of the other modes describes a process where ciphertext from one block encryption step gets intermixed with the data from the next encryption step. To initiate this process, an additional input value must be mixed with the first block; this value is referred to as an initialization vector. For example, the cipher-block chaining (CBC) mode requires an unpredictable value, of size equal to the cipher's block size, as additional input. This unpredictable value is XORed with the first plaintext block before encryption. In turn, the ciphertext produced in the first encryption step is XORed with the second plaintext block before that block is encrypted, and so on. The ultimate goal for encryption schemes is to provide semantic security: by this property, it is practically impossible for an attacker to draw any knowledge from observed ciphertext. It can be shown that each of the three additional modes specified by NIST is semantically secure under so-called chosen-plaintext attacks.
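A minimal sketch of the chaining just described (Python, using the third-party cryptography package purely as a raw AES block primitive; the key, IV and message are illustrative, and real systems should use a vetted CBC implementation with padding):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
        """Textbook CBC: each plaintext block is XORed with the previous
        ciphertext block (the IV for the first block) before encryption.
        Assumes the plaintext length is a multiple of the 16-byte block."""
        ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        out, previous = [], iv
        for i in range(0, len(plaintext), 16):
            block = ecb.update(xor(plaintext[i:i + 16], previous))
            out.append(block)
            previous = block  # the chaining step
        return b"".join(out)

    key, iv = os.urandom(16), os.urandom(16)  # CBC needs an unpredictable IV
    ciphertext = cbc_encrypt(key, iv, b"sixteen byte msgsixteen byte msg")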
Properties
Properties of an IV depend on the cryptographic scheme used. A basic requirement is uniqueness, which means that no IV may be reused under the same key. For block ciphers, repeated IV values devolve the encryption scheme into electronic codebook mode: equal IV and equal plaintext result in equal ciphertext. In stream cipher encryption, uniqueness is crucially important as plaintext may be trivially recovered otherwise.
Example: Stream ciphers encrypt plaintext P to ciphertext C by deriving a key stream K from a given key and IV and computing C as C = P xor K. Assume that an attacker has observed two messages C1 and C2 both encrypted with the same key and IV. Then knowledge of either P1 or P2 reveals the other plaintext since
C1 xor C2 = (P1 xor K) xor (P2 xor K) = P1 xor P2.
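This cancellation can be reproduced directly. The sketch below (Python) uses a toy hash-based keystream in place of a real stream cipher; it is illustrative only, not a secure construction:

    import hashlib

    def keystream(key: bytes, iv: bytes, length: int) -> bytes:
        """Toy keystream for illustration only: stretch key||iv by
        repeated SHA-256 hashing with a counter."""
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key, iv = b"shared-secret-key", b"reused-iv"   # same key AND same IV
    p1, p2 = b"attack at dawn!", b"retreat at six!"
    c1 = xor(p1, keystream(key, iv, len(p1)))
    c2 = xor(p2, keystream(key, iv, len(p2)))

    # The keystream cancels: C1 xor C2 == P1 xor P2, so knowledge of
    # either plaintext reveals the other.
    assert xor(c1, c2) == xor(p1, p2)
    assert xor(xor(c1, c2), p1) == p2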
Many schemes require the IV to be unpredictable by an adversary. This is effected by selecting the IV at random or pseudo-randomly. In such schemes, the chance of a duplicate IV is negligible, but the effect of the birthday problem must be considered. As for the uniqueness requirement, a predictable IV may allow recovery of (partial) plaintext.
Example: Consider a scenario where a legitimate party called Alice encrypts messages using the cipher-block chaining mode. Consider further that there is an adversary called Eve that can observe these encryptions and is able to forward plaintext messages to Alice for encryption (in other words, Eve is capable of a chosen-plaintext attack). Now assume that Alice has sent a message consisting of an initialization vector IV1 and starting with a ciphertext block CAlice. Let further PAlice denote the first plaintext block of Alice's message, let E denote encryption, and let PEve be Eve's guess for the first plaintext block. Now, if Eve can determine the initialization vector IV2 of the next message she will be able to test her guess by forwarding a plaintext message to Alice starting with (IV2 xor IV1 xor PEve); if her guess was correct this plaintext block will get encrypted to CAlice by Alice. This is because of the following simple observation:
CAlice = E(IV1 xor PAlice) = E(IV2 xor (IV2 xor IV1 xor PAlice)).
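A sketch of this test (Python, again using AES via the cryptography package as the block encryption E; all names and values are illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key = os.urandom(16)  # known only to Alice
    encrypt_block = Cipher(algorithms.AES(key), modes.ECB()).encryptor().update

    # Alice encrypts her secret first block under IV1; the IVs and the
    # ciphertexts are public.
    iv1, p_alice = os.urandom(16), b"secret block 16B"
    c_alice = encrypt_block(xor(iv1, p_alice))

    # Eve learns IV2 in advance and submits a chosen plaintext that makes
    # the two IVs cancel out.
    iv2 = os.urandom(16)
    p_eve = b"secret block 16B"            # Eve's guess for Alice's block
    chosen = xor(xor(iv2, iv1), p_eve)     # IV2 xor IV1 xor PEve
    c_eve = encrypt_block(xor(iv2, chosen))  # Alice encrypts under IV2

    # E(IV2 xor IV2 xor IV1 xor PEve) == E(IV1 xor PAlice) iff the guess
    # was right, so matching ciphertext blocks confirm it.
    assert c_eve == c_alice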
Depending on whether the IV for a cryptographic scheme must be random or only unique the scheme is either called randomized or stateful. While randomized schemes always require the IV chosen by a sender to be forwarded to receivers, stateful schemes allow sender and receiver to share a common IV state, which is updated in a predefined way at both sides.
Block ciphers
Block cipher processing of data is usually described as a mode of operation. Modes are primarily defined for encryption as well as authentication, though newer designs exist that combine both security solutions in so-called authenticated encryption modes. While encryption and authenticated encryption modes usually take an IV matching the cipher's block size, authentication modes are commonly realized as deterministic algorithms, and the IV is set to zero or some other fixed value.
Stream ciphers
In stream ciphers, IVs are loaded into the keyed internal secret state of the cipher, after which a number of cipher rounds are executed prior to releasing the first bit of output. For performance reasons, designers of stream ciphers try to keep that number of rounds as small as possible. However, determining the minimal secure number of rounds for stream ciphers is not a trivial task, and other issues, such as entropy loss, are unique to each cipher construction. Related-IV and other IV-related attacks are therefore a known security issue for stream ciphers, which makes IV loading in stream ciphers a serious concern and a subject of ongoing research.
WEP IV
The 802.11 encryption algorithm called WEP (short for Wired Equivalent Privacy) used a short, 24-bit IV, leading to IV reuse under the same key, which allowed WEP to be easily cracked. Packet injection allowed WEP to be cracked in as little as several seconds. This ultimately led to the deprecation of WEP.
SSL 2.0 IV
In cipher-block chaining mode (CBC mode), the IV need not be secret, but must be unpredictable at encryption time; in particular, for any given plaintext, it must not be possible to predict the IV that will be associated with the plaintext in advance of the IV's generation. Additionally, for the output feedback mode (OFB mode), the IV must be unique.
In particular, the (previously) common practice of re-using the last ciphertext block of a message as the IV for the next message is insecure (for example, this method was used by SSL 2.0).
If an attacker knows the IV (or the previous block of ciphertext) before he specifies the next plaintext, he can check his guess about the plaintext of some block that was encrypted with the same key before.
This is known as the TLS CBC IV attack, also called the BEAST attack.
See also
Cryptographic nonce
Padding (cryptography)
Random seed
Salt (cryptography)
Block cipher modes of operation
CipherSaber (RC4 with IV)
References
Further reading
Block cipher modes of operation
Cryptography |
49182501 | https://en.wikipedia.org/wiki/Music%20technology | Music technology | Music technology is the study or the use of any device, mechanism, machine or tool by a musician or composer to make or perform music; to compose, notate, play back or record songs or pieces; or to analyze or edit music.
History
The earliest known application of technology to music was prehistoric peoples' use of a tool to hand-drill holes in bones to make simple flutes.
Ancient Egyptians developed stringed instruments, such as harps, lyres and lutes, which required making thin strings and some type of peg system for adjusting the pitch of the strings. Ancient Egyptians also used wind instruments such as double clarinets and percussion instruments such as cymbals.
In Ancient Greece, instruments included the double-reed aulos and the lyre.
Numerous instruments are referred to in the Bible, including the horn, pipe, lyre, harp, and bagpipe. During Biblical times, the cornet, flute, horn, organ, pipe, and trumpet were also used.
During the Middle Ages, music notation was used to create a written record of the notes of plainchant melodies.
During the Renaissance music era (c. 1400-1600), the printing press was invented, allowing for sheet music to be mass-produced (previously having been hand-copied). This helped to spread musical styles more quickly and across a larger area.
During the Baroque era (c. 1600–1750), technologies for keyboard instruments developed, which led to improvements in the designs of pipe organs and harpsichords, and the development of a new keyboard instrument in approximately 1700, the piano.
In the Classical era, Beethoven added new instruments to the orchestra such as the piccolo, contrabassoon, trombones, and untuned percussion in his Ninth Symphony.
During the Romantic music era (c. 1810–1900), one of the key ways that new compositions became known to the public was by the sales of sheet music, which amateur music lovers would perform at home on their piano or other instruments. In the 19th century, new instruments such as saxophones, euphoniums, Wagner tubas, and cornets were added to the orchestra.
Around the turn of the 20th century, with the invention and popularization of the gramophone record (commercialized in 1892), and radio broadcasting (starting on a commercial basis ca. 1919-1920), there was a vast increase in music listening, and it was easier to distribute music to a wider public.
The development of sound recording had a major influence on the development of popular music genres, because it enabled recordings of songs and bands to be widely distributed. The invention of sound recording also gave rise to a new subgenre of classical music: the musique concrète style of electronic composition.
The invention of multitrack recording enabled pop bands to overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance.
In the early 20th century, electric technologies such as electromagnetic pickups, amplifiers and loudspeakers were used to develop new electric instruments such as the electric piano (1929), electric guitar (1931), electro-mechanical organ (1934) and electric bass (1935). The 20th-century orchestra gained new instruments and new sounds. Some orchestra pieces used the electric guitar, electric bass or the Theremin.
The invention of the transistor in 1947 enabled the creation of a new generation of synthesizers, which were first used in pop music in the 1960s. Unlike prior keyboard instrument technologies, synthesizer keyboards do not have strings, pipes, or metal tines. A synthesizer keyboard creates musical sounds using electronic circuitry, or, later, computer chips and software. Synthesizers became popular in the mass market in the early 1980s.
With the development of powerful microchips, a number of new electronic or digital music technologies were introduced in the 1980s and subsequent decades, including drum machines and music sequencers. Electronic and digital music technologies are any device, such as a computer, an electronic effects unit or software, that is used by a musician or composer to help make or perform music. The term usually refers to the use of electronic devices, computer hardware and computer software that is used in the performance, playback, recording, composition, sound recording and reproduction, mixing, analysis and editing of music.
Mechanical technologies
Prehistoric eras
Findings from paleolithic archaeology sites suggest that prehistoric people used carving and piercing tools to create instruments. Archeologists have found Paleolithic flutes carved from bones in which lateral holes have been pierced. The Divje Babe flute, carved from a cave bear femur, is thought to be at least 40,000 years old. Instruments such as the seven-holed flute and various types of stringed instruments, such as the Ravanahatha, have been recovered from the Indus Valley Civilization archaeological sites. India has one of the oldest musical traditions in the world—references to Indian classical music (marga) are found in the Vedas, ancient scriptures of the Hindu tradition. The earliest and largest collection of prehistoric musical instruments was found in China and dates back to between 7000 and 6600 BC.
Ancient Egypt
In prehistoric Egypt, music and chanting were commonly used in magic and rituals, and small shells were used as whistles. Evidence of Egyptian musical instruments dates to the Predynastic period, when funerary chants played an important role in Egyptian religion and were accompanied by clappers and possibly the flute. The most reliable evidence of instrument technologies dates from the Old Kingdom, when technologies for constructing harps, flutes and double clarinets were developed. Percussion instruments, lyres and lutes were used by the Middle Kingdom. Metal cymbals were used by ancient Egyptians. In the early 21st century, interest in the music of the pharaonic period began to grow, inspired by the research of such foreign-born musicologists as Hans Hickmann. By the early 21st century, Egyptian musicians and musicologists led by the musicology professor Khairy El-Malt at Helwan University in Cairo had begun to reconstruct musical instruments of Ancient Egypt, a project that is ongoing.
Indus Valley
The Indus Valley civilization has sculptures that show old musical instruments, like the seven-holed flute. Various types of stringed instruments and drums have been recovered from Harappa and Mohenjo Daro by excavations carried out by Sir Mortimer Wheeler.
References in the Bible
According to the Scriptures, Jubal was the father of harpists and organists (Gen. 4:20–21). The harp was among the chief instruments and the favorite of David, and it is referred to more than fifty times in the Bible. It was used at both joyful and mournful ceremonies, and its use was "raised to its highest perfection under David" (1 Sam. 16:23). Lockyer adds that "It was the sweet music of the harp that often dispossessed Saul of his melancholy" (1 Sam. 16:14–23; 18:10–11). When the Jews were captive in Babylon, they hung up their harps and refused to use them while in exile; earlier, the harp had been among the instruments used in the Temple (1 Kgs. 10:12). Another stringed instrument of the harp class, and one also used by the ancient Greeks, was the lyre. A similar instrument was the lute, which had a large pear-shaped body, long neck, and fretted fingerboard with head screws for tuning. Coins displaying musical instruments, the Bar Kochba Revolt coinage, were issued by the Jews during the Second Jewish Revolt against the Roman Empire of 132–135 AD. In addition to those, there was the psaltery, another stringed instrument which is referred to almost thirty times in Scripture. According to Josephus, it had twelve strings and was played with a quill, not with the hand. Another writer suggested that it was like a guitar, but with a flat triangular form and strung from side to side.
Among the wind instruments used in the biblical period were the cornet, flute, horn, organ, pipe, and trumpet. There were also silver trumpets and the double oboe. Werner concludes, from the measurements taken of the trumpets on the Arch of Titus in Rome and from coins, that "the trumpets were very high pitched with thin body and shrill sound." He adds that in War of the Sons of Light Against the Sons of Darkness, a manual for military organization and strategy discovered among the Dead Sea Scrolls, these trumpets "appear clearly capable of regulating their pitch pretty accurately, as they are supposed to blow rather complicated signals in unison." Whitcomb writes that the pair of silver trumpets were fashioned according to Mosaic law and were probably among the trophies which the Emperor Titus brought to Rome when he conquered Jerusalem. She adds that on the Arch raised to the victorious Titus, "there is a sculptured relief of these trumpets, showing their ancient form."
The flute was commonly used for festal and mourning occasions, according to Whitcomb. "Even the poorest Hebrew was obliged to employ two flute-players to perform at his wife's funeral." The shofar (the horn of a ram) is still used for special liturgical purposes such as the Jewish New Year services in orthodox communities. As such, it is not considered a musical instrument but an instrument of theological symbolism which has been intentionally kept to its primitive character. In ancient times it was used for warning of danger, to announce the new moon or beginning of Sabbath, or to announce the death of a notable. "In its strictly ritual usage it carried the cries of the multitude to God," writes Werner.
Among the percussion instruments were bells, cymbals, sistrum, tabret, hand drums, and tambourines. The tabret, or timbrel, was a small hand-drum used for festive occasions, and was considered a woman's instrument. In modern times it was often used by the Salvation Army. According to the Bible, when the children of Israel came out of Egypt and crossed the Red Sea, "Miriam took a timbrel in her hands; and all the women went out after her with timbrels and with dance."
Ancient Greece
In Ancient Greece, instruments in all music can be divided into three categories, based on how sound is produced: string, wind, and percussion. The following were among the instruments used in the music of ancient Greece:
the lyre: a strummed and occasionally plucked string instrument, essentially a hand-held zither built on a tortoise-shell frame, generally with seven or more strings tuned to the notes of one of the modes. The lyre was used to accompany others or even oneself for recitation and song.
the kithara, also a strummed string instrument, more complicated than the lyre. It had a box-type frame with strings stretched from the cross-bar at the top to the sounding box at the bottom; it was held upright and played with a plectrum. The strings were tunable by adjusting wooden wedges along the cross-bar.
the aulos, usually double, consisting of two double-reed (like an oboe) pipes, not joined but generally played with a mouth-band to hold both pipes steadily between the player's lips. Modern reconstructions indicate that they produced a low, clarinet-like sound. There is some confusion about the exact nature of the instrument; alternate descriptions indicate single-reeds instead of double reeds.
the Pan pipes, also known as panflute and syrinx (Greek συριγξ), (so-called for the nymph who was changed into a reed in order to hide from Pan) is an ancient musical instrument based on the principle of the stopped pipe, consisting of a series of such pipes of gradually increasing length, tuned (by cutting) to a desired scale. Sound is produced by blowing across the top of the open pipe (like blowing across a bottle top).
the hydraulis, a keyboard instrument, the forerunner of the modern organ. As the name indicates, the instrument used water to supply a constant flow of pressure to the pipes. Two detailed descriptions have survived: that of Vitruvius and Heron of Alexandria. These descriptions deal primarily with the keyboard mechanism and with the device by which the instrument was supplied with air. A well-preserved model in pottery was found at Carthage in 1885. Essentially, the air to the pipes that produce the sound comes from a wind-chest connected by a pipe to a dome; air is pumped in to compress water, and the water rises in the dome, compressing the air, and causing a steady supply of air to the pipes.
In the Aeneid, Virgil makes numerous references to the trumpet. The lyre, kithara, aulos, hydraulis (water organ) and trumpet all found their way into the music of ancient Rome.
Roman Empire
The Romans may have borrowed the Greek method of 'enchiriadic notation' to record their music, if they used any notation at all. Four letters (in English notation 'A', 'G', 'F' and 'C') indicated a series of four succeeding tones. Rhythm signs, written above the letters, indicated the duration of each note. Roman art depicts various woodwinds, "brass", percussion and stringed instruments. Roman-style instruments are found in parts of the Empire where they did not originate, and indicate that music was among the aspects of Roman culture that spread throughout the provinces.
Roman instruments include:
The Roman tuba was a long, straight bronze trumpet with a detachable, conical mouthpiece. Extant examples are about 1.3 metres long, and have a cylindrical bore from the mouthpiece to the point where the bell flares abruptly, similar to the modern straight trumpet seen in presentations of 'period music'. Since there were no valves, the tuba was capable only of a single overtone series. In the military, it was used for "bugle calls". The tuba is also depicted in art such as mosaics accompanying games (ludi) and spectacle events.
The cornu (Latin "horn") was a long tubular metal wind instrument that curved around the musician's body, shaped rather like an uppercase G. It had a conical bore (like a French horn) and a conical mouthpiece. It may be hard to distinguish from the buccina. The cornu was used for military signals and on parade. The cornicen was a military signal officer who translated orders into calls. Like the tuba, the cornu also appears as accompaniment for public events and spectacle entertainments.
The tibia (Greek aulos – αὐλός), usually double, had two double-reed (as in a modern oboe) pipes, not joined but generally played with a mouth-band capistrum to hold both pipes steadily between the player's lips.
The askaules — a bagpipe.
Versions of the modern flute and panpipes.
The lyre, borrowed from the Greeks, was not a harp, but instead had a sounding body of wood or a tortoise shell covered with skin, and arms of animal horn or wood, with strings stretched from a cross bar to the sounding body.
The cithara was the premier musical instrument of ancient Rome and was played both in popular and elevated forms of music. Larger and heavier than a lyre, the cithara was a loud, sweet and piercing instrument with precision tuning ability.
The lute (pandura or monochord) was known by several names among the Greeks and Romans. In construction, the lute differs from the lyre in having fewer strings stretched over a solid neck or fret-board, on which the strings can be stopped to produce graduated notes. Each lute string is thereby capable of producing a greater range of notes than a lyre string. Although long-necked lutes are depicted in art from Mesopotamia as early as 2340–2198 BC, and also occur in Egyptian iconography, the lute in the Greco-Roman world was far less common than the lyre and cithara. The lute of the medieval West is thought to owe more to the Arab oud, from which its name derives (al ʿūd).
The hydraulic pipe organ (hydraulis), which worked by water pressure, was "one of the most significant technical and musical achievements of antiquity". Essentially, the air to the pipes that produce the sound comes from a mechanism of a wind-chest connected by a pipe to a dome; air is pumped in to compress water, and the water rises in the dome, compressing the air and causing a steady supply to reach the pipes (also see Pipe organ#History). The hydraulis accompanied gladiator contests and events in the arena, as well as stage performances.
Variations of a hinged wooden or metal device, called a scabellum used to beat time. Also, there were various rattles, bells and tambourines.
Drum and percussion instruments like timpani and castanets, the Egyptian sistrum, and brazen pans, served various musical and other purposes in ancient Rome, including backgrounds for rhythmic dance, celebratory rites like those of the Bacchantes and military uses.
The sistrum was a rattle consisting of rings strung across the cross-bars of a metal frame, which was often used for ritual purposes.
Cymbala (Lat. plural of cymbalum, from the Greek kymbalon) were small cymbals: metal discs with concave centres and turned rims, used in pairs which were clashed together.
Islamic world
A number of musical instruments later used in medieval European music were influenced by Arabic musical instruments, including the rebec (an ancestor of the violin) from the rebab and the naker from naqareh. Many European instruments have roots in earlier Eastern instruments that were adopted from the Islamic world. The Arabic rabāb, also known as the spiked fiddle, is the earliest known bowed string instrument and the ancestor of all European bowed instruments, including the rebec, the Byzantine lyra, and the violin.
The plucked and bowed versions of the rebab existed alongside each other. The bowed instruments became the rebec or rabel and the plucked instruments became the gittern. Curt Sachs linked this instrument with the mandola, the kopuz and the gambus, and named the bowed version rabâb.
The Arabic oud in Islamic music was the direct ancestor of the European lute. The oud is also cited as a precursor to the modern guitar. The guitar has roots in the four-string oud, brought to Iberia by the Moors in the 8th century. A direct ancestor of the modern guitar is the guitarra morisca (Moorish guitar), which was in use in Spain by 1200. By the 14th century, it was simply referred to as a guitar.
The origin of automatic musical instruments dates back to the 9th century, when the Persian Banū Mūsā brothers invented a hydropowered organ using exchangeable cylinders with pins, and also an automatic flute playing machine using steam power. These were the earliest automated mechanical musical instruments. The Banu Musa brothers' automatic flute player was the first programmable musical device, the first music sequencer, and the first example of repetitive music technology, powered by hydraulics.
In 1206, the Arab engineer Al-Jazari invented a programmable band of humanoid automata. According to Charles B. Fowler, the automata were a "robot band" which performed "more than fifty facial and body actions during each musical selection." It was also the first programmable drum machine: among the four automaton musicians, two were drummers, driven by pegs (cams) that bumped into little levers operating the percussion. The drummers could be made to play different rhythms and different drum patterns by moving the pegs around; the peg-and-lever mechanism is, in effect, an early step sequencer (see the sketch below).
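A minimal modern rendering of that idea (Python; purely illustrative):

    # Each row models one percussion lever; each column one step of the
    # rotating cylinder. A True entry is a peg that strikes the lever.
    pattern = [
        [True,  False, True,  False, True,  False, True,  False],  # drum 1
        [False, False, True,  False, False, False, True,  True ],  # drum 2
    ]

    def play(barrel):
        """One full rotation of the cylinder: at each step, report which
        levers are struck by a peg."""
        for step in range(len(barrel[0])):
            hits = [i for i, row in enumerate(barrel) if row[step]]
            print(f"step {step}: strike levers {hits}")

    # "Reprogramming" the rhythm amounts to moving pegs, i.e. editing the grid.
    play(pattern)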
Middle Ages
During the medieval music era (476 to 1400), the plainchant tunes used for religious songs were primarily monophonic (a single line, unaccompanied melody). In the early centuries of the medieval era, these chants were taught and spread by oral tradition ("by ear"). The earliest Medieval music did not have any kind of notational system for writing down melodies. As Rome tried to standardize the various chants across vast distances of its empire, a form of music notation was needed to write down the melodies. Various signs written above the chant texts, called neumes, were introduced. By the ninth century, neume notation was firmly established as the primary method of musical notation. The next development in musical notation was "heighted neumes", in which neumes were carefully placed at different heights in relation to each other. This allowed the neumes to give a rough indication of the size of a given interval as well as the direction.
This quickly led to one or two lines, each representing a particular note, being placed on the music with all of the neumes relating back to them. The line or lines acted as a reference point to help the singer gauge which notes were higher or lower. At first, these lines had no particular meaning and instead had a letter placed at the beginning indicating which note was represented. However, the lines indicating middle C and the F a fifth below slowly became most common. The completion of the four-line staff is usually credited to Guido d'Arezzo (c. 1000-1050), one of the most important musical theorists of the Middle Ages. The neumatic notational system, even in its fully developed state, did not clearly define any kind of rhythm for the singing of notes or playing of melodies. The development of music notation made it faster and easier to teach melodies to new people, and facilitated the spread of music over long geographic distances.
Instruments used to perform medieval music include earlier, less mechanically sophisticated versions of a number of instruments that continue to be used in the 2010s. Medieval instruments include the flute, which was made of wood and could be made as a side-blown or end-blown instrument (it lacked the complex metal keys and airtight pads of 2010s-era metal flutes); the wooden recorder and the related instrument called the gemshorn; and the pan flute (a group of air columns attached together). Medieval music used many plucked string instruments like the lute, mandore, gittern and psaltery. The dulcimers, similar in structure to the psaltery and zither, were originally plucked, but became struck by hammers in the 14th century after the arrival of new technology that made metal strings possible.
Bowed strings were used as well. The bowed lyra of the Byzantine Empire was the first recorded European bowed string instrument. The Persian geographer Ibn Khurradadhbih of the 9th century (d. 911) cited the Byzantine lyra as a bowed instrument equivalent to the Arab rabāb and typical instrument of the Byzantines along with the urghun (organ), shilyani (probably a type of harp or lyre) and the salandj (probably a bagpipe). The hurdy-gurdy was a mechanical violin using a rosined wooden wheel attached to a crank to "bow" its strings. Instruments without sound boxes like the jaw harp were also popular in the time. Early versions of the organ, fiddle (or vielle), and trombone (called the sackbut) existed in the medieval era.
Renaissance
The Renaissance music era (c. 1400 to 1600) saw the development of many new technologies that affected the performance and distribution of songs and musical pieces. Around 1450, the printing press was invented, which made printed sheet music much less expensive and easier to mass-produce (prior to the invention of the printing press, all notated music was laboriously hand-copied). The increased availability of printed sheet music helped to spread musical styles more quickly and across a larger geographic area.
Many instruments originated during the Renaissance; others were variations of, or improvements upon, instruments that had existed previously in the medieval era. Brass instruments in the Renaissance were traditionally played by professionals. Some of the more common brass instruments that were played included:
Slide trumpet: Similar to the trombone of today except that instead of a section of the body sliding, only a small part of the body near the mouthpiece and the mouthpiece itself is stationary.
Cornett: Made of wood and was played like the recorder, but blown like a trumpet.
Trumpet: Early trumpets from the Renaissance era had no valves, and were limited to the tones present in the overtone series. They were also made in different sizes.
Sackbut: A different name for the trombone, which replaced the slide trumpet by the middle of the 15th century
Stringed instruments included:
Viol: This instrument, developed in the 15th century, commonly has six strings. It was usually played with a bow.
Lyre: Its construction is similar to a small harp, although instead of being plucked, it is strummed with a plectrum. The number of its strings varied among four, seven, and ten, depending on the era. It was played with the right hand, while the left hand silenced the notes that were not desired. Newer lyres were modified to be played with a bow.
Hurdy-gurdy: (Also known as the wheel fiddle), in which the strings are sounded by a wheel which the strings pass over. Its functionality can be compared to that of a mechanical violin, in that its bow (wheel) is turned by a crank. Its distinctive sound is mainly because of its "drone strings" which provide a constant pitch similar in their sound to that of bagpipes.
Gittern and mandore: these instruments were used throughout Europe. Forerunners of modern instruments including the mandolin and acoustic guitar.
Percussion instruments included:
Tambourine: The tambourine is a frame drum equipped with jingles that produce a sound when the drum is struck.
Jew's harp: An instrument that produces sound by using the mouth as a resonator, changing its shape as though pronouncing different vowels.
Woodwind instruments included:
Shawm: A typical shawm is keyless and is about a foot long with seven finger holes and a thumb hole. The pipes were also most commonly made of wood and many of them had carvings and decorations on them. It was the most popular double reed instrument of the Renaissance period; it was commonly used in the streets with drums and trumpets because of its brilliant, piercing, and often deafening sound. To play the shawm a person puts the entire reed in their mouth, puffs out their cheeks, and blows into the pipe whilst breathing through their nose.
Reed pipe: Made from a single short length of cane with a mouthpiece, four or five finger holes, and reed fashioned from it. The reed is made by cutting out a small tongue, but leaving the base attached. It is the predecessor of the saxophone and the clarinet.
Hornpipe: Same as reed pipe but with a bell at the end.
Bagpipe/Bladderpipe: It used a bag made out of sheep or goat skin that would provide air pressure for a pipe. When its player takes a breath, the player only needs to squeeze the bag tucked underneath their arm to continue the tone. The mouth pipe has a simple round piece of leather hinged on to the bag end of the pipe and acts like a non-return valve. The reed is located inside the long metal mouthpiece, known as a bocal.
Panpipe: Designed to have sixteen wooden tubes with a stopper at one end and open on the other. Each tube is a different size (thereby producing a different tone), giving it a range of an octave and a half. The player can then place their lips against the desired tube and blow across it.
Transverse flute: The transverse flute is similar to the modern flute with a mouth hole near the stoppered end and finger holes along the body. The player blows in the side and holds the flute to the right side.
Recorder: It uses a whistle mouth piece, which is a beak shaped mouth piece, as its main source of sound production. It is usually made with seven finger holes and a thumb hole.
Baroque
During the Baroque era of music (ca. 1600-1750), technologies for keyboard instruments developed, which led to improvements in the designs of pipe organs and harpsichords, and to the development of the first pianos. During the Baroque period, organ builders developed new types of pipes and reeds that created new tonal colors. Organ builders fashioned new stops that imitated various instruments, such as the viola da gamba. The Baroque period is often thought of as organ building's "golden age," as virtually every important refinement to the instrument was brought to a peak. Builders such as Arp Schnitger, Jasper Johannsen, Zacharias Hildebrandt and Gottfried Silbermann constructed instruments that displayed both exquisite craftsmanship and beautiful sound. These organs featured well-balanced mechanical key actions, giving the organist precise control over the pipe speech. Schnitger's organs featured particularly distinctive reed timbres and large Pedal and Rückpositiv divisions.
Harpsichord builders in the Southern Netherlands built instruments with two keyboards which could be used for transposition. These Flemish instruments served as the model for Baroque-era harpsichord construction in other nations. In France, the double keyboards were adapted to control different choirs of strings, making a more musically flexible instrument (e.g., the upper manual could be set to a quiet lute stop, while the lower manual could be set to a stop with multiple string choirs, for a louder sound). Instruments from the peak of the French tradition, by makers such as the Blanchet family and Pascal Taskin, are among the most widely admired of all harpsichords, and are frequently used as models for the construction of modern instruments. In England, the Kirkman and Shudi firms produced sophisticated harpsichords of great power and sonority. German builders extended the sound repertoire of the instrument by adding sixteen-foot choirs, which extended the lower register, and two-foot choirs, which extended the upper register.
The piano was invented during the Baroque era by the expert harpsichord maker Bartolomeo Cristofori (1655–1731) of Padua, Italy, who was employed by Ferdinando de' Medici, Grand Prince of Tuscany. Cristofori invented the piano at some point before 1700. While the clavichord allowed expressive control of volume, with harder or louder key presses creating louder sound (and vice versa) and fairly sustained notes, it was too quiet for large performances. The harpsichord produced a sufficiently loud sound, but offered little expressive control over each note. Pressing a harpsichord key harder or softer had no effect on the instrument's loudness. The piano offered the best of both, combining loudness with dynamic control. Cristofori's great success was solving, with no prior example, the fundamental mechanical problem of piano design: the hammer must strike the string, but not remain in contact with it (as a tangent remains in contact with a clavichord string) because this would damp the sound. Moreover, the hammer must return to its rest position without bouncing violently, and it must be possible to repeat the same note rapidly. Cristofori's piano action was a model for the many approaches to piano actions that followed. Cristofori's early instruments were much louder and had more sustain than the clavichord. Even though the piano was invented in 1700, the harpsichord and pipe organ continued to be widely used in orchestra and chamber music concerts until the end of the 1700s. It took time for the new piano to gain in popularity. By 1800, though, the piano generally was used in place of the harpsichord (although pipe organ continued to be used in church music such as Masses).
Classicism
From about 1790 onward, the Mozart-era piano underwent tremendous changes that led to the modern form of the instrument. This revolution was in response to a preference by composers and pianists for a more powerful, sustained piano sound, and made possible by the ongoing Industrial Revolution with resources such as high-quality steel piano wire for strings, and precision casting for the production of iron frames. Over time, the tonal range of the piano was also increased from the five octaves of Mozart's day to the 7-plus range found on modern pianos.
Early technological progress owed much to the firm of Broadwood. John Broadwood joined with another Scot, Robert Stodart, and a Dutchman, Americus Backers, to design a piano in the harpsichord case—the origin of the "grand". They achieved this in about 1777. They quickly gained a reputation for the splendour and powerful tone of their instruments, with Broadwood constructing ones that were progressively larger, louder, and more robustly constructed.
They sent pianos to both Joseph Haydn and Ludwig van Beethoven, and were the first firm to build pianos with a range of more than five octaves: five octaves and a fifth during the 1790s, six octaves by 1810 (Beethoven used the extra notes in his later works), and seven octaves by 1820. The Viennese makers similarly followed these trends; however, the two schools used different piano actions: Broadwoods were more robust, Viennese instruments were more sensitive.
Beethoven's instrumentation for orchestra added piccolo, contrabassoon, and trombones to the triumphal finale of his Symphony No. 5. A piccolo and a pair of trombones help deliver storm and sunshine in the Sixth. Beethoven's use of piccolo, contrabassoon, trombones, and untuned percussion in his Ninth Symphony expanded the sound of the orchestra.
Romanticism
During the Romantic music era (c. 1810 to 1900), one of the key ways that new compositions became known to the public was through the sale of sheet music, which amateur music lovers would perform at home on their piano or in chamber music groups, such as string quartets. Saxophones began to appear in some 19th-century orchestra scores. While appearing only as a featured solo instrument in some works, for example Maurice Ravel's orchestration of Modest Mussorgsky's Pictures at an Exhibition and Sergei Rachmaninoff's Symphonic Dances, the saxophone is included as a member of the ensemble in other works, such as Ravel's Boléro and Sergei Prokofiev's Romeo and Juliet Suites 1 and 2. The euphonium is featured in a few late Romantic and 20th-century works, usually playing parts marked "tenor tuba", including Gustav Holst's The Planets and Richard Strauss's Ein Heldenleben. The Wagner tuba, a modified member of the horn family, appears in Richard Wagner's cycle Der Ring des Nibelungen and several other works by Strauss, Béla Bartók, and others; it has a prominent role in Anton Bruckner's Symphony No. 7 in E Major. Cornets appear in Pyotr Ilyich Tchaikovsky's ballet Swan Lake, Claude Debussy's La Mer, and several orchestral works by Hector Berlioz.
The piano continued to undergo technological developments in the Romantic era, up until the 1860s. By the 1820s, the center of piano building innovation had shifted to Paris, where the Pleyel firm manufactured pianos used by Frédéric Chopin and the Érard firm manufactured those used by Franz Liszt. In 1821, Sébastien Érard invented the double escapement action, which incorporated a repetition lever (also called the balancier) that permitted repeating a note even if the key had not yet risen to its maximum vertical position. This facilitated rapid playing of repeated notes, a musical device exploited by Liszt. When the invention became public, as revised by Henri Herz, the double escapement action gradually became standard in grand pianos, and is still incorporated into all grand pianos currently produced.
Other improvements of the mechanism included the use of felt hammer coverings instead of layered leather or cotton. Felt, which was first introduced by Jean-Henri Pape in 1826, was a more consistent material, permitting wider dynamic ranges as hammer weights and string tension increased. The sostenuto pedal, invented in 1844 by Jean-Louis Boisselot and copied by the Steinway firm in 1874, allowed a wider range of effects.
One innovation that helped create the sound of the modern piano was the use of a strong iron frame. Also called the "plate", the iron frame sits atop the soundboard, and serves as the primary bulwark against the force of string tension that can exceed 20 tons in a modern grand. The single piece cast iron frame was patented in 1825 in Boston by Alpheus Babcock, combining the metal hitch pin plate (1821, claimed by Broadwood on behalf of Samuel Hervé) and resisting bars (Thom and Allen, 1820, but also claimed by Broadwood and Érard). The increased structural integrity of the iron frame allowed the use of thicker, tenser, and more numerous strings. In 1834, the Webster & Horsfal firm of Birmingham brought out a form of piano wire made from cast steel; according to Dolge it was "so superior to the iron wire that the English firm soon had a monopoly."
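The scale of that force can be checked with Mersenne's law for a vibrating string, f = (1/2L)·sqrt(T/μ), rearranged to solve for tension. The Python sketch below is illustrative only; the string length, diameter, and string count are assumed round numbers, not figures from any particular instrument:

    import math

    rho_steel = 7850.0   # kg/m^3, typical density of steel wire
    diameter = 0.001     # m, assumed ~1 mm plain string
    length = 0.4         # m, assumed speaking length for an A4 string
    freq = 440.0         # Hz

    mu = math.pi * (diameter / 2) ** 2 * rho_steel   # linear density, kg/m
    tension = mu * (2 * length * freq) ** 2          # newtons, from Mersenne's law
    print(round(tension))                  # ~764 N (~78 kgf) on one string
    print(round(tension * 230 / 9810, 1))  # ~17.9 tonnes-force across ~230 strings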
Other important advances included changes to the way the piano is strung, such as the use of a "choir" of three strings rather than two for all but the lowest notes, and the implementation of an over-strung scale, in which the strings are placed in two separate planes, each with its own bridge height. The mechanical action structure of the upright piano was invented in London, England in 1826 by Robert Wornum, and upright models became the most popular form of the instrument.
20th- and 21st-century music
With 20th-century music, there was a vast increase in music listening, as the radio gained popularity and phonographs were used to replay and distribute music. The invention of sound recording and the ability to edit music gave rise to new subgenres of classical music, including the acousmatic and musique concrète schools of electronic composition. Sound recording was also a major influence on the development of popular music genres, because it enabled recordings of songs and bands to be widely distributed. The introduction of the multitrack recording system had a major influence on rock music, because it could do much more than record a band's performance. Using a multitrack system, a band and their music producer could overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance.
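At its core, a multitrack mixdown is a weighted sum of time-aligned sample arrays, one per overdubbed track. A minimal Python sketch follows; the track data and gains are made-up illustrative values:

    # Mix several recorded tracks into one signal by summing weighted samples.
    def mixdown(tracks, gains):
        length = max(len(t) for t in tracks)
        out = [0.0] * length
        for track, gain in zip(tracks, gains):
            for i, sample in enumerate(track):
                out[i] += gain * sample
        return out

    drums = [0.5, -0.5, 0.5, -0.5]   # hypothetical first pass
    vocal = [0.1, 0.2, 0.1, 0.0]     # hypothetical overdub
    print([round(s, 2) for s in mixdown([drums, vocal], [0.8, 1.0])])
    # [0.5, -0.2, 0.5, -0.4]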
The 20th-century orchestra was far more flexible than its predecessors. In Beethoven's and Felix Mendelssohn's time, the orchestra was composed of a fairly standard core of instruments which was very rarely modified. As time progressed, and as the Romantic period saw changes in accepted instrumentation through composers such as Berlioz and Mahler, the 20th century saw instrumentation that could practically be hand-picked by the composer. Saxophones were used in some 20th-century orchestra scores, such as Vaughan Williams' Symphonies No. 6 and 9 and William Walton's Belshazzar's Feast, and many other works as a member of the orchestral ensemble. In the 2000s, the modern orchestra became standardized with the modern instrumentation that includes a string section, woodwinds, brass instruments, percussion, piano, celeste, and even, for some 20th- or 21st-century works, electric instruments such as electric guitar and electric bass and/or electronic instruments such as the theremin or synthesizer.
Electric and electro-mechanical
Electric music technology refers to musical instruments and recording devices that use electrical circuits, which are often combined with mechanical technologies. Examples of electric musical instruments include the electro-mechanical electric piano (invented in 1929), the electric guitar (invented in 1931), the electro-mechanical Hammond organ (developed in 1934) and the electric bass (invented in 1935). None of these electric instruments produces a sound that is audible by the performer or audience in a performance setting unless it is connected to an instrument amplifier and loudspeaker cabinet, which make it loud enough for performers and the audience to hear. Amplifiers and loudspeakers are separate from the instrument in the case of the electric guitar (which uses a guitar amplifier), the electric bass (which uses a bass amplifier), some electric organs (which use a Leslie speaker or similar cabinet) and some electric pianos. Some electric organs and electric pianos include the amplifier and speaker cabinet within the main housing of the instrument.
Electric piano
An electric piano is an electric musical instrument which produces sounds when a performer presses the keys of the piano-style musical keyboard. Pressing keys causes mechanical hammers to strike metal strings or tines, leading to vibrations which are converted into electrical signals by magnetic pickups, which are then connected to an instrument amplifier and loudspeaker to make a sound loud enough for the performer and audience to hear. Unlike a synthesizer, the electric piano is not an electronic instrument. Instead, it is an electro-mechanical instrument. Some early electric pianos used lengths of wire to produce the tone, like a traditional piano. Smaller electric pianos used short slivers of steel, metal tines or short wires to produce the tone. The earliest electric pianos were invented in the late 1920s.
Electric guitar
An electric guitar is a guitar that uses a pickup to convert the vibration of its strings into electrical impulses. The most common guitar pickup uses the principle of direct electromagnetic induction. The signal generated by an electric guitar is too weak to drive a loudspeaker, so it is amplified before being sent to a loudspeaker. The output of an electric guitar is an electric signal, and the signal can easily be altered by electronic circuits to add "color" to the sound. Often the signal is modified using electronic effects such as reverb and distortion. Invented in 1931, the electric guitar became a necessity as jazz guitarists sought to amplify their sound in the big band format.
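One of the simplest ways such circuits "color" the signal is soft clipping, which compresses the peaks of the waveform and adds harmonics. The Python sketch below models this with a tanh curve, a common textbook idealization rather than any particular amplifier's circuit; the signal values are assumed:

    import math

    # Soft-clip a weak pickup-level signal; higher drive pushes it toward a square wave.
    def soft_clip(samples, drive=5.0):
        return [math.tanh(drive * s) for s in samples]

    # A quiet 110 Hz sine at an 8 kHz sample rate, boosted and clipped.
    signal = [0.1 * math.sin(2 * math.pi * 110 * n / 8000) for n in range(8)]
    print([round(s, 3) for s in soft_clip(signal)])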
Hammond organ
The Hammond organ is an electric organ, invented by Laurens Hammond and John M. Hanert and first manufactured in 1935. Various models have been produced, most of which use sliding drawbars to create a variety of sounds. Until 1975, Hammond organs generated sound by creating an electric current from rotating a metal tonewheel near an electromagnetic pickup. Around two million Hammond organs have been manufactured, and it has been described as one of the most successful organs. The organ is commonly used with, and associated with, the Leslie speaker. The organ was originally marketed and sold by the Hammond Organ Company to churches as a lower-cost alternative to the wind-driven pipe organ, or instead of a piano. It quickly became popular with professional jazz bandleaders, who found that the room-filling sound of a Hammond organ could form small bands such as organ trios which were less costly than paying an entire big band.
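The tonewheel-and-drawbar principle amounts to additive synthesis: each drawbar adds a near-sinusoidal partial at a fixed ratio of the played pitch. The Python sketch below assumes the commonly cited frequency ratios for the nine drawbars (16' through 1') and simplifies the 0–8 drawbar levels to linear gains, so it is an illustration rather than an exact model:

    import math

    RATIOS = [0.5, 1.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0]  # relative to the 8' pitch

    def organ_sample(t, fundamental_hz, drawbars):
        return sum((level / 8.0) * math.sin(2 * math.pi * r * fundamental_hz * t)
                   for r, level in zip(RATIOS, drawbars))

    # A classic "888000000"-style registration on A (220 Hz), sampled at 8 kHz.
    setting = [8, 8, 8, 0, 0, 0, 0, 0, 0]
    wave = [organ_sample(n / 8000, 220.0, setting) for n in range(8)]
    print([round(s, 3) for s in wave])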
Electric bass
The electric bass (or bass guitar) was invented in the 1930s, but it did not become commercially successful or widely used until the 1950s. It is a stringed instrument played primarily with the fingers or thumb, by plucking, slapping, popping, strumming, tapping, thumping, or picking with a plectrum, often known as a pick. The bass guitar is similar in appearance and construction to an electric guitar, but with a longer neck and scale length, and four to six strings or courses. The electric bass usually uses metal strings and an electromagnetic pickup which senses the vibrations in the strings. Like the electric guitar, the bass guitar is plugged into an amplifier and speaker for live performances.
Electronic or digital
Electronic or digital music technology is any device, such as a computer, an electronic effects unit or software, that is used by a musician or composer to help make or perform music. The term usually refers to the use of electronic devices, computer hardware and computer software that is used in the performance, playback, recording, composition, sound recording and reproduction, mixing, analysis and editing of music. Electronic or digital music technology is connected to both artistic and technological creativity. Musicians and music technology experts are constantly striving to devise new forms of expression through music, and they are physically creating new devices and software to enable them to do so. Although in the 2010s the term is most commonly used in reference to modern electronic devices and computer software such as digital audio workstations and Pro Tools digital sound recording software, electronic and digital musical technologies have precursors in the electric music technologies of the early 20th century, such as the electromechanical Hammond organ, which was developed in the 1930s. In the 2010s, the ontological range of music technology has greatly increased, and it may now be electronic, digital, software-based or indeed even purely conceptual.
A synthesizer is an electronic musical instrument that generates electric signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may either imitate existing sounds (instruments, vocal, natural sounds, etc.), or generate new electronic timbres or sounds that did not exist before. They are often played with an electronic musical keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled using a controller device.
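A synthesizer's two most basic building blocks, an oscillator and an amplitude envelope, can be sketched in a few lines of Python. This is a toy illustration, not any product's architecture; the sample rate, duration, and attack fraction are assumed values:

    # Generate one note: a naive square-wave oscillator shaped by a linear
    # attack-then-decay envelope.
    def synth_note(freq, sr=8000, dur=0.01, attack=0.3):
        n = int(sr * dur)
        out = []
        for i in range(n):
            phase = (i / sr * freq) % 1.0
            osc = 1.0 if phase < 0.5 else -1.0          # square wave
            env = min(1.0, (i / n) / attack) * (1.0 - i / n)
            out.append(env * osc)
        return out

    print([round(s, 2) for s in synth_note(440.0)[:8]])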
References
Sources
Further reading
Sound recording
Audio electronics
Music history
Musical instruments |
7210261 | https://en.wikipedia.org/wiki/Gnulib | Gnulib | Gnulib, also called the GNU portability library, is a collection of software subroutines which are designed to be usable on many operating systems. The goal of the project is to make it easy for free software authors to make their software run on many operating systems. Since source code is designed to be copied from Gnulib rather than linked, it is not a library per se so much as a collection of portable idioms to be used in other projects.
Making a software package work on a system other than the original system it worked on is usually called "porting" the software to the new system, and a library is a collection of subroutines which can be added to new programs. Thus, Gnulib is the GNU project's portability library.
It is primarily written for use by the GNU Project, but can be used by any free software project.
See also
GLib
libiberty
References
External links
The official Gnulib homepage
Free computer libraries
GNU Project software |
60946901 | https://en.wikipedia.org/wiki/Music%20%28software%29 | Music (software) | Music (alternatively called the Music app; formerly iPod) is a media player application developed for the iOS, iPadOS, tvOS, watchOS, and macOS operating systems by Apple Inc. It can play music files stored locally on devices, as well as stream from the iTunes Store and Apple Music.
The iOS version was introduced with iOS 5 on October 12, 2011, replacing the iPod app. It was included in the initial releases of tvOS, watchOS, and iPadOS. It was released with macOS Catalina on October 7, 2019 as one of three applications created to replace iTunes. The Music app is differentiated from iTunes by its concentration on streaming media and lesser focus on the iTunes Store, where content may be purchased outright.
iOS, tvOS, and watchOS versions
The Music app on iOS was preceded by the iPod app, initially released in iPhone OS 1. It was renamed Music with the release of iOS 5 on October 12, 2011. It was updated with a redesign and functionality for Apple Music with iOS 8.4 in 2015. It is a standard app on CarPlay.
The Music app is available on 2nd and 3rd generation Apple TVs to stream music purchased from the iTunes Store or synced with iTunes Match, but was never updated with support for Apple Music. Apple Music support was added in the tvOS version on the 4th generation Apple TV in early November 2015.
The Music app has been included in every version of watchOS on the Apple Watch. Music can be downloaded directly to an Apple Watch for use without a paired iPhone.
macOS version
The Music app on macOS was preceded by the iTunes app launched on January 9, 2001. Video support within the iTunes app was enabled in May 2005; podcast and books support followed in June 2005 and January 2010, respectively. By the 2010s, the application had been criticized for software bloat with features that extended well beyond the original scope of music.
Apple announced at the 2019 Worldwide Developers Conference that iTunes would be replaced by separate Music, Podcasts, and TV applications with the release of macOS Catalina. Apple describes the Music app as a "music streaming experience," whereas the company described iTunes as a digital library and online music store. Previous iTunes versions designed for older macOS versions, as well as iTunes for Windows, will remain unaffected. Music, TV shows and movies, and podcasts on the iTunes Store will be accessible through the Music, TV, and Podcasts apps, respectively, compared to the standalone iTunes Store app that is featured on iOS.
Android
The Music app was released on November 10, 2015 for devices running Android Lollipop and later, where it is referred to as Apple Music. It marked the first time music from the iTunes Store was available on non-Apple mobile devices since the Rokr E1 during a brief partnership with Motorola in 2005.
References
IOS software
WatchOS software
MacOS software
Apple Inc. software
IOS-based software made by Apple Inc. |
147089 | https://en.wikipedia.org/wiki/Red%20envelope | Red envelope | In East and Southeast Asian cultures, a red envelope, red packet or red pocket is a monetary gift given during holidays or for special occasions such as a wedding, a graduation, or the birth of a baby. Although the red envelope was popularised by Chinese traditions, other cultures also share similar traditional customs. The red packet is also called "money warding off old age" for Chinese New Year.
Originating in China, these customs have also been adopted across parts of East and Southeast Asia and in other countries with sizable ethnic Chinese populations around the world. The custom is also practiced by many non-Chinese Asians in these regions due to Chinese cultural influence. In the mid-2010s, a digital equivalent to the practice emerged within messaging apps with mobile wallet systems localised for Chinese New Year.
Usage
Red envelopes are gifts presented at social and family gatherings such as weddings or holidays such as Chinese New Year. The red color of the envelope symbolizes good luck and is a symbol to ward off evil spirits. A red envelope is also given when visiting someone, as a gesture of kindness from the visitor. The act of requesting red packets is normally called tao hongbao or yao lishi, and in the south of China, dou li shi. Red envelopes are usually given out to the younger generation, who are normally still in school or unmarried.
The amount of money contained in the envelope usually ends with an even digit, in accordance with Chinese beliefs; odd-numbered money gifts are traditionally associated with funerals. The exception is the number 9: the pronunciation of nine is homophonous with the word "long", and nine is the largest digit. Still, in some regions of China and in its diaspora communities, odd numbers are favored for weddings because they are difficult to divide. There is also a widespread tradition that money should not be given in fours, and that the digit four should not appear in the amount (as in 40, 400 and 444), because the pronunciation of the word four is homophonous with the word death. When giving money, crisp new bills are normally given instead of old, dirty ones. It is common to see long queues outside banks before Chinese New Year as people wait to get new bills.
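The digit customs above are regular enough to express as a simple rule, as in this Python sketch (an illustration only, not an exhaustive account of regional practice):

    # Judge a gift amount by the customs described above: avoid any digit 4,
    # prefer even amounts, and accept odd amounts ending in the auspicious 9.
    def is_auspicious(amount):
        digits = str(amount)
        if "4" in digits:
            return False              # "four" sounds like "death"
        if amount % 2 == 0:
            return True               # even amounts suit celebrations
        return digits[-1] == "9"      # nine puns on "long"

    for amount in (88, 99, 100, 40, 444, 66):
        print(amount, is_auspicious(amount))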
At wedding banquets, the amount offered is usually intended to cover the cost of the attendees as well as signify goodwill to the newlyweds. Amounts given are often recorded in ceremonial ledgers for the new couple to keep.
During the Chinese New Year, in Southern China, red envelopes are typically given by the married to the unmarried, most of whom are children. In northern and southern China, red envelopes are typically given by the elders to those younger than 25 (30 in most of the three northeastern provinces), regardless of marital status. The money is usually given in notes, both to avoid heavy coins and to make it difficult to judge the amount inside before opening. In Malaysia it is common to add a coin to the notes, particularly in hong baos given to children, signifying even more luck.
It is traditional to put brand new notes inside red envelopes and also to avoid opening the envelopes in front of the relatives out of courtesy. However, to get the money, the younger generation needs to kowtow to thank their elders.
A red envelope is also given in the workplace during the Chinese New Year by a person of authority (a supervisor or the owner of the business), out of their own funds, to employees as a token of good fortune for the upcoming year.
In Suzhou, children keep the red envelope in their bedroom after receiving it. They believe that putting the red envelope under their bed can protect them; the act of pressing down the red envelope echoes the Chinese character "壓" ("to press"), as in ya sui qian. The ya sui qian is not used until the end of Chinese New Year. Children also receive fruit or cake during the new year.
In acting, it is also conventional to give an actor a red packet when he or she is to play a dead character, or pose for a picture for an obituary or a grave stone.
Red packets are also used to deliver payment for favorable service to lion dance performers, religious practitioners, teachers, and doctors.
Red packets as a form of bribery in China's film industry were revealed in 2014's Sony hack.
Virtual red envelopes
A contemporary interpretation of the practice comes in the form of virtual red envelopes, implemented as part of mobile payment platforms. During the Chinese New Year holiday in 2014, the messaging app WeChat introduced the ability to distribute virtual red envelopes of money to contacts and groups via its WeChat Pay platform. The launch included an on-air promotion during the CCTV New Year's Gala—China's most-watched television special—where viewers could win red envelopes as prizes.
Adoption of WeChat Pay saw a major increase following the launch, and two years later, over 32 billion virtual envelopes were sent over the Chinese New Year holiday in 2016 (itself a tenfold increase over 2015). The popularity of the feature spawned imitations from other vendors; a "red envelope war" emerged between WeChat owner Tencent and its historic rival, Alibaba Group, which added a similar function to its competing messaging service and has held similar giveaway promotions. Analysts estimated that over 100 billion digital red envelopes would be sent over the New Year holiday in 2017. A research study shows that this popularization of virtual red packets stems from their contagious character: users who receive red packets feel obligated to follow suit and send one in turn.
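A defining feature of these virtual envelopes is that one lump sum is split randomly among a group. One commonly described heuristic caps each draw near twice the remaining average; the Python sketch below illustrates that idea and is expressly not WeChat's or Alipay's actual implementation:

    import random

    def split_packet(total_cents, people):
        shares, remaining, left = [], total_cents, people
        for _ in range(people - 1):
            cap = max(1, 2 * remaining // left - 1)
            # never draw so much that someone later would get nothing
            share = random.randint(1, min(cap, remaining - (left - 1)))
            shares.append(share)
            remaining -= share
            left -= 1
        shares.append(remaining)      # the last person takes the remainder
        return shares

    shares = split_packet(2000, 5)    # e.g. 20.00 in currency units among five people
    print(shares, sum(shares))        # the shares always sum back to 2000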
Origin
Some say that the history of the red packet dates back to the Han Dynasty (202 BC – 220 AD). People created a type of coin to ward off evil spirits, called "ya sheng qian", which was inscribed with auspicious words on the front, such as "May you live a long and successful life". It was not real money but a blessing item, believed to protect people from sickness and death.
In the Tang Dynasty, the Chinese New Year was considered to be the beginning of spring, and in addition to congratulations, elders gave money to children to ward off evil spirits.
After the Song and Yuan Dynasties, the custom of giving money during the Spring Festival evolved into the custom of giving children lucky money. The elders would thread coins together with a red string.
In the Ming and Qing Dynasties, there were two kinds of lucky money. One was made of red string and coins, sometimes placed at the foot of the bed in the shape of a dragon. The other is a colourful pouch filled with coins.
In the Qing dynasty, the name "ya sui qian" (压岁钱) came into use. The book Qing Jia Lu (清嘉录) recorded that "elders give children coins threaded together by a red string, and the money is called ya sui qian."
From the Republic of China (1912–1949) era, it evolved into a hundred coins wrapped in red paper, meaning "May you live a hundred years!". Due to the lack of holes in modern-day coins, the use of red envelopes became more prevalent—because one could no longer thread the coins with string. Later on, people adopted banknotes instead of coins in red envelopes.
After the founding of the People's Republic of China in 1949, the custom of the elders giving the younger generation money continued.
Other customs
Other similar traditions also exist in other countries in Asia.
Ethnic Chinese
In Thailand, Myanmar (Burma), Cambodia and Singapore, the Chinese diaspora and immigrants have introduced the culture of red envelopes.
Cambodia
In Cambodia, red envelopes are called ang pav or tae ea ("give ang pav"). Ang pav are delivered with best wishes from the elder to the younger generations. The money in an ang pav makes young children happy and is an important gift that traditionally carries the elders' best wishes as a symbol of good luck. Ang pav can be presented on the day of Chinese New Year or Saen Chen, when relatives gather together. The gift is kept as a worship item in or under the pillowcase, or somewhere else, especially near the beds of the young while they sleep during the New Year period. The gift in an ang pav can be either money or a cheque, with the amount depending on the generosity of the donors.
The tradition of delivering ang pav has descended from one generation to the next over a long period. Ang pav are not given to a family member who has begun a career; instead, such a person is expected to give them in return to their parents and/or younger children or siblings.
At weddings, the amount offered is usually intended to cover the cost of the attendees as well as help the newly married couple.
Vietnam
In Vietnam, red envelopes are a traditional part of Vietnamese culture considered to be lucky money and are typically given to children during Vietnamese Lunar New Year. They are generally given by the elders and adults, and a greeting or an offering of health and longevity is exchanged by the younger generation. Common greetings include "Sống lâu trăm tuổi", "An khang thịnh vượng", "Vạn sự như ý" and "Sức khỏe dồi dào", which all relate back to the idea of wishing health and prosperity as age besets everyone in Vietnam on the Lunar New Year. The typical name for lucky money is lì xì or, less commonly, mừng tuổi.
South Korea
In South Korea, a monetary gift is given to children by their relatives during the New Year period. However, white envelopes are used instead of red, with the name of the receiver written on the back.
Japan
A monetary gift otoshidama is given to children by their relatives during the New Year period. White or decorated envelopes (otoshidama-bukuro) are used instead of red, with the name of the receiver usually written on the front side. A similar practice, shūgi-bukuro, is observed for Japanese weddings, but the envelope is folded rather than sealed, and decorated with an elaborate bow.
Philippines
In the Philippines, Chinese Filipinos exchange red envelopes (termed ang pao) during the Lunar New Year, of which they are an easily recognisable symbol. The red envelope has gained wider acceptance among non-Chinese Filipinos, who have appropriated the custom for other occasions such as birthdays, and for giving monetary aguinaldo during Christmas.
Green envelope
Malay Muslims in Malaysia, Brunei, Indonesia, and Singapore have adopted the Chinese custom of handing out monetary gifts in envelopes as part of their Eid al-Fitr (Malay: Hari Raya Aidilfitri) celebrations, but instead of red packets, green envelopes are used. Customarily a family will have (usually small) amounts of money in green envelopes ready for visitors, and may send them to friends and family unable to visit. Green is traditionally used for its association with Islam, and the adaptation of the red envelope is based on the Muslim custom of sadaqah, or voluntary charity; in practice, however, envelopes of many colors and contemporary designs are also available. While present in the Qur'an, sadaqah is much less formally established than the sometimes similar practice of zakat, and in many cultures the custom takes a form closer to gift-giving and generosity among friends than to charity in the strict sense; that is, no attempt is made to give more to guests "in need", nor is it viewed as a religious obligation in the way Islamic charity often is. Among the Sundanese people, a boy who has recently been circumcised is given monetary gifts known as panyecep or, in the national language of Indonesia, uang sunatan.
Purple envelope
The tradition of ang pao has also been adopted by the local Indian Hindu populations of Singapore and Malaysia for Deepavali. They are known as Deepavali ang pow (in Malaysia), purple ang pow or simply ang pow (in Singapore). Yellow coloured envelopes for Deepavali have also been available at times in the past.
See also
Chinese marriage
Chinese social relations
Color in Chinese culture
Eidi, Islamic
Hell money
References
Sources
Chengan Sun, "Les enveloppes rouges : évolution et permanence des thèmes d'une image populaire chinoise" [Red envelopes : evolution and permanence of the themes of a Chinese popular image], PhD, Paris, 2011.
Chengan Sun, Les enveloppes rouges (Le Moulin de l'Etoile, 2011) .
Helen Wang, "Cultural Revolution Style Red Packets", Chinese Money Matters, 15 May 2018.
External links
How to Give Lai See in Hong Kong
Red Packet: Sign of Prosperity
Gallery: Chinese New Year Red Envelopes
Will The Paper Red Packet Be Replaced By An Electronic Red Envelope?
A red envelope with a collection of value-Lai See
Money envelopes in the British Museum Collection
Chinese culture
Chinese inventions
Chinese-Malaysian culture
East Asia
Southeast Asia
Envelopes
Giving
Indian-Malaysian culture
Indonesian culture
Japanese culture
Korean culture
Luck
Malay culture
Malaysian culture
Paper products
Chinese-Singaporean culture
Singaporean culture
Vietnamese culture
Wedding gifts |
22204641 | https://en.wikipedia.org/wiki/Auriga%20%28company%29 | Auriga (company) | Auriga is a software R&D and IT outsourcing services provider. The company is a privately held C-corporation, incorporated in the US in 1993, while the development centers are in Russia (Moscow, Saint Petersburg, Nizhny Novgorod, Rostov-on-Don) and EU (Vilnius, Lithuania). Founded in 1990, Auriga is one of the oldest companies in the Russian software R&D and IT outsourcing industry.
Auriga is a full member of RUSSOFT, an association of software development companies from Russia, Belarus, and Ukraine.
Auriga provides its services to customers in such industries as medical device manufacturers, consumer electronics, digital health, semiconductors, retail & logistics, energy & utility, software vendors (ISVs), industrial automation, etc.
The list of Auriga's clients includes such companies as Chrysler, Draeger Medical, IBM, Lynx Software Technologies, Inc., Fresenius Medical Care, MedLumics, nVent, Stada and many more.
History
Auriga was founded on December 29, 1990, by Dr. Alexis Sukharev, a professor of the Moscow State University, in Moscow, Russia. Named Infort initially, the company was later rebranded as Auriga. In 1993, Auriga Inc. was incorporated in the U.S.
In 1991–1992, Auriga launched the first offshore project for ADE Corp. in Statistical Process Control and Quality Assurance and the first offshore system programming projects in real-time area for Aquila Technologies Group Inc. and Encore, Inc. In 1995–1996, Auriga launched the first projects for Interleaf, Inc. (in 2000, Interleaf, Inc. became a Division of BroadVision, Inc.) and Lynx RTS, Inc. (in 2000, renamed LynuxWorks, Inc.; in 2014, renamed Lynx Software Technologies, Inc.). Both companies remain Auriga's clients even today.
In 2002-2003, Auriga became a member of Bitkom, German Association for Information Technology, Telecommunications and New Media and a member of New Hampshire High Tech Council (NHHTC); Massachusetts Telecommunications Council (MTC); MIT Enterprise Forum of Cambridge; Secure Digital Association (SDA).
2010 - Auriga opens a Representative Office in Vilnius, Lithuania.
2013 - Auriga engineers ported and enhanced a proprietary face-recognition algorithm on the Android platform. Over 40 engineers received certificates of training in IEC 62304 in July 2013. Auriga was included in the Global Outsourcing 100 list by IAOP and placed in eight sub-lists, including Healthcare, and Technology and Research & Development Services.
2014 - Auriga performed its first project for Google Glass and developed the second generation of the EnergyMeasure® software suite for Conservation Services Group (CSG).
2019 - Auriga announced the appointment of a new CEO, Vyacheslav Vanyulin.
References
Software companies based in Massachusetts
Companies based in Boston
Boston Area |
55935 | https://en.wikipedia.org/wiki/2600%3A%20The%20Hacker%20Quarterly | 2600: The Hacker Quarterly | 2600: The Hacker Quarterly is an American seasonal publication of technical information and articles, many of which are written and submitted by the readership, on a variety of subjects including hacking, telephone switching systems, Internet protocols and services, as well as general news concerning the computer "underground."
With origins in the phone phreaking community and late 20th-century counterculture, 2600 and its associated conference transitioned to coverage of modern hacker culture, and the magazine has become a platform for speaking out against increased digital surveillance and advocacy of personal and digital freedoms.
Publication history
The magazine's name comes from the phreaker discovery in the 1960s that the transmission of a 2600 hertz tone – which could be produced perfectly with a plastic toy whistle given away free with Cap'n Crunch cereal, discovered by friends of John Draper – over a long-distance trunk connection gained access to "operator mode," and allowed the user to explore aspects of the telephone system that were not otherwise accessible. The magazine was given its name by David Ruderman, who co-founded the magazine with his college friend, Eric Corley. Ruderman ended his direct involvement with the magazine three years later.
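That signalling tone is nothing more than a pure sine wave at 2600 Hz. As an added illustration (not from the source), here is how such a tone can be generated digitally in Python, using the classic 8 kHz telephony sampling rate:

    import math

    # Produce samples of a 2600 Hz sine tone at an 8 kHz sample rate.
    def tone(freq=2600.0, sr=8000, seconds=0.25):
        return [math.sin(2 * math.pi * freq * n / sr)
                for n in range(int(sr * seconds))]

    samples = tone()
    print(len(samples), round(samples[1], 3))   # 2000 samples; first nonzero ~0.891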
The magazine traces its origins to early Bulletin Board Systems as a place for hackers to share information and stories with each other. It was launched in 1984, coinciding with the book of the same name and the break-up of AT&T. It is published and edited by its co-founder Emmanuel Goldstein (a pen name of Corley which is an allusion to George Orwell's Nineteen Eighty-Four) and his company 2600 Enterprises, Inc. 2600 is released on the first Friday of the month following a season change, usually January, April, July, and October.
Goldstein has published a compilation of articles from the magazine entitled The Best of 2600: A Hacker Odyssey. The book, an 888-page hardcover, has been available from July 28, 2008 in the US and August 8, 2008 in the UK and is published by Wiley.
"Hacker" term
In the usage of 2600 and affiliates, the often loaded term "hacking" refers to grey hat hacking, which is generally understood to be any sort of technological utilization or manipulation of technology which goes above and beyond the capabilities inherent to the design of a given application. This usage attempts to maintain neutrality, as opposed to the politically charged and often contentious terms white hat hacking (which some consider hacking motivated exclusively by good, benevolent intentions, such as hardware modding or penetration testing) and black hat hacking (which some consider to be hacking motivated exclusively by malicious or selfish intentions, such as electronic theft, vandalism, hijacking of websites, and other types of cybercrime). Other hackers believe that hat-color labels are an oversimplification and an unnecessary designation, best suited for use by the media, and suggest that people who use hacking to commit crimes already have a label, that of criminal.
Conferences and meetings
2600 established the H.O.P.E. (Hackers on Planet Earth) conference in 1994, marking the publication's tenth anniversary. The conference is held at the Hotel Pennsylvania, in Manhattan, New York City, and has occurred every two years with the exception of the second HOPE in 1997, held at the Puck Building in Manhattan. The convention features events such as presentations, talks, concerts, and workshops. Speakers have included computer security figures and hackers such as Kevin Mitnick, Steven Levy, Richard Stallman, and Steve Wozniak, as well as whistleblowers William Binney, Daniel Ellsberg, and Edward Snowden, and countercultural figures like Jello Biafra and The Yes Men.
There are monthly meetings in over 24 countries. The meetings are listed in the back of the magazine, and are advertised as being open to anyone regardless of age or level of expertise.
In other media
2600 Films has made a feature-length documentary about famed hacker Kevin Mitnick, the Free Kevin movement and the hacker world, entitled Freedom Downtime, and is currently working on one titled Speakers' World.
Corley is also host of Off The Wall and Off the Hook, two New York talk radio shows. Both shows can be downloaded or streamed via the 2600 site, and are also broadcast on various radio stations:
Off the Hook is broadcast on WBAI (99.5 FM)
Off The Wall is broadcast on WUSB (90.1 FM).
In the 1995 movie Hackers, a character named Emmanuel Goldstein, also known as "Cereal Killer" was portrayed by Matthew Lillard.
Court cases
2600 has been involved in many court cases related to technology and freedom of speech alongside the Electronic Frontier Foundation, perhaps most significantly Universal v. Reimerdes involving the distribution of DVD copy protection tool DeCSS, where courts upheld the constitutionality of the Digital Millennium Copyright Act anti-circumvention provisions.
The magazine itself received a copyright claim for the ink spatter stock image featured on the Spring 2012 issue from Trunk Archive, an image licensing agency, using an automated image tracking toolkit. While Trunk Archive identified its own image that featured the ink spatter as the source, it was discovered that the original ink spatter was created by the Finnish artist Jukka Korhonen, on DeviantArt, who had released it into the public domain. Trunk Archive later retracted the claim and sent a letter to 2600 apologizing for the mistake.
See also
References
External links
2600 Index or 2600 Index mirror a searchable index of 2600 The Hacker Quarterly magazine article information.
1984 establishments in New York (state)
Computer magazines published in the United States
Quarterly magazines published in the United States
Hacker magazines
Magazines established in 1984
Magazines published in New York (state)
Works about computer hacking |
1206197 | https://en.wikipedia.org/wiki/Reaktor | Reaktor | Reaktor is a graphical modular software music studio developed by Native Instruments (NI). It allows musicians and sound specialists to design and build their own instruments, samplers, effects and sound design tools. It is supplied with many ready-to-use instruments and effects, from emulations of classic synthesizers to futuristic sound design tools. In addition, more than 3000 free instruments can be downloaded from the growing User Library. One of Reaktor's unique selling points is that all of its instruments can be freely examined, customized or taken apart; Reaktor is a tool that effectively encourages reverse engineering. Reaktor Player is a free limited version of the software that allows musicians to play NI-released Reaktor instruments, but not edit or reverse-engineer them.
Development history
Early development
In 1996, Native Instruments released Generator version 0.96, a modular synthesizer for PC that required a proprietary audio card for low-latency operation. By 1998, Native Instruments had redesigned the program to include a new hierarchy and integrated third-party drivers for use with any standard Windows sound card. By 1999, Reaktor 2.0 (a.k.a. Generator/Transformator) was released for Windows and Macintosh. Integrated real-time display of filters and envelopes, and granular synthesis, were among its most notable features. Plug-in support for the VST, VSTi, Direct Connect, MOTU, and DirectX formats was integrated by 2000 (software version 2.3).
With version 3.0 (released in 2001), Native Instruments introduced a redesigned audio engine and new graphic design. Further expansion of synthesis and sampling modules, addition of new control-based modules (XY control) and data management (event tables) greatly expands the abilities of the program. The earliest version to really resemble the modern incarnation of the software is version 3.5, which improved greatly in VST performance and sample handling. Reaktor 3.5 is the first release that features full cross-platform compatibility.
Reaktor 4 was a major enhancement in terms of stability, instrument library, GUI, and VSTi ease-of-use in external sequencers. It shipped almost six months behind schedule.
Version 5
In 2003, Native Instruments hired Vadim Zavalishin, developer of the Sync Modular software package. Zavalishin ceased development of his own software and integrated deeper DSP-level operation into Reaktor, in the form of what became known as Reaktor Core Technology. His contributions, along with those of Reaktor Core developer Martijn Zwartjes, were released within Reaktor 5 in April 2005. Core Technology initially confused many instrument designers because of its complexity, but is now steadily making its way into new instruments and ensembles.
Reaktor 5.1, released on 22 December 2005 and presented as a Christmas present, featured new Core Cell modules and a new series of FX and ensembles. A number of bug fixes were also implemented.
The release of Reaktor 5.5 was announced for 1 September 2010. It features a revised interface as well as other changes.
Version 6
Reaktor 6.0 was released on September 9, 2015. It features many new improvements for advanced programmers. A new "Blocks" feature allowed for the development of rackmount style modular "patches" for creating synthesizers and effects.
Functionality
From the end-user standpoint, Reaktor is a sound creation and manipulation tool with a modular interface. Its patches consist of modules connected by lines that provide a visual interpretation of signal flow. The building blocks used give Reaktor users freedom of choice in shaping their sound design. The modules are categorized into a particular hierarchy to aid clarity in patching.
The patcher window allows one to navigate the inner structure of a user's models. Many factory-shipped objects within Reaktor can be accessed and edited, and new objects can be generated on the fly. Each of the Reaktor modules is defined by its inner workings, and extending them to the user's specification comes with relative ease.
The objects available within Reaktor range from simple math operators to large sound modules. The implementation of Core Technology in version 5 enables users to view and edit the structure of any "Core Module" building block. Although such editing can be an exceptionally powerful tool, successful manipulation of Core Cells with predictable results requires in-depth knowledge of how signal generation and processing algorithms are implemented. Native Instruments promotes this functionality with an online side-by-side comparison of a Core implementation of a simple DSP algorithm against C++ pseudocode.
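The kind of per-sample algorithm a Core cell expresses graphically can be illustrated in ordinary code. The sketch below, written in Python rather than NI's published C++ pseudocode and not taken from NI's comparison, implements a one-pole lowpass filter, a staple example of low-level DSP:

    # One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]), with 0 < a <= 1.
    def one_pole_lowpass(samples, a=0.2):
        y, out = 0.0, []
        for x in samples:
            y += a * (x - y)
            out.append(y)
        return out

    impulse = [1.0] + [0.0] * 5
    print([round(v, 3) for v in one_pole_lowpass(impulse)])
    # [0.2, 0.16, 0.128, 0.102, 0.082, 0.066] -- exponentially decaying response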
Reaktor enables a user to implement variables (static or dynamic) which are used as defining properties of the patch. Users can generate a GUI of their own to provide dynamic control over their systems. Starting with version 4, Reaktor supports user-generated graphical content, enabling users to give their instruments an original look and feel.
A finished Reaktor ensemble may be loaded into a host sequencer (such as Steinberg Cubase or Ableton Live), and used as a stand-alone software plug-in for audio generation or processing (a multi-format proprietary loader is included with the software). Each panel control in the ensemble is capable of MIDI automation in the host sequencer.
Reaktor Ensembles
The Reaktor Library is one of the prominent features of the software, featuring a large variety of sound generators and effects that can be used as stand-alone instruments, or as an educational resource for reverse engineering. Reaktor 4 featured a library of 31 Reaktor ensembles. The fifth generation of software came with 32 new modules (though some were upgrades of Reaktor 4 Library tools). The libraries provide a mixture of conventional implementation of software synthesizers, samplers, and effects, along with a few ensembles of experimental nature, with emphasis on parametric algorithmic composition and extensive sound processing. Due to complete backwards-compatibility between later versions of the software, Reaktor 5 users have access to all 63 proprietary ensembles in Reaktor Library.
Furthermore, home-brew Reaktor ensembles can be shared by its users. Such exchange is encouraged by Native Instruments, as shown by the company's dedication to providing web-based tools and webspace for individual and third-party Reaktor extensions (this includes user Ensembles and presets for Reaktor Instruments and Effects).
See also
Comparison of audio synthesis environments
List of music software
References
External links
Reaktor 6 homepage
Native Instruments
Software synthesizers
Visual programming languages |
19153656 | https://en.wikipedia.org/wiki/Sugar%20packet | Sugar packet | A sugar packet is a delivery method for one serving of sugar or other sweetener. Sugar packets are commonly supplied in restaurants, coffeehouses, and tea houses, where they are preferred to sugar bowls or sugar dispensers for reasons of neatness, sanitation, spill control, and to some extent portion control.
Statistics
A typical sugar packet in the United States contains 2 to 4 grams of sugar. Some sugar packets in countries such as Poland contain 5 to 10 grams of sugar. Sugar packet sizes, shapes, and weights differ by brand, region, and other factors. Because a gram of any carbohydrate contains 4 nutritional calories (also referred to as "food calories" or kilocalories), a typical four-gram sugar packet has 16 nutritional calories.
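The calorie arithmetic is straightforward, as this Python sketch (added for illustration) shows for a few of the packet sizes mentioned above:

    # Nutritional calories (kcal) per packet at 4 kcal per gram of carbohydrate.
    KCAL_PER_GRAM = 4

    for grams in (2, 4, 5, 10):
        print(grams, "g ->", grams * KCAL_PER_GRAM, "kcal")
    # A typical 4 g US packet works out to 16 kcal, as stated above.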
The amount of sugar substitute in a packet generally differs from the volume and weight of sugar in a packet. Packets of sugar substitutes typically contain an amount of sweetener that provides an amount of sweetness comparable to a packet containing sugar.
Packets are often colored to provide simple identification of the type of sweetener in the pack.
History
The sugar cube was used in restaurants until it began to be replaced directly after World War II. At this time, machines were made that could produce small packets of sugar for nearly half the cost.
The sugar packet was invented by Benjamin Eisenstadt, the founder of Cumberland Packing best known as the manufacturer, distributor and marketer of Sweet 'N Low. Eisenstadt had been a tea bag factory worker, and became irritated by the task of refilling and unclogging all the sugar dispensers in his Brooklyn cafeteria across from the Brooklyn Navy Yard. He did not patent the idea and lost market share after discussions with larger sugar companies.
Collecting
The hobby of collecting sugar packets is called sucrology. Collectors can, for example, focus on the variety of types of sugar or brand names. Sugar packets are also handy forms of advertisement for businesses.
References
Sugar
American inventions
Food packaging |
19906325 | https://en.wikipedia.org/wiki/MindMapper | MindMapper | MindMapper (also known as ThinkWise) is mind mapping software and a mental organization tool developed by SimTech Systems. It allows users to create a mind map from their thoughts and export it to programs such as Hangul, Word, or PowerPoint. As of 2020, the software was used by more than 10,000 organizations and companies in 96 countries.
History and overview
MindMapper was first developed as an in-house tool to help with industrial simulation projects for SimTech Systems in 1997. The tool combines the versatility of a mind mapping program with a reliable dashboard and a built-in planner. Some of its main features include knowledge management, brainstorming, creative thinking, visual thinking, clear communication, problem-solving, and project management.
Usage
MindMapper is used by corporations, the Army, and the Supreme Court, along with educational institutes and universities. It can also be used for the research review process and synthesis.
Reviews
According to Predictive Analytics Today, the software can be used by students, professionals, and artists for visualizing and analyzing information and for implementing plans.
References
External links
Concept mapping software
Mind-mapping software |
10076223 | https://en.wikipedia.org/wiki/Lm%20sensors | Lm sensors | lm_sensors (Linux-monitoring sensors) is a free and open-source software tool for Linux that provides tools and drivers for monitoring temperatures, voltages, humidity, and fan speeds. It can also detect chassis intrusions.
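On modern kernels, such userspace tools ultimately read the hwmon sysfs interface, in which files such as temp1_input report values in millidegrees Celsius. The Python sketch below shows the same idea; it assumes a Linux system with hwmon drivers loaded and chips that expose a name file:

    from pathlib import Path

    # Print every temperature exposed under /sys/class/hwmon, chip by chip.
    def read_temps(root="/sys/class/hwmon"):
        for chip in sorted(Path(root).glob("hwmon*")):
            name = (chip / "name").read_text().strip()
            for f in sorted(chip.glob("temp*_input")):
                print(name, f.name, int(f.read_text()) / 1000.0, "°C")

    read_temps()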
Issues
Between 2001 and 2004, the lm_sensors package was not recommended for use on IBM ThinkPads because aggressive probing for I2C devices could corrupt the EEPROM on some models. This has since been dealt with, and the separate README file dedicated to ThinkPads was removed in 2007.
In 2013, the sensors-detect command of lm_sensors began disrupting the gamma correction settings of some laptop display screens while probing the I2C/SMBus adapters for connected hardware monitoring devices. Probing of these devices was subsequently disabled by default.
See also
Computer fan control
envsys on NetBSD
hw.sensors on OpenBSD / DragonFly BSD
I2C
Intelligent Platform Management Interface (IPMI)
Super I/O
Embedded Controller
Advanced Configuration and Power Interface (ACPI)
System Management Bus (SMBus)
References
External links
https://web.archive.org/web/20120429233433/http://www.lm-sensors.org/
HWMon, the new parent project of lm_sensors
Free software programmed in C
Free system software
Software related to embedded Linux
System monitors |
15032 | https://en.wikipedia.org/wiki/IBM%20Personal%20Computer | IBM Personal Computer | The IBM Personal Computer (model 5150, commonly known as the IBM PC) is the first microcomputer released in the IBM PC model line and the basis for the IBM PC compatible de facto standard. Released on August 12, 1981, it was created by a team of engineers and designers directed by Don Estridge in Boca Raton, Florida.
The machine was based on open architecture and third-party peripherals. Over time, a growing ecosystem of expansion cards and software arose to support it.
The PC had a substantial influence on the personal computer market. The specifications of the IBM PC became one of the most popular computer design standards in the world. The only significant competition it faced from a non-compatible platform throughout the 1980s was from the Apple Macintosh product line. The majority of modern personal computers are distant descendants of the IBM PC.
History
Prior to the 1980s, IBM had largely been known as a provider of business computer systems. As the 1980s opened, their market share in the growing minicomputer market failed to keep up with competitors, while other manufacturers were beginning to see impressive profits in the microcomputer space. The market for personal computers was dominated at the time by Tandy, Commodore and Apple, whose machines sold for several hundred dollars each and had become very popular. The microcomputer market was large enough for IBM's attention, with $15 billion in sales by 1979 and projected annual growth of more than 40% during the early 1980s. Other large technology companies had entered it, such as Hewlett-Packard, Texas Instruments and Data General, and some large IBM customers were buying Apples.
As early as 1980 there were rumors of IBM developing a personal computer, possibly a miniaturized version of the IBM System/370, and Matsushita acknowledged publicly that it had discussed with IBM the possibility of manufacturing a personal computer in partnership, although this project was abandoned. The public responded to these rumors with skepticism, owing to IBM's tendency towards slow-moving, bureaucratic business practices tailored towards the production of large, sophisticated and expensive business systems. As with other large computer companies, its new products typically required about four to five years for development, and a well publicized quote from an industry analyst was, "IBM bringing out a personal computer would be like teaching an elephant to tap dance."
IBM had previously produced microcomputers, such as 1975's IBM 5100, but targeted them towards businesses; the 5100 had a price tag as high as $20,000. Their entry into the home computer market needed to be competitively priced.
In 1980, IBM president John Opel, recognizing the value of entering this growing market, assigned William C. Lowe to the new Entry Level Systems unit in Boca Raton, Florida. Market research found that computer dealers were very interested in selling an IBM product, but they insisted the company use a design based on standard parts, not IBM-designed ones so that stores could perform their own repairs rather than requiring customers to send machines back to IBM for service.
Atari proposed to IBM in 1980 that it act as original equipment manufacturer for an IBM microcomputer, a potential solution to IBM's known inability to move quickly to meet a rapidly changing market. The idea of acquiring Atari was considered but rejected in favor of a proposal by Lowe that by forming an independent internal working group and abandoning all traditional IBM methods, a design could be delivered within a year and a prototype within 30 days. The prototype worked poorly but was presented with a detailed business plan which proposed that the new computer have an open architecture, use non-proprietary components and software, and be sold through retail stores, all contrary to IBM practice. It also estimated sales of 220,000 computers over three years, more than IBM's entire installed base.
This swayed the Corporate Management Committee, which converted the group into a business unit named "Project Chess", and provided the necessary funding and authority to do whatever was needed to develop the computer in the given timeframe. The team received permission to expand to 150 people by the end of 1980, and one day more than 500 IBM employees called in asking to join.
Design process
The design process was kept under a policy of strict secrecy, with none of the other IBM divisions knowing what was going on.
Several CPUs were considered, including the Texas Instruments TMS9900, Motorola 68000 and Intel 8088. The 68000 was considered the best choice, but was not production-ready like the others. The IBM 801 RISC processor was also considered, since it was considerably more powerful than the other options, but rejected due to the design constraint to use off-the-shelf parts.
IBM chose the 8088 over the similar but superior 8086 because Intel offered a better price for the former and could provide more units, and the 8088's 8-bit bus reduced the cost of the rest of the computer. The 8088 had the added advantage of familiarity: IBM already knew its close relative, the Intel 8085, from designing the IBM System/23 Datamaster. The 62-pin expansion bus slots were also designed to be similar to the Datamaster slots, and the Datamaster's keyboard design and layout became the Model F keyboard shipped with the PC, but otherwise the PC design differed in many ways.
The 8088 motherboard was designed in 40 days, with a working prototype created in four months, demonstrated in January 1981. The design was essentially complete by April 1981, when it was handed off to the manufacturing team. PCs were assembled in an IBM plant in Boca Raton, with components made at various IBM and third party factories. The monitor was an existing design from IBM Japan, the printer was manufactured by Epson. Because none of the functional components were designed by IBM, they obtained no patents on the PC.
Many of the designers were computer hobbyists who owned their own computers, including many Apple II owners, which influenced the decisions to design the computer with an open architecture and publish technical information so others could create software and expansion slot peripherals.
During the design process IBM avoided vertical integration as much as possible, choosing for example to license Microsoft BASIC despite having a version of BASIC of its own for mainframes, due to the better existing public familiarity with the Microsoft version.
Debut
The IBM PC debuted on August 12, 1981 after a twelve-month development. Pricing started at $1,565 for a configuration with 16 kB RAM, Color Graphics Adapter, and no disk drives. The price was designed to compete with comparable machines in the market. For comparison, the Datamaster, announced two weeks earlier as IBM's least expensive computer, cost $10,000.
IBM's marketing campaign licensed the likeness of Charlie Chaplin's character "The Little Tramp" for a series of advertisements based on Chaplin's movies, played by Billy Scudder.
The PC was IBM's first attempt to sell a computer through retail channels rather than directly to customers. Because IBM did not have retail experience, they partnered with the retail chains ComputerLand and Sears Roebuck, who provided important knowledge of the marketplace and became the main outlets for the PC. More than 190 ComputerLand stores already existed, while Sears was in the process of creating a handful of in-store computer centers for sale of the new product.
Reception was overwhelmingly positive, with sales estimates from analysts suggesting billions of dollars in sales over the next few years, and the IBM PC immediately became the talk of the entire computing industry. Dealers were overwhelmed with orders, including customers offering pre-payment for machines with no guaranteed delivery date. By the time the machine was shipping, the term "PC" was becoming a household name.
Success
Sales exceeded IBM's expectations by as much as 800%, with the company at one point shipping 40,000 PCs a month. IBM estimated that 50 to 70% of PCs sold in retail stores went to the home. In 1983 the company sold more than 750,000 machines, while Digital Equipment Corporation, a competitor whose success had helped spur IBM to enter the market, sold only 69,000 machines in that period.
Software support from the industry grew rapidly, with the IBM PC almost instantly becoming the primary target for most microcomputer software development. One publication counted 753 software packages available a year after the PC's release, four times as many as the Macintosh had a year after its release. Hardware support also grew rapidly, with 30–40 companies competing to sell memory expansion cards within a year.
By 1984, IBM's revenue from the PC market was $4 billion, more than twice that of Apple. A 1983 study of corporate customers found that two thirds of large customers standardizing on one computer chose the PC, compared to 9% for Apple. A 1985 Fortune survey found that 56% of American companies with personal computers used PCs, compared to Apple's 16%.
Almost as soon as the PC reached the market, rumors of clones began, and the first PC compatible clone was released in June 1982, less than a year after the PC's debut.
Hardware
For low cost and a quick design turnaround time, the hardware design of the IBM PC used entirely "off-the-shelf" parts from third party manufacturers, rather than unique hardware designed by IBM.
The PC is housed in a wide, short steel chassis intended to support the weight of a CRT monitor. The front panel is made of plastic, with an opening where one or two disk drives can be installed. The back panel houses a power inlet and switch, a keyboard connector, a cassette connector and a series of tall vertical slots with blank metal panels which can be removed in order to install expansion cards.
Internally, the chassis is dominated by a motherboard which houses the CPU, built-in RAM, expansion RAM sockets, and slots for expansion cards.
The IBM PC was highly expandable and upgradeable; the components of the base factory configuration are described in the sections below.
Motherboard
The PC is built around a single large circuit board called a motherboard, which carries the processor, built-in RAM, expansion slots, keyboard and cassette ports, and the various peripheral integrated circuits that connect and control the components of the machine.
The peripheral chips included an Intel 8259 programmable interrupt controller (PIC), an Intel 8237 DMA controller, and an Intel 8253 programmable interval timer (PIT). The PIT provides clock "ticks" and dynamic memory refresh timing.
CPU and RAM
The CPU is an Intel 8088, a cost-reduced form of the Intel 8086 which largely retains the 8086's internal 16-bit logic but exposes only an 8-bit bus. The CPU is clocked at 4.77 MHz, which would eventually become an issue when clones and later PC models offered higher CPU speeds that broke compatibility with software developed for the original PC. The single base clock frequency for the system was 14.31818 MHz, which, when divided by 3, yielded the 4.77 MHz CPU clock (considered close enough to the 8088's then-current 5 MHz limit), and, when divided by 4, yielded the 3.579545 MHz required for the NTSC color carrier frequency.
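The derivation of both clocks from the single crystal is plain integer division; a short Python check (illustrative only) reproduces the two frequencies:

```python
# The IBM PC derived its CPU and video clocks from one 14.31818 MHz crystal.
base_hz = 14_318_180

cpu_hz = base_hz / 3         # 8088 clock
ntsc_hz = base_hz / 4        # NTSC color subcarrier

print(f"CPU clock:  {cpu_hz / 1e6:.6f} MHz")   # ~4.772727 MHz
print(f"NTSC burst: {ntsc_hz / 1e6:.6f} MHz")  # 3.579545 MHz
```

Deriving every system frequency from one crystal kept the parts count and cost down, at the price of the unusual 4.77 MHz CPU clock.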
The PC motherboard included a second, empty socket, described by IBM simply as an "auxiliary processor socket", although the most obvious use was the addition of an Intel 8087 math coprocessor, which improved floating-point math performance.
From the factory the PC was equipped with either 16 kB or 64 kB of RAM. RAM upgrades were provided both by IBM and third parties as expansion cards, and could upgrade the machine to a maximum of 256 kB.
ROM BIOS
The BIOS is the firmware of the IBM PC, occupying four 2 kB ROM chips on the motherboard. It provides bootstrap code and a library of common functions that all software can use for many purposes, such as video output, keyboard input, disk access, interrupt handling, testing memory, and other functions. IBM shipped several versions of the BIOS throughout the PC's lifespan.
Display
While most home computers had built-in video output hardware, IBM took the unusual approach of offering two different graphics options, the MDA and CGA cards. The former provided high-resolution monochrome text, but could not display anything except text, while the latter provided medium- and low-resolution color graphics and text.
CGA used the same scan rate as NTSC television, allowing it to provide a composite video output which could be used with any compatible television or composite monitor, as well as a direct-drive TTL output suitable for use with any RGBI monitor using an NTSC scan rate. IBM also sold the 5153 color monitor for this purpose, but it was not available at launch, arriving only in March 1983.
MDA scanned at a higher frequency and required a proprietary monitor, the IBM 5151. The card also included a built-in printer port.
Both cards could also be installed simultaneously for mixed graphics and text applications. For instance, AutoCAD, Lotus 1-2-3 and other software allowed use of a CGA monitor for graphics and a separate monochrome monitor for text menus. Third parties went on to provide an enormous variety of aftermarket graphics adapters, such as the Hercules Graphics Card.
The software and hardware of the PC were, at release, designed around a single 8-bit adaptation of the ASCII character set, now known as code page 437.
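The mapping survives in modern software; Python's standard library, for example, includes a codec named "cp437", which makes the character set easy to inspect:

```python
# Code page 437 maps the upper 128 byte values to box-drawing, shade,
# accented, and symbol characters; Python ships a 'cp437' codec.
data = bytes([0xC9, 0xCD, 0xBB, 0xB0, 0xB1, 0xB2])
print(data.decode("cp437"))   # ╔═╗░▒▓ (box-drawing and shade glyphs)
print("é".encode("cp437"))    # b'\x82': é occupies byte 0x82 in CP437
```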
Storage
The two bays in the front of the machine could be populated with one or two 5.25″ floppy disk drives, storing 160 kB per disk side for a total of 320 kB of storage on one disk. The floppy drives require a controller card inserted in an expansion slot, and connect with a single ribbon cable with two edge connectors. The IBM floppy controller card provides an external 37-pin D-sub connector for attachment of an external disk drive, although IBM did not offer one for purchase until 1986.
As was common for home computers of the era, the IBM PC offered a port for connecting a cassette data recorder. Unlike the typical home computer however, this was never a major avenue for software distribution, probably because very few PCs were sold without floppy drives. The port was removed on the very next PC model, the XT.
At release, IBM did not offer any hard disk drive option, and adding one was difficult: the PC's stock power supply had inadequate power to run a hard drive, the motherboard did not support the BIOS expansion ROMs needed by a hard drive controller, and neither PC DOS nor the BIOS had any support for hard disks. After the XT was released, IBM altered the design of the 5150 to add most of these capabilities, except for the upgraded power supply. At this point adding a hard drive was possible, but required the purchase of the IBM 5161 Expansion Unit, which contained a dedicated power supply and included a hard drive.
Although official hard drive support did not exist, the third party market did provide early hard drives that connected to the floppy disk controller, but required a patched version of PC DOS to support the larger disk sizes.
Human interface
The only human interface option provided in the base PC was the built-in keyboard port, meant to connect to the included IBM Model F keyboard. The Model F was initially developed for the IBM Datamaster and was substantially better than the keyboards provided with virtually all home computers of the time in terms of number of keys, reliability, and ergonomics. While some home computers then used chiclet keyboards or inexpensive mechanical designs, the IBM keyboard provided good ergonomics, reliable and positive tactile key mechanisms, and flip-up feet to adjust its angle.
Public reception of the keyboard was extremely positive, with some sources describing it as a major selling point of the PC and even as "the best keyboard available on any microcomputer."
At release, IBM provided a Game Control Adapter which offered a 15-pin port intended for the connection of up to two joysticks, each having two analog axes and two buttons.
Communications
Connectivity to other computers and peripherals was initially provided through serial and parallel ports.
IBM provided a serial card based on an 8250 UART. The BIOS supports up to two serial ports.
IBM provided two different options for connecting Centronics-compatible parallel printers. One was the IBM Printer Adapter, and the other was integrated into the MDA as the IBM Monochrome Display and Printer Adapter.
Expansion
The expansion capability of the IBM PC was very significant to its success in the market. Some publications highlighted IBM's uncharacteristic decision to publish complete, thorough specifications of the system bus and memory map immediately on release, with the intention of fostering a market of compatible third-party hardware and software.
The motherboard includes five 62-pin card edge connectors which are connected to the CPU's I/O lines. IBM referred to these as "I/O slots," but after the expansion of the PC clone industry they became retroactively known as the ISA bus. At the back of the machine is a metal panel, integrated into the steel chassis of the system unit, with a series of vertical slots lined up with each card slot.
Most expansion cards have a matching metal bracket which slots into one of these openings, serving two purposes. First, a screw inserted through a tab on the bracket into the chassis fastens the card securely, preventing it from wiggling loose. Second, any ports the card provides for external attachment are bolted to the bracket, keeping them secured as well.
The PC expansion slots can accept an enormous variety of expansion hardware, adding capabilities such as:
Graphics
Sound
Mouse support
Expanded memory
Additional serial or parallel ports
Networking
Connection to proprietary industrial or scientific equipment
The market reacted as IBM had intended, and within a year or two of the PC's release the available options for expansion hardware were immense.
5161 Expansion Unit
The expandability of the PC was important, but had significant limitations.
One major limitation was the inability to install a hard drive, as described above. Another was that there were only five expansion slots, which tended to be filled up by essential hardware: a PC with a graphics card, memory expansion, parallel card, and serial card was left with only one open slot, for instance.
IBM rectified these problems in the later XT, which included more slots and support for an internal hard drive, but at the same time released the 5161 Expansion Unit, which could be used with either the XT or the original PC. The 5161 connected to the PC system unit using a cable and a card plugged into an expansion slot, and provided a second system chassis with more expansion slots and a hard drive.
Software
IBM initially announced the intent to support multiple operating systems: CP/M-86, UCSD p-System, and an in-house product called IBM PC DOS, developed by Microsoft. In practice, IBM's expectation and intent was for the market to primarily use PC DOS: CP/M-86 was not available until six months after the PC's release and received extremely few orders once it was, and the p-System was also not available at release. PC DOS rapidly established itself as the standard OS for the PC and remained the standard for over a decade, with a variant sold by Microsoft itself as MS-DOS.
The PC included BASIC in ROM, a common feature of 1980s home computers. Its ROM BASIC supported the cassette tape interface, but PC DOS did not, limiting use of that interface to BASIC only.
PC DOS version 1.00 supported only 160 kB SSDD floppies, but version 1.1, which was released nine months after the PC's introduction, supported 160 kB SSDD and 320 kB DSDD floppies. Support for the slightly larger nine sector per track 180 kB and 360 kB formats was added in March 1983.
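These capacities follow directly from the media geometry, 40 tracks per side with 512-byte sectors (the standard geometry for these formats), multiplied by eight or nine sectors per track; a short Python check:

```python
# PC floppy capacity = sides x tracks x sectors/track x 512 bytes.
def capacity_kb(sides, tracks, sectors_per_track, sector_bytes=512):
    return sides * tracks * sectors_per_track * sector_bytes // 1024

print(capacity_kb(1, 40, 8))  # 160 kB (PC DOS 1.00, single-sided)
print(capacity_kb(2, 40, 8))  # 320 kB (PC DOS 1.1, double-sided)
print(capacity_kb(1, 40, 9))  # 180 kB (nine sectors per track)
print(capacity_kb(2, 40, 9))  # 360 kB
```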
Third-party software support grew extremely quickly, and within a year the PC platform was supplied with a vast array of titles for any conceivable purpose.
Reception
Reception of the IBM PC was extremely positive. Even before its release reviewers were impressed by the advertised specifications of the machine, and upon its release reviews praised virtually every aspect of its design both in comparison to contemporary machines and with regards to new and unexpected features.
Praise was directed at the build quality of the PC, in particular its keyboard, IBM's decision to use open specifications to encourage third party software and hardware development, their speed at delivering documentation and the quality therein, the quality of the video display, and the use of commodity components from established suppliers in the electronics industry. The price was considered extremely competitive compared to the value per dollar of competing machines.
Two years after its release, BYTE Magazine retrospectively concluded that the PC had succeeded both because of its features – an 80-column screen, open architecture, and high-quality keyboard – and because of the failure of other computer manufacturers to achieve these features first.
That year, Creative Computing named the PC the best desktop computer priced between $2000 and $4000, praising its vast hardware and software selection, manufacturer support, and resale value.
Many IBM PCs remained in service long after their technology became largely obsolete. For instance, as of June 2006 (23–25 years after release) IBM PC and XT models were still in use at the majority of U.S. National Weather Service upper-air observing sites, processing data returned from radiosondes attached to weather balloons.
Due to its status as the first entry in the extremely influential PC industry, the original IBM PC remains valuable as a collector's item, with systems having a market value of $50–$500.
Model line
IBM sold a number of computers under the "Personal Computer" or "PC" name throughout the 1980s. The name then went unused for several years before being revived for the IBM PC Series in the 1990s and early 2000s.
As with all PC-derived systems, all IBM PC models are nominally software-compatible, although some timing-sensitive software will not run correctly on models with faster CPUs.
Clones
Because the IBM PC was based on commodity hardware rather than unique IBM components, and because its operation was extensively documented by IBM, creating machines that were fully compatible with the PC offered few challenges other than the creation of a compatible BIOS ROM.
Simple duplication of the IBM PC BIOS was a direct violation of copyright law, but early in the PC's life the BIOS was reverse-engineered by companies such as Compaq, Phoenix Software Associates, American Megatrends, and Award, who either built their own computers that could run the same software and use the same expansion hardware as the PC, or sold their BIOS code to other manufacturers who wished to build their own machines.
These machines became known as IBM compatibles or "clones", and software was widely marketed as compatible with "IBM PC or 100% compatible". Shortly thereafter, clone manufacturers began to make improvements and extensions to the hardware, such as using faster processors like the NEC V20, which executed the same software as the 8088 at clock speeds of up to 10 MHz.
The clone market eventually became so large that it lost its associations with the original PC and became a set of de facto standards established by various hardware manufacturers.
References
Cited references
External links
IBM SCAMP
IBM 5150 information at www.minuszerodegrees.net
IBM PC 5150 System Disks and ROMs
IBM PC from IT Dictionary
IBM PC history and technical information
What a legacy! The IBM PC's 25 year legacy
CNN.com - IBM PC turns 25
IBM-5150 and collection of old digital and analog computers at oldcomputermuseum.com
IBM PC images and information
A brochure from November, 1982 advertising the IBM PC
A Picture of the XT/370 cards, showing the dual 68000 processors
Personal Computer
Computer-related introductions in 1981
Products introduced in 1981
16-bit computers |
697150 | https://en.wikipedia.org/wiki/Uninstaller | Uninstaller | An uninstaller, also called a deinstaller, is a variety of utility software designed to remove other software or parts of it from a computer. It is the opposite of an installer. Uninstallers are useful primarily when software components are installed in multiple directories, or where some software components might be shared between the system being uninstalled and other systems that remain in use.
Generic uninstallers flourished in the 1990s due to the popularity of shared libraries and the constraints of then-current operating systems, especially early versions of Microsoft Windows. Declining storage costs and increasing capacity subsequently made reclaiming disk space less urgent, while end-user applications have increasingly relied on simpler installation architectures that consolidate all components to facilitate removal.
Components
Typical uninstallers contain the following components:
Logger: The Logger is used to log installations (e.g., to record which files were added or changed, which registry entries were added or changed, and so on, at the time of installation). This log is used when the user decides to uninstall the logged installation at a later date; in that case, the log is "reversed", i.e., it is read back and the opposite actions are taken in reverse order (a minimal sketch of this approach follows the list).
Uninstaller: The Uninstaller is used to reverse changes in the log. This way, the applications can be uninstalled because all changes that were made at the times of installation are reversed.
Analyzer (optional): The Analyzer is used to uninstall programs whose installation was not logged. In that case, it analyzes the installed program and finds (and, if the user decides to uninstall it, deletes) all related components.
Watcher (optional): The Watcher monitors running processes for installation programs (and usually offers to start the Logger when one is detected). Usually, this works by watching the task list for names commonly used by installation programs (e.g., SETUP.EXE, INSTALL.EXE, etc.)
Other tools (optional): Some uninstallers may also contain other related tools for clearing caches or removing unwanted files.
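As a rough illustration of the log-and-reverse approach described above, the following Python sketch (hypothetical, not drawn from any actual product) records file creations during a simulated installation and replays the log backwards to undo them:

```python
import os

# Minimal sketch of install logging and reversal (hypothetical example).
# Real uninstallers also track registry changes, modified files, etc.
class InstallLog:
    def __init__(self):
        self.actions = []        # ordered log of (action, path) entries

    def record_file_created(self, path):
        self.actions.append(("file_created", path))

    def uninstall(self):
        # Reverse the log: read it backwards, take the opposite action.
        for action, path in reversed(self.actions):
            if action == "file_created" and os.path.exists(path):
                os.remove(path)  # the opposite of creating a file

log = InstallLog()
with open("example_component.dat", "w") as f:  # simulated install step
    f.write("payload")
log.record_file_created("example_component.dat")
log.uninstall()                  # removes the logged file again
```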
History
Uninstall was invented by Jack Bicer. While working on Norton Desktop for Windows at Symantec, Bicer came up with the uninstall concept and developed the first uninstall program in 1991. When published on March 23, 1992, Norton Desktop for Windows 2.0 (see the official README.TXT) was the first software package ever to include an uninstaller, described under the "UNINSTALLING NORTON DESKTOP" section.
After the release of Norton Desktop for Windows 2.0, Ken Spreitzer, who was a tester for the product at Symantec, capitalized on the uninstall idea and wrote the first widely used PC program of this kind, called "UnInstaller", initially licensed to MicroHelp and then, by February 1998, sold by CyberMedia. MicroHelp published Spreitzer's program as UnInstaller version 1. In 1995, Spreitzer told The New York Times that the royalties he received from MicroHelp for UnInstaller made him a millionaire by age 30. Tim O'Pry, while president of MicroHelp, substantially rewrote the code for UnInstaller version 2, which became a best-selling program.
See also
IObit Uninstaller
Revo Uninstaller
Should I Remove It?
ZSoft Uninstaller
References
Utility software types
Package management systems |
44038279 | https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Pachet | François Pachet | François Pachet (born 10 January 1964) is a French scientist, composer and director of the Spotify Creator Technology Research Lab. Before joining Spotify he led Sony Computer Science Laboratory in Paris. He is one of the pioneers of computer music closely linked to artificial intelligence, especially in the field of machine improvisation and style modelling. He has been elected ECCAI Fellow in 2014.
Education
Pachet graduated from École des ponts ParisTech in civil engineering and computer science in 1987, majoring in applied mathematics. He spent 18 months as a lecturer at the University of Malaya in Kuala Lumpur in 1987–1988. He obtained a PhD in computer science from Pierre and Marie Curie University; his thesis, "Knowledge representation with objects and rules: the NéOpus system", was supervised by Jean-François Perrot.
He spent a year as a post-doc at the Université du Québec à Montréal (UQAM), where he worked on common-sense representation in the Cyc project (Douglas Lenat, MCC), with the help of Hafedh Mili, a professor at UQAM.
In 1997, he obtained his habilitation on the subject "Object-oriented languages and knowledge representation" at Pierre and Marie Curie University.
He was auditeur at the 58th national session of Institut des Hautes Etudes en Défense Nationale, in 2006, and was appointed Colonel in 2007 in the "réserve citoyenne" (French Air Force).
Experiences
In 1993, he was appointed assistant professor (in French, "maître de conférences") in computer science at Pierre and Marie Curie University, where he carried out research and teaching until 1997.
In 1997, Pachet moved to Sony-CSL (Computer Science Laboratory) Paris. He started a research activity on music and artificial intelligence. His team has authored and pioneered many technologies (about 35 patents) about electronic music distribution, audio feature extraction and music interaction.
He was appointed director of Sony Computer Science Laboratories in 2014. The Paris lab, a branch of Sony CSL Tokyo, is dedicated to basic research in computer science; it was created by Luc Steels and Mario Tokoro in 1996.
Since 2017, he has been director of Spotify's Creator Technology Research Lab in Paris, where he develops tools for assisting music creation.
Achievements
Pachet founded the music team at Sony Computer Science Laboratory Paris in 1997 and there developed the vision that metadata can greatly enhance the musical experience, from listening to performance.
Flow Composer, his second achievement, is a system for composing lead sheets in the style of arbitrary composers. It was followed by LSDB, the first large-scale effort to collect lead sheets in electronic format (over 11,000 lead sheets collected), and Virtuoso, a jazz solo detector. The "Popular Music Browser" project, started in 1998 at Sony Computer Science Laboratories, covered all areas of the music-to-listener chain: music description, descriptor extraction from the music signal, data-mining techniques, similarity-based access, novel music retrieval methods such as automatic sequence generation, and user interface issues.
Moreover, he designed the Continuator, a system allowing real-time musical improvisation with an algorithm. He is the beneficiary of the ERC grant Flow Machines for investigating how machines can boost creativity in humans and continue a work in the same musical style. Pachet envisions a future in which consumers could buy the unique style of an artist and apply it to their own material; he says, "I call it 'Stylistic Cryogenics': to freeze the style into an object that can be reused and made alive again".
MusicSpace is a spatialization control system created with O. Delerue in 2000.
Another achievement is CUIDADO (Content-based Unified Interfaces and Descriptors for Audio/music Databases available Online), a two-year project that ended in 2003 and developed content-based audio modules and applications covering analysis, navigation, and the creative process. The project addressed the needs of record labels and copyright societies for information-management methods, for marketing, and for protecting their information, and provided an authoring system using content features for professional musicians and studios. In 2014, Pachet also presented two music tutorials, on Brazilian guitar and jazz.
His most notable achievement is the Continuator, an interactive music improvisation system. It has been used experimentally with many professional musicians, was presented notably at the SIGGRAPH 2003 conference, and is considered a reference in the domain of music interaction; one example is a musical Turing test with the Continuator broadcast on the VPRO channel with jazz pianist Albert van Veenendaal (Amsterdam).
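Pachet's published descriptions characterize the Continuator as learning variable-order Markov models of a musician's phrases and generating continuations in the same style. The following Python fragment is a deliberately simplified first-order illustration of that idea, not the actual system:

```python
import random
from collections import defaultdict

# First-order Markov sketch of style continuation (illustrative only;
# the real Continuator uses richer variable-order models).
def learn_transitions(notes):
    table = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        table[a].append(b)       # record each observed transition
    return table

def continue_phrase(table, seed, length=8):
    out = [seed]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break                # nothing learned after this note
        out.append(random.choice(choices))
    return out

phrase = ["C4", "E4", "G4", "E4", "C4", "E4", "G4", "C5"]
model = learn_transitions(phrase)
print(continue_phrase(model, "C4"))   # e.g. ['C4', 'E4', 'G4', ...]
```

Because such a model only ever emits transitions it has observed, its continuations tend to be heard as stylistically consistent with the input phrase.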
ARTE presented Pachet on Square Idée in "Demain, devenir Wagner ou Daft Punk?" ("Tomorrow, become Wagner or Daft Punk?") in October 2015. In September 2015, Pachet wrote about using constraint programming (CP) techniques to model style in music and text for the Association for Constraint Programming (ACP).
In 2017 he produced and released a multi-artist album, Hello World, composed with artificial intelligence.
See also
Active listening
Machine learning
Personal Writings
Pachet has written several non-scientific books about music:
- Histoire d'une oreille. An augmented book about the ontogenesis of a musical ear.
- Comment je n'ai pas rencontré Paul McCartney. A short story.
- Max Order ou l'invention du style (a comic book created in the context of the ERCcOMICS project).
References
External links
François Pachet (Sony CSL Paris)
François Pachet's website
Flow Machine's website
Can being a jazz musician make you a better decision maker? Hot Topics 2015
1964 births
Living people
French computer scientists
Artificial intelligence researchers
European Research Council grantees
Spotify people |
14002429 | https://en.wikipedia.org/wiki/Fail2ban | Fail2ban | Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.
Functionality
Fail2Ban operates by monitoring log files (such as authentication and web server logs) for selected entries and running scripts based on them. Most commonly this is used to block selected IP addresses that may belong to hosts trying to breach the system's security. It can ban any host IP address that makes too many login attempts or performs any other unwanted action within a time frame defined by the administrator, and it supports both IPv4 and IPv6. Optionally, longer bans can be configured for "recidivist" abusers that keep coming back. Fail2Ban is typically set up to unban a blocked host within a certain period, so as to not "lock out" any genuine connections that may have been temporarily misconfigured. However, an unban time of several minutes is usually enough to stop a network connection being flooded by malicious connections, as well as to reduce the likelihood of a successful dictionary attack.
Fail2Ban can perform multiple actions whenever an abusive IP address is detected: update Netfilter/iptables or PF firewall rules, or TCP Wrapper's hosts.deny table, to reject the abuser's IP address; send email notifications; or run any user-defined action that can be carried out by a Python script.
The standard configuration ships with filters for Apache, Lighttpd, sshd, vsftpd, qmail, Postfix and Courier Mail Server. Filters are defined by Python regexes, which may be conveniently customized by an administrator familiar with regular expressions. A combination of a filter and an action is known as a "jail" and is what causes a malicious host to be blocked from accessing specified network services. As well as the examples that are distributed with the software, a "jail" may be created for any network-facing process that creates a log file of access.
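The filter-plus-action division can be conveyed with a short Python sketch; this is not Fail2Ban's actual code, and the log lines and regex are made up for illustration, but it mimics a "jail" by matching a failure pattern, counting failures per IP address, and invoking a ban action at a threshold:

```python
import re
from collections import Counter

# Illustrative sketch of a fail2ban-style jail: a regex "filter" plus
# a threshold-triggered ban "action". The log lines are fabricated.
FAILREGEX = re.compile(r"Failed password .* from (?P<ip>[0-9.]+)")
MAXRETRY = 3

failures = Counter()
banned = set()

def ban(ip):
    # A real action would update iptables/nftables rules or the
    # hosts.deny table; here the decision is only printed.
    banned.add(ip)
    print(f"banning {ip}")

log_lines = ["Failed password for root from 192.0.2.7 port 4711 ssh2"] * 3

for line in log_lines:
    match = FAILREGEX.search(line)
    if match and match.group("ip") not in banned:
        failures[match.group("ip")] += 1
        if failures[match.group("ip")] >= MAXRETRY:
            ban(match.group("ip"))
```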
Integrations
Fail2ban can be integrated with many APIs, including blocklist.de and AbuseIPDB.
Shortcomings
Fail2Ban fails to protect against a distributed brute-force attack.
There is no interaction with application-specific APIs/AGIs.
See also
IPBan, a log-based intrusion-prevention security tool for Windows
DenyHosts, a log-based intrusion-prevention security tool
Stockade, a rate-limiting approach to spam mitigation.
OSSEC, an open-source host-based intrusion-detection system.
References
External links
Debian popularity contest results for fail2ban
Free software programmed in Python
Computer network security
Computer security software
Internet Protocol based network software
Free network-related software
Free security software
Linux security software
Brute force blocking software |
50955454 | https://en.wikipedia.org/wiki/Aaron%20Clauset | Aaron Clauset | Aaron Clauset is an American computer scientist who works in the areas of Network Science, Machine Learning, and Complex Systems. He is currently a professor of Computer Science at the University of Colorado Boulder and is external faculty at the Santa Fe Institute.
Education
Clauset completed his undergraduate studies in Physics and Computer Science at Haverford College in 2001. He earned his Ph.D. in Computer Science in 2006 from the University of New Mexico under the supervision of Cristopher Moore. He was then an Omidyar Fellow at the Santa Fe Institute until 2010.
Career
In 2010, he joined the University of Colorado Boulder as an Assistant Professor, with primary appointments in the Computer Science Department and the BioFrontiers Institute, an interdisciplinary institute focused on quantitative systems biology. He joined the founding editorial board of Science Advances as an Associate Editor in 2014, and became the Deputy Editor responsible for social and interdisciplinary sciences in 2017. In 2018, he was awarded tenure and promoted to Associate Professor at the University of Colorado Boulder.
Clauset is best known for work done with Cosma Shalizi and Mark Newman on developing rigorous statistical tests for the presence of a power-law pattern in empirical data, and for showing that many distributions that had been claimed to be power laws actually were not. He is also known for his work on developing algorithms for detecting community structure in complex networks, particularly a model of hierarchical clustering in networks developed with Cristopher Moore and Mark Newman. In other work, Clauset is known for his specific discovery, with Maxwell Young and Kristian Skrede Gleditsch, that the frequency and severity of terrorist events worldwide follow a power-law distribution. This discovery was summarized by Nate Silver in his popular science book The Signal and the Noise.
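Central to that work is the maximum-likelihood estimator for the exponent of a continuous power law above a lower cutoff x_min, alpha = 1 + n / sum(ln(x_i / x_min)). A minimal Python sketch of the estimator follows; the full Clauset–Shalizi–Newman method also selects x_min and applies a goodness-of-fit test, both omitted here:

```python
import math

# MLE for the exponent of a continuous power law above x_min,
# following Clauset, Shalizi & Newman (2009). The complete method
# also chooses x_min and runs a goodness-of-fit test, omitted here.
def powerlaw_alpha(data, x_min):
    tail = [x for x in data if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)

sample = [1.2, 1.5, 2.0, 3.1, 4.8, 7.5, 12.0, 30.0, 95.0]
print(round(powerlaw_alpha(sample, x_min=1.0), 3))  # sample data only
```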
In January 2020, Clauset's work on scale-free networks and the distribution of terrorist events garnered public attention after two of his papers were cited in the blog of British political advisor Dominic Cummings. The blog post was released as part of an advertisement searching for "data scientists, project managers, policy experts, assorted weirdos", with Clauset's papers being cited as examples of work potential candidates should be aware of for use in public policy. In response, Clauset stated that the "paper on scale-free networks is not directly relevant to government policy … Cummings is using our paper as an example of using careful statistical and computational analyses of large and diverse data sets to reassess ideas that may be accepted as conventional wisdom." Clauset added that "in many cases, we don’t understand causality well enough to formulate a policy that will not do more damage than good."
Awards and honors
In 2015, Clauset received a prestigious CAREER Award from the National Science Foundation to develop and evaluate new methods for characterizing the structure of networks. In 2016, Clauset received the Erdős–Rényi Prize in Network Science from the Network Science Society for his contributions to the study of network structure, including Internet mapping, inference of missing links, and community structure, and for his provocative analyses of human conflicts and social stratification.
Personal life
Aaron Clauset was a contestant on the fourth season of the NBC reality television show Average Joe: The Joe Strikes Back, which aired in 2005. From 2002 to 2016, he wrote a blog Structure+Strangeness on science, complex systems, and computation.
References
External links
Home page at the Santa Fe Institute
Citations in Google Scholar
Living people
American computer scientists
Haverford College alumni
University of New Mexico alumni
University of Colorado Boulder faculty
Science bloggers
Santa Fe Institute people
Year of birth missing (living people)
Network scientists |
20570497 | https://en.wikipedia.org/wiki/Network%20Abstraction%20Layer | Network Abstraction Layer | The Network Abstraction Layer (NAL) is a part of the H.264/AVC and HEVC video coding standards. The main goal of the NAL is the provision of a "network-friendly" video representation addressing "conversational" (video telephony) and "non conversational" (storage, broadcast, or streaming) applications. NAL has achieved a significant improvement in application flexibility relative to prior video coding standards.
Introduction
An increasing number of services and growing popularity of high definition TV are creating greater needs for higher coding efficiency. Moreover, other transmission media such as cable modem, xDSL, or UMTS offer much lower data rates than broadcast channels, and enhanced coding efficiency can enable the transmission of more video channels or higher quality video representations within existing digital transmission capacities.
Video coding for telecommunication applications has diversified from ISDN and T1/E1 service to embrace PSTN, mobile wireless networks, and LAN/Internet network delivery. Throughout this evolution, continued efforts have been made to maximize coding efficiency while dealing with the diversification of network types and their characteristic formatting and loss/error robustness requirements.
The H.264/AVC and HEVC standards are designed for technical solutions in areas including broadcasting (over cable, satellite, cable modem, DSL, terrestrial, etc.), interactive or serial storage on optical and magnetic devices, conversational services, video-on-demand or multimedia streaming, and multimedia messaging services. Moreover, new applications may be deployed over existing and future networks. This raises the question of how to handle this variety of applications and networks.
To address this need for flexibility and customizability, the design covers a NAL that formats the Video Coding Layer (VCL) representation of the video and provides header information in a manner appropriate for conveyance by a variety of transport layers or storage media.
The NAL is designed to provide "network friendliness", enabling simple and effective customization of the use of the VCL for a broad variety of systems.
The NAL facilitates the ability to map VCL data to transport layers such as:
RTP/IP for any kind of real-time wire-line and wireless Internet services.
File formats, e.g., ISO MP4 for storage and MMS.
H.32X for wireline and wireless conversational services.
MPEG-2 systems for broadcasting services, etc.
The full degree of customization of the video content to fit the needs of each particular application is outside the scope of the video coding standardization effort, but the design of the NAL anticipates a variety of such mappings. Some key concepts of the NAL are NAL units, the byte-stream and packet-format uses of NAL units, parameter sets, and access units. A short description of these concepts is given below.
NAL units
The coded video data is organized into NAL units, each of which is effectively a packet that contains an integer number of bytes. The first byte of each H.264/AVC NAL unit is a header byte that contains an indication of the type of data in the NAL unit. For HEVC the header was extended to two bytes. All the remaining bytes contain payload data of the type indicated by the header.
The NAL unit structure definition specifies a generic format for use in both packet-oriented and bitstream-oriented transport systems, and a series of NAL units generated by an encoder is referred to as a NAL unit stream.
NAL Units in Byte-Stream Format Use
Some systems require delivery of the entire or partial NAL unit stream as an ordered stream of bytes or bits within which the locations of NAL unit boundaries need to be identifiable from patterns within the coded data itself.
For use in such systems, the H.264/AVC and HEVC specifications define a byte stream format. In the byte stream format, each NAL unit is prefixed by a specific pattern of three bytes called a start code prefix. The boundaries of the NAL unit can then be identified by searching the coded data for the unique start code prefix pattern. The use of emulation prevention bytes guarantees that start code prefixes are unique identifiers of the start of a new NAL unit.
A small amount of additional data (one byte per video picture) is also added to allow decoders that operate in systems that provide streams of bits without alignment to byte boundaries to recover the necessary alignment from the data in the stream.
Additional data can also be inserted in the byte stream format that allows expansion of the amount of data to be sent and can aid in achieving more rapid byte alignment recovery, if desired.
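As an illustration of these mechanisms, the following Python sketch (not a conformant parser) locates NAL unit boundaries by scanning for the three-byte start code prefix, strips emulation prevention bytes, and decodes the one-byte H.264 NAL unit header; the example byte values are assumptions chosen to represent a sequence parameter set and an IDR slice:

```python
# Illustrative H.264 Annex B parsing sketch (not a conformant decoder).
def split_nal_units(stream: bytes):
    """Split a byte stream on 0x000001 start code prefixes."""
    units, i = [], 0
    while True:
        start = stream.find(b"\x00\x00\x01", i)
        if start < 0:
            break
        start += 3
        end = stream.find(b"\x00\x00\x01", start)
        payload = stream[start:] if end < 0 else stream[start:end]
        units.append(payload.rstrip(b"\x00"))  # drop trailing zero padding
        i = start
    return units

def strip_emulation_prevention(nal: bytes) -> bytes:
    """Remove the 0x03 bytes inserted after 0x0000 inside payloads."""
    return nal.replace(b"\x00\x00\x03", b"\x00\x00")

def parse_h264_header(nal: bytes):
    """H.264 NAL header: 1 forbidden bit, 2 ref-idc bits, 5 type bits."""
    b = nal[0]
    return {"forbidden_zero_bit": b >> 7,
            "nal_ref_idc": (b >> 5) & 0x3,
            "nal_unit_type": b & 0x1F}

stream = b"\x00\x00\x01\x67\x42\x00\x1e" + b"\x00\x00\x01\x65\x88"
for nal in split_nal_units(stream):
    print(parse_h264_header(strip_emulation_prevention(nal)))
# nal_unit_type 7 = sequence parameter set, 5 = IDR slice
```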
NAL Units in Packet-Transport System Use
In other systems (e.g., IP/RTP systems), the coded data is carried in packets that are framed by the system transport protocol, and identification of the boundaries of NAL units within the packets can be established without use of start code prefix patterns. In such systems, the inclusion of start code prefixes in the data would be a waste of data carrying capacity, so instead the NAL units can be carried in data packets without start code prefixes.
VCL and Non-VCL NAL Units
NAL units are classified into VCL and non-VCL NAL units. The VCL NAL units contain the data that represents the values of the samples in the video pictures, and the non-VCL NAL units contain any associated additional information such as parameter sets (important header data that can apply to a large number of VCL NAL units) and supplemental enhancement information (timing information and other supplemental data that may enhance usability of the decoded video signal but are not necessary for decoding the values of the samples in the video pictures).
Parameter Sets
A parameter set contains information that is expected to change rarely and that applies to the decoding of a large number of VCL NAL units. There are two types of parameter sets:
sequence parameter sets (SPS), which apply to a series of consecutive coded video pictures called a coded video sequence
picture parameter sets (PPS), which apply to the decoding of one or more individual pictures within a coded video sequence
The sequence and picture parameter-set mechanism decouples the transmission of infrequently changing information from the transmission of coded representations of the values of the samples in the video pictures. Each VCL NAL unit contains an identifier that refers to the content of the relevant picture parameter set and each picture parameter set contains an identifier that refers to the content of the relevant sequence parameter set. In this manner, a small amount of data (the identifier) can be used to refer to a larger amount of information (the parameter set) without repeating that information within each VCL NAL unit.
Sequence and picture parameter sets can be sent well ahead of the VCL NAL units that they apply to, and can be repeated to provide robustness against data loss. In some applications, parameter sets may be sent within the channel that carries the VCL NAL units (termed "in-band" transmission). In other applications, it can be advantageous to convey the parameter sets "out-of-band" using a more reliable transport mechanism than the video channel itself.
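The indirection amounts to two table lookups keyed by the small identifiers carried in the stream; a schematic Python fragment (with hypothetical, simplified field names) makes this concrete:

```python
# Schematic view of parameter-set indirection (hypothetical fields).
sps_table = {0: {"width": 1920, "height": 1080}}           # sequence params
pps_table = {0: {"sps_id": 0, "entropy_coding": "CABAC"}}  # picture params

def resolve(slice_header):
    """Follow slice -> PPS -> SPS using the small in-stream identifiers."""
    pps = pps_table[slice_header["pps_id"]]
    sps = sps_table[pps["sps_id"]]
    return pps, sps

pps, sps = resolve({"pps_id": 0})
print(sps["width"], sps["height"], pps["entropy_coding"])
```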
Access Units
A set of NAL units in a specified form is referred to as an access unit. The decoding of each access unit results in one decoded picture.
Each access unit contains a set of VCL NAL units that together compose a primary coded picture. It may also be prefixed with an access unit delimiter to aid in locating the start of the access unit. Some supplemental enhancement information containing data such as picture timing information may also precede the primary coded picture.
The primary coded picture consists of a set of VCL NAL units consisting of slices or slice data partitions that represent the samples of the video picture.
Following the primary coded picture may be some additional VCL NAL units that contain redundant representations of areas of the same video picture. These are referred to as redundant coded pictures, and are available for use by a decoder in recovering from loss or corruption of the data in the primary coded pictures. Decoders are not required to decode redundant coded pictures if they are present.
Finally, if the coded picture is the last picture of a coded video sequence (a sequence of pictures that is independently decodable and uses only one sequence parameter set), an end of sequence NAL unit may be present to indicate the end of the sequence; and if the coded picture is the last coded picture in the entire NAL unit stream, an end of stream NAL unit may be present to indicate that the stream is ending.
Coded Video Sequences
A coded video sequence consists of a series of access units that are sequential in the NAL unit stream and use only one sequence parameter set. Each coded video sequence can be decoded independently of any other coded video sequence, given the necessary parameter set information, which may be conveyed "in-band" or "out-of-band". At the beginning of a coded video sequence is an instantaneous decoding refresh (IDR) access unit. An IDR access unit contains an intra picture which is a coded picture that can be decoded without decoding any previous pictures in the NAL unit stream, and the presence of an IDR access unit indicates that no subsequent picture in the stream will require reference to pictures prior to the intra picture it contains in order to be decoded.
A NAL unit stream may contain one or more coded video sequences.
References
Sources
Overview of the H.264/AVC Video Coding Standard, IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, July 2003
Overview of the High Efficiency Video Coding (HEVC) Standard, IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, December 2012
ITU recommendation H.264 : Advanced video coding for generic audiovisual services
Video
Image processing |
174151 | https://en.wikipedia.org/wiki/Serial%20ATA | Serial ATA | Serial ATA (SATA, abbreviated from Serial AT Attachment) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives, optical drives, and solid-state drives. Serial ATA succeeded the earlier Parallel ATA (PATA) standard to become the predominant interface for storage devices.
Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO) which are then promulgated by the INCITS Technical Committee T13, AT Attachment (INCITS T13).
History
SATA was announced in 2000 in order to provide several advantages over the earlier PATA interface such as reduced cable size and cost (seven conductors instead of 40 or 80), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. Revision 1.0 of the specification was released in January 2003.
Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO). The SATA-IO group collaboratively creates, reviews, ratifies, and publishes the interoperability specifications, the test cases and plugfests. As with many other industry compatibility standards, the SATA content ownership is transferred to other industry bodies: primarily INCITS T13 and an INCITS T10 subcommittee (SCSI), a subgroup of T10 responsible for Serial Attached SCSI (SAS). The remainder of this article strives to use the SATA-IO terminology and specifications.
Before SATA's introduction in 2000, PATA was simply known as ATA. The "AT Attachment" (ATA) name originated after the 1984 release of the IBM Personal Computer AT, more commonly known as the IBM AT. The IBM AT's controller interface became a de facto industry interface for the inclusion of hard disks. "AT" was IBM's abbreviation for "Advanced Technology"; thus, many companies and organizations indicate SATA is an abbreviation of "Serial Advanced Technology Attachment". However, the ATA specifications simply use the name "AT Attachment", to avoid possible trademark issues with IBM.
SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. In contrast, parallel ATA (the redesignation for the legacy ATA specifications) uses a 16-bit wide data bus with many additional support and control signals, all operating at a much lower frequency. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command sets as legacy ATA devices.
The world's first SATA hard disk drive was the Seagate Barracuda SATA V, released in January 2003.
SATA has replaced parallel ATA in consumer desktop and laptop computers, with SATA's market share in the desktop PC market reaching 99% in 2008. PATA has mostly been replaced by SATA in other uses as well, remaining in declining use in industrial and embedded applications that use CompactFlash (CF) storage, which was designed around the legacy PATA standard. A 2008 standard, CFast, designed to replace CompactFlash, is based on SATA.
Features
Hot plug
The Serial ATA spec requires SATA device hot plugging; that is, devices that meet the specification are capable of insertion or removal of a device into or from a backplane connector (combined signal and power) that has power on. After insertion, the device initializes and then operates normally. Depending upon the operating system, the host may also initialize, resulting in a hot swap. The powered host and device do not need to be in an idle state for safe insertion and removal, although unwritten data may be lost when power is removed.
Unlike PATA, both SATA and eSATA support hot plugging by design. However, this feature requires proper support at the host, device (drive), and operating-system levels. In general, SATA devices fulfill the device-side hot-plugging requirements, and most SATA host adapters support this function.
For eSATA, hot plugging is supported in AHCI mode only. IDE mode does not support hot plugging.
Advanced Host Controller Interface
Advanced Host Controller Interface (AHCI) is an open host controller interface published and used by Intel, which has become a de facto standard. It allows the use of advanced features of SATA such as hotplug and native command queuing (NCQ). If AHCI is not enabled by the motherboard and chipset, SATA controllers typically operate in "IDE emulation" mode, which does not allow access to device features not supported by the ATA (also called IDE) standard.
Windows device drivers that are labeled as SATA are often running in IDE emulation mode unless they explicitly state that they are AHCI mode, in RAID mode, or a mode provided by a proprietary driver and command set that allowed access to SATA's advanced features before AHCI became popular. Modern versions of Microsoft Windows, Mac OS X, FreeBSD, Linux with version 2.6.19 onward, as well as Solaris and OpenSolaris, include support for AHCI, but earlier operating systems such as Windows XP do not. Even in those instances, a proprietary driver may have been created for a specific chipset, such as Intel's.
Revisions
SATA revisions are typically designated with a dash followed by Roman numerals, e.g. "SATA-III", to avoid confusion with the speed, which is always displayed in Arabic numerals, e.g. "SATA 6 Gbit/s". The speeds given are the raw interface rate in Gbit/s including line code overhead, and the usable data rate in MB/s without overhead.
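Because the 8b/10b line code transmits ten bits for every eight bits of data, the usable data rate is 80% of the raw line rate; a few lines of Python reproduce the figures quoted below:

```python
# Usable SATA throughput = line rate x 8/10 (8b/10b code), in bytes.
def usable_mb_per_s(line_rate_gbit):
    data_bits = line_rate_gbit * 1e9 * 8 / 10   # strip 8b/10b overhead
    return data_bits / 8 / 1e6                  # bits -> megabytes

for gen, rate in (("1.0", 1.5), ("2.0", 3.0), ("3.0", 6.0)):
    print(f"SATA {gen}: {rate} Gbit/s -> {usable_mb_per_s(rate):.0f} MB/s")
# SATA 1.0: 150 MB/s; SATA 2.0: 300 MB/s; SATA 3.0: 600 MB/s
```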
SATA revision 1.0 (1.5 Gbit/s, 150 MB/s, Serial ATA-150)
Revision 1.0a was released on January 7, 2003. First-generation SATA interfaces, now known as SATA 1.5 Gbit/s, communicate at a rate of 1.5 Gbit/s, and do not support Native Command Queuing (NCQ). Taking 8b/10b encoding overhead into account, they have an actual uncoded transfer rate of 1.2 Gbit/s (150 MB/s). The theoretical burst throughput of SATA 1.5 Gbit/s is similar to that of PATA/133, but newer SATA devices offer enhancements such as NCQ, which improve performance in a multitasking environment.
During the initial period after SATA 1.5 Gbit/s finalization, adapter and drive manufacturers used a "bridge chip" to convert existing PATA designs for use with the SATA interface. Bridged drives have a SATA connector, may include either or both kinds of power connectors, and, in general, perform identically to their native-SATA equivalents. However, most bridged drives lack support for some SATA-specific features such as NCQ. Native SATA products quickly took over the bridged products with the introduction of the second generation of SATA drives.
At one point, the fastest 10,000 rpm SATA hard disk drives could transfer data at maximum (not average) rates of up to 157 MB/s, which is beyond the capabilities of the older PATA/133 specification and also exceeds the capabilities of SATA 1.5 Gbit/s.
SATA revision 2.0 (3 Gbit/s, 300 MB/s, Serial ATA-300)
SATA revision 2.0 was released in April 2004, introducing Native Command Queuing (NCQ). It is backward compatible with SATA 1.5 Gbit/s.
Second-generation SATA interfaces run with a native transfer rate of 3.0 Gbit/s which, when the 8b/10b encoding scheme is taken into account, equals a maximum uncoded transfer rate of 2.4 Gbit/s (300 MB/s). The theoretical burst throughput of SATA revision 2.0, also known as SATA 3 Gbit/s, is double that of SATA revision 1.0.
All SATA data cables meeting the SATA spec are rated for 3.0 Gbit/s and handle modern mechanical drives without any loss of sustained and burst data transfer performance. However, high-performance flash-based drives can exceed the SATA 3 Gbit/s transfer rate; this is addressed with the SATA 6 Gbit/s interoperability standard.
SATA revision 2.5
Announced in August 2005, SATA revision 2.5 consolidated the specification to a single document.
SATA revision 2.6
Announced in February 2007, SATA revision 2.6 introduced the following features:
Slimline connector.
Micro connector (initially for 1.8” HDD).
Mini Internal Multilane cable and connector.
Mini External Multilane cable and connector.
NCQ Priority.
NCQ Unload.
Enhancements to the BIST Activate FIS.
Enhancements for robust reception of the Signature FIS.
SATA revision 3.0 (6 Gbit/s, 600 MB/s, Serial ATA-600)
Serial ATA International Organization (SATA-IO) presented the draft specification of SATA 6 Gbit/s physical layer in July 2008, and ratified its physical layer specification on August 18, 2008. The full 3.0 standard was released on May 27, 2009.
Third-generation SATA interfaces run with a native transfer rate of 6.0 Gbit/s; taking 8b/10b encoding into account, the maximum uncoded transfer rate is 4.8 Gbit/s (600 MB/s). The theoretical burst throughput of SATA 6.0 Gbit/s is double that of SATA revision 2.0. It is backward compatible with SATA 3 Gbit/s and SATA 1.5 Gbit/s.
The SATA 3.0 specification contains the following changes:
6 Gbit/s for scalable performance.
Continued compatibility with SAS, including SAS 6 Gbit/s, as per "a SAS domain may support attachment to and control of unmodified SATA devices connected directly into the SAS domain using the Serial ATA Tunneled Protocol (STP)" from the SATA Revision 3.0 Gold specification.
Isochronous Native Command Queuing (NCQ) streaming command to enable isochronous quality of service data transfers for streaming digital content applications.
An NCQ management feature that helps optimize performance by enabling host processing and management of outstanding NCQ commands.
Improved power management capabilities.
A small low insertion force (LIF) connector for more compact 1.8-inch storage devices.
A 7 mm optical disk drive profile for the slimline SATA connector (in addition to the existing 12.7 mm and 9.5 mm profiles).
Alignment with the INCITS ATA8-ACS standard.
In general, the enhancements are aimed at improving quality of service for video streaming and high-priority interrupts. In addition, the standard continues to support distances up to one meter. The newer speeds may require higher power consumption for supporting chips, though improved process technologies and power management techniques may mitigate this. The later specification can use existing SATA cables and connectors, though it was reported in 2008 that some OEMs were expected to upgrade host connectors for the higher speeds.
SATA revision 3.1
Released in July 2011, SATA revision 3.1 introduced or changed the following features:
mSATA, SATA for solid-state drives in mobile computing devices, a PCI Express Mini Card-like connector that is electrically SATA.
Zero-power optical disk drive, idle SATA optical drive draws no power.
Queued TRIM Command, improves solid-state drive performance.
Required Link Power Management, reduces overall system power demand of several SATA devices.
Hardware Control Features, enable host identification of device capabilities.
Universal Storage Module (USM), a new standard for cableless plug-in (slot) powered storage for consumer electronics devices.
SATA revision 3.2
Released in August 2013, SATA revision 3.2 introduced the following features:
The SATA Express specification defines an interface that combines both SATA and PCI Express buses, making it possible for both types of storage devices to coexist. By employing PCI Express, a much higher theoretical throughput of 1969 MB/s is possible.
The SATA M.2 standard is a small form factor implementation of the SATA Express interface, with the addition of an internal USB 3.0 port; see the M.2 (NGFF) section below for a more detailed summary.
microSSD introduces a ball grid array electrical interface for miniaturized, embedded SATA storage.
USM Slim reduces the thickness of the Universal Storage Module (USM).
DevSleep enables lower power consumption for always-on devices while they are in low-power modes such as InstantGo (which used to be known as Connected Standby).
Hybrid Information provides higher performance for solid-state hybrid drives.
SATA revision 3.3
Released in February 2016, SATA revision 3.3 introduced the following features:
Shingled magnetic recording (SMR) support that provides a 25 percent or greater increase in hard disk drive capacity by overlapping tracks on the media.
Power Disable feature (see PWDIS pin) allows for remote power cycling of SATA drives and a Rebuild Assist function that speeds up the rebuild process to help ease maintenance in the data center.
Transmitter Emphasis Specification increases interoperability and reliability between host and devices in electrically demanding environments.
An activity indicator and staggered spin-up can be controlled by the same pin, adding flexibility and providing users with more choices.
The new Power Disable feature (similar to the SAS Power Disable feature) uses Pin 3 of the SATA power connector. Some legacy power supplies that provide 3.3 V power on Pin 3 would force drives with Power Disable feature to get stuck in a hard reset condition preventing them from spinning up. The problem can usually be eliminated by using a simple “Molex to SATA” power adaptor to supply power to these drives.
SATA revision 3.4
Released in June 2018, SATA revision 3.4 introduced the following features that enable monitoring of device conditions and execution of housekeeping tasks, both with minimal impact on performance:
Durable/Ordered Write Notification: enables writing selected critical cache data to the media, minimizing impact on normal operations.
Device Temperature Monitoring: allows for active monitoring of SATA device temperature and other conditions without impacting normal operation by utilizing the SFF-8609 standard for out-of-band (OOB) communications.
Device Sleep Signal Timing: provides additional definition to enhance compatibility between manufacturers’ implementations.
SATA revision 3.5
Released in July 2020, SATA revision 3.5 introduced features that enable increased performance benefits and promote greater integration of SATA devices and products with other industry I/O standards:
Device Transmit Emphasis for Gen 3 PHY: aligns SATA transmit characteristics with those of other I/O measurement solutions to help SATA-IO members with testing and integration.
Defined Ordered NCQ Commands: allows the host to specify the processing relationships among queued commands and sets the order in which commands are processed in the queue.
Command Duration Limit Features: reduces latency by allowing the host to define quality of service categories, giving the host more granularity in controlling command properties. The feature helps align SATA with the "Fast Fail" requirements established by the Open Compute Project (OCP) and specified in the INCITS T13 Technical Committee standard.
Cables, connectors, and ports
Connectors and cables present the most visible differences between SATA and parallel ATA drives. Unlike PATA, the same connectors are used on 3.5-inch SATA hard disks (for desktop and server computers) and 2.5-inch disks (for portable or small computers).
Standard SATA connectors for both data and power have a conductor pitch of 1.27 mm (0.050 in). Low insertion force is required to mate a SATA connector. A smaller mini-SATA or mSATA connector is used by smaller devices such as 1.8-inch SATA drives, some DVD and Blu-ray drives, and mini SSDs.
A special eSATA connector is specified for external devices, and there is an optionally implemented provision for clips to hold internal connectors firmly in place. SATA drives may be plugged into SAS controllers and communicate on the same physical cable as native SAS disks, but SATA controllers cannot handle SAS disks.
Female SATA ports (on motherboards for example) are for use with SATA data cables that have locks or clips to prevent accidental unplugging. Some SATA cables have right- or left-angled connectors to ease connection to circuit boards.
Data connector
The SATA standard defines a data cable with seven conductors (three grounds and four active data lines in two pairs) and 8 mm wide wafer connectors on each end. SATA cables can have lengths up to 1 m (3.3 ft), and connect one motherboard socket to one hard drive. PATA ribbon cables, in comparison, connect one motherboard socket to one or two hard drives, carry either 40 or 80 wires, and are limited to 18 in (457 mm) in length by the PATA specification; however, longer cables are readily available. Thus, SATA connectors and cables are easier to fit in closed spaces and reduce obstructions to air cooling. Although they are more susceptible to accidental unplugging and breakage than PATA, users can purchase cables that have a locking feature, whereby a small (usually metal) spring holds the plug in the socket.
SATA connectors may be straight, right-angled, or left-angled. Angled connectors allow lower-profile connections. Right-angled (also called 90-degree) connectors lead the cable immediately away from the drive, on the circuit-board side. Left-angled (also called 270-degree) connectors lead the cable across the drive towards its top.
One of the problems associated with the transmission of data at high speed over electrical connections is described as noise, which is due to electrical coupling between data circuits and other circuits. As a result, the data circuits can both affect other circuits and be affected by them. Designers use a number of techniques to reduce the undesirable effects of such unintentional coupling. One such technique used in SATA links is differential signaling. This is an enhancement over PATA, which uses single-ended signaling. The use of fully shielded, dual coax conductors, with multiple ground connections, for each differential pair improves isolation between the channels and reduces the chances of lost data in difficult electrical environments.
Power connectors
Standard connector
SATA specifies a different power connector than the four-pin Molex connector used on Parallel ATA (PATA) devices (and earlier small storage devices, going back to ST-506 hard disk drives and even to floppy disk drives that predated the IBM PC). It is a wafer-type connector, like the SATA data connector, but much wider (fifteen pins versus seven) to avoid confusion between the two. Some early SATA drives included the four-pin Molex power connector together with the new fifteen-pin connector, but most SATA drives now have only the latter.
The new SATA power connector contains many more pins for several reasons:
3.3 V is supplied along with the traditional 5 V and 12 V supplies. However, very few drives actually use it, so they may be powered from a four-pin Molex connector with an adapter.
Pin 3 in SATA revision 3.3 has been redefined as PWDIS and is used to enter and exit the POWER DISABLE mode, for compatibility with the SAS specification. If Pin 3 is driven HIGH (2.1–3.6 V), power to the drive circuitry is disabled. Drives with this feature therefore do not power up in systems designed to SATA revision 3.1 or earlier, because those systems supply 3.3 V on Pin 3, holding it HIGH and preventing the drive from powering up.
To reduce resistance and increase current capability, each voltage is supplied by three pins in parallel, though one pin in each group is intended for precharging (see below). Each pin should be able to carry 1.5 A.
Five parallel pins provide a low-resistance ground connection.
Two ground pins and one pin for each supplied voltage support hot-plug precharging. Ground pins 4 and 12 in a hot-swap cable are the longest, so they make contact first when the connectors are mated. Drive power connector pins 3, 7, and 13 are longer than the others, so they make contact next. The drive uses them to charge its internal bypass capacitors through current-limiting resistances. Finally, the remaining power pins make contact, bypassing the resistances and providing a low-resistance source of each voltage. This two-step mating process avoids glitches to other loads and possible arcing or erosion of the SATA power-connector contacts.
Pin 11 can function for staggered spinup, activity indication, both, or nothing. It is an open-collector signal, which may be pulled down by the connector or the drive. If pulled down at the connector (as it is on most cable-style SATA power connectors), the drive spins up as soon as power is applied. If left floating, the drive waits until it is spoken to. This prevents many drives from spinning up simultaneously, which might draw too much power. The pin is also pulled low by the drive to indicate drive activity. This may be used to give feedback to the user through an LED.
Passive adapters are available that convert a four-pin Molex connector to a SATA power connector, providing the 5 V and 12 V lines available on the Molex connector, but not 3.3 V. There are also four-pin Molex-to-SATA power adapters that include electronics to additionally provide the 3.3 V power supply. However, most drives do not require the 3.3 V power line.
Slimline connector
SATA 2.6 is the first revision that defined the slimline connector, intended for smaller form factors such as notebook optical drives. Pin 1 of the slimline power connector, denoting device presence, is shorter than the others to allow hot-swapping. The slimline signal connector is identical to and compatible with the standard version, while the power connector is reduced to six pins, so it supplies only +5 V, and not +12 V or +3.3 V.
Low-cost adapters exist to convert from standard SATA to slimline SATA.
Micro connector
The micro SATA connector (sometimes called uSATA or μSATA) originated with SATA 2.6, and is intended for 1.8-inch hard disk drives. There is also a micro data connector, similar in appearance to, but slightly thinner than, the standard data connector.
Additional pins
Many SATA drives, in particular mechanical ones, have an extra interface of four or more pins that is not uniformly standardised but serves a similar, manufacturer-defined purpose. Whereas IDE drives used such extra pins to configure master and slave drives, on SATA drives these pins are generally used, by means of a jumper, to select different power modes for use in USB-SATA bridges, or to enable additional features such as spread-spectrum clocking, a SATA speed limit, or a factory mode for diagnostics and recovery.
eSATA
Standardized in 2004, eSATA (e standing for external) provides a variant of SATA meant for external connectivity. It uses a more robust connector, longer shielded cables, and stricter (but backward-compatible) electrical standards. The protocol and logical signaling (link/transport layers and above) are identical to internal SATA. The differences are:
Minimum transmit amplitude increased: Range is 500–600 mV instead of 400–600 mV.
Minimum receive amplitude decreased: Range is 240–600 mV instead of 325–600 mV.
Maximum cable length increased to 2 m (6.6 ft), from 1 m (3.3 ft) for internal SATA.
The eSATA cable and connector is similar to the SATA 1.0a cable and connector, with these exceptions:
The eSATA connector is mechanically different to prevent unshielded internal cables from being used externally. The eSATA connector discards the "L"-shaped key and changes the position and size of the guides.
The eSATA insertion depth is deeper: 6.6 mm instead of 5 mm. The contact positions are also changed.
The eSATA cable has an extra shield to keep EMI within FCC and CE limits. Internal cables do not need the extra shield to satisfy EMI requirements because they are inside a shielded case.
The eSATA connector uses metal springs for shield contact and mechanical retention.
The eSATA connector has a design-life of 5,000 matings; the ordinary SATA connector is only specified for 50.
Aimed at the consumer market, eSATA enters an external storage market served also by the USB and FireWire interfaces. The SATA interface has certain advantages. Most external hard-disk-drive cases with FireWire or USB interfaces use either PATA or SATA drives and "bridges" to translate between the drives' interfaces and the enclosures' external ports; this bridging incurs some inefficiency. Some single disks can transfer 157 MB/s during real use, about four times the maximum transfer rate of USB 2.0 or FireWire 400 (IEEE 1394a) and almost twice as fast as the maximum transfer rate of FireWire 800. The S3200 FireWire 1394b specification reaches around 400 MB/s (3.2 Gbit/s), and USB 3.0 has a nominal speed of 5 Gbit/s. Some low-level drive features, such as S.M.A.R.T., may not operate through some USB or FireWire bridges; eSATA does not suffer from these issues, provided that the controller manufacturer (and its drivers) presents eSATA drives as ATA devices rather than as SCSI devices, as has been common with Silicon Image, JMicron, and NVIDIA nForce drivers for Windows Vista. In those cases SATA drives do not have low-level features accessible.
The eSATA version of SATA 6G operates at 6.0 Gbit/s (the term "SATA III" is avoided by the SATA-IO organization to prevent confusion with SATA II 3.0 Gbit/s, which was colloquially referred to as "SATA 3G" [bit/s] or "SATA 300" [MB/s], since the 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both "SATA 1.5G" [bit/s] and "SATA 150" [MB/s]). In practice, the differences between eSATA speed grades are negligible for most drives: once an interface can transfer data as fast as the drive can handle it, increasing the interface speed does not improve data transfer.
There are some disadvantages, however, to the eSATA interface:
Devices built before the eSATA interface became popular lack external SATA connectors.
For small form-factor devices (such as external disks), a PC-hosted USB or FireWire link can usually supply sufficient power to operate the device. However, eSATA connectors cannot supply power, and require a power supply for the external device. The related (but mechanically incompatible) eSATAp connector, sometimes called eSATA/USB, adds power to an external SATA connection, so that an additional power supply is not needed.
As of mid-2017, few new computers had dedicated external SATA (eSATA) connectors; USB 3.0 dominated, and USB 3.0 Type-C, often with the Thunderbolt alternate mode, was starting to replace the earlier USB connectors. Single ports supporting both USB 3.0 and eSATA were still sometimes present.
Desktop computers without a built-in eSATA interface can install an eSATA host bus adapter (HBA); if the motherboard supports SATA, an externally available eSATA connector can be added. Notebook computers with the now-rare CardBus or ExpressCard slots could add an eSATA HBA. With passive adapters, the maximum cable length is reduced to 1 m (3.3 ft) due to the absence of compliant eSATA signal levels.
eSATAp
eSATAp stands for powered eSATA. It is also known as Power over eSATA, Power eSATA, eSATA/USB Combo, or eSATA USB Hybrid Port (EUHP). An eSATAp port combines the four pins of the USB 2.0 (or earlier) port, the seven pins of the eSATA port, and optionally two 12 V power pins. Both SATA traffic and device power are integrated in a single cable, as is the case with USB but not eSATA. The 5 V power is provided through two USB pins, while the 12 V power may optionally be provided. Typically desktop, but not notebook, computers provide 12 V power, so they can power devices requiring this voltage, typically 3.5-inch disk and CD/DVD drives, in addition to 5 V devices such as 2.5-inch drives.
Both USB and eSATA devices can be used with an eSATAp port, when plugged in with a USB or eSATA cable, respectively. An eSATA device cannot be powered via an eSATAp cable, but a special cable can make both a SATA (or eSATA) connector and a power connector available from an eSATAp port.
An eSATAp connector can be built into a computer with internal SATA and USB, by fitting a bracket with connections for internal SATA, USB, and power connectors and an externally accessible eSATAp port. Though eSATAp connectors have been built into several devices, manufacturers do not refer to an official standard.
Pre-standard implementations
Prior to the final eSATA 3 Gbit/s specification, a number of products were designed for external connection of SATA drives. Some of these use the internal SATA connector, or even connectors designed for other interface specifications, such as FireWire. These products are not eSATA compliant. The final eSATA specification features a specific connector designed for rough handling, similar to the regular SATA connector, but with reinforcements in both the male and female sides, inspired by the USB connector. eSATA resists inadvertent unplugging, and can withstand yanking or wiggling, which could break a male SATA connector (the hard-drive or host adapter, usually fitted inside the computer). With an eSATA connector, considerably more force is needed to damage the connector—and if it does break, it is likely to be the female side, on the cable itself, which is relatively easy to replace.
Prior to the final eSATA 6 Gbit/s specification many add-on cards and some motherboards advertised eSATA 6 Gbit/s support because they had 6 Gbit/s SATA 3.0 controllers for internal-only solutions. Those implementations are non-standard, and eSATA 6 Gbit/s requirements were ratified in the July 18, 2011 SATA 3.1 specification. Some products might not be fully eSATA 6 Gbit/s compliant.
Mini-SATA (mSATA)
Mini-SATA (abbreviated as mSATA), which is distinct from the micro connector, was announced by the Serial ATA International Organization on September 21, 2009. Applications include netbooks, laptops and other devices that require a solid-state drive in a small footprint.
The physical dimensions of the mSATA connector are identical to those of the PCI Express Mini Card interface, but the interfaces are electrically incompatible; the data signals (TX±/RX± for SATA; PETn0/PETp0/PERn0/PERp0 for PCI Express) need a connection to the SATA host controller instead of the PCI Express host controller.
The M.2 specification has superseded both mSATA and mini-PCIe.
SFF-8784 connector
Slim 2.5-inch SATA devices, 5 mm (0.20 in) in height, use the twenty-pin SFF-8784 edge connector to save space. By combining the data signals and power lines into a slim connector that enables direct connection to the device's printed circuit board (PCB) without additional space-consuming connectors, SFF-8784 allows further internal layout compaction for portable devices such as ultrabooks.
Pins 1 to 10 are on the connector's bottom side, while pins 11 to 20 are on the top side.
SATA Express
SATA Express, initially standardized in the SATA 3.2 specification, is an interface that supports either SATA or PCI Express storage devices. The host connector is backward compatible with the standard 3.5-inch SATA data connector, allowing up to two legacy SATA devices to connect. At the same time, the host connector provides up to two PCI Express 3.0 lanes as a pure PCI Express connection to the storage device, allowing bandwidths of up to 2 GB/s.
Instead of the otherwise usual approach of doubling the native speed of the SATA interface, PCI Express was selected for achieving data transfer speeds greater than 6 Gbit/s. It was concluded that doubling the native SATA speed would take too much time, too many changes would be required to the SATA standard, and would result in a much greater power consumption when compared to the existing PCI Express bus.
In addition to supporting legacy Advanced Host Controller Interface (AHCI), SATA Express also makes it possible for NVM Express (NVMe) to be used as the logical device interface for connected PCI Express storage devices.
As the M.2 form factor, described below, achieved much greater popularity, SATA Express came to be regarded as a failed standard, and dedicated ports quickly disappeared from motherboards.
M.2 (NGFF)
M.2, formerly known as the Next Generation Form Factor (NGFF), is a specification for computer expansion cards and associated connectors. It replaces the mSATA standard, which uses the PCI Express Mini Card physical layout. Having a smaller and more flexible physical specification, together with more advanced features, the M.2 is more suitable for solid-state storage applications in general, especially when used in small devices such as ultrabooks or tablets.
The M.2 standard is designed as a revision and improvement to the mSATA standard, so that larger printed circuit boards (PCBs) can be manufactured. While mSATA took advantage of the existing PCI Express Mini Card form factor and connector, M.2 has been designed to maximize usage of the card space, while minimizing the footprint.
Supported host controller interfaces and internally provided ports are a superset to those defined by the SATA Express interface. Essentially, the M.2 standard is a small form factor implementation of the SATA Express interface, with the addition of an internal USB 3.0 port.
U.2 (SFF-8639)
U.2, formerly known as SFF-8639, is a connector that, like M.2, carries a PCI Express electrical signal; however, U.2 uses a PCIe 3.0 ×4 link, providing a higher bandwidth of 32 Gbit/s in each direction. In order to provide maximum backward compatibility, the U.2 connector also supports SATA and multi-path SAS.
Protocol
The SATA specification defines three distinct protocol layers: physical, link, and transport.
Physical layer
The physical layer defines SATA's electrical and physical characteristics (such as cable dimensions and parasitics, driver voltage level and receiver operating range), as well as the physical coding subsystem (bit-level encoding, device detection on the wire, and link initialization).
Physical transmission uses differential signaling. The SATA PHY contains a transmit pair and receive pair. When the SATA link is not in use (for example, when no device is attached), the transmitter allows the transmit pins to float to their common-mode voltage level. When the SATA link is either active or in the link-initialization phase, the transmitter drives the transmit pins at the specified differential voltage (1.5 V in SATA/I).
SATA physical coding uses a line encoding system known as 8b/10b encoding. This scheme serves multiple functions required to sustain a differential serial link. First, the stream contains necessary synchronization information that allows the SATA host/drive to extract clocking. The 8b/10b encoded sequence embeds periodic edge transitions to allow the receiver to achieve bit-alignment without the use of a separately transmitted reference clock waveform. The sequence also maintains a neutral (DC-balanced) bitstream, which lets transmit drivers and receiver inputs be AC-coupled. Generally, the actual SATA signalling is half-duplex, meaning that it can only read or write data at any one time.
Also, SATA uses some of the special characters defined in 8b/10b. In particular, the PHY layer uses the comma (K28.5) character to maintain symbol-alignment. A specific four-symbol sequence, the ALIGN primitive, is used for clock rate-matching between the two devices on the link. Other special symbols communicate flow control information produced and consumed in the higher layers (link and transport).
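To make the DC-balance bookkeeping concrete, the following sketch tracks running disparity over a stream of 10-bit symbols. It illustrates only the general 8b/10b property, assuming symbols are passed as plain integers; it is not taken from the SATA specification or from any real PHY implementation.

    # Illustrative check of the DC-balance bookkeeping used by 8b/10b
    # line codes such as SATA's physical coding (sketch only).
    def symbol_disparity(symbol10):
        """Disparity of one 10-bit symbol: number of ones minus zeros."""
        ones = bin(symbol10 & 0x3FF).count("1")
        return ones - (10 - ones)

    def stream_is_dc_balanced(symbols):
        """A valid 8b/10b stream keeps running disparity at -1 or +1."""
        rd = -1  # running disparity conventionally starts negative
        for s in symbols:
            rd += symbol_disparity(s)  # valid symbols have disparity 0 or +/-2
            if rd not in (-1, 1):
                return False  # the stream would accumulate a DC offset
        return True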
Separate point-to-point AC-coupled low-voltage differential signaling (LVDS) links are used for physical transmission between host and drive.
The PHY layer is responsible for detecting the other SATA device on the cable and for link initialization. During the link-initialization process, the PHY is responsible for locally generating special out-of-band signals by switching the transmitter between electrical idle and specific 10b characters in a defined pattern, negotiating a mutually supported signalling rate (1.5, 3.0, or 6.0 Gbit/s), and finally synchronizing to the far-end device's PHY-layer data stream. During this time, no data is sent from the link layer.
Once link-initialization has completed, the link-layer takes over data-transmission, with the PHY providing only the 8b/10b conversion before bit transmission.
Link layer
After the PHY-layer has established a link, the link layer is responsible for transmission and reception of Frame Information Structures (FISs) over the SATA link. FISs are packets containing control information or payload data. Each packet contains a header (identifying its type), and payload whose contents are dependent on the type. The link layer also manages flow control over the link.
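To give a concrete picture of FIS framing, the sketch below packs a Register Host-to-Device FIS (FIS type 27h), the 20-byte frame used to deliver an ATA command register set. The field layout follows the published SATA specification, but the helper itself is a hypothetical illustration rather than code from any real driver.

    import struct

    # Hypothetical packing of a Register Host-to-Device FIS (type 27h):
    # five Dwords carrying the ATA command, LBA, count and control fields.
    def build_reg_h2d_fis(command, lba, count, device=0x40, features=0):
        flags = 0x80  # bit 7 set: frame updates the device Command register
        return struct.pack(
            "<BBBB" "BBBB" "BBBB" "BBBB" "4x",  # 5 Dwords; last is reserved padding
            0x27, flags, command, features & 0xFF,                      # Dword 0
            lba & 0xFF, (lba >> 8) & 0xFF, (lba >> 16) & 0xFF, device,  # Dword 1
            (lba >> 24) & 0xFF, (lba >> 32) & 0xFF,
            (lba >> 40) & 0xFF, (features >> 8) & 0xFF,                 # Dword 2
            count & 0xFF, (count >> 8) & 0xFF, 0, 0,                    # Dword 3
        )

    # Example: READ DMA EXT (25h), 8 sectors starting at LBA 0x1000.
    fis = build_reg_h2d_fis(command=0x25, lba=0x1000, count=8)
    assert len(fis) == 20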
Transport layer
Layer number three in the serial ATA specification is the transport layer. This layer has the responsibility of acting on the frames and transmitting/receiving the frames in an appropriate sequence. The transport layer handles the assembly and disassembly of FIS structures, which includes, for example, extracting content from register FISs into the task-file and informing the command layer. In an abstract fashion, the transport layer is responsible for creating and encoding FIS structures requested by the command layer, and removing those structures when the frames are received.
When DMA data received from the higher command layer is to be transmitted, the transport layer appends the FIS control header to the payload and informs the link layer to prepare for transmission. The same procedure is performed in reverse order when data is received: the link layer signals to the transport layer that incoming data is available; once the data is processed by the link layer, the transport layer inspects the FIS header and removes it before forwarding the data to the command layer.
Topology
SATA uses a point-to-point architecture. The physical connection between a controller and a storage device is not shared among other controllers and storage devices. SATA defines port multipliers, which allow a single SATA controller port to drive up to fifteen storage devices. The multiplier performs the function of a hub: the controller and each storage device connect to the hub. This is conceptually similar to SAS expanders.
PC systems have SATA controllers built into the motherboard, typically featuring two to eight ports. Additional ports can be installed through add-in SATA host adapters (available for a variety of bus interfaces: USB, PCI, PCIe).
Backward and forward compatibility
SATA and PATA
At the hardware interface level, SATA and PATA (Parallel AT Attachment) devices are completely incompatible: they cannot be interconnected without an adapter.
At the application level, SATA devices can be specified to look and act like PATA devices.
Many motherboards offer a "Legacy Mode" option, which makes SATA drives appear to the OS like PATA drives on a standard controller. Legacy Mode eases OS installation by not requiring that a specific driver be loaded during setup, but sacrifices support for some (vendor-specific) features of SATA. Legacy Mode often, if not always, disables some of the board's PATA or SATA ports, since the standard PATA controller interface supports only four drives. (Often, which ports are disabled is configurable.)
The common heritage of the ATA command set has enabled the proliferation of low-cost PATA-to-SATA bridge chips. Bridge chips were widely used on PATA drives (before the completion of native SATA drives) as well as in standalone converters. When attached to a PATA drive, a device-side converter allows the PATA drive to function as a SATA drive. Host-side converters allow a motherboard PATA port to connect to a SATA drive.
The market has produced powered enclosures for both PATA and SATA drives that interface to the PC through USB, FireWire or eSATA, with the restrictions noted above. PCI cards with a SATA connector exist that allow SATA drives to connect to legacy systems without SATA connectors.
SATA 1.5 Gbit/s and SATA 3 Gbit/s
The designers of the SATA standard aimed, as an overall goal, for backward and forward compatibility with future revisions of the SATA standard.
To prevent interoperability problems that could occur when next generation SATA drives are installed on motherboards with standard legacy SATA 1.5 Gbit/s host controllers, many manufacturers have made it easy to switch those newer drives to the previous standard's mode.
Examples of such provisions include:
Seagate/Maxtor added a user-accessible jumper switch, known as the "force 150", to switch the drive between forced 1.5 Gbit/s operation and 1.5/3 Gbit/s negotiated operation.
Western Digital uses a jumper setting called OPT1 to force 1.5 Gbit/s data transfer speed (OPT1 is enabled by putting the jumper on pins 5 and 6).
Samsung drives can be forced to 1.5 Gbit/s mode using software that may be downloaded from the manufacturer's website. Configuring some Samsung drives in this manner requires the temporary use of a SATA-2 (SATA 3.0 Gbit/s) controller while programming the drive.
The "force 150" switch (or equivalent) is also useful for attaching SATA 3 Gbit/s hard drives to SATA controllers on PCI cards, since many of these controllers (such as the Silicon Image chips) run at 3 Gbit/s, even though the PCI bus cannot reach 1.5 Gbit/s speeds. This can cause data corruption in operating systems that do not specifically test for this condition and limit the disk transfer speed.
SATA 3 Gbit/s and SATA 6 Gbit/s
SATA 3 Gbit/s and SATA 6 Gbit/s are compatible with each other. Most devices that are only SATA 3 Gbit/s can connect with devices that are SATA 6 Gbit/s, and vice versa, though SATA 3 Gbit/s devices only connect with SATA 6 Gbit/s devices at the slower 3 Gbit/s speed.
SATA 1.5 Gbit/s and SATA 6 Gbit/s
SATA 1.5 Gbit/s and SATA 6 Gbit/s are compatible with each other. Most devices that are only SATA 1.5 Gbit/s can connect with devices that are SATA 6 Gbit/s, and vice versa, though SATA 1.5 Gbit/s devices only connect with SATA 6 Gbit/s devices at the slower 1.5 Gbit/s speed.
Comparison to other interfaces
SATA and SCSI
Parallel SCSI uses a more complex bus than SATA, usually resulting in higher manufacturing costs. SCSI buses also allow connection of several drives on one shared channel, whereas SATA allows one drive per channel, unless using a port multiplier. Serial Attached SCSI uses the same physical interconnects as SATA, and most SAS HBAs also support 3 and 6 Gbit/s SATA devices (an HBA requires support for Serial ATA Tunneling Protocol).
SATA 3 Gbit/s theoretically offers a maximum bandwidth of 300 MB/s per device, which is only slightly lower than the rated speed for SCSI Ultra 320 with a maximum of 320 MB/s total for all devices on a bus. SCSI drives provide greater sustained throughput than multiple SATA drives connected via a simple (i.e., command-based) port multiplier because of disconnect-reconnect and aggregating performance. In general, SATA devices link compatibly to SAS enclosures and adapters, whereas SCSI devices cannot be directly connected to a SATA bus.
SCSI, SAS, and fibre-channel (FC) drives are more expensive than SATA, so they are used in servers and disk arrays where the better performance justifies the additional cost. Inexpensive ATA and SATA drives evolved in the home-computer market, hence there is a view that they are less reliable. As those two worlds overlapped, the subject of reliability became somewhat controversial. Note that, in general, the failure rate of a disk drive is related to the quality of its heads, platters and supporting manufacturing processes, not to its interface.
Use of serial ATA in the business market increased from 22% in 2006 to 28% in 2008.
Comparison with other buses
SCSI-3 devices with SCA-2 connectors are designed for hot swapping. Many server and RAID systems provide hardware support for transparent hot swapping. The designers of the SCSI standard prior to SCA-2 connectors did not target hot swapping, but in practice, most RAID implementations support hot swapping of hard disks.
See also
FATA (hard disk drive)
libATA
List of device bit rates
Notes
References
External links
Serial ATA International Organization (SATA-IO)
EETimes Serial ATA and the evolution in data storage technology, Mohamed A. Salem
"SATA-1" specification, as a zipped pdf; Serial ATA: High Speed Serialized AT Attachment, Revision 1.0a, 7-January-2003.
Errata and Engineering Change Notices to above "SATA-1" specification, as a zip of pdfs
Serial ATA server and storage use cases
How to Install and Troubleshoot SATA Hard Drives
Serial ATA and the 7 Deadly Sins of Parallel ATA
Everything You Need to Know About Serial ATA
USB 3.0 vs. eSATA: Is faster better?
Universal ATA driver for Windows NT3.51/NT4/2000/XP/2003/Vista/7/ReactOS – a universal, free and open-source ATA driver with PATA/SATA/AHCI support
Computer-related introductions in 2003
Computer connectors
Serial buses |
44354728 | https://en.wikipedia.org/wiki/Chaos%20engineering | Chaos engineering | Chaos engineering is the discipline of experimenting on a software system in production in order to build confidence in the system's capability to withstand turbulent and unexpected conditions.
Concept
In software development, a given software system's ability to tolerate failures while still ensuring adequate quality of service—often generalized as resiliency—is typically specified as a requirement. However, development teams often fail to meet this requirement due to factors such as short deadlines or lack of knowledge of the field. Chaos engineering is a technique to meet the resilience requirement.
Chaos engineering can be used to achieve resilience against infrastructure failures, network failures, and application failures.
History
While overseeing Netflix's migration to the cloud in 2011, Greg Orzell had the idea to address the lack of adequate resilience testing by setting up a tool that would cause breakdowns in their production environment, the environment used by Netflix customers. The intent was to move from a development model that assumed no breakdowns to a model where breakdowns were considered to be inevitable, driving developers to consider built-in resilience to be an obligation rather than an option:
"At Netflix, our culture of freedom and responsibility led us not to force engineers to design their code in a specific way. Instead, we discovered that we could align our teams around the notion of infrastructure resilience by isolating the problems created by server neutralization and pushing them to the extreme. We have created Chaos Monkey, a program that randomly chooses a server and disables it during its usual hours of activity. Some will find that crazy, but we could not depend on the random occurrence of an event to test our behavior in the face of the very consequences of this event. Knowing that this would happen frequently has created a strong alignment among engineers to build redundancy and process automation to survive such incidents, without impacting the millions of Netflix users. Chaos Monkey is one of our most effective tools to improve the quality of our services."
By regularly "killing" random instances of a software service, it was possible to test a redundant architecture to verify that a server failure did not noticeably impact customers.
The concept of chaos engineering is close to that of Phoenix Servers, first introduced by Martin Fowler in 2012.
Perturbation models
A chaos engineering tool implements a perturbation model. The perturbations, also called turbulences, are meant to mimic rare or catastrophic events that can happen in production. To maximize the added value of chaos engineering, the perturbations are expected to be realistic.
Server shutdowns
One perturbation model consists of randomly shutting down servers. Netflix's Chaos Monkey is an implementation of this perturbation model.
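A minimal sketch of this perturbation model, assuming hypothetical list_instances and terminate_instance helpers in place of a real cloud provider's API, might look like the following; production tools such as Chaos Monkey add scheduling windows, opt-in groups, and safety controls on top of this core idea.

    import random

    def chaos_shutdown(list_instances, terminate_instance, probability=0.05):
        """Randomly terminate running instances to exercise redundancy.

        list_instances and terminate_instance are hypothetical stand-ins
        for a cloud inventory lookup and a termination API call.
        """
        victims = [i for i in list_instances() if random.random() < probability]
        for instance in victims:
            terminate_instance(instance)
        return victims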
Latency injection
Introduces communication delays to simulate degradation or outages in a network. For example, Chaos Mesh supports the injection of latency.
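As a rough sketch of the concept (real tools such as Chaos Mesh inject delay at the network layer rather than in application code), a latency perturbation can be modeled as wrapping a remote call in a randomized delay:

    import random
    import time

    def with_injected_latency(call, mean_delay=0.2, jitter=0.1):
        """Wrap a callable so each invocation is preceded by a random delay."""
        def delayed(*args, **kwargs):
            time.sleep(max(0.0, random.gauss(mean_delay, jitter)))
            return call(*args, **kwargs)
        return delayed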
Resource exhaustion
Eats up a given resource. For instance, Gremlin can fill the disk up.
Chaos engineering tools
Chaos Monkey
Chaos Monkey is a tool invented in 2011 by Netflix to test the resilience of its IT infrastructure. It works by intentionally disabling computers in Netflix's production network to test how remaining systems respond to the outage. Chaos Monkey is now part of a larger suite of tools called the Simian Army designed to simulate and test responses to various system failures and edge cases.
The code behind Chaos Monkey was released by Netflix in 2012 under an Apache 2.0 license.
The name "Chaos Monkey" is explained in the book Chaos Monkeys by Antonio Garcia Martinez:
Imagine a monkey entering a 'data center', these 'farms' of servers that host all the critical functions of our online activities. The monkey randomly rips cables, destroys devices and returns everything that passes by the hand [i.e. flings excrement]. The challenge for IT managers is to design the information system they are responsible for so that it can work despite these monkeys, which no one ever knows when they arrive and what they will destroy.
Simian Army
The Simian Army is a suite of tools developed by Netflix to test the reliability, security, or resiliency of its Amazon Web Services infrastructure and includes the following tools:
At the very top of the Simian Army hierarchy, Chaos Kong drops a full AWS "Region". Though rare, loss of an entire region does happen, and Chaos Kong simulates a system's response and recovery to this type of event.
Chaos Gorilla drops a full Amazon "Availability Zone" (one or more entire data centers serving a geographical region).
Chaos Machine
ChaosMachine is a tool that does chaos engineering at the application level in the JVM. It concentrates on analyzing the error-handling capability of each try-catch block involved in the application by injecting exceptions.
Proofdock Chaos Engineering Platform
Proofdock is a chaos engineering platform that focuses on and leverages the Microsoft Azure platform and the Azure DevOps services. Users can inject failures on the infrastructure, platform and application level.
Gremlin
Gremlin is a "failure-as-a-service" platform.
Facebook Storm
To prepare for the loss of a datacenter, Facebook regularly tests the resistance of its infrastructures to extreme events. Known as the Storm Project, the program simulates massive data center failures.
Days of Chaos
Voyages-sncf.com created a "Day of Chaos" in 2017, gamifying the simulation of pre-production failures. They presented their results at the 2017 DevOps REX conference.
See also
Fault injection
Fault tolerance
Fault-tolerant computer system
Data redundancy
Error detection and correction
Fall back and forward
Resilience (network)
Robustness (computer science)
Notes and references
External links
Principle of Chaos Engineering – The Chaos Engineering manifesto
Chaos Engineering – Adrian Hornsby
How Chaos Engineering Practices Will Help You Design Better Software – Mariano Calandra
Netflix
Software development |
7423545 | https://en.wikipedia.org/wiki/IBM%20App%20Connect%20Enterprise | IBM App Connect Enterprise | IBM App Connect Enterprise (abbreviated as IBM ACE, formerly known as IBM Integration Bus or WebSphere Message Broker) is IBM's integration broker from the WebSphere product family that allows business information to flow between disparate applications across multiple hardware and software platforms. Rules can be applied to the data flowing through the message broker to route and transform the information. The product is an Enterprise Service Bus supplying a communication channel between applications and services in a service-oriented architecture.
IBM ACE provides capabilities to build solutions needed to support diverse integration requirements through a set of connectors to a range of data sources, including packaged applications, files, mobile devices, messaging systems, and databases. A benefit of using IBM ACE is that the tool enables existing applications for Web Services without costly legacy application rewrites. ACE avoids the point-to-point strain on development resources by connecting any application or service over multiple protocols, including SOAP, HTTP and JMS. Modern secure authentication mechanisms such as LDAP, X-AUTH, OAuth, and two-way SSL are supported through the MQ, HTTP and SOAP nodes, including the ability to perform actions on behalf of masquerading or delegate users.
A major focus of IBM ACE in its latest release is the capability of the product's runtime to be fully hosted in a cloud. Hosting the runtime in the cloud provides certain advantages and potential cost savings compared to hosting the runtime on premises as it simplifies the maintenance and application of OS-level patches which can sometimes be disruptive to business continuity. Also, cloud hosting of IBM ACE runtime allows easy expansion of capacity by adding more horsepower to the CPU configuration of a cloud environment or by adding additional nodes in an Active-Active configuration. Another advantage of maintaining ACE runtimes in the cloud is the ability to configure access to your ACE functionality separate and apart from your internal network using DataPower or API Connect devices. This allows people or services on the public internet to access your Enterprise Service Bus without passing through your internal network, which can be a more secure configuration than if your ESB was deployed to your internal on premises network.
IBM ACE embeds a Common Language Runtime to invoke any .NET logic as part of an integration. It also includes full support for the Visual Studio development environment, including the integrated debugger and code templates. IBM Integration Bus includes a comprehensive set of patterns and samples that demonstrate bi-directional connectivity with both Microsoft Dynamics CRM and MSMQ. Several improvements have been made to this current release, among them the ability to configure runtime parameters using a property file that is part of the deployed artifacts contained in the BAR ('broker archive') file. Previously, the only way to configure runtime parameters was to run an MQSI command on the command line. This new way of configuration is referred to as a policy document and can be created with the new Policy Editor. Policy documents can be stored in a source code control system and a different policy can exist for different environments (DEV, INT, QA, PROD).
IBM ACE is compatible with several virtualization platforms right out of the box, Docker being a prime example. With ACE, you can download a runtime of IBM ACE from the global Docker repository and run it locally. Because ACE has its administrative console built into the runtime, once the Docker image is active on your local machine, you can run all the configuration and administration commands needed to fully activate any message flow or deploy any BAR file. In fact, you can construct message flows that are microservices and package these microservices into a Docker-deployable object directly. Because message flows and BAR files can contain policy files, this node configuration can be automatic, and little or no human intervention is needed to complete the application deployment.
Features
IBM represents the following features as key differentiators of the IBM ACE product when compared to other industry products that provide the services of an Enterprise Service Bus:
Simplicity and productivity
Simplified process for installation: The process to deploy and configure IBM ACE so that an integration developer can use the IBM Integration Toolkit to start creating applications is simplified and quicker to complete.
Tutorials Gallery: From the Tutorials Gallery you can download, deploy, and test sample integration solutions.
Shared libraries: Shared libraries are introduced in V10 to share resources between multiple applications. Libraries in previous versions of IBM Integration Bus are static libraries.
Removal of the WebSphere MQ prerequisite: WebSphere MQ is no longer a prerequisite for using IBM Integration Bus on distributed platforms, which means that you can develop and deploy applications independently of WebSphere MQ.
Universal and independent
Graphical data mapping
Industry-specific and relevant
Dynamic and intelligent
High-performing and scalable
IBM delivers the IIB software either as a traditional software installation on local premises, as on-premises IBM Cloud Private, or in an IBM-administered cloud environment. The Integration Bus in a cloud environment reduces capital expenditures, increases application and hardware availability, and offloads the skills for managing an Integration Bus environment to IBM cloud engineers. This promotes the ability of end users to focus on developing solutions rather than installing, configuring, and managing the IIB software. The offering is intended to be compatible with the on-premises product. Within the constraints of a cloud environment, users can use the same development tooling for both cloud and on-premises software, and the assets that are generated can be deployed to either.
History
Originally the product was developed by NEON (New Era of Networks) Inc., a company which was acquired by Sybase in 2001. The product was later re-branded as an IBM product called 'MQSeries Integrator' (or 'MQSI' for short). Versions of MQSI ran up to version 2.0.
The product was added to the WebSphere family and re-branded 'WebSphere MQ Integrator', at version 2.1.
After 2.1 the version numbers became more synchronized with the rest of the WebSphere family and jumped to version 5.0. The name changed to 'WebSphere Business Integration Message Broker' (WBIMB). In this version the development environment was redesigned using Eclipse and support for Web services was integrated into the product.
Since version 6.0 the product has been known as 'WebSphere Message Broker'. WebSphere Message Broker version 7.0 was announced in October 2009, and WebSphere Message Broker version 8.0 was announced in October 2011.
In April 2013, IBM announced that the WebSphere Message Broker product was undergoing another rebranding name change. IBM Integration Bus version 9 includes new nodes such as the Decision Service node which enables content based routing based on a rules engine and requires IBM WebSphere Operational Decision Management product. The IBM WebSphere Enterprise Service Bus product has been discontinued with the release of IBM Integration Bus and IBM is offering transitional licenses to move to IBM Integration Bus. The WebSphere Message Broker Transfer License for WebSphere Enterprise Service Bus enables customers to exchange some or all of their WebSphere Enterprise Service Bus license entitlements for WebSphere Message Broker license entitlements. Following the license transfer, entitlement to use WebSphere Enterprise Service Bus will be reduced or cease. This reflects the WebSphere Enterprise Service Bus license entitlements being relinquished during the exchange. IBM announced at Impact 2013 that WESB will be end-of-life in five years and no further feature development of the WESB product will occur.
Components
IBM Integration Bus consists of the following components:
IBM Integration Toolkit is an Eclipse-based tool that developers use to construct message flows and transformation artifacts using editors to work with specific types of resources. Context-sensitive help is available to developers throughout the Toolkit and various wizards provide quick-start capability on certain tasks. Application developers work in separate instances of the Toolkit to develop resources associated with message flows. The Toolkit connects to one or more integration nodes (formerly known as brokers) to which the message flows are deployed.
An Integration Node (formerly known as a broker) is a set of execution processes that hosts one or more message flows to route, transform, and enrich in flight messages. Application programs connect to and send messages to the integration node, and receive messages from the integration node.
IBM Integration Bus web user interface (UI) enables System Administrators to view and manage integration node resources through an HTTP client without any additional management software. It connects to a single port on an integration node, provides a view of all deployed integration solutions, and gives System Administrators access to important operational features such as the built-in data record and replay tool, and statistics and accounting data for deployed message flows. (The web UI supersedes the Eclipse-based Explorer from earlier versions).
How IBM Integration Bus works
A SOA developer defines message flows in the IBM Integration Toolkit by including a number of message flow nodes, each of which represents a set of actions that define a processing step. The way in which the message flow nodes are joined together determine which processing steps are carried out, in which order, and under which conditions. A message flow includes an input node that provides the source of the messages that are processed, which can be processed in one or more ways, and optionally deliver it through one or more output nodes. The message is received as a bit stream, without representational structure or format, and is converted by a parser into a tree structure that is used internally in the message flow. Before the message is delivered to a final destination, it is converted back into a bit stream.
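Conceptually, a message flow is a pipeline: an input node parses a bit stream into a tree, intermediate nodes transform it, and an output node serializes it again. The Python sketch below models only that concept; actual flows are assembled graphically in the Toolkit and executed by the integration node, not written this way, and JSON here merely stands in for the internal message tree.

    import json

    # Conceptual model of a message flow (illustrative only).
    def input_node(bitstream):
        return json.loads(bitstream.decode("utf-8"))   # bit stream -> tree

    def compute_node(tree):
        tree["processed"] = True                       # one transformation step
        return tree

    def output_node(tree):
        return json.dumps(tree).encode("utf-8")        # tree -> bit stream

    def message_flow(bitstream, steps=(compute_node,)):
        tree = input_node(bitstream)
        for step in steps:
            tree = step(tree)
        return output_node(tree)

    result = message_flow(b'{"orderId": 42}')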
IBM Integration Bus supports a wide variety of data formats, including standards-based formats (such as XML, DFDL, and JSON), industry formats (such as HL7, EDI and SWIFT), and custom formats. A comprehensive range of operations can be performed on data, including routing, filtering, enrichment, multicast for publish-subscribe, sequencing, and aggregation. These flexible integration capabilities are able to support the customer's choice of solution architecture, including service-oriented, event-oriented, data-driven, and file-based (batch or real-time). IBM Integration Bus unifies the Business Process Management grid, providing the workhorse behind how to do something, taking directions from other BPM tooling which tells IBM Integration Bus what to do.
IBM Integration Bus includes a set of performance monitoring tools that visually portray current server throughput rates, showing various metrics such as elapsed and CPU time in ways that immediately draw attention to performance bottlenecks and spikes in demand. You can drill down into granular details, such as rates for individual connectors, and the tools enable you to correlate performance information with configuration changes so that you can quickly determine the performance impact of specific configuration changes.
In version 7 and earlier, the primary way general text and binary messages were modeled and parsed was through a container called a message set and associated 'MRM' parser. From version 8 onwards such messages are modeled and parsed using a new open technology called DFDL from the Open Grid Forum. This is IBM's strategic technology for modeling and parsing general text and binary data. The MRM parser and message sets remain a fully supported part of the product; in order to use message sets, a developer must enable them as they are disabled by default to encourage the adoption of the DFDL technology.
IBM Integration Bus supports policy-driven traffic shaping that enables greater visibility for system administrators and operational control over workload. Traffic shaping enables system administrators to cope when the number of new endpoints (such as mobile and cloud applications) increases sharply, by adjusting available system resources to meet the new demand, or by delaying or redirecting traffic to absorb load spikes. Traffic monitoring enables notifications to system administrators and other business stakeholders, which increases business awareness and enables trend discovery.
Overview
IBM Integration Bus reduces cost and complexity of IT systems by unifying the method a company uses to implement interfaces between disparate systems. The integration node runtime forms the Enterprise Service Bus of a service-oriented architecture by efficiently increasing the flexibility of connecting unlike systems into a unified, homogeneous architecture. A key feature of IBM Integration Bus is the ability to abstract the business logic away from transport or protocol specifics.
The IBM Integration Bus Toolkit enables developers to graphically design mediations, known as message flows, and related artifacts. Once developed, these resources can be packaged into a broker archive (BAR) file and deployed to an integration node runtime environment. At this point, the integration node is able to continually process messages according to the logic described by the message flow. A wide variety of data formats are supported, and may be modeled using standard XML Schema and DFDL schema. After modeling, a developer can create transformations between various formats using nodes supplied in the Toolkit, either graphically using a Mapping node, or programmatically using a Compute node using Java, ESQL, or .Net.
IBM Integration Bus message flows can be used in a service-oriented architecture, and if properly designed by Middleware Analysts, integrated into event-driven SOA schemas, sometimes referred to as SOA 2.0. Businesses rely on the processing of events, which might be part of a business process, such as issuing a trade order, purchasing an insurance policy, reading data using a sensor, or monitoring information gathered about IT infrastructure performance. IBM Integration Bus provides complex-event-processing capabilities that enable analysis of events to perform validation, enrichment, transformation and intelligent routing of messages based on a set of business rules.
A developer creates message flows in a cyclical workflow that is arguably more agile than most other software development: create a message flow, generate a BAR file, deploy the message flow contained in the BAR file, test the message flow, and repeat as necessary to achieve reliable functionality.
Market position
Based on earnings reported for IBM's 1Q13, annualized revenue for IBM's middleware software unit increased to $14 billion (up $7 billion from 2011). License and maintenance revenue for IBM middleware products reached $7 billion in 2011. In 2012, IBM expected an increase in both market share and total market increase of ten percent. The worldwide application infrastructure and middleware software market grew 9.9 percent in 2011 to $19.4 billion, according to Gartner. Gartner reported that IBM continues to be number one in other growing and key areas including the Enterprise Service Bus Suites, Message Oriented Middleware Market, the Transaction Processing Monitor market and Integration Appliances.
Expected performance
IBM publishes performance reports for WebSphere Message Broker which provide sample throughput figures. Performance varies depending on message sizes, message volumes, processing complexity (such as complexity of message transformations), system capacities (CPU, memory, network, etc.), software version and patch levels, configuration settings, and other factors. Some published tests demonstrate message rates in excess of 10,000 per second in particular configurations.
With version 8 onwards, a new Global Cache feature enhances overall performance capability and throughput rates. This cache rides on top of IBM WebSphere eXtremeScale and is bundled with IBM Integration Bus. A dedicated message flow node is available to use in message flows, or access to the cache can be achieved through any of the compute nodes, from languages like Java, ESQL, or .Net.
Message flow nodes available
A developer can choose from many pre-designed message flow 'nodes', which are used to build up a message flow. Nodes have different purposes. Some nodes map data from one format to another (for instance, COBOL copybook to canonical XML). Other nodes evaluate the content of data and route the flow differently based on certain criteria.
Message flow node types
There are many types of node that can be used in developing message flows; the following node transformation technology options are available:
Graphical Mapping content
eXtensible Stylesheet Language Transformations (XSLT)
Java
.NET
PHP
Extended Structured Query Language (ESQL)
JMS
Database
MQ's Managed File Transfer
Connect:Direct (Managed File Transfer)
File/FTP
SAP
PeopleSoft
JD Edwards
SCA
IBM Transformation Extender (formerly known as Ascential DataStage TX, DataStage TX and Mercator Integration Broker). Available as a separate licensing option
Email
Decision Support node. This node allows the Program to invoke business rules that run on a component of IBM Decision Server that is provided with the Program. Use of this component is supported only via Decision Service nodes. The Program license provides entitlement for the Licensee to make use of Decision Service nodes for development and functional test uses. Refer to the IBM Integration Bus License Information text for details about the program-unique terms.
Localization
IBM Integration Bus on distributed systems has been localized to the following cultures:
Brazilian Portuguese
Chinese (Simplified)
Chinese (Traditional)
French
German
Italian
Japanese
Korean
Spanish
US English
Polish
Russian
Turkish
Patterns
A pattern captures a commonly recurring solution to a problem (example: Request-Reply pattern). The specification of a pattern describes the problem being addressed, why the problem is important, and any constraints on the solution. Patterns typically emerge from common usage and the application of a particular product or technology. A pattern can be used to generate customized solutions to a recurring problem in an efficient way. We can do this pattern recognition or development through a process called service-oriented modeling.
Version 7 introduced patterns that:
Provide guidance in implementing solutions
Increase development efficiency because resources are generated from a set of predefined templates
Improve quality through asset reuse and common implementation of functions such as error handling and logging
The patterns cover a range of categories including file processing, application integration, and message based integration.
Pattern examples
Fire-and-Forget (FaF)
Request-Reply (RR)
Aggregation (Ag)
Sequential (Seq)
Supported platforms
Operating systems
Currently available platforms for IBM Integration Bus are:
AIX
HP-UX (IA-64)
Solaris (SPARC and x86-64)
Linux (IA-32, x86-64, PPC and IBM Z)
Microsoft Windows
z/OS
See also
Comparison of business integration software
References
Message Broker
Middleware |
45544924 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S6 | Samsung Galaxy S6 | The Samsung Galaxy S6 is a line of Android-based smartphones manufactured, released and marketed by Samsung Electronics. Succeeding the Samsung Galaxy S5, the S6 was not released as a singular model, but instead in two variations unveiled and marketed together—the Galaxy S6 and Galaxy S6 Edge—with the latter differentiated primarily by having a display that is wrapped along the sides of the device. The S6 and S6 Edge were unveiled on March 1, 2015, during the Samsung Unpacked press event at MWC Barcelona, and released on April 10, 2015, marking a more fashion-oriented, less utilitarian course for the Galaxy S series. During the subsequent Samsung Unpacked event on August 13, 2015 (alongside the Galaxy Note 5), Samsung unveiled a third model, the Galaxy S6 Edge+, which features a larger phablet-sized display (5.7 inches instead of 5.1) and more memory (4 GB instead of 3), but lacks the infrared transmitter used for remote control.
Although the overall design of the Galaxy S6 still features characteristics from prior models, its construction was revamped to use a metal unibody frame and glass backing instead of plastic. Samsung also promoted an improved camera, streamlined user interface, support for major wireless charging standards, and support for a mobile payments platform that allows the device to emulate the magnetic strip from a credit card.
The Galaxy S6 received mostly positive reviews from critics, who praised the devices' upgraded build quality over prior models, along with improvements to their displays, performance, camera, and other changes. However, Samsung's decision to remove the ability for users to expand their storage using microSD cards or remove the battery, and the lack of water resistance were panned as being potentially alienating to power users, and the S6 Edge was also panned for not making enough use of its curved display to justify its increased cost over the standard model on-launch. It was succeeded by the Samsung Galaxy S7 in March 2016.
Development
Rumors
Rumors surrounding the Galaxy S5's successor began to surface in January 2015, when it was reported that Samsung would be using an in-house Exynos system-on-chip rather than the Qualcomm Snapdragon 810 on the S6, due to concerns surrounding overheating. Later that month, Qualcomm affirmed in an earnings report that its products would not be included in "[a] large customer's flagship device". Fellow competitor LG Electronics disputed the allegations surrounding the 810; although demo units of the 810-equipped LG G Flex 2 at the Consumer Electronics Show showed signs of possible overheating, the company emphasized that they were pre-release models.
In early-February 2015, Bloomberg News reported that the S6 was to have a metal body, and was to be produced in a normal version, and a version with a screen curved along the left and right sides of the device, similarly to the Galaxy Note Edge. The S6's design was officially teased in a promotional webpage released by T-Mobile US on 22 February 2015, which showed a curved body and carried the tagline "Six Appeal".
Keynote
Samsung officially unveiled the Galaxy S6 and S6 Edge during the first Samsung Unpacked 2015 event at Mobile World Congress on 1 March 2015, for a release on 10 April 2015 in 20 countries. In Japan, the S6 and S6 Edge are marketed solely under the Galaxy brand, with most references to Samsung removed; sales of Samsung phones have been impacted by historical relations between Japan and South Korea, and a Samsung spokesperson stated that the Galaxy brand was "well-established" in the country.
Design changes
The Galaxy S6 models are designed to address criticisms and feedback received from prior models, and to target the majority of users; during its unveiling, Samsung stated that it had placed a particular focus on design, the camera, and wireless charging. As part of these goals, a number of features and capabilities seen on the Galaxy S5 were removed, such as its waterproofing, USB 3.0 port, MicroSD-expandable internal storage, and Mobile High-Definition Link (MHL).
Additionally, the Galaxy S6's battery is no longer user-replaceable; Samsung had been a major holdout from the trend against removable batteries, but argued that due to the S6's fast AC charging and its support of both major wireless charging standards, it no longer needed to provide the ability for users to remove and replace the lithium-ion battery.
According to Justin Denison, then vice president of product strategy at Samsung, speaking at the Unpacked 2015 Episode 1 keynote, the switch to a non-replaceable battery had been postponed throughout the Samsung Galaxy S series until users could be "confident about charging their phones". The previous year, Samsung had aired a commercial for the preceding Galaxy S5 mocking the iPhone's non-replaceable battery, referring to iPhone users as "wall huggers" on account of their incessant dependence on wall charging.
The device's software was also simplified; a Samsung representative stated that 40% of the features in TouchWiz were removed or streamlined in comparison to the S5.
Specifications
Hardware and design
Design
The Galaxy S6 line retains similarities in design to previous models, but now uses a unibody frame made of 6013 aluminium alloy with a glass backing, and a curved bezel with chamfered sides to improve grip. The speaker grille and the 3.5 mm headphone connector were moved to the bottom, next to the MicroUSB 2.0 type B charging port, for the first time in the Samsung Galaxy S series.
The devices are available in "White Pearl", "Black Sapphire", and "Gold Platinum" color finishes; additional "Blue Topaz" and "Emerald Green" finishes are exclusive to the S6 and S6 Edge respectively.
The Galaxy S6 was also available in a limited Iron Man edition with six additional colour options.
The S6 carries some regressions in its design compared to the S5: it does not contain a MicroSD card slot, and reverts to a micro-USB 2.0 port from USB 3.0. Both models also lack water resistance and use non-removable batteries of reduced capacity.
Battery
The Galaxy S6 includes a 2550 mAh battery, while the S6 Edge includes a 2600 mAh battery, both non-user-replaceable.
The Samsung Galaxy S6 and S6 Edge are the first mobile phones in the Samsung Galaxy S series to support Qualcomm Quick Charge 2.0, allowing them to reach charging speeds of up to 15 watts when used with a compatible charger. However, fast charging is deactivated while the device is in operation.
Hardware
The Galaxy S6 line is powered by a 64-bit Exynos 7 Octa 7420 system-on-chip, consisting of four 2.1 GHz Cortex-A57 cores and four 1.5 GHz Cortex-A53 cores, paired with 3 GB of LPDDR4 RAM on the S6 and S6 Edge and 4 GB on the S6 Edge+; these were the first Samsung flagship phones to utilize 64-bit processing. The processor is Samsung's first to use a 14 nm FinFET manufacturing process, which the company stated would improve its energy efficiency. The devices are available with 32, 64, or 128 GB of non-expandable storage, implementing the Universal Flash Storage 2.0 standard. The S6 and S6 Edge feature a 5.1-inch 1440p Super AMOLED display, while the S6 Edge+ features a 5.7-inch 1440p Super AMOLED display. Similarly to the Galaxy Note Edge, the displays of the S6 Edge and S6 Edge+ are slightly curved around the two lengthwise edges of the device, although not as aggressively as on the Note Edge. The display has a screen reflectance of 4.6% and a peak brightness of 784 nits.
The Galaxy S6 line supports both the Qi and Power Matters Alliance wireless charging standards.
Mobile High-Definition Link (MHL), a feature introduced with the Galaxy S2 in 2011, has been removed from the series with the Galaxy S6.
Cameras
For its rear-facing camera, the Galaxy S6 uses the same image sensor (Sony Exmor RS IMX240) with optical image stabilization as the Galaxy Note 4, albeit with an f/1.9 aperture (compared to f/2.2), object-tracking autofocus, real-time HDR, and use of the heart rate sensor's infrared for calibrating white balance. Samsung claimed that the camera upgrades would give it better low-light performance.
For the first time in a Samsung flagship device, the Galaxy S6's camera software records slow-motion videos (720p at 120 fps) in real time with audio, and a bundled slow-motion video editor allows viewing custom parts of the recorded footage at adjustable speeds and exporting them for sharing.
The front-facing camera was upgraded from 2.1 megapixels to 5 megapixels, supports 1440p video recording for the first time on a Galaxy S series device, and also features an f/1.9 aperture and, for the first time on a front camera, an HDR viewfinder. The fingerprint scanner in the home button now uses a touch-based scanning mechanism rather than a swipe-based one. Double-pressing the home button launches the camera app, whereas on Samsung flagship phones released between 2012 and 2014 it launched S Voice, then Samsung's digital assistant software.
Both the front and the rear camera support burst shots at around eight pictures per second, limited to 30 pictures in a row.
The design of the camera software has been changed. Changes include the introduction of swipe gesture controls for accessing camera modes and the gallery, and a camera mode selector presented as a flat two-row grid of icons and labels. Additional camera modes can be downloaded. The Android Marshmallow update added manual camera controls.
The settings menu is no longer a four-column grid of icons and labels floating on top of the camera viewfinder, but a list on a separate page. Setting shortcuts are no longer customizable.
Software
Operating system
The S6 and S6 Edge were initially released running Android 5.0.2 "Lollipop", while the S6 Edge+ was initially released running Android 5.1.1 "Lollipop", with Samsung's TouchWiz software suite. TouchWiz has been streamlined on the S6 with a refreshed, minimal design and fewer bundled applications. Samsung now allows users to download custom themes. Several Microsoft apps are bundled, including OneDrive, OneNote and Skype.
Curved edge
On the S6 Edge and S6 Edge+, users can designate up to five contacts for quick access by swiping from one of the edges of the screen. The five contacts are also color-coded; when the device is face down, the curved sides will also glow in the contact's color to signify phone calls. The Night Clock feature from the Galaxy Note Edge was carried over to the Edge models. The heart rate sensor can also be used as a button for dismissing calls and sending the caller a canned text message reply. The Sprint, Verizon, AT&T and T-Mobile versions remove some features available on the unbranded and unlocked version, like Smart Manager, Simple Sharing (only AT&T) and Download Booster (only Sprint and AT&T).
Accessibility
The single-handed operation mode, an accessibility feature introduced with the 2013 Galaxy Note 3 and available on the Galaxy S5 and Note 4, is present on the S6 Edge+ but missing on the S6 and S6 Edge. If enabled in the settings, it is activated with three home button presses rather than the previously used swipe gesture, and shrinks the screen's viewport to facilitate single-handed usage.
Unlike in previous implementations, the viewport is fixed in size (not resizable) and can only be placed in one of two fixed spots, in either lower corner, rather than being freely movable. The on-screen navigation and volume keys, as well as selected app and contact shortcuts (S5 only), have been removed.
Samsung Pay
The S6 was the first Samsung device to include Samsung Pay, a mobile payments service developed from the intellectual property of LoopPay, a crowdfunded startup company that Samsung acquired in February 2015. Samsung Pay incorporates technology by LoopPay known as "magnetic secure transmission" (MST); it transmits card data to the payment terminal's swipe slot using an electromagnetic field, causing the terminal to register it as if it were a normally swiped card. LoopPay's developers noted that its system would not share the limitations of other mobile payment platforms, and would work with "nearly 90%" of all point-of-sale units in the United States. The service also supports NFC-based mobile payments alongside MST. Credit card information is stored in a secure token, and payments must be authenticated using a fingerprint scan. Samsung Pay was not immediately available upon the release of the S6, but was enabled in mid-2015.
Multi-window
While flagship devices since the Galaxy S3 support splitting the screen to show two applications simultaneously, the Galaxy S6 is the first Galaxy S series phone to support floating pop-up multi-windows. The multi-window user interface is similar to that of the Galaxy Note 4, but applications are launched from a dedicated split-screen app drawer (accessed by holding the task key or tapping an icon in the recent apps drawer) instead of a floating side bar. Although floating pop-up windows can float on top of applications not supported by multi-window, an unsupported application must be minimized before the split-screen app launching interface can be accessed to launch a multi-window supported app.
Multi-window applications can also be launched by holding down an app in the recently accessed apps view, which is opened using the task key located to the left of the home button.
Software updates
Samsung began to release Android 6.0.1 "Marshmallow" for the S6 in February 2016. It enables new features such as "Google Now on Tap", which allows users to perform searches within the context of information currently being displayed on-screen, and "Doze", which optimizes battery usage when the device is not being physically handled. On S6 Edge models, the Edge Display feature supports wider panels with a larger amount of functionality similar to those found on the S7 Edge.
In March 2017, Samsung began to release Android 7.0 "Nougat" to the European, American, Canadian and Indian variants. After the update, the user interface was redesigned to resemble that of the Galaxy S7, except for Video Enhancer, Screen Resolution Adjuster, Performance Mode, and the ability to take photos while recording video.
The latest update for the S6 family is based on Android 7.0 "Nougat" with the November 2018 security build.
There were various reports that the S6 family would receive Android 8.0 "Oreo", but it had not as of January 2019.
On September 3, 2018, the S6 received its final security patch, as Samsung discontinued support for the S6 family.
Miscellaneous
The Galaxy S6 is the first Samsung flagship device to offer a torch toggle shortcut in the drop-down quick control menu instead of a home screen widget, making it accessible more quickly and without having to leave an open app.
Reception
Critical reception
The Galaxy S6 received a generally positive reaction from early reviewers, who noted its higher-quality design over previous Samsung devices, along with refinements to its camera, the high quality of its display, and the significantly smaller amount of "bloat" in the device's software. AnandTech believed that the S6's new fingerprint reader was comparable in quality to Apple's Touch ID system. The Verge concluded that the S6 and S6 Edge were "the first time I've felt like Samsung might finally be grappling with the idea of what a smartphone ought to be on an ontological level."
Some critics raised concerns over the S6's regressions in functionality over the Galaxy S5, particularly the battery being both smaller in capacity and non-user-replaceable, the missing microSD card slot, and lack of water resistance, arguing that it could alienate power users who were attracted to the Galaxy S series due to their inclusion.
Wired felt that while its dual-sided curved screen was better-designed than that of the Galaxy Note Edge, the S6 Edge's use of the curved display was a "dramatic step backwards", because the edge bends into existing screen space rather than extending it by 160 pixels, leaving a limited number of edge-specific features in comparison to the Galaxy Note Edge. As such, Samsung was criticized for providing no true justification for purchasing the S6 Edge over the non-curved S6, with Wired explaining that "it doesn't do anything beyond the base model, but it'll be worth the money to some people because of how it looks and the air of exclusivity it communicates." The Verge was similarly critical, arguing that the additional features were "very forgettable". In regards to performance, The Verge felt that the S6 was the "fastest Android phone I've ever used", and that the company "solved any slowdowns you might experience in Android with brute force."
Some users reported issues with the flash on some S6 devices, which remains dimly lit even when the camera is not in use or the device is turned off. Samsung was made aware of the problem, which affects the S6 and S6 Edge, but did not specify when it would be fixed.
Upon reviewing the phone, iFixit noted that it is difficult to repair, owing to the strong adhesive used on its rear glass and battery.
Security issues
In November 2015, Google's Project Zero unit publicized eleven "high-impact" security vulnerabilities that it had discovered in Samsung's distribution of Android on the S6. Some of these had already been patched as of their publication.
Sales
In its first month of sales, 6 million Galaxy S6 and S6 Edge units were sold to consumers, exceeding the number of Galaxy S5 units sold within a similar timeframe, but failing to break the record of 10 million set by the Galaxy S4. Still, facing competition from other vendors that had led to declining market share and a lower net profit, and foreseeing a "difficult business environment" for the second half of the year, Samsung announced during its second quarter earnings report in July 2015 that it would be "adjusting" the price of the S6 and S6 Edge. The company stated that sales of the two devices had a "marginal" effect on its profits. The S6 Edge+ shared a similar fate, albeit lagging behind the Note 5.
See also
Comparison of Samsung Galaxy S smartphones
Comparison of smartphones
Samsung Galaxy S6 Active
Samsung Galaxy S series
Notes
References
External links
Android (operating system) devices
Mobile phones introduced in 2015
Galaxy S6
Galaxy S6
Mobile phones with infrared transmitter
Mobile phones with 4K video recording
Discontinued smartphones
Smartphones |
2307667 | https://en.wikipedia.org/wiki/Lugaru | Lugaru | Lugaru: The Rabbit's Foot is the first commercial video game created by indie developer Wolfire Games. It is a cross-platform, open-source 3D action game. The player character is an anthropomorphic rabbit utilizing a wide variety of combat techniques to battle wolves and hostile rabbits. The name Lugaru is a phonetic spelling of "loup-garou", which is French for werewolf. It was well reviewed and was fairly well received among the shareware community, especially among Mac users. A sequel, Overgrowth, was released in 2017.
Story
The story takes place on the island of Lugaru, an unknown number of years after the fall of the human race; all of the characters are anthropomorphic animals. The story focuses on Turner, a mildly famous retired warrior rabbit who lives in a small village with his family and friends.
Unbeknownst to Turner, a pack of wolves from a nearby island had killed and eaten all of the prey that lived there, and came to Lugaru to find more food. Not wanting to make the same mistakes as before, they plan to conserve their food supply by enslaving and farming the citizens of the local rabbit kingdom rather than hunting their new food supply to extinction. Afraid for his life, the rabbit king Hickory agrees to this takeover, provided that the wolves do not harm him. To secure the deal, he sends Jack, one of his most loyal servants, to kill the local raiders, since they would be the only real resistance the wolves would encounter. To do this, Jack decides to trick Turner into killing the raiders. Jack manipulates Skipper, a close friend of Turner's, convincing him to leave Turner's village undefended as part of a ploy to spur Turner into killing the raiders. Jack tells him nothing about the wolves, and assures him that no one will actually be harmed during the attack. He then pays the raiders to kill everyone in the village, including Skipper.
Jack's plan is largely successful. The raiders lure Turner away from the village and attack while he is absent, murdering his entire family and nearly all of his friends. Jack stages his own death during the attack. Turner makes it his mission to avenge the deaths of his loved ones, and begins to systematically kill all of the raiders, unwittingly opening the way for the wolves to conquer Lugaru. However, not everything goes exactly to plan: the raiders decide to keep Skipper alive for a ransom. Once most of the raiders have been killed, Turner encounters Skipper in one of the raiders' camps. Skipper tells Turner how Jack manipulated him, which prompts Turner to return to his village and confirm the location of Jack's corpse. After seeing that Jack's body is missing, Turner correctly surmises that Jack had gone to the Rocky Hall (the location of the Rabbit Kingdom's monarchy) after betraying him, and decides to pursue him there.
After Turner reaches the Rocky Hall, one of the guards there informs Turner that Jack has set a bounty on him. However, most of the guards see Turner as a hero and refuse to attack him, instead pretending that they haven't seen him. Grateful, Turner leaves the Hall and goes north to continue pursuing Jack. On the way, Turner encounters and is forced to fight a desperate guard in need of money who had tracked him after he left the hall, a wolf, and five rabbit soldiers Jack had sent to kill him. Despite this, Turner finally tracks Jack down, finding him not far from where the soldiers were defeated. Seeing no reason to continue the charade, Jack explains the entire situation to Turner before being killed by him in single combat.
Having learned the terrible truth, Turner confronts king Hickory about the wolves. Hickory orders his guards to kill Turner, but in light of Hickory's dealings with the wolves, they refuse. With the full support of the king's guard, Turner proceeds to take power in a bloodless coup. He then vows to the other rabbits that he will meet with the Alpha wolf, and if need be, kill him. Hickory, determined to kill Turner and reclaim the throne, uses his connections with the wolf pack to send three wolf assassins to try and kill him before he reaches the Alpha. However, Turner manages to defeat the assassins and later finds Hickory hiding in the mountains with two of his most loyal guards. Not knowing the fate of the assassins, Hickory threatens Turner with them, and is shocked when Turner reveals he was able to kill them. Turner confronts Hickory over his cowardice in not even trying to fight against the wolf conquest, angering him, and prompting him to attack. Turner kills Hickory in the ensuing battle and takes his sword.
Bolstered by his recent victories, and growing misanthropic due to his recent struggles, Turner finally reaches the wolves' den and kills every wolf there, including the mothers and children. Ash, the Alpha wolf, arrives later. Ash warns that if Turner defeats him it would mean ruin for the rabbits as they would overpopulate, causing famine and civil war without the wolves enforcing the natural order. Turner counters this by stating that if he does not kill the wolves, they would just lose control and kill all the rabbits again, starving themselves out, and that a death by his hand would be more honorable. In the ensuing battle, Turner manages to successfully overpower Ash and defeat him. After this he returns to the Rocky Hall, where he is offered the chance to become king, since no one would dare challenge the power of a rabbit who killed an entire wolf pack. Turner declines, feeling that he is not up to governing, and decides to wander the island trying to find a new purpose for his life.
Gameplay
Most of Lugaru's gameplay consists of hand-to-hand combat with a heavy emphasis on martial arts, and in many cases the combat incorporates knives, swords, and bo staves. The player can perform disarms, reversals, and counter-reversals. Despite the focus on melee combat, the player is not limited to outright attacking their enemies, since Lugaru has pronounced elements of stealth gameplay, and actively rewards players for defeating foes while remaining undetected.
Lugaru's combat controls are entirely original. There are only three context-sensitive action buttons: one attack button, a jump button, and a more general crouch-reverse button. This setup puts special emphasis on the timing and positioning of attacks to maximize their effectiveness, rather than memorizing complicated key combinations to do more damage.
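Lugaru itself is written in C++, but as a rough, hypothetical illustration of how a single context-sensitive button can map to different moves depending on the fighter's situation and timing, consider the following Java sketch (all names and the timing window are invented, not taken from the game's source):

// Hypothetical sketch: one "crouch/reverse" button whose effect depends on
// the fighter's situation and on timing, rather than on key combinations.
enum CombatContext { IDLE, NEAR_ENEMY, BEING_ATTACKED }

class ContextSensitiveControls {
    // Assumed reversal timing window, in seconds (invented value).
    private static final double REVERSAL_WINDOW = 0.25;

    String onCrouchReverseButton(CombatContext context, double secondsSinceEnemyAttackStarted) {
        switch (context) {
            case BEING_ATTACKED:
                // A reversal succeeds only if the button is timed against
                // the incoming attack; otherwise the attack lands.
                return (secondsSinceEnemyAttackStarted <= REVERSAL_WINDOW)
                        ? "reversal" : "attack lands";
            case NEAR_ENEMY:
                return "crouching evade";
            default:
                return "crouch";
        }
    }
}

The actual game also couples these decisions to positioning, but the timing-window idea is the core of the design described above.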
There is no HUD, so the player must rely entirely on visual cues to determine Turner's current state of health, most notably the character's posture, visible injuries on his body, and darkened or blurred vision. The player must also keep note of various environmental factors such as sounds, the direction of the wind, and the presence of blood on their weapons, since wolf enemies have a strong sense of smell, are less approachable from downwind, and can also smell blood from both open wounds and soiled weaponry. Similarly, rabbits have great hearing and are sensitive to noises generated by rustled bushes.
The game can be played in campaign mode, which includes mission-specific objectives and the storyline, as well as a "challenge" mode, which involves the player progressing through a series of fourteen maps with the goal of clearing them of all hostile creatures. There is also an interactive tutorial.
History
Development
Lugaru was made almost solely by David Rosen, including the game engine, models, animations and story. It was one of the first independently produced video games to employ ragdoll physics. It uses a unique combat system that bases attacks and counters on timing and context rather than different key combinations.
Originally released on January 28, 2005 for Mac OS X; Windows and Linux followed that year as target platforms.
Open sourcing with Humble Indie bundle
After the success of the first Humble Indie Bundle, which Lugaru was part of, Wolfire released the source code of Ryan C. Gordon's code branch of Lugaru under the GNU General Public License on May 11, 2010.
The source code availability allowed the game to be ported to additional platforms such as AmigaOS 4 and OpenPandora. In November 2016, David Rosen relicensed all assets under the open-content CC BY-SA 3.0 Creative Commons license, which makes Lugaru a fully free video game. At the beginning of 2017, the HD version followed. Development continues on a GitLab repository maintained by the community.
Lugaru HD
Tim Soret would later improve the game's graphical textures, and Wolfire currently sells this updated version as Lugaru HD.
Sequel
David Rosen has announced that Overgrowth would be the sequel to Lugaru. A remake of Lugaru was included in Overgrowth.
Modding
Lugaru also has a number of mods made by the game's many fans. Players can download the "Lugaru Downloader" (discontinued, seeking a new developer), which lists all of the fan-made mods released so far. Lugaru Downloader also extracts, backs up, and installs the mod files automatically, rather than forcing users to back up files themselves and risk errors and glitches. A download link and further information are available on the Wolfire forums.
Advanced modding, namely modifying skeletons to work custom animated characters into the game, 3D model customization, animation editing and map editing, has been made possible by reverse-engineering the file formats and writing Python plugins for Blender. An overview of all modification resources, completed modifications, and how-to documentation can be found in a dedicated thread on the Wolfire forums.
Reception
Game designer Jacob Driscoll reviewed Lugaru in his web series, Game Design by Jacob Driscoll.
Ian Beck of InsideMacGames gave Lugaru an 8.25 out of 10 after an in-depth, three-page review. He criticized the game's linear story as a minor drawback and the ragdoll physics as being "mildly ridiculous". However, he praised the game's graphics as being "very good" (with consideration given to the game's low budget), and he also praised the motion blur, slow motion effects, and the attention the developer gave to environmental details like the blood effects on both the terrain and the character models. He especially praised the context-based combat system.
Stephen Kelly of Blue Mage Reviews praised Lugaru's "advanced, innovative combat system" and noted that the game's "strategic depth prolongs replay value", while mentioning that the game can be frustratingly difficult at times, and has poor visuals. He also said, "It's a testament to the core gameplay that it continues to entertain well after the story mode is completed, and its blend of unusual ideas should be remembered and learned from in the future."
As of June 2013, GameRankings lists only a single review for the game: David Vizcaino of Gamers Daily News, who gave Lugaru an 8.3, stated the game is "well worth trying out if you're looking for something fun/challenging to play".
GamingOnLinux reviewer Hamish Paul Wilson gave the game 8/10, stating that it is "an impressive feat, if anything over ambitious and yet still executed with a fair amount of competence and skill. Though it has some rough edges, it offers an experience unparalleled by any other title, be it the console fighting games that established the genre or its counterparts on the desktop computer."
See also
List of open source games
References
External links
Lugaru website at Wolfire Games
Lugaru wiki at Wolfire Games
2005 video games
Action video games
Open-source video games
Single-player video games
Indie video games
AmigaOS 4 games
Amiga games
MacOS games
Linux games
Windows games
AROS software
Wolfire Games games
Creative Commons-licensed video games
Commercial video games with freely available source code
Video games about Leporidae
Video games developed in the United States |
1562173 | https://en.wikipedia.org/wiki/Snake%20Eyes%20%28G.I.%20Joe%29 | Snake Eyes (G.I. Joe) | Snake Eyes (also known as Snake-Eyes) is a fictional character from the G.I. Joe: A Real American Hero toyline, comic books, and animated series, created by Larry Hama. He is one of the original and most popular members of the G.I. Joe Team, and is most known for his relationships with Scarlett and Storm Shadow. Snake Eyes is one of the most prominent characters in the G.I. Joe: A Real American Hero franchise, having appeared in every series of the franchise since its inception. He is portrayed by Ray Park in the 2009 live-action film G.I. Joe: The Rise of Cobra, and the 2013 sequel G.I. Joe: Retaliation. Henry Golding portrays the titular character in the 2021 spin-off Snake Eyes: G.I. Joe Origins.
Profile
Snake Eyes is the code name of a member of the G.I. Joe Team. He is the team's original commando, and much of the history and information about his personal life and military service, including his birth name, place of birth and service number, has stayed classified or top secret throughout all depictions of his origin and his military career. All that is known for certain is his rank/grade (originally U.S. Army Sergeant/E-5, eventually reaching Sergeant First Class/E-7 before it too was deemed classified), that his primary military specialty is infantry, and that his secondary military specialty is hand-to-hand combat instructor. Snake Eyes was trained at the Military Assistance Command, Vietnam (MACV) Recondo School (Nha Trang), and served on long-range reconnaissance patrol (LRRP) in Southeast Asia with Stalker and Storm Shadow, eventually leaving the service to study martial arts with Storm Shadow's Arashikage ninja clan. He has undergone drill sergeant training, and is a former U.S. Army Special Forces and Delta Force operator. All other intelligence about his military career and personal life, including his childhood, has been designated classified or top secret because of the clandestine military operations in which he participated.
Snake Eyes was living a life of strict self-denial and seclusion in the High Sierra with a pet wolf named Timber when he was recruited for the G.I. Joe Team. He is an expert in all NATO and Warsaw Pact small arms and a black belt in 12 different fighting systems. He is highly skilled in the use of edged weapons, especially his Japanese sword and spike-knuckled trench knives, but he is equally qualified with and willing to use firearms and explosives. Snake Eyes is quiet in his movements and rarely relies on one set of weapons to the exclusion of others.
During one of his first missions for G.I. Joe, Snake Eyes' face was severely disfigured in a helicopter explosion. Since then, Snake Eyes has had extensive plastic surgery to repair the damage, but his vocal cords cannot be repaired. He usually wears a black bodysuit, along with a balaclava and visor to cover his face. When out of his uniform, Snake Eyes is shown to be Caucasian with an athletic build, blonde hair, and blue eyes.
Snake Eyes has been shown in most continuities to be romantically involved with fellow team member Scarlett. He has also had several apprentices, including Kamakura, Tiger Claw, and Jinx. His personal quote is "Move with the wind, and you will never be heard."
Toyline
Snake Eyes was one of the original figures in the G.I. Joe: A Real American Hero toyline in 1982. He shared many parts with other figures of that series, except for his unique head sculpt. He was designed to save Hasbro money in the paint application process, as his first figure was made of black plastic with no paint applied for details, and his head did not require any detail because of the mask. All of the original sixteen figures from 1982 were released with "straight arms". The same figure was re-released in 1983 with "swivel-arm battle grip", which made it possible for figures to "hold" their rifles and accessories in a more naturally human pose, as the forearm could now rotate 360 degrees.
A second version of Snake Eyes was released in 1985, packaged with his wolf Timber. A third version of Snake Eyes was released in 1989, and a fourth version in 1991. Snake Eyes has also been released as a member of several sub-lines of G.I. Joe figures, such as Ninja Force (1993) and Shadow Ninjas (1994). He has also been released in several Hasbro multi-packs, such as the Heavy Assault Squad, Winter Operations, and the Desert Patrol Squad Toys "R" Us exclusive. A common element of almost all Snake Eyes figures is that his face is covered (except for the 2005 "Classified" series action figure, depicting him before he was disfigured).
The 1991 version was also released as a 12" G.I. Joe Hall of Fame action figure in 1992. This Snake Eyes figure introduced a new variation on the trademark G.I. Joe scar by putting the scar over the figure's left eye, instead of on his right cheek as had traditionally been the case during the vintage era (1964–1978) of G.I. Joe.
A version of Snake Eyes with no accessories came with the Built to Rule Headquarters Attack in 2004. The figure featured additional articulation with a mid-thigh cut joint, and the forearms and the calves of the figure sported places where blocks could be attached.
International variants
The 1982 mold of Snake Eyes was used in several countries in various forms. In most countries, because he was different from all of the other G.I. Joe figures available at that time, he was treated as a member of Cobra. In Brazil, his head was recolored and used to create Cobra De Aço (Cobra of Steel), and the entire mold was used with a silver Cobra logo to create Cobra Invasor. The figure was also available without the Cobra logo as O Invasor. In Argentina, Snake Eyes was recolored in red and silver, and released as Cobra Mortal and as a different version of Cobra Invasor.
25th Anniversary
Snake Eyes was featured in the G.I. Joe Team 5-pack for the 25th Anniversary in 2007 as a Commando, using a new mold heavily based on his first design. His ninja design (V2) was also sold in the first line of individual figures, packaged with Timber, in 2007. In 2008, he received an updated version of his "Version 3" mold from 1989, which featured removable butterfly swords for the first time. For the finale of the 25th anniversary in April 2009, Hasbro launched a poll on their website for fans to pick their favorite figures for the Hall of Heroes line. Two versions of Snake Eyes were selected for this series, which featured the figures packaged on a blister card, but also in a special collector's box.
The Rise of Cobra
In 2009, to coincide with the film G.I. Joe: The Rise of Cobra, Hasbro released four figures based on the Snake Eyes movie character. The Ninja Commando figure is a classic rendition of his "V2" uniform from the original series. The Paris Pursuit figure features a uniform similar to his "V2" uniform, but with an overcoat, and includes either a black or grey wolf. The "Arctic Assault" figure is dressed in a white winter parka, with a traditional black mask. The "City Strike" figure features the head of Snake Eyes from G.I. Joe: Resolute, on the body of a previous version. Snake Eyes was also released as part of the Target-exclusive "G.I. Joe Rescue Mission" 4-pack, with the "Paris Pursuit" head on a new body. A version of Snake Eyes was released in 2010 with the "Jet Storm Cycle".
Snake Eyes was released for the G.I. Joe: The Rise of Cobra line as a 12-inch "ninja figure", with a sound chip and speaker in the torso, and push button "sword fighting action". His arms and hands featured molded-on clothing and gear. He was also released in a Wal-Mart exclusive wave of 12 inch figures, packaged with the Arashikage Cycle.
The Pursuit of Cobra
Two versions of Snake Eyes were released in 2010 as part of "The Pursuit of Cobra" line, one with his wolf Timber, and one with a special "tornado kick" feature. Both "Arctic Threat" and "Desert Battle" versions of Snake Eyes were also released in 2011.
30th Anniversary
In 2011, two versions of Snake Eyes were released as part of the 30th Anniversary line, including one based on the animated series G.I. Joe: Renegades. As with the first movie, Hasbro released four figures based on the Snake Eyes character from G.I. Joe: Retaliation in 2012: a single carded figure, one included with the "Ninja Speed Cycle", one (with very limited articulation) included with the "Ninja Commando 4x4", and one with the "G.I. Joe Ninja Showdown Set". Three more versions of Snake Eyes were released in 2013.
G.I. Joe Classified Series
The second quarter of 2020 saw the release of the G.I. Joe Classified Series, a new line of highly articulated 6-inch scale action figures that includes prominent characters like Snake Eyes. The line updates the classic character designs for the modern era, with detailed paint applications and articulation, plus accessories inspired by each character's history.
Comics
Marvel Comics
Snake Eyes first appears in G.I. Joe: A Real American Hero #1 (June 1982).
In the Marvel Comics' continuity, written and drawn by Larry Hama, Snake Eyes, Stalker, and Storm Shadow served together during the Vietnam War in an LRRP unit. On a particular mission, a heavy firefight with the North Vietnamese Army (NVA) resulted in the apparent death of his teammates (among them Wade Collins, who actually survives and later joins Cobra, becoming Fred II of the Fred series Crimson Guardsmen). When a helicopter arrived to pick up the surviving team members, the pursuing NVA opened fire, severely injuring Snake Eyes. Despite a direct order from Stalker to leave him, Storm Shadow went back for Snake Eyes, and was able to get Snake Eyes safely aboard the helicopter.
Upon returning home from the war, Snake Eyes met with Colonel Hawk, who informed him that his family had been killed in a car accident (which involved the brother of the man who would eventually become Cobra Commander). Devastated, Snake Eyes accepts an offer to study the ninja arts with Storm Shadow's family, the Arashikage Clan. Over time, Snake Eyes and Storm Shadow became sword brothers, and unintentional rivals for the attention and favor of Storm Shadow's uncle, the Hard Master. During one of Snake Eyes' training sessions, the Hard Master expressed his desire for Snake Eyes to take over leadership of the Arashikage clan instead of Storm Shadow. Snake Eyes refused, but then Zartan—hired by Cobra Commander to avenge the death of his brother—mistakenly killed the Hard Master instead of Snake Eyes, using an arrow he stole from Storm Shadow. With Storm Shadow believed responsible for the death of the Hard Master, the Arashikage ninja clan dissolved. Snake Eyes returned to America, where he took up residence in the High Sierra mountains, and was eventually recruited for the G.I. Joe Team by Hawk and Stalker.
During one of the team's first missions in the Middle East, Snake Eyes, Scarlett, Rock 'n Roll, and Grunt are sent to save George Strawhacker from Cobra. On the way, their helicopter collides with another in midair, forcing the Joes to bail out. When Scarlett is trapped in the burning helicopter, Snake Eyes stays behind to save her, but a window explodes in his face, scarring him and damaging his vocal cords. Despite his injuries, Snake Eyes convinces Hawk to let him continue on with the mission. Strawhacker, who was once engaged to Snake Eyes' sister, never learns the identity of the "scarred, masked soldier" who saved his life.
Later, when Scarlett is captured by Storm Shadow, Snake Eyes travels to Trans-Carpathia to rescue Scarlett, and battles Storm Shadow for the first time since he had left the Arashikage clan. Snake Eyes eventually learns that Storm Shadow joined Cobra to find out who was truly behind the murder of the Hard Master. After discovering it was Zartan who killed his uncle, Storm Shadow leaves Cobra and becomes Snake Eyes' ally, ultimately becoming a member of the G.I. Joe Team.
Snake Eyes and Storm Shadow would team up for some of G.I. Joe's toughest missions, and the bond between them would be both strengthened and tested. In the story arc "Snake Eyes Trilogy", the Baroness seeks revenge upon Snake Eyes, under the mistaken belief that he had killed her brother in Southeast Asia. She captures Snake Eyes while he is recovering from plastic surgery to repair his face, and shoots Scarlett in the process. Storm Shadow, Stalker, and Wade Collins lead a rescue at the Cobra Consulate building where Snake Eyes was imprisoned. After a second rescue mission for George Strawhacker and a run-in with the Night Creepers, Snake Eyes is finally reunited with Scarlett. For the first time in many years, Snake Eyes speaks Scarlett's name, and she wakes from her coma, eventually returning to active duty.
As Marvel's G.I. Joe series is drawing to a close, Snake Eyes and Cobra Commander finally battle each other in issue #150. Snake Eyes eventually wins against an armored Cobra Commander, but the Commander would have the last laugh, as he captures Storm Shadow and successfully brainwashes him back to the allegiance of Cobra. Snake Eyes and Scarlett would continue to serve G.I. Joe until its disbandment.
Devil's Due Publishing
Devil's Due Publishing and Image Comics introduced new elements into Snake Eyes' past during their Snake Eyes Declassified miniseries, which show more of Cobra Commander's motivation to kill Snake Eyes while training to become a ninja. Snake Eyes had an encounter with Cobra Commander prior to the formation of Cobra, where Cobra Commander befriended Snake Eyes and tried to recruit him into murdering a judge. The judge had convicted Cobra Commander's older brother of arson and insurance fraud, resulting in the ruin of his brother's life, causing his spiral downward into alcoholism, and ultimately the car accident that claimed both his life and the lives of Snake Eyes' family. Snake Eyes agreed to accompany Cobra Commander, but at the last minute refused to go along with the plan. Cobra Commander then killed the judge, and swore revenge against Snake Eyes, resulting in him hiring Firefly (who in turn subcontracted Zartan) to kill Snake Eyes while he was training with the Arashikage Clan.
The first four issues of G.I. Joe: Frontline featured Larry Hama's story "The Mission That Never Was". After the official disbandment, the original G.I. Joe team had to transport a particle beam weapon from Florida to General Colton's location in New York City. Since Billy, Storm Shadow, and the Baroness were left under the influence of Cobra's Brain Wave Scanner at the end of the original series, Snake Eyes is on this mission to save Storm Shadow. At the end of this story, Storm Shadow returns to his ways as a ninja, and says he will deal with Snake Eyes when he is ready. Snake Eyes and Scarlett move back to his home in the High Sierras, where Timber has died, having sired a litter of pups, one of which Snake Eyes adopts. After the G.I. Joe Team disbanded, Snake Eyes and Scarlett leave the military and become engaged, but for unknown reasons on the day of the wedding, Snake Eyes disappears and retreats again to his cabin in the High Sierras.
The following Master & Apprentice miniseries reveals that Snake Eyes, along with Nunchuk, and T'Jbang, were training a new apprentice, Ophelia, to be the last of the Arashikage ninja clan, shortly after he and Scarlett became engaged. As Ophelia's final test, she and Snake Eyes confront Firefly for his role in the murder of the Hard Master. However, Firefly kills Ophelia and escapes, leaving Snake Eyes devastated. As a result, on his wedding day, Snake Eyes breaks off his engagement to Scarlett in front of Stalker, then again disappears to his compound in the Sierras. There, he is approached by Sean Collins, the son of his Vietnam War buddy Wade Collins. Sean asks Snake Eyes to train him as a new apprentice, after watching his crew also get slaughtered by Firefly on the night Ophelia was killed. Some time later, Jinx and Budo call Snake Eyes to investigate new intel on the location of Firefly, who is working for the "Nowhere Man". Snake Eyes confronts Firefly, who is meeting with another masked ninja, revealed to be Storm Shadow. Sean is eventually given the name Kamakura, and would later join the G.I. Joe team.
In the pages of G.I. Joe: A Real American Hero, Snake Eyes and Scarlett would be reunited upon G.I. Joe's reinstatement, and the two again became engaged. Snake Eyes is involved in many skirmishes with Cobra, including altercations with Storm Shadow, the return of Serpentor (in which Snake Eyes was injured by a grenade blast but quickly recovered), Snake Eyes' triumph over the Red Ninja leader Sei Tin (which gave Snake Eyes control of the Red Ninja clan), and a close-call defeat at the hands of the heavily armored Wraith. The team is then reduced to a smaller unit, and when Snake Eyes, Scarlett, and Duke get into trouble, a shadowy cabal of generals known as "The Jugglers" has Snake Eyes and Duke arrested. However, Scarlett meets with Storm Shadow (who had broken free of his mind control), and they rescue Snake Eyes and Duke from a convoy. They escape to Iceland and hide out with Scanner, however they are tailed by former Coil agent Overlord, who fatally injures Scanner and locks the Joes in a bomb shelter. In his last moments, Scanner activates the Icelandic station's self-destruct mechanism, killing Overlord in the blast and saving the Joes. The team then assists Flint, Lady Jaye, and General Philip Rey in dealing with a new menace, the Red Shadows. When the Red Shadows attempted to assassinate Hawk at a mountain camp, Snake Eyes sends his apprentice Kamakura to get Hawk to safety. Snake Eyes would later help in defeating the Shadows before their plot could be set into motion, even fighting leader Wilder Vaughn, who escapes.
Snake Eyes and Kamakura also travel to Asia, to assist Storm Shadow in finding his apprentice, who had been kidnapped by the Red Ninjas. Snake Eyes helps Storm Shadow defeat Red Ninja leader Sei Tin, but the mission is a failure. Snake Eyes relinquishes control of the Red Ninjas to Storm Shadow, who in turn leaves his clan in T'Jbang's care.
America's Elite
Snake Eyes is reactivated as a member of the team in G.I. Joe: America's Elite, along with Stalker, Scarlett, Flint, Duke, Shipwreck, Roadblock, and Storm Shadow. With their new covert status and reduced roster, they continued to track down Cobra cells and eliminate them, from their new headquarters in Yellowstone National Park code named "The Rock". When Vance Wingfield seemingly returns from the grave, and drops deadly satellites onto major metropolitan areas using equipment supplied by Destro, Duke, Scarlett and Snake Eyes all leave to conduct solo investigations. Snake Eyes tracks Firefly to Chicago, and interrupts his attempt to assassinate a gang lord. Upon returning, Snake Eyes finds that Scarlett has been captured while investigating Cesspool. He reveals that both he and Scarlett had implanted tracking devices in one another, and that only they know the frequencies. He finds her on Destro's submarine in the Pacific Ocean, and succeeds in rescuing her, but Destro escapes, and Snake Eyes dies during the operation.
Snake Eyes' body is stolen by the Red Ninjas, in order to resurrect him. The Joes track the Red Ninjas to China, where Sei-Tin takes control of Snake Eyes, and uses him to exact his revenge against Storm Shadow and Kamakura. They eventually defeat Sei-Tin and return Snake Eyes to normal. Shortly after, Scarlett observes Snake Eyes seemingly abandoning all of his ninja training, and focusing solely on his military training instead. Following the session, Scarlett unmasks Snake Eyes and is shocked at the sight. Later, Snake Eyes reveals to Scarlett and Stalker that the Baroness is still alive, and being held captive within the Rock, which leads them to confront General Colton. When ordered on a mandatory break, Snake Eyes and Kamakura go on a retreat to the High Sierras, where Kamakura tries to rationalize that Snake Eyes could not have died, but must have put himself into a trance. He then argues that Snake Eyes should not have given up his ninja skills, and that he wishes to work with him to restore his faith. Snake Eyes returns to active duty, and investigates a medical facility with Stalker and Scarlett, where they find a fatally injured Scalpel. He informs them that the Baroness is free and looking for revenge on both G.I. Joe and Cobra.
In the one-shot comic Special Missions: Antarctica, Snake Eyes is part of the team that is called to investigate an Extensive Enterprises venture in Antarctica. The G.I. Joe team eventually split up to find Tomax and Xamot, and Snake Eyes goes with Snow Job to infiltrate their base, where they fight and chase Tomax off.
Snake Eyes is involved in various battles during the final arc "World War III". When the Joes start hunting down every member of Cobra that they can find, Snake Eyes and Scarlett apprehend Vypra, and capture Firefly in Japan. As part of Cobra Commander's sinister plot, he sends the elite squadron known as The Plague to attack G.I. Joe headquarters. As the evenly matched Plague and G.I. Joe teams clash, Cobra sleeper cells attack government buildings in nations across the globe.
Meanwhile, Storm Shadow tries to stop Cobra from liberating prisoners from the G.I. Joe prison facility, The Coffin. He is partially successful, but Tomax manages to free Firefly and several others, while killing those Cobra Commander considered "loose ends". Storm Shadow then joins Snake Eyes and the rest of the main team in defeating several Cobra cells, and disarming nuclear weapons that Cobra Commander has placed in the Amazon and Antarctica. Cobra Commander and The Plague retreat to a secret base in the Appalachian Mountains, where the final battle takes place, and Snake Eyes again defeats Firefly in a sword duel. In the end, Snake Eyes is shown among the members of the fully restored G.I. Joe team.
Hasbro later announced that all stories published by Devil's Due Publishing are no longer considered canonical, and are now considered an alternate continuity.
Alternate continuities
In the separate continuity of G.I. Joe: Reloaded, which featured a more modern and realistic take on the G.I. Joe/Cobra war, it is hinted that Snake Eyes is a former Cobra agent, who quit and decided to assist G.I. Joe instead. Although he did not serve on the team, it was shown that Snake Eyes was interested in Scarlett, but the series ended before anything further was explored.
Snake Eyes appears in G.I. Joe vs. The Transformers, the Devil's Due crossover series with Transformers set in an alternate continuity. As G.I. Joe is organized, Snake Eyes is assigned to a group of soldiers protecting a peace conference in Washington. He is called "Chatterbox" but does not actually speak, because he had been dared by the other soldiers to keep quiet for a time. Snake Eyes is terribly scarred, and loses his voice, when a Cobra Commander-controlled Starscream shoots Cover Girl's missile tank out from under him. His family is also killed during the attack. During the assault on Cobra Island, Snake Eyes slices open one of Starscream's optics and shoves a grenade into the socket. During the final part of the first miniseries, Snake Eyes is given a Cybertronian-based Mech that allows him to fight the much larger Decepticons, as well as Cobra agents in Decepticon suits. The second miniseries focuses on several Transformers being sent back in time to various time periods, which forces G.I. Joe and Cobra to team up to retrieve them. The first group to be sent back in time includes Snake Eyes, Lady Jaye, Zartan, and Storm Shadow, sent back to 1970s California. After recovering all of the Transformers, they arrive back on Cybertron. During the third miniseries, it is shown that Snake Eyes has developed a love interest with Scarlett, who returns those feelings after he rescues her from a Decepticon prison, and removes his mask to show his scarred face. Later, they appear to be in a relationship. During the fourth miniseries, Snake Eyes is only shown in one scene as still being an active member of the Joe team, along with Flint, Lady Jaye, and Duke. He also appears briefly fighting several of the Cobra-La Royal Guards.
Transformers/G.I. Joe was originally planned for publication during the same time as G.I. Joe vs. The Transformers by Dreamwave Productions, until they announced bankruptcy, leaving only the first miniseries completed. The story features the Transformers meeting the G.I. Joe team in 1939, where Snake Eyes is prominent in defeating the Decepticons by opening the Matrix. In the second miniseries set in the 1980s, Snake Eyes is somehow still in fighting shape, despite having been a member of the team in 1939.
IDW Publishing
G.I. Joe: A Real American Hero continuation
In 2009, IDW Publishing took over the license for G.I. Joe comics, and started a new series that continues where the Marvel Comics series ended. The series began with a free Comic Book Day issue #155 ½, and replaces all of the Devil's Due Publishing continuity that had previously been established. This continuation of the Marvel series is again written by Larry Hama.
Snake Eyes ultimately sacrifices himself to stop a revived Serpentor from destroying the Pit III, by tackling him into a shaft with a grenade in his hand. To convince Cobra that Snake Eyes is still alive, the recently recruited Sean Collins, who himself has been disfigured much as Snake Eyes was, is given the identity of Snake Eyes to continue in his name.
Hasbro Comic Universe
IDW Publishing also started a G.I. Joe comic series that does not connect to any of the past continuity. Snake Eyes is once again a member of the team, and throughout the first storyline, he is a renegade agent of G.I. Joe, with whom Scarlett is in communication unapproved by Hawk. Snake Eyes first appears in the Crimean Riviera chasing Nico. It is later mentioned by Duke that Snake Eyes has gone AWOL. Scarlett sends him a message signed "Love Red", which is a code telling him to run. He heads to Seattle where he finds Mainframe, and gives him the hard drive that Scarlett requested, containing information about Springfield. Once there, they retrieve evidence from a secret lab that Cobra exists, before the town is leveled by a MOAB. With the evidence in hand, the two are accepted back into the G.I. Joe team. Snake Eyes eventually heads to Manhattan, NYC, to meet his old mentor, who helps him heal his mind after his defeat.
In G.I. Joe: Origins, Snake Eyes receives an updated origin for his wounds. In the first storyline, Duke and Scarlett travel to the North Las Vegas community hospital, and find Snake Eyes in the burn unit intensive care near bed K (BUICK), the only survivor of an explosion at a plastic surgery clinic. Snake Eyes' face and hands are completely bandaged, and he is now mute because of the explosion. Duke and Scarlett escape with Snake Eyes, before the hospital room is destroyed by the Billionaire/Chimera. Snake Eyes continues to appear with his face wrapped in bandages throughout the first storyline. He later appears in his black uniform with a visor and sword, a variation of his original figure's uniform, as part of the second storyline on a mission in London.
A solo title G.I. Joe: Snake Eyes started in May 2011, being part of the G.I. Joe: Cobra Civil War saga. After Cobra Civil War ended, G.I. Joe: Snake Eyes continued into the new story arc G.I. Joe: Cobra Command, finally showing why and how he deserted the Joes and what part Storm Shadow had played.
In January 2015, IDW published G.I. Joe: Snake Eyes – Agent of Cobra. Written by Mike Costa, this series looks into Snake Eyes joining Cobra, whether Storm Shadow and Scarlett will join him, and how Destro plays into his transition.
Snake Eyes is also the central character in IDW's 2020 comic series Snake Eyes: Deadgame, from writers Rob Liefeld and Chad Bowers.
DC Comics
Snake Eyes appears in the third issue of DC's crossover comic Batman/Fortnite: Zero Point. In the issue, Snake Eyes is sent into a time loop to fight Batman, with each loop ending in a draw, until the two come to respect each other and start working together. The issue ends with Snake Eyes walking into the storm so that Batman can escape the time loop.
Animated series
Sunbow
Unlike his comic book counterpart, Snake Eyes did not play a major role in Sunbow's G.I. Joe: A Real American Hero TV series, with the exception of the first three miniseries. He was always portrayed as a trusted and loyal teammate, and even proved to have a sense of humor, as seen when he broke into a break-dancing routine on-stage, and later appeared in a disguise resembling Boy George in the "Pyramid of Darkness" miniseries. In the first miniseries, "The M.A.S.S. Device", Snake Eyes appeared in his "V1" uniform, but for all of his later appearances he wore a bluish-grey version of his "V2" uniform. Additionally, he does not have a rivalry with Storm Shadow in the cartoon, who instead fights with such characters as Spirit and Quick Kick. Although Snake Eyes does not speak, the vocal effects of his wolf Timber were provided by Frank Welker.
Some of his origins were explored in "The M.A.S.S. Device", in which he is among the several Joes who head to the Arctic in search of radioactive crystals, one of three catalytic elements needed to power a M.A.S.S. Device. While Snake Eyes obtains the crystals, which are found in a cave, Cobra cuts off the Joes' escape and Major Bludd detonates an explosive charge placed in the mine, releasing a cloud of radioactive gas. Snake Eyes lowers a glass shield, sealing himself inside while activating an emergency exit through which the rest of the Joes escape. Snake Eyes survives the radiation and collects some crystals in a canister. While stumbling through the wilderness, he frees a wolf caught in a trap. They are rescued from a polar bear by a blind hermit, who cures Snake Eyes of his radiation sickness and names the wolf Timber. Adopting the wolf as a pet, Snake Eyes returns to G.I. Joe headquarters, where he delivers the crystals.
In the second miniseries, "The Revenge of Cobra", Duke and Snake Eyes are captured by Cobra. They are forced to fight each other in gladiatorial combat, but send a Morse code message to Joe headquarters to warn of Cobra's plan to attack Washington, D.C. with the Weather Dominator. They later work with Roadblock and his friend Honda Lou West to stop Destro from controlling the Weather Dominator. In "The Pyramid of Darkness", the third miniseries and first-season premiere, Snake Eyes and Shipwreck infiltrate a Cobra underwater factory and steal a laser disc containing information on the cubes to the pyramid of darkness. They are later rescued through the efforts of a popular lounge singer named Satin. When Cobra Commander and the Crimson Twins make a final attempt to flee via rocket ship, Snake Eyes, Shipwreck and Satin manage to stop them, before escaping so that the Joes can destroy the rocket.
Snake Eyes is shown in a few scenes of G.I. Joe: The Movie, including the opening title sequence, but like many of the characters of the Sunbow cartoon, he has a very minor role in the final battle. He is part of a unit of Joes, led by Roadblock, who pursue the fleeing Cobra forces after Cobra's first attempt to steal the Broadcast Energy Transmitter (B.E.T.) and are taken captive by Cobra-La.
DiC
Snake Eyes was shown during DiC's G.I. Joe series in his 1991 "V4" uniform. He did have a few key episodes, and was shown to be working with his blood brother Storm Shadow, who was now a member of the G.I. Joe Ninja Force. Snake Eyes was portrayed more in this series as a ninja, but none of his origins or his relationships were explored before the series ended.
Direct to video
Snake Eyes is a member of G.I. Joe in all of the direct to video CG-animated movies. The continuity of these movies does not tie into the previous history, and more directly leads into the events of G.I. Joe: Sigma 6. Snake Eyes and Storm Shadow are once again on opposite sides fighting each other.
Snake Eyes is shown throughout G.I. Joe: Spy Troops, which marked his first appearance as a major animated character. He is part of the team that goes to rescue Scarlett after she is taken hostage by Zartan, but their relationship is not fully explored. Snake Eyes also spares Storm Shadow's life, even though Storm Shadow asks Snake Eyes to end it.
Snake Eyes is seen in G.I. Joe: Valor vs. Venom as the master to both of his apprentices Jinx and Kamakura. Snake Eyes gives Kamakura a sword named "Tatsuwashi", and battles Storm Shadow as well as several of the new Cobra Ninjas.
In the animated short G.I. Joe: Ninja Battles, a new apprentice codenamed Tiger Claw joins the G.I. Joe team and learns of Snake Eyes' and Storm Shadow's past in the Arashikage Clan. Most of the movie is narration over original artwork and some scenes from the previous two movies, as well as some new footage at the end. This movie is not in the same continuity as the comics, and its events do not seem to lead into Sigma 6.
Resolute
Snake Eyes first appears in G.I. Joe: Resolute during a briefing on the attack on the USS Flagg. During an autopsy on Bazooka, a scroll with the Arashikage symbol on it is found. The instructions on the scroll tell Snake Eyes to go where everything began, where he takes out a team of Cobra Neo-Vipers while Storm Shadow watches and waits. After this battle, a brief history of Storm Shadow and Snake Eyes is shown. In this series, their rivalry comes from Storm Shadow wanting his uncle to teach him the Seventh Step to the Sun technique, a move that allows one to kill an opponent in seven blows. When his uncle refuses, Storm Shadow signals Zartan to assassinate his uncle. Snake Eyes is shot in the throat by Zartan to prevent him from warning their master, resulting in his becoming mute. Snake Eyes and Storm Shadow face off in a one-on-one battle. Storm Shadow initially dominates the fight, as he had been taught the Sixth Step to the Sun compared to Snake Eyes' fifth. However, Snake Eyes shows that he was in fact taught the Seventh Step to the Sun technique, and kills Storm Shadow with seven blows, the last perforating his skull. He later rejoins the rest of the team in their final assault on Cobra Commander's headquarters. The love triangle of Snake Eyes, Scarlett and Duke is also explored slightly in this series. Early in the episode, Duke makes Scarlett choose between Snake Eyes and himself, and she ultimately decides to be with Duke.
Renegades
In G.I. Joe: Renegades, Snake Eyes is a member of G.I. Joe. He was given the name "Hebi no me" ("Snake Eyes") by his Arashikage clan sensei, Hard Master, because he possesses the "steely gaze of a serpent". He cannot speak after having his throat punctured, and only appears for special missions when called by Scarlett, who can "translate" what he is thinking. He is not used to teamwork, but now that he has joined G.I. Joe, his sense of honor and morality will not let him walk away. In the episode "Dreadnoks Rising", Zartan removes Snake Eyes' visor but puts it back on, saying, "You need it more than I do". Snake Eyes' wolf Timber made an appearance in the episode "White Out", where he was rescued by Snake Eyes from a bear trap before they were assaulted by Storm Shadow and Shadow-Vipers; at the end, Snake Eyes asks Snow Job to watch Timber until he returns. In the episode "Revelations, Part 1", Scarlett learns that Snake Eyes briefly met her father and promised him to look after his daughter, and he shows signs of having feelings for her. In scenes set before his throat injury, when Snake Eyes still spoke, in the episodes "Return of the Arashikage, Parts 1–2", Snake Eyes was voiced by Danny Cooksey.
Sigma 6
Sigma 6 toys
Snake Eyes again appears as part of the G.I. Joe: Sigma 6 toy series. Although similar in concept to the earlier G.I. Joe: A Real American Hero toyline, the Sigma 6 action figures do not tie into the continuity of the original G.I. Joe universe, and are 8" in height rather than the smaller 3 ¾" scale of the A Real American Hero figures.
The first wave in 2005 contained a Snake Eyes figure. A "Ninja Showdown" battle pack also contained alternate versions of Snake Eyes and Storm Shadow. In 2006, all of the 2005 figures were re-released with new molds and accessories, including four different versions of Snake Eyes. A new version of Snake Eyes was also released in 2007.
To complement the line of G.I. Joe: Sigma 6 action figures and vehicles, Hasbro also introduced a "mission scale" line of 2 ½ inch scale Mission Sets action figures. Each set of action figures is packaged as a "mission in a box", and includes a Mission Manual.
Sigma 6 animated series
In the Sigma 6 animated series, Snake Eyes' history has been substantially changed from the A Real American Hero series, but he still shares a connection with Storm Shadow, who refers to him as "brother". Although Storm Shadow is a brainwashed Cobra agent, he blames Snake Eyes for the ruin of the Arashikage ninja clan. In Sigma 6, both Jinx and Kamakura serve as Snake Eyes' apprentices and G.I. Joe reserve members. As in the original series, Snake Eyes is mute, but the reason for this is not explored. While the A Real American Hero animated series never showed Snake Eyes' true face, the Sigma 6 continuity takes some visual cues from the A Real American Hero comics. In one episode, when Snake Eyes is fighting Storm Shadow, his visor breaks, revealing that he has blond hair, blue eyes, and a scar near his eye as a result of a training accident. In the sixth episode of season 2, Snake Eyes faces off against a pack of wolves; after saving one, the unnamed wolf helps him throughout the episode, and is later seen howling atop a hill near Sigma Six headquarters. This was confirmed as a Sigma 6 version of Timber when an Arctic Sigma Six figure of Snake Eyes was released with Timber, the figure's bio card describing the plot of this episode.
Sigma 6 comics
Snake Eyes appeared in the Sigma 6 comic book, released by Devil's Due Publishing with a direct connection to the animated series. Snake Eyes is spotlighted in issue #6, which centers on Storm Shadow, as Snake Eyes is sent in to retrieve a stolen electronic device from him. Storm Shadow refers to Snake Eyes as "brother", and breaks Snake Eyes' headgear, partially exposing his face, which is again shown to be that of a blond American with a scar.
Live action film
G.I. Joe: The Rise of Cobra (2009)
Martial artist/stuntman Ray Park played the character with actor and martial artist Leo Howard playing the younger version in the film G.I. Joe: The Rise of Cobra. In an early draft by Stuart Beattie, Snake Eyes would have spoken as a gag, but Larry Hama convinced him to drop the joke.
In the movie, Snake Eyes' origin is rebooted, with him being an abandoned 10-year-old child who found his way to the home of the Arashikage Clan. He battles the young Thomas Arashikage (Storm Shadow), who attacks him for stealing food. However, the orphan's natural ability to fight impresses Thomas's uncle, the Hard Master, who gives Snake Eyes his name and takes him under his wing. Though he initially loses to Thomas, Snake Eyes eventually surpasses him and gains the favor of the Hard Master, becoming recognized as the Hard Master's top student. Angered at the Hard Master choosing Snake Eyes over him, Thomas apparently kills the Hard Master off-screen, and is then seen running off in the midst of the chaos. Since then, Snake Eyes has chosen to take a vow of silence. Learning that Thomas, now known as Storm Shadow, is a member of Cobra, Snake Eyes fights him, before stabbing him and allowing him to fall into icy water at Cobra's Arctic base, leaving him for dead. Snake Eyes returns to the Pit with the surviving members of G.I. Joe.
G.I. Joe: Retaliation (2013)
Park returns as Snake Eyes in the sequel, G.I. Joe: Retaliation. In the film, Snake Eyes is framed by Zartan for assassinating the President of Pakistan under orders of G.I. Joe. Storm Shadow disguises himself as Snake Eyes to break Cobra Commander out of prison, as the real Snake Eyes watches from the shadows. With the help of Jinx, Snake Eyes captures Storm Shadow and takes him to the Blind Master to pay for his assassination of the Hard Master. However, Snake Eyes learns that Zartan was the one who murdered the Hard Master and framed Storm Shadow for it, and that Storm Shadow only joined Cobra in order to avenge the Hard Master's death. With this revelation, Storm Shadow teams up with Snake Eyes and the Joes to stop Cobra Commander's plan to destroy several countries and take over the world. During the final battle, Snake Eyes allows Storm Shadow to deal with Zartan by giving him the Blade of Justice. Snake Eyes and the Joes stop Cobra Commander's plan and are declared heroes, cleared of the accusations against them.
Snake Eyes (2021)
A spin-off of G.I. Joe featuring Snake Eyes, set in an alternate continuity, was released in 2021. Henry Golding stars in the title role, with Max Archibald playing the younger version of the character.
In Washington state, a young boy and his father walk through the woods and head for a cabin which, unknown to the boy, is a "safe house" where they are hiding from assassins led by Mr. Augustine. That night, they are attacked by Augustine and his men. To determine his fate, the father is forced to roll a pair of dice, and rolls double ones. The boy tries to defend him but fails. While escaping as instructed, he hears the gunshot that kills his father. He takes his name from the dice roll to conceal his true identity.
Twenty years later, the boy, now known as "Snake Eyes", has grown into a talented and deadly martial arts fighter. In an underground fighting circuit in Los Angeles, he is discovered by Yakuza boss Kenta Takamura, who asks him to work for him. Snake Eyes accepts after Kenta offers to help him find his father's killer, and begins smuggling weapons inside gutted fish. One day, Kenta orders Snake Eyes to kill his cousin Tommy for betraying him, as a proof of loyalty, but Snake Eyes instead helps Tommy escape, losing consciousness in the process due to blood loss. Snake Eyes wakes up in Tommy's private jet en route to Tokyo. In return for saving his life, Tommy invites him to join the Arashikage Clan. Tommy's grandmother Sen, leader of the clan, and Akiko, the head of security, agree to let Snake Eyes join the clan if he passes three trials. Unbeknownst to the clan, Snake Eyes is working as a double agent for Kenta to steal the "Jewel of the Sun", an artifact with destructive powers which the Arashikage Clan has long protected and sworn never to use, so that Kenta can use it to seize power in the clan. When he learns he is tasked with stealing the jewel on behalf of the Cobra terrorist organization, Snake Eyes is reluctant, but Kenta hands him the dice used when his father was killed and promises to hand over Mr. Augustine if he succeeds. Snake Eyes then wins Akiko's trust by revealing his father's murder, explaining why there is no recorded history of him. He fails the third and final trial, involving sacred anacondas, but Akiko saves his life from the snakes. He admits that he has not been entirely honest, and is expelled.
Snake Eyes later returns to the dojo and steals the Jewel of the Sun, delivering it to Kenta and the Baroness. Upon learning that Augustine is a Cobra agent, Snake Eyes realizes the consequences of his bloodlust and spares him. He goes back to warn the Arashikage Clan of Kenta's attack, and assists them in fighting off Kenta's men. Kenta manages to escape, but Snake Eyes traps him in the anaconda pit, allowing the snakes to devour him. The clan judges Snake Eyes to be pure of heart for abandoning his desire for revenge and welcomes him back. Tommy, who has broken his promise never to use the jewel, is deemed no longer fit to lead, and leaves the clan, vowing to kill Snake Eyes should they ever meet again. Snake Eyes is approached by G.I. Joe member Scarlett, an ally of the clan, who invites him to become a Joe after explaining that his father was a G.I. Joe agent as well. Snake Eyes begins a mission to find Tommy and bring him back to the Arashikage Clan. He dons a black outfit and helmet given to him by Akiko before leaving.
Video games
Snake Eyes is one of the featured characters in the 1985 computer game G.I. Joe: A Real American Hero.
Snake Eyes appears, in his "V3" uniform, as a playable character in the 1991 G.I. Joe video game for the NES. His special abilities include jumping faster and higher than the other characters, and he can use his sword as a projectile weapon that does not use up any ammo. He can be selected for any of the missions from the start, and is actually the team leader for the game's third mission set in New York.
Snake Eyes appears, in his "V4" uniform, as a playable character in the 1992 G.I. Joe: The Atlantis Factor video game for the NES. He can be selected for missions after he is found, which is not until late in the game after completing Area E.
Snake Eyes is featured as a playable character in the 1992 arcade game G.I. Joe.
Snake Eyes is featured as a playable character in the 2009 video game G.I. Joe: The Rise of Cobra.
Snake Eyes is featured as a purchasable skin in the game Fortnite: Battle Royale.
Snake Eyes is featured as a playable character in the 2020 video game G.I. Joe: Operation Blackout.
Reception
Snake Eyes is one of the most popular and recognizable G.I. Joe characters. In 1986, G.I. Joe creator Larry Hama called him the most successful character he ever created, believing this is because his mysterious appearance and persona mean "he becomes a universal blank slate for projection of fantasy for anybody."
In 2008, TechCrunch used the question "Could he/she beat Snake Eyes?" while evaluating the top video game ninja characters. In 2010, Topless Robot ranked Snake Eyes first on its list of The 10 Coolest G.I. Joe Ninjas, calling him "the most popular member of the team". UGO.com included him on its lists of TV's Worst Speakers (in 2010) and the Best Silent Killers of Movies and TV (in 2011). In 2013, IGN ranked Snake Eyes as the best G.I. Joe on its list of The Top 10 Joes and Cobras, also stating he is like the "Wolverine of G.I. Joe, except he knows when to shut up."
References
External links
Snake Eyes at JMM's G.I. Joe Comics Home Page
Snake Eyes at YOJOE.com
GI Joe Classified Collecting Guide at ActionFigure411.com
Comics characters introduced in 1982
Fictional adoptees
Fictional characters with disfigurements
Fictional kenjutsuka
Fictional military sergeants
Fictional mute characters
Fictional ninja
Fictional Ninjutsu practitioners
Fictional orphans
Fictional swordfighters
Fictional United States Army Delta Force personnel
Fictional United States Army Special Forces personnel
Fictional Vietnam War veterans
G.I. Joe soldiers
Fictional suicides |
5331396 | https://en.wikipedia.org/wiki/Pat%20McManus | Pat McManus | Patrick A. McManus (October, 1859 – May 19, 1917) was a Major League Baseball pitcher during part of the 1879 season. He was a native of Ireland.
McManus started and completed two games for the Troy Trojans of the National League. He gave up just 25 baserunners (24 hits and 1 walk) in 21 innings. He also gave up 21 runs, but only 7 of them were earned runs.
His first game was on May 22, 1879 against the Cleveland Blues at Kennard Street Park in Cleveland, Ohio. The Trojans lost, 10–8. His second and last game was August 13, 1879 against the Providence Grays at Putnam Grounds in Troy, New York. The Trojans lost, 11–3.
One of his teammates on the 1879 Trojans was Hall of Famer Dan Brouthers.
In McManus's short MLB career, he was 0–2 with 6 strikeouts, 1 walk, and an ERA of 3.00.
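For readers unfamiliar with the statistic, the 3.00 figure follows directly from the totals above, since earned run average is conventionally defined as earned runs allowed per nine innings pitched:

\mathrm{ERA} = 9 \times \frac{\text{earned runs}}{\text{innings pitched}} = 9 \times \frac{7}{21} = 3.00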
He died at the age of 57 in Mount Hope, New York.
External links
Retrosheet
References
Troy Trojans players
Major League Baseball pitchers
Major League Baseball players from Ireland
Major League Baseball players from the United Kingdom
19th-century baseball players
1859 births
1917 deaths
Troy Haymakers players
Capital City of Albany players
Rochester Hop Bitters players
People from Mount Hope, New York |
18613279 | https://en.wikipedia.org/wiki/Employee%20monitoring | Employee monitoring | Employee monitoring is the (often automated) surveillance of workers' activity. Organizations engage in employee monitoring for different reasons such as to track performance, to avoid legal liability, to protect trade secrets, and to address other security concerns. This practice may impact employee satisfaction due to its impact on the employee's privacy. Among organizations, the extent and methods of employee monitoring differ.
Surveillance Methods
A company can use its everyday electronic devices to monitor its employees almost continuously. Common methods include software monitoring, telephone tapping, video surveillance, email monitoring, and location monitoring.
Software monitoring. Companies often use employee monitoring software to track what their employees are doing on their computers. Tracking data may include typing speed, mistakes, applications used, and what specific keys are pressed.
Telephone tapping can be used to record employees' phone call details and conversations during monitoring. The number of calls, the duration of each call, and the idle time between calls can all go into a log for analysis by the company; a minimal example of such log analysis appears after this list of methods.
Video surveillance can provide a video feed of employee activities that is passed through to a central location, where it is monitored by another person. The feed can be recorded and stored for future reference, which some believe is the most accurate way to monitor employees. "This is a benefit because it provides an unbiased method of performance evaluation and prevents the interference of a manager's feelings in an employee's review" (Mishra and Crampton, 1998). Management can review an employee's performance by checking the surveillance to detect and potentially prevent problems.
Email monitoring gives employers the ability to look at email messages sent or received by their employees. Emails can be viewed and recovered even if they had been previously deleted. In the United States, the Electronic Communications Privacy Act provides some privacy protections regarding monitoring of employees' email messages and other electronic communications. See Electronic Communications Privacy Act#Employee privacy.
Location monitoring can be used for employees whose place of work moves. Common examples of companies that use location monitoring are in the delivery and transportation industries. Sometimes the employee monitoring is incidental, as the location is tracked for other purposes.
Key logging, or keystroke logging, is a process that records a user's typing. Key logging software may also capture screenshots when triggered by predefined keywords. Some see it as violating workplace privacy and it is notorious for being used with malicious intent. Loggers can collect and store passwords, bank account information, private messages, credit card numbers, PIN numbers, and usernames.
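As a concrete illustration of the telephone-log analysis mentioned above, the following minimal Python sketch computes the call count, total talk time, and idle time between calls from a list of timestamps. The log format here is hypothetical and not tied to any particular monitoring product.

from datetime import datetime, timedelta

# Hypothetical log: (start, end) timestamps for one employee's calls in a day.
calls = [
    (datetime(2023, 1, 9, 9, 0), datetime(2023, 1, 9, 9, 12)),
    (datetime(2023, 1, 9, 9, 20), datetime(2023, 1, 9, 9, 25)),
    (datetime(2023, 1, 9, 10, 5), datetime(2023, 1, 9, 10, 31)),
]

calls.sort()  # order chronologically so gaps between calls are meaningful
talk_time = sum(((end - start) for start, end in calls), timedelta())
idle_time = sum(
    ((calls[i + 1][0] - calls[i][1]) for i in range(len(calls) - 1)), timedelta()
)
print(f"{len(calls)} calls, talk time {talk_time}, idle time {idle_time}")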
Legality
Employee monitoring is often in conflict with employees' privacy. Monitoring captures work-related activity, but it can also collect employees' personal information that is not linked to their work. Monitoring in the workplace may put employers and employees at odds because both sides are trying to protect personal interests: employees want to maintain their privacy while employers want to ensure company resources aren't misused. In any case, companies can maintain ethical monitoring policies by avoiding indiscriminate monitoring of employees' activities. The employer needs to establish clear rules, and the employee needs to understand what is expected of them.
With employee monitoring, there are many guidelines that must be followed and put in place to protect both the company and the individual. The following cases have shaped the rules and regulations in effect today. For instance, in Canada, it is illegal to perform invasive monitoring, such as reading an employee's emails, unless it can be shown that it is a necessary precaution and there are no other alternatives. In Maryland, everyone in a conversation must give consent before it can be recorded (especially during telephone calls). The state of California requires that monitored conversations include a beep at certain intervals, or a message informing the caller that the conversations may be recorded; however, this does not inform the company representative which calls are being recorded. All employers must create a comprehensive employee handbook that includes both mandatory and recommended policies; handbooks must explain in detail what employees are and are not permitted to do in the workplace, and must be updated if employment laws or policies change. Other states, including Connecticut, New York, Pennsylvania, Colorado and New Jersey, have laws relating to when a conversation can be recorded. "Lawyers generally advise that one way for businesses to avoid liability for monitoring employees' online activities is to take all necessary steps to eliminate any reasonable expectation of privacy that employees may have concerning their use of company email and other communications systems." Businesses make employee monitoring a known tool that supervisors use, to avoid any potential legal issues that may arise; they announce it during new-hire orientation, in a staff meeting, or in a workplace contract that employees sign either at the time of hire or after a form of misconduct.
Effective May 7, 2022, employers in the state of New York are required to provide prior notice of the monitoring of employee internet, telephone or email usage. The new law is an amendment to the New York civil rights law and applies to any private individual or entity with a place of business in the state of New York.
Legal Uses
Businesses use employee monitoring for various reasons. The following list includes, but is not limited to:
Find needed business information when the employee is not available.
Protect security of proprietary information and data.
Prevent or investigate possible criminal activities by employees.
Prevent personal use of employer facilities.
Check for violations of company policy against sending an offensive or pornographic email.
Investigate complaints of harassment.
Check for illegal software.
Legal Issues
In January 2016, the European Court of Human Rights issued a landmark ruling in the case of Bărbulescu v Romania (61496/08) regarding monitoring of employees' computers. The employee, Mr. Bărbulescu, accused the employer of violating his rights to 'private life' and 'correspondence' set out in Article 8 of the European Convention on Human Rights. The court held that a sales engineer had a 'reasonable expectation of privacy' against personal messages being read (including those to his fiancée and his brother), even though he was told not to use a workplace Yahoo messenger for personal reasons, because "an employer's instructions cannot reduce private social life in the workplace to zero. Respect for private life and for the privacy of correspondence continues to exist, even if these may be restricted in so far as necessary". It follows that there is a human right to private communication, regardless of what an employer says.
A year later, in July 2017, a German court ruled that computer monitoring of employees is reasonable but the use of keylogging software is excessive.
Employee monitoring software developers warn that in each case it is still recommended to consult a legal representative, and that employees should give written consent to such monitoring. The majority of instances are case-by-case situations, and it is hard to treat all the issues and problems as one. As new laws have been enacted dictating the bounds of these practices, employers have been forced to change their monitoring protocols.
Financial costs of monitoring
According to the American Management Association, almost half (48%) of the American companies surveyed use video monitoring to counter theft, violence, and sabotage. Only 7% use video surveillance to track employees' on-the-job performance. Most employers notify employees of anti-theft video surveillance (78%) and performance-related video monitoring (89%) (retrieved from the article The Latest on Workplace Monitoring and Surveillance). In an article in Labour Economics, it has been argued that forbidding employers to track employees' on-the-job performance can make economic sense according to efficiency wage theory, while surveillance to prevent illegal activities should be allowed.
An indirect way that companies can be affected financially through employee monitoring is that they can be sure they are billing clients correctly. According to "Business 2 Community", inaccurately billing clients is always possible because of human error. Such inaccuracies can cause disputes between a company and a client, which could eventually lead to the client terminating its business with the company. This sort of termination will hurt not only the company's revenue stream but also its reputation with other clients or potential clients. The suggested solution to this problem is time-tracking software that monitors the number of hours a client spends with an employee.
See also
Abusive supervision
Computer surveillance in the workplace
Counterproductive work behavior
Mass surveillance
Occupational health psychology
Right to privacy
Surveillance
Workplace privacy
Workplace deviance
Workplace incivility
Workplace health surveillance
References
Surveillance
Privacy
Ethically disputed working conditions
Workplace
Employee relations |
28203093 | https://en.wikipedia.org/wiki/Grum%20botnet | Grum botnet | The Grum botnet, also known by its alias Tedroo and Reddyb, was a botnet mostly involved in sending pharmaceutical spam e-mails. Once the world's largest botnet, Grum can be traced back to as early as 2008. At the time of its shutdown in July 2012, Grum was reportedly the world's 3rd largest botnet, responsible for 18% of worldwide spam traffic.
Grum relies on two types of control servers for its operation. One type is used to push configuration updates to the infected computers, and the other is used to tell the botnet what spam emails to send.
In July 2010, the Grum botnet consisted of an estimated 560,000–840,000 computers infected with the Grum rootkit. The botnet alone delivered about 39.9 billion spam messages in March 2010, equating to approximately 26% of the total global spam volume, temporarily making it the world's then-largest botnet. Late in 2010, the botnet seemed to be growing, as its output increased roughly by 51% in comparison to its output in 2009 and early 2010.
The botnet was controlled through a panel written in PHP.
Botnet takedown
In July 2012, a malware intelligence company published an analysis of the botnet's command and control servers located in the Netherlands, Panama, and Russia. The Dutch colocation provider/ISP reportedly seized two secondary servers responsible for sending spam instructions soon after their existence was made public. Within one day, the Panamanian ISP hosting one of Grum's primary servers followed suit and shut down its server. The cybercriminals behind Grum quickly responded by sending instructions through six newly established servers in Ukraine. FireEye worked with Spamhaus, CERT-GIB, and an anonymous researcher to shut down the remaining six C&C servers, officially knocking down the botnet.
Grum botnet zombie clean-up
A sinkhole was run on some of the former IP addresses of the Grum botnet's C&C servers. A feed from the sinkhole was processed by both Shadowserver and Abusix to inform the point of contact at any ISP with infected IP addresses. ISPs are asked to contact their customers about the infections to have the malware cleaned up. Shadowserver.org informs the users of its service once per day, and Abusix sends out an X-ARF (extended Abuse Reporting Format) report every hour.
See also
Botnet
Malware
E-mail spam
Internet crime
Internet security
References
Internet security
Multi-agent systems
Distributed computing projects
Spamming
Botnets |
44409492 | https://en.wikipedia.org/wiki/Realmac%20Software | Realmac Software | Realmac Software is an independent software company based in Brighton, England. Dan Counsell founded the company in November 2002 and serves as its director.
History
In November 2002, Dan Counsell founded Realmac Software. The company released RapidWeaver, a template-based website editor, in 2004. Realmac Software acquired EventBox, a social media app, from its developers, The Cosmic Machine, in October 2009. Realmac Software released Clear, a to-do list app, for iOS in January 2012 and for Mac operating systems in November 2012. The app reached the number-two spot on the Apple Mac App Store, behind Apple's Mountain Lion operating system. The app was also listed on FierceDeveloper's list of "Top Apps" in February 2012. FierceDeveloper named Realmac Software a "2012 Rising Star of Mobile Development" that November.
The company released Analog, a photo-manipulation app, in May 2013. The following month, Realmac Software released Ember, an app designed to capture and organize screenshots. The company announced the release of Typed, an OS X app for writing in Markdown, in July 2014. That October, Clear was ranked 12th on Business Insider's list of "The World's Greatest Apps."
Apps
Clear
Analog
Ember
RapidWeaver
Courier
Squash
References
External links
Typed.com
Typed: A Better Blogging Platform
2002 establishments in England
Companies based in Brighton and Hove
Software companies established in 2002
Software companies of the United Kingdom |
1247858 | https://en.wikipedia.org/wiki/Write%20Anywhere%20File%20Layout | Write Anywhere File Layout | The Write Anywhere File Layout (WAFL) is a proprietary file system that supports large, high-performance RAID arrays, quick restarts without lengthy consistency checks in the event of a crash or power failure, and growing the filesystems size quickly. It was designed by NetApp for use in its storage appliances like NetApp FAS, AFF, Cloud Volumes ONTAP and ONTAP Select.
Its author claims that WAFL is not a file system, although it includes one. WAFL tracks changes similarly to journaling file systems, as logs (known as NVLOGs) kept in a dedicated non-volatile memory device, referred to as NVRAM or NVMEM. WAFL provides mechanisms that enable a variety of file systems and technologies that want to access disk blocks.
Design
WAFL stores metadata, as well as data, in files; metadata, such as inodes and block maps indicating which blocks in the volume are allocated, are not stored in fixed locations in the file system. The top-level file in a volume is the inode file, which contains the inodes for all other files; the inode for the inode file itself, called the root inode, is stored in a block with a fixed location. An inode for a sufficiently small file contains the file's contents; otherwise, it contains a list of pointers to file data blocks or a list of pointers to indirect blocks containing lists of pointers to file data blocks, and so forth, with as many layers of indirect blocks as are necessary, forming a tree of blocks. All data and metadata blocks in the file system, other than the block containing the root inode, are stored in files in the file system. The root inode can thus be used to locate all of the blocks of all files other than the inode file.
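As a rough illustration of this tree-of-blocks layout (not NetApp code; all names are invented for the example, and only one level of indirection is modelled), a file read can be sketched in Python as follows:

class Inode:
    """Simplified WAFL-style inode."""
    def __init__(self, inline_data=None, block_ptrs=None, indirect=False):
        self.inline_data = inline_data      # contents of a sufficiently small file
        self.block_ptrs = block_ptrs or []  # pointers to data or indirect blocks
        self.indirect = indirect            # True if pointers name indirect blocks

def read_file(disk, inode):
    """disk maps block number -> block contents (bytes, or pointer lists)."""
    if inode.inline_data is not None:       # small file: contents live in the inode
        return inode.inline_data
    if not inode.indirect:                  # medium file: direct pointers to data
        return b"".join(disk[p] for p in inode.block_ptrs)
    # Larger file: each pointer names an indirect block holding data-block numbers.
    return b"".join(disk[q] for p in inode.block_ptrs for q in disk[p])

# Because the inode file's own inode (the root inode) sits at a fixed location,
# walking pointers from it reaches every inode, and hence every block, on disk.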
Main memory is used as a page cache for blocks from files. When a change is made to a block of a file, the copy in the page cache is updated and marked dirty, and the difference is logged in non-volatile memory in a log called the NVLOG. If the dirty block in the page cache is to be written to permanent storage, it is not rewritten to the block from which it was read; instead, a new block is allocated on permanent storage, the contents of the block are written to the new location, and the inode or indirect block that pointed to the block in question is updated in main memory. If the block containing the inode, or the indirect block, is to be written to permanent storage, it is also written to a new location, rather than being overwritten at its previous position. This is what the "Write Anywhere" in "Write Anywhere File Layout" refers to.
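The write path can be sketched in the same hedged spirit (invented names, not the real on-disk logic): an update never touches the old block; it allocates a new one and repoints the in-memory parent, which thereby becomes dirty itself.

def write_anywhere(disk, free_list, parent_ptrs, index, new_contents):
    """Update the block that parent_ptrs[index] points to, without overwriting it."""
    new_blkno = free_list.pop()       # allocate a fresh block on permanent storage
    disk[new_blkno] = new_contents    # write the updated contents to the new place
    old_blkno = parent_ptrs[index]
    parent_ptrs[index] = new_blkno    # repoint the inode/indirect block in memory
    return old_blkno                  # old block stays intact (snapshots may use it)

# The modified parent is now dirty too, so at the next consistency point it is
# likewise written to a new location, and so on, up to the root inode.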
As all blocks, other than the block containing the root inode, are found via the root inode, none of the changes written to permanent storage are visible on permanent storage until the root inode is updated. The root inode is updated by a process called a consistency point, in which all dirty blocks not yet written to permanent storage are written to permanent storage, and a new root inode is written out, pointing to the blocks in the new version of the inode file. At that point, all of the changes to the file system are visible on permanent storage, using the new root inode. The NVLOG entries for changes that are now visible are discarded to make room for log entries for subsequent changes. Consistency points are performed periodically or if the non-volatile memory is close to being full of log entries.
If the server crashes before all changes to a file system have been made visible in a consistency point, the changes that have not been made visible are still in the NVLOG; when the server reboots, it replays all entries in the NVLOG, again making the changes recorded in the NVLOG, so that they will not be lost.
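Putting the pieces together, the consistency-point and replay logic described in the last three paragraphs can be condensed into a toy model (hedged Python; the structures below are stand-ins for NVRAM, the page cache and the root inode, not ONTAP internals):

nvlog = []             # stands in for battery-backed NVRAM (survives a crash)
page_cache = {}        # volatile dirty blocks: block number -> contents
disk = {}              # permanent storage: (root version, block number) -> contents
root_version = 0       # stands in for the fixed-location root inode

def log_and_apply(blkno, data):
    nvlog.append((blkno, data))   # 1. record the change in non-volatile memory
    page_cache[blkno] = data      # 2. update the volatile page cache

def consistency_point():
    global root_version
    for blkno, data in page_cache.items():
        disk[(root_version + 1, blkno)] = data  # dirty blocks go to new locations
    root_version += 1             # single atomic root-inode update publishes them
    page_cache.clear()
    nvlog.clear()                 # logged changes are durable; discard the entries

def recover_after_crash():
    # Disk still reflects the last completed consistency point; replaying the
    # surviving NVLOG re-applies every change acknowledged since then.
    for blkno, data in nvlog:
        page_cache[blkno] = data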
Features
As discussed above, WAFL does not store data or metadata in pre-determined locations on disk. Instead, it automatically places data using temporal locality, writing metadata alongside user data in a way designed to minimize the number of disk operations required to commit data to stable disk storage using single- and dual-parity RAID.
Placing data based on temporal locality of reference can improve the performance of reading datasets that are read in a similar way to how they were written (e.g., a database record and its associated index entry), but it can also cause fragmentation from the perspective of spatial locality of reference. On spinning HDDs this does not adversely affect files that are sequentially written, randomly read, or subsequently read using the same temporal pattern, but it does affect sequential-read-after-random-write access patterns, because a magnetic head can only be in one position at a time when reading data from the platter. Fragmentation has no such effect on SSD drives.
Releases of ONTAP since 7.3.1 have included a number of techniques to optimize spatial data layout, such as the reallocate command to perform scheduled and manual defragmentation, and the Write after Reading volume option, which detects and automatically corrects suboptimal data access patterns caused by spatial fragmentation. Releases of ONTAP since 8.1.1 include other techniques to automatically optimize contiguous free space within the file system, which also helps to maintain optimal data layouts for most data access patterns. Before 7G, the wafl scan reallocate command had to be invoked from an advanced privilege level and could not be scheduled. Releases of ONTAP since 9.1 have included a number of techniques to optimize SSD usage, such as Inline Data Compaction (in 9.1); FabricPool functionality, starting with ONTAP 9.2, for automatic tiering of cold data on SSD aggregates to slower S3 storage and back if needed; and Cross Volume Deduplication within an aggregate, with a maximum of 800 TiB per aggregate.
Snapshots
WAFL supports snapshots, which are read-only copies of a file system. Snapshots are created by performing the same operations that are performed in a consistency point, except that, instead of updating the root inode corresponding to the current state of the file system, a copy of the root inode is saved. As all data and metadata in a file system can be found from the root inode, all data and metadata in a file system, as of the time when the snapshot is created, can be found from the snapshot's copy of the root inode. No other data needs to be copied to create a snapshot.
Blocks are allocated when written using a block map, which keeps track of which blocks are in use and which blocks are free. An entry in the block map contains a bit indicating whether the block is in use in the current version of the file system and several bits, one per snapshot, indicating whether the block is in use in the snapshot. This ensures that data in a snapshot is not overwritten until the snapshot is deleted. Using the block map, all new writes and rewrites go to new, empty blocks; WAFL merely reports the rewrite as successful, while no in-place rewrite actually occurs. This approach is called the redirect-on-write (ROW) technique. ROW is much faster for rewrite operations than copy-on-write, where an old data block that is about to be rewritten in place but is captured in a snapshot must first be copied into space allocated for the snapshot reserve in order to preserve the original data; this generates additional copy operations whenever the system receives rewrites to such a block.
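The per-snapshot bits in the block map can be modelled as a small bitmask, as in this illustrative Python sketch (the bit layout is invented for the example; the real on-disk format differs): a block becomes reusable only when neither the active file system nor any snapshot still references it.

ACTIVE = 1  # bit 0: block is in use by the current (active) file system

def snap_bit(n):
    return 1 << n          # bits 1..N: block is in use by snapshot n

block_map = {}             # block number -> bitmask of referencing views

def take_snapshot(snap_id):
    # A snapshot saves a copy of the root inode; in the block map, every block
    # the active file system uses is now also pinned by the snapshot's bit.
    for blkno, bits in block_map.items():
        if bits & ACTIVE:
            block_map[blkno] = bits | snap_bit(snap_id)

def release_from_active(blkno):
    block_map[blkno] &= ~ACTIVE   # stays allocated while any snapshot bit is set

def is_free(blkno):
    return block_map.get(blkno, 0) == 0   # reusable once no view references it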
Snapshots provide online backups that can be accessed quickly, through special hidden directories in the file system, allowing users to recover files that have been accidentally deleted or modified.
NetApp's Data ONTAP Release 7G operating system supports a read-write snapshot called FlexClone. Snapshots are the basis for technologies like SnapMirror, SnapVault and Online Volume Move, while features like FlexClone, SnapLock and SnapRestore are snapshot-like technologies that leverage WAFL capabilities and properties, such as manipulations of inodes. Starting with ONTAP 9.4, the maximum number of snapshots supported for each FlexVol is 1024; in previous versions, the maximum was 255.
Starting with ONTAP 9.5, snapshot-sharing functionality was added to run deduplication scans across the active file system and snapshots, so that deduplication savings scale with the number of snapshots. Before 9.5, non-deduplicated data locked in a snapshot could not be used by the deduplication process, which ran only on the active file system.
File and directory model
An important feature of WAFL is its support for both a Unix-style file and directory model for NFS clients and a Microsoft Windows-style file and directory model for SMB clients. WAFL also supports both security models, including a mode where different files on the same volume can have different security attributes attached to them. Unix can use either access control lists (ACLs) or a simple bitmask, whereas the more recent Windows model is based on access control lists. These two features make it possible to write a file to an SMB type of networked filesystem and access it later via NFS from a Unix workstation. Alongside ordinary files, WAFL can contain file containers called LUNs, which carry the special attributes required for block devices, such as a LUN serial number, and which can be accessed using the SAN protocols of the ONTAP OS.
FlexVol
Each Flexible Volume (FlexVol) is a separate WAFL file system, located on an aggregate and distributed across all disks in the aggregate. Each aggregate can contain, and usually has, multiple FlexVol volumes. During data optimization, which includes the "Tetris" process and finishes with consistency points (see Nonvolatile memory), ONTAP distributes the data blocks of each FlexVol volume as evenly as possible across all disks in the aggregate, so each FlexVol can potentially use the full performance of all the data disks in the aggregate. With this even-distribution approach, performance throttling for a FlexVol can be done dynamically with storage QoS: dedicated aggregates or RAID groups per FlexVol are not required to guarantee performance, and unused performance remains available to any FlexVol volume that needs it. Each FlexVol can be configured as thick- or thin-provisioned and can be changed between the two on the fly at any time. Block device access with storage area network (SAN) protocols such as iSCSI, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE) is done with LUN emulation, similar to the loop device technique, on top of a FlexVol volume; thus each LUN on the WAFL file system appears as a file, yet has the additional properties required of a block device. LUNs can also be configured as thick- or thin-provisioned and changed later on the fly. Due to the WAFL architecture, FlexVols and LUNs can increase or decrease their configured space on the fly; if a FlexVol contains data, its size cannot be decreased below the space already used. Although the size of a LUN containing data can be decreased on the WAFL file system, ONTAP has no knowledge of the upper-level block structure because of the SAN architecture, so shrinking could truncate data and damage the file system on that LUN; the host therefore needs to migrate the blocks containing data within the new LUN boundary first to prevent data loss. Each FlexVol can have its own QoS, FlashPool, FlashCache or FabricPool policies.
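The thick-versus-thin distinction above comes down to when aggregate space is charged, as in this deliberately simplified accounting sketch (hypothetical classes, not the ONTAP object model): thick volumes reserve their full size at creation, while thin volumes consume aggregate space only as data is actually written.

class Aggregate:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.charged = 0                      # blocks already committed

    def charge(self, nblocks):
        assert self.charged + nblocks <= self.size, "aggregate out of space"
        self.charged += nblocks

class FlexVol:
    def __init__(self, aggr, size_blocks, thin):
        self.aggr, self.size, self.thin, self.used = aggr, size_blocks, thin, 0
        if not thin:
            aggr.charge(size_blocks)          # thick: full size reserved up front

    def write(self, nblocks):
        assert self.used + nblocks <= self.size, "volume full"
        if self.thin:
            self.aggr.charge(nblocks)         # thin: charged only as data lands
        self.used += nblocks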
If two FlexVol volumes are created on two aggregates owned by two different controllers, and the system administrator needs to use space from these volumes through a NAS protocol, they would create two file shares, one on each volume. In this case, the administrator would most probably even create different IP addresses, each used to access one dedicated file share. Each volume would have a single write affinity ("waffinity"), and there would be two buckets of space. Even if the two volumes reside on a single controller, and for example on a single aggregate (the second aggregate, if it exists, going unused in this case), and both volumes are accessed through a single IP address, there will still be two write affinities, one per volume, and always two separate buckets of space. Therefore, the more volumes you have, the more write affinities you have (better parallelization and thus better CPU utilization), but you then have multiple volumes, and thus multiple buckets of space and multiple file shares.
Plexes
Similar to RAID 1, plexes in ONTAP systems can keep mirrored data in two places, but while a conventional RAID 1 must exist within the bounds of one storage system, two plexes can be distributed between two storage systems. Each aggregate consists of one or two plexes. Conventional HA storage systems have only one plex for each aggregate, while SyncMirror local or MetroCluster configurations can have two plexes for each aggregate. Each plex, in turn, includes underlying storage space from one or more NetApp RAID groups, or from LUNs of third-party storage systems (see FlexArray), combined in a single plex similarly to RAID 0. If an aggregate consists of two plexes, one plex is considered the master and the second the slave; a slave must consist of exactly the same RAID configuration and drives. For example, if an aggregate consists of two plexes where the master plex has 21 data and 3 parity 1.8 TB SAS drives in RAID-TEC, then the slave plex must also consist of 21 data and 3 parity 1.8 TB SAS drives in RAID-TEC. As a second example, if the master plex consists of one RAID-TEC group with 17 data and 3 parity 1.8 TB SAS drives, and a second RAID-DP group with 2 data and 2 parity 960 GB SSDs, then the slave plex must have the same configuration: one RAID-TEC group with 17 data and 3 parity 1.8 TB SAS drives, and a second RAID-DP group with 2 data and 2 parity 960 GB SSDs.
MetroCluster configurations use SyncMirror technology for synchronous data replication. There are two SyncMirror options, MetroCluster and Local SyncMirror, both using the same plex technique for synchronous replication of data between two plexes. Local SyncMirror creates both plexes within a single controller and is often used as additional protection against the failure of an entire disk shelf in a storage system. MetroCluster allows data to be replicated between two storage systems; each storage system can consist of one controller or be configured as an HA pair with two controllers. In a single HA pair, it is possible to have the two controllers in separate chassis at a distance of tens of meters from each other, while in a MetroCluster configuration the distance can be up to 300 km.
Nonvolatile memory
Like many competitors, NetApp ONTAP systems use memory as a much faster medium for accepting and caching data from hosts and, most importantly, for optimizing data before writes, which greatly improves the performance of such storage systems. While competitors widely use non-volatile random-access memory (NVRAM) both for write caching and for data optimization, in order to preserve data during unexpected events like a reboot, NetApp ONTAP systems use ordinary random-access memory (RAM) for data optimization and dedicated NVRAM or NVDIMM only for logging the initial data, unchanged as it arrives from hosts, similarly to transaction logging in relational databases. In case of disaster, RAM is naturally cleared after a reboot, while the data stored in non-volatile memory in the form of logs, called NVLOGs, survives the reboot and is used to restore consistency. All changes and optimizations in ONTAP systems are done only in RAM, which helps to reduce the size of non-volatile memory needed. Data from hosts is structured in a Tetris-like manner, optimized and prepared in a few stages (i.e., WAFL and RAID) to be written to the underlying disks of the RAID groups in the aggregate where the data will be stored. After optimization, the data is written to disks sequentially as part of a Consistency Point (CP) transaction. Data written to aggregates already contains the necessary WAFL metadata and RAID parity, so no additional read-from-data-disks, parity-calculation, or write-to-parity-disks operations occur, as they would with traditional RAID-6 and RAID-4 groups. A CP first creates a system snapshot on the aggregate where the data is to be written; then the optimized and prepared data from RAM is written sequentially to the aggregate as a single transaction. If it fails, for instance on a sudden reboot, the whole transaction fails, which allows the WAFL file system always to be consistent. After a successful CP transaction, a new active file system point is propagated and the corresponding NVLOGs are cleared. All data is always written to a new place and no overwrites occur; data blocks deleted by hosts are marked as free so they can be reused in later CP cycles, so the system does not run out of space under WAFL's always-write-to-a-new-place policy. In HA storage systems, only the NVLOGs are replicated synchronously between the two controllers to provide failover capability, which helps to reduce overall memory-protection overhead. In a storage system with two controllers in an HA configuration, or in a MetroCluster with one controller on each site, each of the two controllers divides its non-volatile memory into two pieces: local and partner. In a MetroCluster configuration with four nodes, each non-volatile memory is divided into the following pieces: local, local partner's, and remote partner's.
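The HA behaviour described above, in which only the NVLOGs rather than the optimized RAM contents are mirrored, can be sketched as follows (hedged Python; the interconnect object is an invented placeholder for the real HA links, not a NetApp API):

class Interconnect:
    """Placeholder for the HA interconnect carrying NVLOG replication traffic."""
    def __init__(self, partner_nvlog):
        self.partner_nvlog = partner_nvlog

    def send(self, entry):
        self.partner_nvlog.append(entry)   # synchronous copy into partner NVRAM

local_nvlog = []      # this controller's "local" NVRAM partition
partner_copy = []     # the partner's mirror of this controller's log
interconnect = Interconnect(partner_copy)

def accept_write(entry):
    local_nvlog.append(entry)     # log the unmodified host data locally
    interconnect.send(entry)      # mirror the log entry to the HA partner
    return "acknowledged"         # only now is the write acknowledged to the host
    # All optimization then happens in volatile RAM before the next consistency
    # point; on failover, the surviving partner replays partner_copy.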
Starting with the All-Flash FAS A800 system, NetApp replaced the NVRAM PCI module with NVDIMMs connected to the memory bus, increasing the performance.
See also
Comparison of file systems
List of file systems
NetApp
NetApp FAS
ONTAP Operation System, used in NetApp storage systems
Notes
External links
File System Design for an NFS File Server Appliance (PDF)
Method for maintaining consistent states of a file system and for creating user-accessible read-only copies of a file system - October 6, 1998
WAFL |
179661 | https://en.wikipedia.org/wiki/Duke%20Reid | Duke Reid | Arthur "Duke" Reid CD (21 July 1915 – 1 January 1975) was a Jamaican record producer, DJ and label owner.
He ran one of the most popular sound systems of the 1950s, Reid's Sound System, whilst Reid himself was known as The Trojan, possibly named after the British-made trucks used to transport the equipment. In the 1960s, Reid founded the record label Treasure Isle, named after his liquor store, which produced ska and rocksteady music. He was still active in the early 1970s, working with toaster U-Roy. He died in early 1975 after suffering from a severe illness over the previous year.
Biography
Reid was born in Portland, Jamaica. After serving ten years as a Jamaican police officer, Reid left the force to help his wife Lucille run the family business, The Treasure Isle Grocery and Liquor Store at 33 Bond Street in Kingston.
He made his way into the music industry first as a sound system (outdoor mobile discothèque) owner, promoter and disc jockey in 1953. He quickly overtook Tom the Great Sebastian and his sound system as the most popular sound system in Jamaica. Soon he was also sponsor and presenter of a radio show, Treasure Isle Time. A jazz and blues man at heart, Reid chose "My Mother's Eyes" by Tab Smith as his theme tune. Other favourites of his included Fats Domino, a noticeable influence on the early Reid sound.
He began producing recordings in the late 1950s. Early Reid productions were recorded in studios owned by others, but when the family business moved from Pink Lane, Kingston to Bond Street, Reid set up his own studio above the store. He became proprietor of a number of labels, chiefly Treasure Isle and Dutchess (his spelling). Much of his income derived from licensing agreements with companies in the UK, some of which set up specialist Duke Reid labels. He was known to carry his pistols and rifle with him in the studio and would sometimes fire them to celebrate a successful audition.
He dominated the Jamaican music scene of the 1960s, specialising in ska and rocksteady, though his love of American jazz, blues and soul was evident. Reid had several things going for him that helped him rise to prominence. He made a concerted effort to be in the studio as much as possible, something his counterparts did not do. He was known as a perfectionist and had a knack for adding symphonic sounds to his recordings and producing dense arrangements. Furthermore, his records were considerably longer than those being produced by his rivals: his tunes often broke the four-minute barrier, while most ska songs were barely longer than two minutes. The material that Treasure Isle issued exemplified the cool and elegant feel of the rocksteady era. In an interview for Kool 97 FM, Jackie Jackson, Paul Douglas and Radcliffe "Dougie" Bryan were asked about the many recordings they made together as the rhythm section for Treasure Isle Records, and about working with Sonia Pottinger and Duke Reid.
Duke Reid made an impact with his presence at toasting battles, trying to out play other DJs. He was dressed in a long ermine cloak and a gilt crown on his head, with a pair of Colt 45s in cowboy holsters, a cartridge belt strapped across his chest and a loaded shotgun over his shoulder. It was not uncommon for things to get out of hand and it was said that Duke Reid would bring the crowd under control by firing his shotgun in the air.
Reid initially disliked ska for being too simple and having too much focus on drums rather than on guitar. However, he eventually got behind ska and produced numerous hits. Reid's ska productions in the 1960s "epitomized the absolute peak of the style", according to music historian Colin Larkin. He had a long string of hits with performers like Stranger Cole, the Techniques, Justin Hinds and the Dominoes, Alton Ellis and the Flames, the Paragons, the Jamaicans, and the Melodians.
By the 1970s, Reid's poor health and the trend towards Rastafarian influenced roots reggae noticeably reduced the number of releases from Treasure Isle. Reid forbade Rasta lyrics from being recorded in his studio and thus Coxsone Dodd was able to dominate the Jamaican recording industry. Reid maintained his high-profile largely by recording the "toasting" of DJs U-Roy and Dennis Alcapone as well as vaguely Rasta-influenced oddities such as Cynthia Richards' "Aily-I".
At around this time, Reid protégé Justin Hinds noticed his boss appeared unwell and recommended a doctor. Cancer was diagnosed and Reid decided to sell Treasure Isle to Sonia Pottinger, widow of his friend Lenford "Lennie the King" Pottinger and already owner of High Note Records, which was one of the largest record labels on the Island. He remained involved for a while acting as a Magistrate but died in 1975.
Reid was posthumously awarded the Order of Distinction in the rank of Commander on 15 October 2007.
Partial discography
Various Artists – Soul To Soul DJ's Choice – 1973 – Trojan Records (1995)
Various Artists – Gems From Treasure Isle – 1966-1968 – Trojan Records (1982)
Various Artists – Ba Ba Boom Duke Reid – 1967-1972 – Trojan Records (1994)
Various Artists – Duke Reid's Treasure Chest – Heartbeat Records (1992)
Various Artists – Treasure Isle Dub Vol 01
Various Artists – Version Affair Vol 01 – Lagoon (1992)
Various Artists – Version Affair Vol 02 – Lagoon (1993)
Various Artists – Sir Coxsone & Duke Reid in Concert at Forresters Hall – Studio One
Various Artists – The Treasure Isle Story (4-CD box set) – Trojan Records (2017)
References
1915 births
1975 deaths
Jamaican police officers
Jamaican record producers
Jamaican sound systems
Jamaican reggae musicians
People from Portland Parish
Trojan Records artists
Commanders of the Order of Distinction |
19385468 | https://en.wikipedia.org/wiki/Rachana%20Malayalam | Rachana Malayalam | Rachna Malayalam is considered as the first computer operating system in Malayalam language and the first such system in a regional language in India. It was launched on February 16, 2006.
The operating system, developed by four people in Kerala, is based on the Linux platform and is open-source software.
References
2006 software
Language-specific Linux distributions
Malayalam language
Linux distributions |
52572062 | https://en.wikipedia.org/wiki/Legion%20Hacktivist%20Group | Legion Hacktivist Group | Legion is a hacktivist group that has attacked some rich and powerful people in India by hacking their twitter handlers. The group claims to have access to many email servers in India and has the encryption keys used by Indian banks over the Internet.
History
India attacks (2019)
Legion came into the news when it launched a series of attacks, starting with Rahul Gandhi, a member of the Indian National Congress.
Reports say that not only was Rahul Gandhi's Twitter handle hacked, but his mail server was compromised as well. The very next day, the INC's Twitter handle was also hacked and used to tweet irrelevant content. The group then hacked the Twitter handles of Vijay Mallya, Barkha Dutt and Ravish Kumar.
Hacking of the Russian government (2021)
Because the Russian government tried to censor Telegram in 2018-2020, Legion hacked a subdomain belonging to the Federal Antimonopoly Service. They did not cause much harm, but they posted a message to the Russian government stating that "The vandalism and destruction Roskomnadzor has caused to internet privacy and Russian anonymity has made them a target of Legion." The text document was removed after 16 hours, but it is still available via the Wayback Machine.
References
Advocacy groups
Hacking (computer security)
Internet-based activism
Internet terminology
2000s neologisms
Culture jamming techniques
Hacker culture
Hacker groups |
37899282 | https://en.wikipedia.org/wiki/Parasoft%20Virtualize | Parasoft Virtualize | Parasoft Virtualize is a service virtualization product that can create, deploy, and manage simulated test environments for software development and software testing purposes. These environments simulate the behavior of dependent resources that are unavailable, difficult to access, or difficult to configure for development or testing. It simulates the behavior of dependent resources such as mainframes, ERP systems, databases, web services, third-party information systems, or other systems that are out of direct developer/tester control.
The product is used in conjunction with hardware/OS virtualization to provide developers and testers with the resources they need to execute their development and testing tasks earlier, faster, or more completely. Its technologies for automating continuous testing are used as part of continuous delivery, continuous integration, and continuous release.
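As an illustration of the general idea, rather than of Parasoft Virtualize's own interface, the sketch below uses only the Python standard library to stand in for an unavailable dependency; the endpoint and payload are invented:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer as the real back end (e.g., a mainframe lookup) would,
        # but with canned data, so tests can run without the real system.
        if self.path == "/accounts/42":
            body = json.dumps({"id": 42, "status": "active"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()

A test suite would then be pointed at localhost:8080 instead of the real dependency, which is the essence of a virtualized test environment.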
Background
In 2002, Parasoft released technology to "create service implementation stubs which emulate critical functionality that cannot be made available for testing". This technology was introduced in Parasoft SOAtest. Since 2002, the technology was extended with "intelligent stubs [that] emulate the behaviour of a running system, allowing the developer to test services in the context of an application's actual behaviour and not on the live running system". In 2009, the technology was extended with "application behavior virtualization," which can "create copies of both applications and back-end systems so a developer can reference such applications or systems when developing software."
The technology was extended and released as a separate product in 2011.
Parasoft created a free community edition in 2017 that allows individual users and small projects to use service virtualization at no cost.
Industry recognition
Parasoft Virtualize was awarded the 2012 Jolt Awards Grand Prize by a panel of Dr. Dobb's Journal-appointed judges. This annual award showcases products that have "jolted" the industry with their significance and made the task of creating software faster, easier, and more efficient. The most recent awards/recognitions received were being named "leader in functional and test automation tools" in Forrester's Functional Test Automation Tools evaluation and "innovation and technology leader" in voke's Service Virtualization Market Mover Array.
See also
Service virtualization
Automated testing
Software testing
Agile software development
Software performance testing
References
Web service development tools
Software testing tools |
153563 | https://en.wikipedia.org/wiki/Scilab | Scilab | Scilab is a free and open-source, cross-platform numerical computational package and a high-level, numerically oriented programming language. It can be used for signal processing, statistical analysis, image enhancement, fluid dynamics simulations, numerical optimization, and modeling, simulation of explicit and implicit dynamical systems and (if the corresponding toolbox is installed) symbolic manipulations.
Scilab is one of the two major open-source alternatives to MATLAB, the other one being GNU Octave. Scilab puts less emphasis on syntactic compatibility with MATLAB than Octave does, but it is similar enough that some authors suggest that it is easy to transfer skills between the two systems.
Introduction
Scilab is a high-level, numerically oriented programming language. The language provides an interpreted programming environment, with matrices as the main data type. By using matrix-based computation, dynamic typing, and automatic memory management, many numerical problems may be expressed in a reduced number of code lines, as compared to similar solutions using traditional languages, such as Fortran, C, or C++. This allows users to rapidly construct models for a range of mathematical problems. While the language provides simple matrix operations such as multiplication, the Scilab package also provides a library of high-level operations such as correlation and complex multidimensional arithmetic.
Scilab also includes a free package called Xcos for modeling and simulation of explicit and implicit dynamical systems, including both continuous and discrete sub-systems. Xcos is the open source equivalent to Simulink from the MathWorks.
As the syntax of Scilab is similar to MATLAB, Scilab includes a source code translator for assisting the conversion of code from MATLAB to Scilab. Scilab is available free of cost under an open source license. Due to the open source nature of the software, some user contributions have been integrated into the main program.
Syntax
Scilab syntax is largely based on the MATLAB language. The simplest way to execute Scilab code is to type it in at the prompt, -->, in the graphical command window. In this way, Scilab can be used as an interactive mathematical shell.
Hello World! in Scilab:
disp('Hello World');
Plotting a 3D surface function:
// A simple plot of z = f(x,y)
t=[0:0.3:2*%pi]'; // column vector of sample points from 0 to 2*pi (%pi is the built-in constant)
z=sin(t)*cos(t'); // outer product of the sin and cos vectors gives the surface heights
plot3d(t,t',z) // draw the surface over the t-by-t grid
Toolboxes
Scilab has many contributed toolboxes for different tasks, such as
Scilab Image Processing Toolbox (SIP) and its variants (such as SIVP)
Scilab Wavelet Toolbox
Scilab Java and .NET Module
Scilab Remote Access Module
More are available on ATOMS Portal or the Scilab forge.
History
Scilab was created in 1990 by researchers from INRIA and École nationale des ponts et chaussées (ENPC). It was initially named Ψlab (Psilab). The Scilab Consortium was formed in May 2003 to broaden contributions and promote Scilab as worldwide reference software in academia and industry. In July 2008, in order to improve the technology transfer, the Scilab Consortium joined the Digiteo Foundation.
Scilab 5.1, the first release compiled for Mac, was available in early 2009, and supported Mac OS X 10.5, a.k.a. Leopard. Thus, OSX 10.4, Tiger, was never supported except by porting from sources. Linux and Windows builds had been released since the beginning, with Solaris support dropped with version 3.1.1, and HP-UX dropped with version 4.1.2 after spotty support.
In June 2010, the Consortium announced the creation of Scilab Enterprises. Scilab Enterprises develops and markets, directly or through an international network of affiliated services providers, a comprehensive set of services for Scilab users. Scilab Enterprises also develops and maintains the Scilab software. The ultimate goal of Scilab Enterprises is to help make the use of Scilab more effective and easy.
In February 2017 Scilab 6.0.0 was released which leveraged the latest C++ standards and lifted memory allocation limitations.
Since July 2012, Scilab has been developed and published by Scilab Enterprises, and in early 2017 Scilab Enterprises was acquired by Virtual Prototyping pioneer ESI Group.
Scilab Cloud App & Scilab Cloud API
Since 2016 Scilab can be embedded in a browser and be called via an interface written in Scilab or an API.
This new deployment method has the notable advantages of masking code and data as well as providing large computational power.
See also
SageMath
List of numerical-analysis software
Comparison of numerical-analysis software
SimulationX
References
Further reading
External links
Scilab website
Array programming languages
Free educational software
Free mathematics software
Free software programmed in Fortran
Numerical analysis software for Linux
Numerical analysis software for MacOS
Numerical analysis software for Windows
Numerical programming languages
Science software that uses GTK |
6490067 | https://en.wikipedia.org/wiki/Fleischhacker | Fleischhacker | Fleischhacker is a German surname, literally meaning "Meat Chopper." Notable people with the surname include:
Hans Fleischhacker (1912–1992), German anthropologist
Michael Fleischhacker (born 1969), Austrian journalist
See also
Fleishhacker
Fleischacker
German-language surnames |
7754370 | https://en.wikipedia.org/wiki/History%20of%20artificial%20life | History of artificial life | The idea of human artifacts being given life has fascinated humankind for at least 3000 years. As seen in tales ranging from Pygmalion to Frankenstein, humanity has long been intrigued by the concept of artificial life.
Pre-computer
The earliest examples of artificial life involve sophisticated automata constructed using pneumatics, mechanics, and/or hydraulics. The first automata were conceived during the third and second centuries BC, and sophisticated mechanical and hydraulic solutions were later demonstrated in the works of Hero of Alexandria. Many of his notable designs were included in his book Pneumatics, which was used for constructing machines until early modern times. In 1490, Leonardo da Vinci also constructed an armored knight, which is considered the first humanoid robot in Western civilization.
Other early famous examples include al-Jazari's humanoid robots. This Arabic inventor once constructed a band of automata, which can be commanded to play different pieces of music. There is also the case of Jacques de Vaucanson's artificial duck exhibited in 1735, which had thousands of moving parts and one of the first to mimic a biological system. The duck could reportedly eat and digest, drink, quack, and splash in a pool. It was exhibited all over Europe until it fell into disrepair.
In the late 1600s, following René Descartes' claims that animals could be understood as purely physical machines, there was increasing interest in the question of whether a machine could be designed that, like an animal, could generate offspring (a self-replicating machine). After the climax of the British Industrial Revolution in the early 1800s, and the publication of Charles Darwin's On The Origin of Species in 1859, various writers in the late 1800s explored the idea that it might be possible to build machines that could not only self-reproduce, but also evolve and become increasingly intelligent.
However, it wasn't until the invention of cheap computing power that artificial life as a legitimate science began in earnest, steeped more in the theoretical and computational than the mechanical and mythological.
1950s–1970s
One of the earliest thinkers of the modern age to postulate the potentials of artificial life, separate from artificial intelligence, was math and computer prodigy John von Neumann. At the Hixon Symposium, hosted by Linus Pauling in Pasadena, California in the late 1940s, von Neumann delivered a lecture titled "The General and Logical Theory of Automata." He defined an "automaton" as any machine whose behavior proceeded logically from step to step by combining information from the environment and its own programming, and said that natural organisms would in the end be found to follow similar simple rules. He also spoke about the idea of self-replicating machines. He postulated a machine – a kinematic automaton – made up of a control computer, a construction arm, and a long series of instructions, floating in a lake of parts. By following the instructions that were part of its own body, it could create an identical machine. He followed this idea by creating (with Stanislaw Ulam) a purely logic-based automaton, not requiring a physical body but based on the changing states of the cells in an infinite grid – the first cellular automaton. It was extraordinarily complicated compared to later CAs, having hundreds of thousands of cells which could each exist in one of twenty-nine states, but von Neumann felt he needed the complexity in order for it to function not just as a self-replicating "machine", but also as a universal computer as defined by Alan Turing. This "universal constructor" read from a tape of instructions and wrote out a series of cells that could then be made active to leave a fully functional copy of the original machine and its tape. Von Neumann worked on his automata theory intensively right up to his death, and considered it his most important work.
Homer Jacobson illustrated basic self-replication in the 1950s with a model train set – a seed "organism" consisting of a "head" and "tail" boxcar could use the simple rules of the system to consistently create new "organisms" identical to itself, so long as there was a random pool of new boxcars to draw from.
Edward F. Moore proposed "Artificial Living Plants", which would be floating factories which could create copies of themselves. They could be programmed to perform some function (extracting fresh water, harvesting minerals from seawater) for an investment that would be relatively small compared to the huge returns from the exponentially growing numbers of factories. Freeman Dyson also studied the idea, envisioning self-replicating machines sent to explore and exploit other planets and moons, and a NASA group called the Self-Replicating Systems Concept Team performed a 1980 study on the feasibility of a self-building lunar factory.
University of Cambridge professor John Horton Conway invented the most famous cellular automaton in the 1960s. He called it the Game of Life, and publicized it through Martin Gardner's column in Scientific American magazine.
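The rules of the Game of Life are simple enough to state in a few lines of code. The following Python sketch, an illustration rather than any particular historical implementation, advances a small wrapped grid by one generation:

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around at the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A live cell survives with two or three neighbours;
            # a dead cell becomes live with exactly three.
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1  # a horizontal "blinker"
grid = step(grid)  # the blinker is now vertical; another step flips it back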
1970s–1980s
Philosophy scholar Arthur Burks, who had worked with von Neumann (and indeed, organized his papers after Neumann's death), headed the Logic of Computers Group at the University of Michigan. He brought the overlooked views of 19th century American thinker Charles Sanders Peirce into the modern age. Peirce was a strong believer that all of nature's workings were based on logic (though not always deductive logic). The Michigan group was one of the few groups still interested in alife and CAs in the early 1970s; one of its students, Tommaso Toffoli argued in his PhD thesis that the field was important because its results explain the simple rules that underlay complex effects in nature. Toffoli later provided a key proof that CAs were reversible, just as the true universe is considered to be.
Christopher Langton was an unconventional researcher, with an undistinguished academic career that led him to a job programming DEC mainframes for a hospital. He became enthralled by Conway's Game of Life, and began pursuing the idea that the computer could emulate living creatures. After years of study (and a near-fatal hang-gliding accident), he began attempting to actualize Von Neumann's CA and the work of Edgar F. Codd, who had simplified Von Neumann's original twenty-nine state monster to one with only eight states. He succeeded in creating the first self-replicating computer organism in October 1979, using only an Apple II desktop computer. He entered Burks' graduate program at the Logic of Computers Group in 1982, at the age of 33, and helped to found a new discipline.
Langton's official conference announcement of Artificial Life I was the earliest description of a field which had previously barely existed:
Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.
Microelectronic technology and genetic engineering will soon give us the capability to create new life forms in silico as well as in vitro. This capacity will present humanity with the most far-reaching technical, theoretical and ethical challenges it has ever confronted. The time seems appropriate for a gathering of those involved in attempts to simulate or synthesize aspects of living systems.
Ed Fredkin founded the Information Mechanics Group at MIT, which united Toffoli, Norman Margolus, Gerard Vichniac, and Charles Bennett. This group created a computer especially designed to execute cellular automata, eventually reducing it to the size of a single circuit board. This "cellular automata machine" allowed an explosion of alife research among scientists who could not otherwise afford sophisticated computers.
In 1982, the computer scientist Stephen Wolfram turned his attention to cellular automata. He explored and categorized the types of complexity displayed by one-dimensional CAs, and showed how they applied to natural phenomena such as the patterns of seashells and the nature of plant growth.
Norman Packard, who worked with Wolfram at the Institute for Advanced Study, used CAs to simulate the growth of snowflakes, following very basic rules.
Computer animator Craig Reynolds similarly used three simple rules to create recognizable flocking behaviour in a computer program in 1987 to animate groups of boids. With no top-down programming at all, the boids produced lifelike solutions to evading obstacles placed in their path. Computer animation has continued to be a key commercial driver of alife research as the creators of movies attempt to find more realistic and inexpensive ways to animate natural forms such as plant life, animal movement, hair growth, and complicated organic textures.
J. Doyne Farmer was a key figure in tying artificial life research to the emerging field of complex adaptive systems, working at the Center for Nonlinear Studies (a basic research section of Los Alamos National Laboratory), just as its star chaos theorist Mitchell Feigenbaum was leaving. Farmer and Norman Packard chaired a conference in May 1985 called "Evolution, Games, and Learning", which was to presage many of the topics of later alife conferences.
2000s
On the ecological front, research regarding the evolution of animal cooperative behavior (started by W. D. Hamilton in the 1960s and resulting in theories of kin selection, reciprocity, multilevel selection and cultural group selection) was re-introduced via artificial life by Peter Turchin and Mikhail Burtsev in 2006. Previously, game theory had been utilized in similar investigations; however, that approach was deemed rather limiting in the number of possible strategies it allowed and in its debatable set of payoff rules. The alife model designed here is instead based upon Conway's Game of Life, but with much added complexity (there are over 10^1000 strategies that can potentially emerge). Most significantly, the interacting agents are characterized by external phenotype markers which allow for recognition amongst in-group members. In effect, it is shown that, given the capacity to perceive these markers, agents within the system are able to evolve new group behaviors under minimalistic assumptions. On top of the already known strategies of the bourgeois-hawk-dove game, two novel modes of cooperative attack and defense arise from the simulation.
For the setup, this two-dimensional artificial world is divided into cells, each empty or containing a resource bundle. An empty cell can acquire a resource bundle with a certain probability per unit of time and loses it when an agent consumes the resource. Each agent is plainly constructed with a set of receptors, effectors (the components that govern the agent's behavior), and a neural net which connects the two. In response to the environment, an agent may rest, eat, reproduce by division, move, turn, or attack. All actions expend energy taken from its internal energy store; once that is depleted, the agent dies. Consumption of resources, as well as of other agents after defeating them, yields an increase in the energy store. Reproduction is modeled as asexual, with the offspring receiving half the parental energy. Agents are also equipped with sensory inputs that allow them to detect resources or other members within a given perimeter, in addition to sensing their own level of vitality. As for the phenotype markers, they do not influence behavior but solely function as indicators of 'genetic' similarity. Heredity is achieved by having the relevant information inherited by the offspring, subject to a set rate of mutation.
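A heavily simplified sketch of this style of agent bookkeeping is given below; the class, the numeric costs, and the kin-recognition rule are invented for illustration and are not Burtsev and Turchin's actual model:

import random

class Agent:
    def __init__(self, energy=10.0, marker=(0.0, 0.0)):
        self.energy = energy
        self.marker = marker  # phenotype marker: signals similarity only

    def is_kin(self, other, tolerance=0.5):
        # In-group recognition by marker distance, not by behavior.
        return sum((a - b) ** 2 for a, b in
                   zip(self.marker, other.marker)) < tolerance ** 2

    def act(self, action_cost=1.0):
        self.energy -= action_cost  # every action drains the energy store
        return self.energy > 0      # the agent dies once energy is depleted

    def reproduce(self, mutation_rate=0.05):
        child_marker = tuple(m + random.gauss(0, mutation_rate)
                             for m in self.marker)
        self.energy /= 2            # the offspring receives half the energy
        return Agent(self.energy, child_marker)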
The objective of the investigation is to study how the presence of phenotype markers affects the model's range of evolving cooperative strategies. In addition, as the resource available in this 2D environment is capped, the simulation also serves to determine the effect of environmental carrying capacity on their emergence.
One previously unseen strategy is termed the "raven". These agents leave cells with in-group members, thus avoiding intra-specific competition, and attack out-group members voluntarily. Another strategy, named the "starling", involves the agent sharing cells with in-group members. Despite individuals having smaller energy stores due to resource partitioning, this strategy permits highly effective defense against large invaders via the advantage in numbers. Ecologically speaking, this resembles the mobbing behavior that characterizes many species of small birds when they collectively defend against a predator.
In conclusion, the research claims that the simulated results have important implications for the evolution of territoriality by showing that within the alife framework it is possible to "model not only how one strategy displaces another, but also the very process by which new strategies emerge from a large quantity of possibilities".
Work is also underway to create cellular models of artificial life. Initial work on building a complete biochemical model of cellular behavior is underway as part of a number of different research projects, namely Blue Gene which seeks to understand the mechanisms behind protein folding.
See also
Automaton
Clanking replicator
Cellular automaton
Quantum artificial life
References
External links
Aguilar, W., Santamaría-Bonfil, G., Froese, T., and Gershenson, C. (2014). The past, present, and future of artificial life. Frontiers in Robotics and AI, 1(8). https://dx.doi.org/10.3389/frobt.2014.00008
Artificial life
Artificial life, history of
39148522 | https://en.wikipedia.org/wiki/Smart%20order%20routing | Smart order routing | Smart order routing (SOR) is an automated process of handling orders, aimed at taking the best available opportunity throughout a range of different trading venues.
The increasing number of various trading venues and MTFs has led to a surge in liquidity fragmentation, where the same stock is traded on several different venues, so the price and the available quantity of stock can vary between them. SOR serves to tackle liquidity fragmentation, or even to benefit from it. Smart order routing is performed by smart order routers, systems designed to analyze the state of venues and to place orders in the best available way, relying on defined rules, configurations and algorithms.
History of Smart Order Routing
1980s
The forebears of today's smart order routers appeared in the late 1980s: "In an attempt to lock in the client order flow and free traders up from smaller trades, in order to handle the larger trades, many of the larger broker dealers began to offer a new service called Designated Order Turnaround, or DOT. DOT boxes were the first electronic machines to provide the institutional buy-side with what we now call "direct sponsored access"; they, however, were not very smart yet (they could be directed to only one destination, the New York Stock Exchange)".
By 1988, SuperDOT included "roughly 700 communications lines that carry buy and sell orders."
1990s
It was in the US, in the late 1990s, that the first instances of Smart Order Routers appeared: "Once alternative trading systems (ATSes) started to pop up in U.S. cash equities markets … with the introduction of the U.S. Securities and Exchange Commission’s (SEC’s) Regulation ATS and changes to its order handling rules, smart order routing (SOR) has been a fact of life for global agency broker Investment Technology Group (ITG)."
2000s
As a reaction to the introduction of MiFID (Europe) and Reg NMS (USA), smart order routers proliferated in Europe in 2007–2008, their sole purpose being to capture liquidity on lit venues or to perform an aggressive or a passive split, depending on the market data. Later the SOR systems were enhanced to cope with high-frequency trading, to decrease latency and implement smarter algorithms, as well as to work with dark-pool liquidity.
Here are some US statistics from 2006-2007: "Smart order routing capabilities for options are anonymous and easy to use, and optimizes execution quality with each transaction". "In a study conducted earlier this year in conjunction with Financial Insights, BAS found that about 5% of all equity orders were executed using trading algorithms, with this number expected to increase to 20% by 2007".
Smart order routing may be formulated in terms of an optimization problem which achieves a tradeoff between speed and cost of execution.
Benefits and disadvantages of Smart Order Routing
SOR provides the following benefits:
Simultaneous access to several venues;
Automatic search for the best Price;
A good framework for usage of custom algorithms;
Opportunity to get additional validation, control and statistics;
There are, however, some disadvantages:
Additional latency;
Additional complexity, and, therefore, additional risk of loss/outage;
Transparency of information, concerning your transactions, for the third party;
A brief concept
The idea of Smart Order Routing is to scan the markets and find the best place to execute a customer's order, based on price and liquidity.
Thus, SOR can involve a few stages:
1. Receiving incoming orders through different channels:
An incoming FIX gateway;
An incoming Gateway based on any custom protocol;
A front-End;
2. Processing the orders inside the SOR system, taking into account:
Characteristics of available venues;
Custom algorithms;
Settings/preferences of a certain client;
The state of available markets/market data;
Venue parameters, such as average latency, commission, and rank, can be used to prioritize certain venues. Custom algorithms, like synthetic orders (peg, iceberg, spraying, TWAP), can be used to manage orders automatically, for instance if a specific client has certain routing preferences among several brokers, or certain rules for handling incoming or creating outgoing orders. It is also crucial to track the actual venue situation, like the trading phase, as well as the available opportunities. Thus, any smart order router requires real-time market data from different venues. The market data can be obtained either by connecting directly to the venues' feed handlers, or by using market data providers.
3. Routing the orders to one or several venues according to the decision made at step 2 using:
A FIX gateway;
A custom API gateway;
Routing here does not just imply static routing to a certain venue, but dynamic behavior, with updates of existing orders, creation of new ones, and sweeping to catch newly appeared opportunities.
At a closer look, the structure of the SOR system usually contains:
Client Gateways (to receive incoming orders of the SOR customers);
Market gateways (to send orders to certain exchanges);
The SOR implementation (to keep the SOR logic and custom algos and tackle the clients’ orders);
Feedhandlers (to provide market data from exchanges, for decision-making);
Client front-ends (to provide a GUI for SOR);
Algorithmic trading and SOR
The classic definition of Smart Order Routing is choosing the best prices and order distribution to capture liquidity. "Forwarding orders to the "best" out of a set of alternative venues while taking into account the different attributes of each venue. What is "best" can be evaluated considering different dimensions – either specified by the customer or by the regulatory regime – e.g. price, liquidity, costs, speed and likelihood of execution or any combination of these dimensions".
In some cases, algorithmic trading is rather dedicated to automatic usage of synthetic behavior. "Algorithmic trading manages the "parent" order while a smart order router directs the "child" orders to the desired destinations."
"... slicing a big order into a multiplicity of smaller orders and of timing these orders to minimise market impact via electronic means. Based on mathematical models and considering historical and real-time market data, algorithms determine ex ante, or continuously, the optimum size of the (next) slice and its time of submission to the market. A variety of principles is used for these algorithms, it is aimed at reaching or beating an implicit or explicit benchmark: e.g. a volume weighted average price (VWAP) algorithm targets at slicing and timing orders in a way that the resulting VWAP of its own transactions is close to or better than the VWAP of all transactions of the respective security throughout the trading day or during a specified period of time".
However, smart order routing and algorithmic trading are connected more closely than it seems. Since even smart order routing can be considered the simplest example of a trading algorithm, it is reasonable to say that algorithmic trading is a logical continuation and an extension of smart order routing.
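As a hypothetical illustration of the time-slicing described in the quotation above (the function and its parameters are invented, not taken from any production system), a TWAP-style schedule that splits a parent order into equal child orders over a time window can be computed as follows:

def twap_schedule(total_qty, start, end, slices):
    step = (end - start) / slices
    base, extra = divmod(total_qty, slices)
    schedule = []
    for i in range(slices):
        qty = base + (1 if i < extra else 0)  # spread any remainder evenly
        schedule.append((start + i * step, qty))
    return schedule

# 1,000 shares over a 300-second window in 6 child orders:
for t, qty in twap_schedule(1000, 0.0, 300.0, 6):
    print(f"t={t:6.1f}s  send child order for {qty} shares")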
This is a common example of a simple smart order routing strategy.
Given the initial order book, the SOR strategy will create child orders, that is, orders which aim at completing the initial SOR parent order. These orders can be either aggressive or passive depending on the current context and the SOR algorithm. In this example IOC (immediate or cancel) orders are used:
1) An SOR Buy Day order for a given quantity at a limit price comes in;
2) An aggressive child order is created to grab the opportunity on the preferred venue: Buy IOC;
3) An aggressive child order is created to grab the opportunity on Venue 1: Buy IOC;
4) The remaining part is placed passively on the preferred venue;
5) New sell liquidity appears on Venue 2;
6) The algorithm "sweeps" from the preferred venue to grab the opportunity on Venue 2 with a Buy IOC order;
7) New sell liquidity appears on Venue 1;
8) The algorithm "sweeps" from the preferred venue to grab the opportunity on Venue 1 with a Buy IOC order;
9) The trade happens, and the algorithm terminates because all the intended shares were executed.
As there are latencies involved in constructing and reading from the consolidated order book, child orders may be rejected if the target order was filled before it got there. Therefore, modern smart order routers have callback mechanisms that re-route orders if they are rejected or partially executed.
If more liquidity is needed to execute an order, smart order routers will post day limit orders, relying on probabilistic and/or machine learning models to find the best venues. If the targeting logic supports it, child orders may also be sent to dark venues, although the client will typically have an option to disable this.
More generally, smart order routing algorithms focus on optimizing a tradeoff between execution cost and execution time.
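A minimal Python sketch of price-priority targeting captures the core of this logic; the venue names, quotes, and the fee-free cost model below are invented for illustration:

def route_buy(order_qty, quotes):
    """quotes: list of (venue, ask_price, ask_size) tuples."""
    fills = []
    # Sweep venues from the best (lowest) ask upwards.
    for venue, price, size in sorted(quotes, key=lambda q: q[1]):
        if order_qty <= 0:
            break
        take = min(order_qty, size)
        fills.append((venue, price, take))  # a child IOC order to this venue
        order_qty -= take
    return fills, order_qty  # any remainder could rest as a passive order

book = [("Preferred", 21.5, 300), ("Venue1", 21.6, 200), ("Venue2", 21.4, 100)]
fills, remaining = route_buy(500, book)
# fills: 100 @ 21.4 on Venue2, 300 @ 21.5 on Preferred, 100 @ 21.6 on Venue1

A production router would additionally weigh fees, latency, and fill probability for each venue, which is where the tradeoff between execution cost and execution time mentioned above comes in.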
Cross-Border Routing
Some institutions offer cross-border routing for inter-listed stocks. In this scenario, the SOR targeting logic will use real-time FX rates to determine whether to route to venues in different countries that trade in different currencies. The most common cross-border routers typically route to both Canadian and American venues; however, there are some routers that also factor in European venues while they are open during trading hours.
Development and testing
Providers
BATS
Fidessa
MillenniumIT
Software AG
Testing
There are currently a few companies officially defined as providers of testing and quality assurance for SOR systems:
Allied Testing
Exactpro
Luxoft
References
Electronic trading systems |
10198142 | https://en.wikipedia.org/wiki/Information%20Builders | Information Builders | Information Builders (ibi) founded in 1975 is a privately held software company headquartered in New York City. Information Builders (ibi) provides services in the fields of Business Intelligence, Data Integration and Data Quality solutions.
History
Gerald D. Cohen, who died in 2020, co-founded Information Builders (ibi) in 1975 with Peter Mittelman and Martin B. Slagowitz. Their initial product, FOCUS, was designed to enable people without formal computer programming skills to work with information systems.
Information Builders (ibi) is one of the largest privately held software firms and operates in more than 60 locations. In 2001, it established iWay Software, a wholly owned company that focuses on data integration and service-oriented architecture (SOA).
In October 2020, TIBCO Software agreed to purchase ibi for an undisclosed sum, pending final regulatory approval expected in TIBCO's first fiscal quarter of 2021.
References
Software companies based in New York (state)
Data companies
Data quality companies
Software companies of the United States |
21626779 | https://en.wikipedia.org/wiki/CryptoBuddy | CryptoBuddy | CryptoBuddy is a simple software application for the encryption and compression of computer files to make them safe and secure. The application uses a 64-bit block cipher algorithm for encryption and a proprietary compression algorithm. The CryptoBuddy software is also used as part of the CryptoStick encryption device from Research Triangle Software, Inc. The software was released for public use on June 12, 2002.
References
2002 software
Cryptographic software |
35752024 | https://en.wikipedia.org/wiki/The%20Unknowns | The Unknowns | The Unknowns is a self-proclaimed ethical hacking group that came to attention in May 2012 after exploiting weaknesses in the security of NASA, CIA, White House, the European Space Agency, Harvard University, Renault, the United States Military Joint Pathology Center, the Royal Thai Navy, and several ministries of defense. The group posted their reasons for these attacks on the sites Anonpaste & Pastebin including a link to a compressed file which contained a lot of files obtained from the US Military sites they breached. The Unknowns claim "... our goal was never to harm anyone, we want to make this whole internet world more secured because, simply, it's not at all and we want to help." The group claims to be ethical in their hacking activities, but nonetheless lifted internal documents from their victims, posting them online. They claim this was because they had reported the security holes to many of their victims, but did not receive a response back from any of them. The whole point was to show that these government-run sites have loopholes in their code and anyone can exploit them. The group used methods like advanced SQL injection to gain access to the victim websites. NASA and the ESA have both confirmed the attack. They claimed that the affected systems were taken offline and have since been patched. At the time this was one of the most wanted hacking groups in Europe and also wanted by the FBI, although they refused to tell if they were investigating the hacks.
Members
The team had six core members, including:
Pixiedust, founder, spokesperson, and leader
Mr. P-Teo, programmer
Fabien Léac, a French researcher in computer faults and a white-hat hacker
MrSecurity, a black-hat hacker, programmer and ghostwriter of The Unknowns
NeTRoX, a black-hat hacker, penetration tester and researcher, who joined the team in late 2015 after it reunited
Jail
Zyklon B, who lives in France, was arrested by the French intelligence service on June 24, 2012. He was later released because he was just sixteen years old at the time, with a trial expected to take place in 2014. His life is recounted in a book written by his mother Sophie Léac, L'histoire vraie d'un jeune hacker français (October 2013), or The True Story of a Young French Hacker. A second book is in preparation: Hack! There will be cyberwar!.
Hacked websites and applications
The group has hacked many websites and applications using a series of different attacks, the most notable being SQL injection. Many companies have been affected by the group, and some of the hacks, even of big companies, did not make the media (probably because the multi-country legal investigation was kept secret). However, the most notable hacks by The Unknowns, mostly on government-related websites, did reach the mass media. The group is still active, and the members are still working together as they try to make the internet safer.
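For illustration, the class of flaw behind a classic SQL injection can be sketched with Python's built-in sqlite3 module; the table, data, and queries below are invented examples, not the group's actual exploits:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'swordfish')")

name = "x' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input is pasted into the SQL text, so the OR clause
# becomes part of the query and matches every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

# Safe: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
# leaked == [('swordfish',)], while safe == []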
SQL injection attacks were used on the following:
Asian College of Technology
Bahrain Defense Force
California State University
Christian Mingle
Deutsche Federal Government
European Space Agency
ESET
French Ministry of Defense
Harvard University
Jordanian Yellow Pages
Lawrence Livermore National Laboratory
United States Navy
NASA
Ames Research Center
Glenn Research Center
New7Wonders
Renault
Royal Thai Navy
Sempra Energy
Social Democratic Party of Germany
United Kingdom Ministry of Defense
University of Rhode Island
United States Military
United States Air Force
United States Department of Commerce
United States Department of the Treasury
PayPal; no information was released. The Unknowns contacted PayPal with the exploits they found and received $1,000 as a reward.
However they have used different attacks:
Two United Kingdom police servers were exploited and root access was gained to the systems. Not much is known about this attack.
Abolished
The purpose of The Unknowns was to find security issues in high-profile websites and to get them patched. Information from the hacked sites was released because The Unknowns had attempted to contact all their targets to inform them of the security issues, but did not receive a response from any of the websites targeted. Some data was leaked to force these websites to patch their systems.
After a period of hacking high-profile websites, The Unknowns disbanded in 2012 but reunited in early 2015.
References
List of hacked websites/companies
Hacker groups |
45514583 | https://en.wikipedia.org/wiki/Open%20Trusted%20Technology%20Provider%20Standard | Open Trusted Technology Provider Standard | The Open Trusted Technology Provider Standard (O-TTPS) (Mitigating Maliciously Tainted and Counterfeit Products) is a standard of The Open Group that has also been approved for publication as an Information Technology standard by the International Organization of Standardization and the International Electrotechnical Commission through ISO/IEC JTC 1 and is now also known as ISO/IEC 20243:2015. The standard consists of a set of guidelines, requirements, and recommendations that align with best practices for global supply chain security and the integrity of commercial off-the-shelf (COTS) information and communication technology (ICT) products. It is currently in version 1.1. A Chinese translation has also been published.
Background
The O-TTPS was developed in response to a changing landscape and the increased sophistication of cybersecurity attacks worldwide. The intent is to help providers build products with integrity and to enable their customers to have more confidence in the technology products they buy. Private and public sector organizations rely largely on COTS ICT products to run their operations. These products are often produced globally, with development and manufacturing taking place at different sites in multiple countries. The O-TTPS is designed to mitigate the risk of counterfeit and tainted components and to help assure product integrity and supply chain security throughout the lifecycle of the product.
The Open Group's Trusted Technology Forum (OTTF) is a vendor-neutral international forum that uses a formal consensus based process for collaboration and decision making about the creation of standards and certification programs for information technology, including the O-TTPS. In the forum, ICT providers, integrators and distributors work with organizations and governments to develop standards that specify secure engineering and manufacturing methods along with supply chain security practices.
The Implementation Guide to Leveraging Open Trusted Technology Providers in the Supply Chain provides mapping between The National Institute of Standards and Technology (NIST) Cybersecurity Framework and related organizational practices listed in the O-TTPS. NIST referenced O-TTPS in their NIST Special Publication 800-161 "Supply Chain Risk Management Practices for Federal Information Systems and Organizations" that provides guidance to federal agencies on identifying, assessing, and mitigating ICT supply chain risks at all levels of their organizations.
Purpose
The standard, developed by industry experts within the Forum, specifies organizational practices that provide assurance against maliciously tainted and counterfeit products throughout the COTS ICT product lifecycle. The lifecycle described in the standard encompasses the following phases: design, sourcing, build, fulfillment, distribution, sustainment, and disposal.
Measurement and Certification
Organizations can be certified for their conformance to the standard through the Open Group's Trusted Technology Provider Accreditation Program. Conformance to the standard is assessed by Recognized third party Assessors. Once an organization has been successfully assessed as conforming to the standard then the organization is publicly listed in the Open Group's Accreditation Register. The third party assessment process is governed by the Accreditation Policy and Assessment Procedures.
History
The effort to build the standard began in January 2010 with a meeting organized by The Open Group and including major industry representatives and the United States Department of Defense and NASA. The Open Trusted Technology Forum was formally launched in December 2010 to develop industry standards and enhance the security of global supply chains and the integrity of COTS ICT products.
The first publication of the Forum was a whitepaper describing the overall Trusted Technology Framework in 2010. The whitepaper was broadly focused on overall best practices that good commercial organizations follow while building and delivering their COTS ICT products. That broad focus was narrowed during late 2010 and early 2011 to address the most prominent threats of counterfeit and maliciously tainted products resulting in the O-TTPS which focuses specifically on those threats.
The first version of O-TTPS was published in April 2013. Version 1.1 of the O-TTPS standard was published in July 2014. This version was approved by ISO/IEC in 2015 as ISO/IEC 20243:2015.
The O-TTPS Accreditation Program began in February 2014. IBM was the first company to achieve accreditation for conformance to the standard.
The standard and accreditation program have been mentioned in testimony delivered to the US Congress regarding supply chain risk and cybersecurity. The National Defense Authorization Act for Fiscal Year 2016 Section 888 (Standards For Procurement Of Secure Information Technology And Cyber Security Systems) requires that the United States Secretary of Defense conduct an assessment of O-TTPS or similar public, open technology standards and report to the Committees on Armed Services of the US Senate and the US House of Representatives within a year.
See also
Supply chain security
Counterfeit electronic components
International Organization for Standardization
Commercial off-the-shelf
Information and communications technology
References
External links
http://csrc.nist.gov/scrm/references.html
http://www.afcea.org/committees/cyber/documents/Supplychain.pdf
http://www.networkworld.com/article/2196759/malware-cybercrime/defense-department-wants-secure--global-high-tech-supply-chain.html
http://www.computerworlduk.com/news/security/3343185/the-open-group-previews-o-ttps-security-standard-for-supply-chains/
http://www.opengroup.org/subjectareas/trusted-technology
http://www.infoworld.com/article/2613780/supply-chain-management/supply-chain-2013--stop-playing-whack-a-mole-with-security-threats.html
http://washingtontechnology.com/microsites/2012/sewp-2012/04-program-office-takes-leadership-role.aspx
https://www.dhs.gov/news/2011/01/06/securing-global-supply-chain
http://blogs.ca.com/2013/04/12/the-launch-of-the-open-trusted-technology-provider-standard/?intcmp=searchresultclick&resultnum=1
Open Group standards |
16427921 | https://en.wikipedia.org/wiki/Toad%20Data%20Modeler | Toad Data Modeler | Toad Data Modeler is a database design tool allowing users to visually create, maintain, and document new or existing database systems, and to deploy changes to data structures across different platforms. It is used to construct logical and physical data models, compare and synchronize models, generate complex SQL/DDL, create and modify scripts, and reverse and forward engineer databases and data warehouse systems. Toad's data modelling software is used for database design, maintenance and documentation.
Product History
Toad Data Modeler was previously called "CASE Studio 2" before it was acquired from Charonware by Quest Software in 2006. Quest Software was acquired by Dell on September 28, 2012. On October 31, 2016, Dell finalized the sale of Dell Software to Francisco Partners and Elliott Management, which relaunched on November 1, 2016 as Quest Software.
Features/Usages
Multiple database support - Connect multiple databases natively and simultaneously, including Oracle, SAP, MySQL, SQL Server, PostgreSQL, DB2, Ingres, and Microsoft Access.
Data modelling tool - Create database structures or make changes to existing models automatically and provide documentation on multiple platforms.
Logical and physical modelling - Build complex logical and physical entity relationship models and reverse- and forward-engineer databases.
Reporting - Generate detailed reports on existing database structures.
Model customization - Add logical data to user diagrams to customize user models.
All Toad products typically have 2 releases per year.
Other features
Model Actions (Compare Models, Convert Model, Merge Models, Generate Change Script)
Version Control System (Apache Subversion)
Naming Conventions
Auto Layout
Multiple Workspaces
Scripting and Customization
Automation
Object Gallery
Full Unicode Support
Integration with Toad for Oracle
Related Software
Erwin Data Modeler
Oracle
SAP
MySQL
SQL Server
PostgreSQL
DB2
Ingres
Microsoft Access
See also
Comparison of data modeling tools
Relational Model
Data modeling
RDBMS
References
External links
Programming tools
Desktop database application development tools
Data modeling tools
2000s software
Oracle database tools
Microsoft database software
Sybase
MySQL |
37097410 | https://en.wikipedia.org/wiki/IBM%20Websphere%20Host%20On-Demand | IBM Websphere Host On-Demand | The IBM WebSphere Host On-Demand Server, or HOD as it is commonly known is a Java application that runs on a Server that is deliverable via modern web servers such as the Apache web server. The application allows the end user to access IBM 3270, IBM 5250 and other Virtual terminals using the Telnet protocol whether through a secure or unsecured mode of communication. The product in its present form runs on AIX, UNIX, HP-UX, IBM i, z/OS, Linux, Solaris and Windows Server.
First generation
Delivery of the first web-based Telnet 3270 (TN3270) emulation adaptation is credited to Netscape Navigator, a web browser designed by Netscape Communications Corporation. Telnet 3270, or TN3270, describes either the process of sending and receiving 3270 data streams using the Telnet protocol or the software that emulates a 3270-class terminal communicating by that process. TN3270 allows a 3270 terminal emulator to communicate over a TCP/IP network instead of an SNA network. Standard Telnet clients cannot be used as a substitute for TN3270 clients, as they use fundamentally different techniques for exchanging data. The Netscape versions offered limited functionality and essentially allowed web-based users to access 3270 hosts as well as UNIX, VAX and other host systems via the Netscape Navigator and Netscape Communicator web browsers.
Second generation
IBM Corporation in concert with Netscape Communications Corporation ultimately leveraged the popularity of the web-based host access for use with OS/2 and subsequent dissimilar computing operating systems. The second generation product was known as IBM Host On-Demand Version 1 through version 10.
IBM Websphere Host On-Demand provides access to IBM i (5250), IBM System z (3270), and DEC/UNIX systems through host screen emulation within the web browser. The connections are generally made in a secure Internet environment using TLS or SSL, through the employment of the Java Virtual Machine.
Third generation
After acquiring Rational Software in 2003, IBM rebranded Host On-Demand, from Version 11, as a component of the IBM Rational Host Access Platform. Rational Host On-Demand allows secure Web-to-host terminal emulation and host-access application programming support, with one interface to TN3270E, TN5250, VT52, VT100, VT220, VT420, IBM CICS and FTP server access across multiple software operating systems and computing platforms.
It provides FIPS-compliant, secure connectivity with TLS and SSL technologies and installs on a server, simplifying maintenance, distribution, and upgrades of system software. Host On-Demand also contains the necessary APIs to allow creation of a custom portlet to access host applications from within IBM WebSphere Portal, and additional programming operations from WebSphere Application Server.
References
Java platform software
Java virtual machine
IBM software |
5715923 | https://en.wikipedia.org/wiki/List%20of%20mergers%20and%20acquisitions%20by%20IBM | List of mergers and acquisitions by IBM | IBM has undergone a large number of mergers and acquisitions during a corporate history lasting over a century; the company has also produced a number of spinoffs during that time.
The acquisition date listed is the date of the agreement between IBM and the subject of the acquisition. The value of each acquisition is listed in USD because IBM is based in the United States. If the value of an acquisition is not listed, then it is undisclosed.
Many of the companies listed in this article had subsidiaries of their own, which in turn had subsidiaries of their own; for examples, see Pugh's book Building IBM, page 26.
Precursors 1889–1910
Herman Hollerith initially did business under his own name, as The Hollerith Electric Tabulating System, specialising in punched card data processing equipment. In 1896 he incorporated as the Tabulating Machine Company.
1889 Bundy Manufacturing Company incorporated.
1891 Computing Scale Company incorporated.
1893 Dey Patents Company (soon renamed the Dey Time Register Company) incorporated.
1894 Willard & Frick Manufacturing Company (Rochester, New York) incorporated.
1896
Detroit Automatic Scale Company incorporated.
Hollerith incorporates the Tabulating Machine Company. Will be reincorporated in 1905.
1899 Standard Time Stamp Company acquired by Bundy Manufacturing Company.
1900
International Time Recording Company incorporated, acquiring the time-recording business of the Bundy Manufacturing Company and the Willard & Frick Manufacturing Company (Rochester).
Chicago Time-Register Company acquired by International Time Recording Company.
Dayton Moneyweight Scale Company acquired by Computing Scale Company.
Detroit Automatic Scale Company acquired by Computing Scale Company.
1905 Hollerith reincorporates as The Tabulating Machine Company.
1907 Dey Time Register Company acquired by International Time Recording Company.
1908 Syracuse Time Recorder Company acquired by International Time Recording Company.
Computing-Tabulating-Recording Company, 1911
Since the 1960s or earlier, IBM has described its formation as a merger of three companies: The Tabulating Machine Company (1880s origin in Washington, DC), the International Time Recording Company (ITR; 1900, Endicott), and the Computing Scale Company of America (1901, Dayton, Ohio). However, there was no merger; it was an amalgamation, and an amalgamation of four, not three, companies. The 1911 CTR stock prospectus states that the Bundy Manufacturing Company was also included.
While ITR had acquired Bundy's time-recording business in 1900, Bundy had remained a separate entity, producing an adding machine and other wares.
The Tabulating Machine Company
Computing Scale Corporation
International Time Recording Company
Bundy Manufacturing Company
CTR owned the stock of the four companies; CTR neither produced nor sold any product; the four companies continued to operate, as before, under their own names.
Acquisitions during 1912–1999
1912–1929
1917
American Automatic Scale Company acquired as International Scale Company.
CTR consolidates three already-existing Canadian companies: The Canadian Tabulating Machine Co., Ltd, the International Time Recording Co. of Canada, Ltd., and the Computing Scale Co. of Canada, Ltd., in a new holding company, International Business Machines Co., Ltd.
1921
Pierce Accounting Machine Company (asset purchase).
Ticketograph Company (of Chicago).
1923
Dehomag
1924
CTR was renamed International Business Machines Corporation (IBM).
1930–1949
1930 Automatic Accounting Scale Company.
1932 National Counting Scale Company.
1933 The separate companies were integrated as IBM and the holding-company structure was eliminated.
1933 Electromatic Typewriters Inc. (See: IBM Electromatic typewriter)
1941 Munitions Manufacturing Corporation.
1950–1969
1959 Pierce Wire Recorder Corporation.
1964 Science Research Associates.
1970–1989
1974 CML Satellite Corporation; renamed Satellite Business Systems (SBS).
1984 ROLM
1986 RealCom Communications Corporation.
1990–1999
1993
CGI Informatique (France), bought in 1993, ran independently until 1996 and was then progressively absorbed by IBM, country by country, a process completed in 1999.
1994
Transarc (Transarc Corporation bought by IBM in 1994, became part of IBM proper in 1999 as the IBM Pittsburgh Lab)
1995
Lotus Development Corporation for $3.5 billion.
Information Systems Management Canada (ISM Canada)
K3 Group Ltd.
1996
Wilkerson Group
Tivoli Systems, Inc. for $743 million.
Data Sciences Ltd, prior to 1991 comprising Thorn EMI Software, Datasolve and the Corporate Management Services Division of Thorn EMI, for £95 million.
Object Technology International (OTI) is acquired by IBM
Cyclade Consultants (Netherlands)
Fairway Technologies
Professional Data Management, Inc. / LifePRO
1997
Software Artistry for $200 million.
Unison Software.
Dominion Semiconductor (Manassas, VA) is created as a 50/50 joint venture with Toshiba to produce 64MB and 256MB DRAM chips. Administrative offices are located in Building 131 of the former IBM Federal Systems campus, now primarily owned by Lockheed Martin; the new state-of-the-art fabrication facility was built on adjacent land.
1998
CommQuest Technologies.
DataBeam Corporation, Lexington, KY
Ubique Ltd., Israel
1999
Dascom Technologies (USA), A subsidiary of Dascom Holdings.
Mylex Corporation.
Sequent Computer Systems for $810 million.
Acquisitions since 2000
Number of acquisitions per year:
In 2020 IBM acquired 5 companies
In 2019 IBM acquired 1 company
In 2018 IBM acquired 3 companies
In 2017 IBM acquired 3 companies
In 2016 IBM acquired 12 companies
In 2015 IBM acquired 13 companies
In 2014 IBM acquired 4 companies
In 2013 IBM acquired 9 companies
In 2012 IBM acquired 9 companies
In 2011 IBM acquired 8 companies
Spin-offs
1934 – Dayton Scale Division is sold to the Hobart Manufacturing Company.
1942 – Ticketograph Division is sold to the National Postal Meter Company.
1958 – Time Equipment Division is sold to the Simplex Time Recorder Company.
1974 – Service Bureau Corporation sold to Control Data Corporation
1984 – Prodigy, formerly a joint venture with Sears, Roebuck and Company.
1985 – Satellite Business Systems sold to MCI Communications
1988 – Copier/Duplicator business, including service and support contracts, sold to Eastman Kodak.
1990 – ARDIS mobile packet network, a joint venture with Motorola. Motorola buys IBM's 50% interest in 1994. Now Motient.
1991 – Lexmark (keyboards, typewriters, and printers). IBM retained a 10% interest. Lexmark has sold its keyboard and typewriter businesses.
1991 – Kaleida, a joint Multimedia software venture with Apple Computer.
1992 – Taligent, a joint software venture with Apple Computer.
1992 – IBM Commercial Multimedia Technologies Group, spun off to form private company Fairway Technologies.
1992 – IBM sells its remaining 50 percent stake in the Rolm Company to Siemens A.G. of Germany.
1994 – Xyratex enterprise data storage subsystems and network technology, formed in a management buy-out from IBM.
1995 – Advantis (Advanced Value-Added Networking Technology of IBM & Sears), a voice and data network company. Joint Venture with IBM holding 70%, Sears holding 30%. IBM buys Sears' 30% interest in 1997. AT&T acquires the infrastructure portion of Advantis in 1999, becoming the AT&T Global Network. IBM retained business and strategic outsourcing portions of the joint venture.
1994 – Federal Systems Division sold to Loral becoming Loral Federal Systems. The Federal Systems Division performed work for NASA. Loral was later acquired by Lockheed Martin.
1996 – Celestica, Electronic Manufacturing Services (EMS).
1998 – IBM Global Network sold to AT&T to form AT&T Business Internet.
1999 – Dominion Semiconductor (DSC) IBM sells its 50% share to JV partner Toshiba. DSC becomes a wholly owned subsidiary of Toshiba.
2001 – Information Services Extended department, developer of specialized databases and software for telephone directory assistance, is spun off to form privately held company ISx, Inc (later sold to Local Matters).
December 31, 2002 – IBM sells its HDD business to Hitachi Global Storage Technologies for approximately $2 billion. Hitachi Global Storage Technologies now provides many of the hardware storage devices formerly provided by IBM, including IBM hard drives and the Microdrive. IBM continues to develop storage systems, including tape backup, storage software and enterprise storage.
December 2004 – Lenovo acquires a 90% interest in the IBM Personal Systems Group, with 10,000 employees and $9 billion in revenue.
April 3, 2006 – Web analytics provider Coremetrics acquires SurfAid Analytics, a standalone division of IBM Global Services. The deal was said to be in the "eight-figure" range, making it worth at least $10 million. (Note: Since then Coremetrics has in turn been acquired by IBM)
January 25, 2007 – Three-year joint venture with IBM Printing Systems division and Ricoh to form new Ricoh-owned subsidiary, InfoPrint Solutions Company, for $725 million.
September 2009 – IBM launches online business IT video advice service in association with GuruOnline.
September 2009 – IBM sells its U2 multivalue database and application development products (created by VMark, UniData, System Builder and Prime Computer, obtained via the Informix acquisition) to Rocket Software
April 2012 – IBM sells its Retail Store Solutions division (Point-of-Sales) to Toshiba TEC
January 2014 – IBM sells its IBM System x business to Lenovo for $2.3 billion.
October 2014 – IBM sells its microelectronics (computer chips) business to GlobalFoundries. IBM will pay GlobalFoundries $1.5 billion over 3 years to take over the business.
December 2014 – UNICOM Global acquires IBM Rational Focal Point and IBM Rational Purify Plus.
January 2015 – IBM sells Algorithmics Collateral to SmartStream Technologies
December 2015 – UNICOM Global acquires IBM Rational System Architect
December 2018 – HCL Technologies to acquire Select IBM Software Products for $1.8B.
July 2019 – IBM Watson Marketing business spins off into standalone company Acoustic, after acquisition by Centerbridge Partners
October 8, 2020 – IBM announced it was spinning off the Managed Infrastructure Services unit of its Global Technology Services division into a new public company, an action expected to be completed by the end of 2021.
January 21, 2022 – IBM announced that it would sell Watson Health to the private equity firm Francisco Partners.
See also
List of largest mergers and acquisitions
Lists of corporate acquisitions and mergers
References
External links
IBM list of selected acquisitions
mergers and acquisitions
IBM |
21660525 | https://en.wikipedia.org/wiki/Software%20entrepreneurship | Software entrepreneurship | Software entrepreneurship has a different set of development strategies than other business start-ups. The development of software, a digital "soft" good, involves different business models, product strategies, people management, and development plans compared to the traditional manufacturing and service industries. For example, in the software business, making one copy or ten million copies of a product costs about the same. Furthermore, the productivity difference between a good and a bad employee can be ten- to twentyfold, and software projects routinely tolerate 80 percent lateness and ongoing design changes.
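The near-zero marginal cost of a software copy can be made concrete with a little arithmetic. The following is a minimal sketch in Python using purely hypothetical numbers — the development cost, per-copy cost, and volumes are illustrative assumptions, not figures from any real product — showing how the average cost per copy collapses as volume grows while total cost barely moves:

# Illustrative arithmetic only: dev_cost and marginal_cost are assumed values.
dev_cost = 2_000_000       # one-time development cost (hypothetical)
marginal_cost = 0.05       # per-copy duplication/distribution cost (hypothetical)
for copies in (1, 10_000, 10_000_000):
    avg_cost = dev_cost / copies + marginal_cost
    total_cost = dev_cost + marginal_cost * copies
    print(f"{copies:>10,} copies -> total ${total_cost:,.0f}, average ${avg_cost:,.2f} per copy")

Even at ten million copies, the total cost here is dominated by the one-time development cost, which is the sense in which one copy and ten million copies "cost about the same" to make.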
Software entrepreneurship covers a broad range of businesses, from helping people plan daily events to controlling a space shuttle. There are three main kinds of software businesses: products, services, and content businesses such as Wikipedia.
Products Versus Services
The first thing software entrepreneurs should understand is the difference and the interrelationship between the products and services businesses. The software product business is about selling licensed packages to customers. These products help solve a user pain point and have potential for growth and profits. One advantage of starting up in this direction is the ability to attract stock market investors and venture capitalists for funding. This business also enjoys enormous economies of scale in selling multiple copies of the same software. The downside of creating products is that software sales are subject to fluctuations; sales drop drastically in economic recessions.
The service business involves creating applications for clients that tailor to their business needs. This includes the maintenance of software products they have purchased before. One advantage of service business is that long-term customer contracts can allow the company to survive rough economic times. The downside is that the business needs to attract enough clients to keep developers and consultants busy.
Software companies can also develop a hybrid solution involving a mixture of products and services. In this case, solutions sold to clients require extensive customization: approximately 20 to 50 percent of the code must be adapted for each individual client. Customers purchasing this type of IT solution usually do not switch vendors for long periods of time.
Funding
One source of funding is corporate professionals in high-level management who cannot leave their well-paid jobs to take on the career risk of entrepreneurship themselves. Such people are interested in business but prefer to engage in it passively: they can become passive partners, shareholders or investors (iSAFE notes are one instrument for investing without becoming a partner or shareholder) and infuse capital. Their objective in investing is financial return.
Bank Financing
Bank financing is rarely practical for a software start-up, as banks require security and personal loan guarantees.
Government Aid
Governments usually give out non-repayable grants to encourage start-ups. There are also investment tax credits that can be claimed. One caveat of government aid is the lengthy procedures that may be required before obtaining the funds.
Venture Capital
Venture capital is risk capital invested into a start-up company at its early stages. Venture capitalists usually invest in start-ups that already have a relatively developed software product and some early sales. They look for products that have a large potential in a growing market with a competitive edge.
Venture capital offers a large sum of money and often assistance in managing the company. People with software start-up experience are available for mentoring, and there is also help available with the process of going public.
See also
Entrepreneurship
Entrepreneurs
References
Software industry
Entrepreneurship |
28097046 | https://en.wikipedia.org/wiki/Datamatics | Datamatics | Datamatics is an Indian company that provides consulting, information technology (IT), data management, and business process management services. Its services use robotics, artificial intelligence and machine learning algorithms.
Headquartered in Mumbai, the company has a presence across America, Australia, Asia and Europe.
The company was incorporated in 1987, offering services linked to computers and electronic data processing, and later added information-technology-enabled services with robotic process automation.
History
On 3 November 1987, the company was incorporated as Interface Software Resources Private Ltd. The name of the company was then changed to Datamatics Technologies Private Ltd. on 18 December 1992.
It then changed its name to Datamatics Technologies Ltd. when it got listed as a public company under the provisions of section 43A of the Companies Act on 13 January 2000.
Locations
Datamatics has a presence in the following locations globally:
India: Mumbai, Nashik, Chennai, Bangalore, Pune and Puducherry
Asia (excluding India): Philippines and UAE
Australia: Australia
Europe: United Kingdom
United States: Michigan, New Jersey, Massachusetts and Missouri
Present
In February 2019, Datamatics Global Services and AEP Ticketing Solutions SRL, Italy (AEP) were granted the letter of acceptance (LOA) by the Mumbai Metropolitan Region Development Authority (MMRDA) for implementing an automatic fare collection system for 52 stations of the Mumbai Metro Rail project, worth ₹160 crore.
In May 2019, the company's shares rose as much as 19.99% to Rs 107.75, marking the biggest intraday percentage gain for Datamatics since December 2010.
See also
List of IT consulting firms
List of Indian IT companies
List of companies of India
References
External links
Software companies based in Mumbai
Software companies established in 1987
Technology companies established in 1987
Information technology companies of India
Indian companies established in 1987
1987 establishments in Maharashtra
Companies listed on the National Stock Exchange of India
Companies listed on the Bombay Stock Exchange |
24126101 | https://en.wikipedia.org/wiki/Gmsh | Gmsh | Gmsh is a finite-element mesh generator developed by Christophe Geuzaine and Jean-François Remacle. Released under the GNU General Public License, Gmsh is free software.
Gmsh contains four modules: geometry description, meshing, solving, and post-processing. Gmsh supports parametric input and has advanced visualization mechanisms. Since version 3.0, Gmsh supports full constructive solid geometry features, based on Open Cascade Technology.
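As a brief illustration of this parametric, scriptable workflow, the following is a minimal sketch using Gmsh's Python API; it assumes the gmsh Python package that ships with the Gmsh SDK is installed, and the file name and mesh size are arbitrary choices. It defines a unit square in the geometry module and meshes it with triangles:

import gmsh

gmsh.initialize()
gmsh.model.add("unit_square")

lc = 0.1  # target element size at each point (parametric input)
p1 = gmsh.model.geo.addPoint(0, 0, 0, lc)
p2 = gmsh.model.geo.addPoint(1, 0, 0, lc)
p3 = gmsh.model.geo.addPoint(1, 1, 0, lc)
p4 = gmsh.model.geo.addPoint(0, 1, 0, lc)

lines = [gmsh.model.geo.addLine(p1, p2),
         gmsh.model.geo.addLine(p2, p3),
         gmsh.model.geo.addLine(p3, p4),
         gmsh.model.geo.addLine(p4, p1)]
loop = gmsh.model.geo.addCurveLoop(lines)
gmsh.model.geo.addPlaneSurface([loop])

gmsh.model.geo.synchronize()   # push the geometry into the model
gmsh.model.mesh.generate(2)    # generate a two-dimensional mesh
gmsh.write("unit_square.msh")  # write the mesh in Gmsh's .msh format
gmsh.finalize()

Because the element size lc is an ordinary variable, changing it and rerunning the script regenerates the whole mesh, which is what parametric input means in practice.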
A modified version of Gmsh is integrated with SwiftComp, a general-purpose multiscale modeling software. The modified version, called Gmsh4SC, is compiled and deployed on the Composites Design and Manufacturing HUB (cdmHUB).
Interfaces
Various graphical user interfaces exist that integrate Gmsh into their workflow:
A Matlab interface available with FEATool Multiphysics.
The Mesh Design and FEM Workbenches of FreeCAD support Gmsh for meshing inside the program, along with other meshers like Netgen.
See also
TetGen
Salome (software)
References
External links
Gmsh website
Official Gmsh Documentation
Gmsh Tutorials by Dolfyn
Gmsh Matlab and FEATool GUI and CLI integration
Free mathematics software
Free software programmed in C++
Cross-platform free software
Mesh generators
Numerical analysis software for Linux
Numerical analysis software for MacOS
Numerical analysis software for Windows
Software that uses FLTK
Computer-aided engineering software for Linux |
43515895 | https://en.wikipedia.org/wiki/A.%20P.%20Shah%20Institute%20of%20Technology | A. P. Shah Institute of Technology | A. P. Shah Institute of Technology (APSIT) is a private engineering college located in Kasarvadavali, in Thane, India. It was established in 2014 and is managed by the Parshvanath Charitable Trust.
It is a Jain Religious Minority College (i.e., 51% of all seats are reserved for students from the Jain Religious Minority Community) and is affiliated to the University of Mumbai (a public university, funded by the state government of Maharashtra). The college is approved by the Indian Government's All India Council for Technical Education (AICTE) and is recognized by the Directorate of Technical Education (DTE) of the state Government of Maharashtra.
It offers a Bachelor of Engineering (B.E.) degree in Civil Engineering, Computer Engineering, Computer Engineering with specialisations in Data Science and in Artificial Intelligence and Machine Learning (AI/ML), Electronics and Telecommunication Engineering, Information Technology, and Mechanical Engineering. Most of these courses last for 4 years.
Campus and location
The five-story campus on Ghodbunder Road in Kasarvadavali Naka is owned by the Parshvanath Charitable Trust. Wi-Fi is available on campus and there is wide use of CCTV cameras for security.
In addition to small shops and restaurants in the area, there is the Big Bazar Mall, McDonald's fast-food restaurant, and commercial buildings like that of G-Corp. The Sanjay Gandhi National Park (forest) is a short distance away and can be seen from the main road, as can portions of Vasai Creek.
Admissions
Admissions to the seats not reserved for the Jain religious minority, for first-year undergraduate engineering programs, are carried out via the Centralized Admissions Process (CAP) of the State Government's Directorate of Technical Education (DTE). Entrance examinations are used: 85% of CAP seats are filled using a composite score based on the Joint Entrance Examination score and the HSC score (Maharashtra state students only). The remaining 15% of CAP seats are filled using the same composite score for students from other boards of education.
Each engineering branch takes in 60 students. This intake applies to admissions to the first year of the 4-year B.E. degree program; in addition, there are direct admissions to the second year for Jain students who have completed 3-year engineering diplomas.
As a private college, it also has some "institute-level seats" which are filled on a merit basis.
Departments
The college has the following academic departments:
Mechanical Engineering
The department laboratories include the following:
CAD (computer-aided design) / CAM (computer-aided manufacturing) Lab
Heat and Mass Transfer Lab
Fluid Mechanics Lab
Refrigeration and Air Conditioning Lab
Mechatronics Lab
Hydraulic Machinery Lab
Workshop and Machine Shop
Internal Combustion Engine Lab
Materials Testing Lab (shared with the Department of Civil Engineering)
Incubation Lab
MQE Lab
IC Engine Lab
Electronics and Telecommunications Engineering
This department is equipped with the following laboratories:
Television Engineering and PCB Lab.
Power Electronics and Drives Lab.
Electronic Circuit Lab.
Digital Electronics and Microprocessor Lab.
Control Systems Lab.
Electrical Networks Lab.
Communication Lab.
Information Technology
This department houses several specialist laboratories to extend the general computing provision. The labs are equipped with specialist software such as Oracle, Microsoft Visual Studio, Java, the Adobe Creative Suite, and many other key products. The main corporate operating system is Microsoft Windows, but many labs have Linux installed as well. All labs provide free Internet access.
There is also a specialist networking lab that is equipped with more than 20 enterprise-level network switches and routers, including wireless and VoIP devices. There is a dedicated security and software forensics laboratory, and there are also many special-purpose facilities for embedded system development and robotics.
The following laboratories are managed by this department:
Project / Research and Development Lab.
Software Testing Lab.
Computer Graphics and Image Processing Lab.
Web Engineering Lab.
Network Security Lab.
System Software Lab.
Database and Server Security Lab.
Mobile Computing Lab.
Computer Engineering
The department has 400+ nodes and 12 servers, all networked and running Linux, Microsoft Windows, and Novell NetWare.
The department facilities include the following:
Operating Systems
Windows 2003, Novell NetWare 3.12, MS-DOS 6.22, Ubuntu
Compilers
GNU C, GNU C++, GNU Java, GNU Fortran, Turbo Pascal, MS Visual Studio 6.0
Application Software
Octave, Circuit simulators, VHDL toolkit, UML tools.
Database Support
MS SQL Server 2000, MySQL Server, PostgreSQL Server.
Civil Engineering
This department has the following laboratories:
Engineering Geology Lab.
Soil Mechanics Lab.
Transportation Lab.
Environmental Engineering Lab.
Concrete Technology Lab.
Building Material and Construction Lab.
Strength of Materials Lab (shared with the Department of Mechanical Engineering).
Applied Sciences and Humanities
The department of applied sciences and humanities does not offer any degrees of its own, but supports the curriculum of other departments by offering courses in the following disciplines:
Applied Mathematics (first 4 or 5 semesters in each branch).
Applied Physics (first 2 semesters).
Applied Chemistry (first 2 semesters).
Communication Skills (second semester).
Presentation and Communication Techniques (third semester of each branch).
Environmental Studies (fifth semester of each branch except civil engineering).
In the 4-year Bachelor of Engineering curriculum of the University of Mumbai, the first year (i.e., the first 2 semesters) is shared by all engineering majors. Thus, first-year students are managed by the department of applied sciences and humanities, rather than by the departments of the engineering branches in which they took admission.
The department of applied sciences and humanities has the following laboratories:
Applied Chemistry Lab
Applied Physics Lab
Computer Programming Lab
Engineering Mechanics Lab
Language Lab (for the subjects of communication skills and presentation & communication techniques)
Basic Electrical & Electronics Engineering Lab
Sci Lab (for Scilab software taught in applied mathematics)
Library
The college library is equipped with 22,843 books covering 5,025 titles and is spread over an area of about 400 square meters. The college, as an associate member of the INDEST-AICTE Consortium, subscribes to 42 national and 21 international journals. The library is supported with online access for members.
The library has two reading rooms (one of which is air-conditioned), a reference section (for using books without taking them out of the library), and an internet surfing section for students and staff members.
Students can either take books for "reference" (returning them on the same day) or for "issue" (returning them after a maximum duration of one week).
A book bank is also provided by the library to students from the 3rd semester onward, and hard copies of notes are provided to first-year students.
References
External links
Official website of the A. P. Shah Institute of Technology.
Jain universities and colleges
Private engineering colleges in India
Engineering colleges in Mumbai
Affiliates of the University of Mumbai
Education in Thane district
Educational institutions established in 2014
2014 establishments in Maharashtra |
27258675 | https://en.wikipedia.org/wiki/Surf%20Party | Surf Party | Surf Party is a 1964 beach party film directed by Maury Dexter and starring Bobby Vinton, Patricia Morrow, Jackie DeShannon, and Ken Miller. It was the first direct imitation of AIP's hit Beach Party, which was released six months earlier, and showcased several musical acts onscreen. It is one of the few movies in the genre shot in black and white.
It has rarely been screened, and only received its first home video release in April 2013 as a DVD-R "on demand" through Fox.
Plot
Arizonans Terry (Patricia Morrow), Sylvia (Lory Patrick), and Junior (Jackie DeShannon) drive to California's Malibu Beach to take a vacation, learn how to surf, and find Terry's brother "Skeet", Malibu's Big Kahuna bad boy (and a former football star whose career was ended with a skull injury).
While the girls are learning to surf, Terry falls in love with Len (Bobby Vinton), the operator of a local surf shop; Junior falls in love with Milo (Ken Miller), a new surfer; and Sylvia falls in love with Skeet (Jerry Summers).
Milo takes the girls to Casey's Surfer, the hangout on the pier where the surfers and their ilk gather. While the girls get into the club by virtue of Terry being Skeet's sister, Milo is kept out because he is just a "gremmie."
In an effort to qualify for membership in Skeet's unruly surfing club (called "The Lodge"), Milo attempts to "shoot the pier" (surfing through the pier – called "run the pier" in the movie) and is injured when he smacks into one of the posts. As a result of Milo's smash-up, Len gets into an argument with Skeet, and just as they are about to fight, Terry warns Len that Skeet's football injury is still dangerous. Throughout all the proceedings, Sgt. Wayne Neal (Richard Crane), the decidedly "anti-surf" police sergeant, is on Skeet's back, waiting for him to slip up so he can either throw him in jail or out of town. Terry soon learns that her brother's reputation is greater than the reality.
Skeet is further humiliated when he throws a party and Pauline (Martha Stewart) – the wealthy older woman who apparently owns the beach house that Skeet has been living in – finds him in her bedroom with Sylvia. Pauline reveals that Skeet is indeed a "kept man". To the delight of Sgt. Neal, Skeet decides to return to Arizona with Sylvia when he realizes how much he loves her, and the girls enjoy the rest of the vacation with their boyfriends.
Cast
Bobby Vinton as Len Marshal
Patricia Morrow as Terry Wells
Jackie DeShannon as Junior Griffith
Ken Miller as Milo Talbot (as Kenny Miller)
Lory Patrick as Sylvia Dempster
Richard Crane as Sgt. Wayne Neal
Jerry Summers as Skeet Wells
Martha Stewart as Pauline Lowell
The Astronauts as Themselves
The Routers as Themselves
Lloyd Kino as Casey
Mickey Dora as Surfer
Johnny Fain as Surfer
Pam Colbert as Surfer
Donna Russell as Surfer
Production
Cast
Popular singer Bobby Vinton, who plays Len, only appeared in three movies, this being his only one in the 1960s. It was Vinton's debut, although his agent had lobbied hard to get him the lead in Beach Party. He was paid $750 for a week of work.
Ken Miller, who plays the fresh-out-of-high-school "gremmie" Milo, was 33 years old at the time of filming. He had made two movies previously with Dexter.
Legendary surfer Mickey Dora doesn't have a speaking role, but is a featured extra in a sequence in the Casey's Surfer restaurant – playing the bowling-shirted surfer who follows Skeet's signal to lead the crowd in a clap-out.
Surf bands
The Astronauts was a Colorado-based surf band that had a Billboard Top 100 hit in 1963 with their song "Baja." They also appeared in Dexter's later beach party movie Wild on the Beach as well as two other beach party movies, Wild Wild Winter and Out of Sight – more than any other surf band.
The Routers was a band formed by Mike Gordon, whose first release in September 1962 was "Let's Go (Pony)", which reached #19 on the Billboard charts. Gordon also formed The Marketts and wrote their million seller "Out of Limits" and "Surfer's Stomp", which was one of the early successful surf songs released in 1961. The Routers toured for over six years, in part from the popularity of the movie and the songs associated with it.
Shooting
In addition to appearing as extras, surfers Mickey Dora and Johnny Fain, who appeared in several of AIP's beach party movies, performed the surfing stunts for this movie.
The pier featured throughout the movie is the historic 1905 Malibu Pier near Surfrider Beach. The exterior of Casey's Surfer on the pier is the westernmost of the two wood-sided white buildings with royal blue trim at the beachward end of the pier. Originally called Alice's, the restaurant and bar was operated as the Beachcomber Café from 2008 to 2012. "Len's Surf Shop" was situated in Malibu, west of the pier, near the intersection of Malibu Road and Webb Way, at 23651 Malibu Road.
Filming started September 1963 and was finished by October.
Music
Jimmie Haskell composed the score and co-wrote five songs for the movie.
Jackie DeShannon performs two songs in the movie, "Glory Wave" and "Never Comin' Back", with Patricia Morrow and Lory Patrick (both written by Haskell and 'By' Dunham).
Bobby Vinton performs (twice) "If I Were an Artist", and Patricia Morrow sings "That's What Love Is" (both written by Bobby Beverly and Dunham).
The Astronauts perform two songs, the instrumental "Surf Party", heard over the opening and closing credits (written by Beverly and Dunham); and the onscreen performance of "Fire Water" (written by Haskell and Dunham).
The Routers perform "Crack Up" (written by Haskell and Dunham) onscreen.
Ken Miller performs "Pearly Shells" (written by Lani Kai, Jericho Brown and Dunham).
Dunham and Haskell also wrote "Great White Water", which is heard as source music on a jukebox in the sequence at Casey's Surfer restaurant.
Although the poster states "Hear ‘em Sing These Surfin Hits!" and lists nine tracks, only two tracks can be deemed "surf music." In It's Party Time: A Musical Appreciation of the Beach Party Genre, Stephen J. McParland writes: "The commissioned song-writing team of Jimmie Haskell and By Dunham were hard-pressed to come up with enough convincing items to carry the soundtrack to the masses. What cognizance they possessed of ‘surf music’ was scant at best and only the Astronauts’ instrumentals...bore any real resemblance to the musical genre." For example, "Never Comin’ Back" is written as a folk song, "Pearly Shells" is in the style of a Hawaiian folk song, "That's What Love Is" is a country & western song, and "Glory Wave" is written and performed in the style of a Negro spiritual.
Regarding the two tracks by The Astronauts, the book Pop Surf Culture states "The Astronauts bang out a thick, reverb-laden instrumental called ‘Firewater,’ and their theme song ‘Surf Party’ happens to be one of the best surf instrumentals ever recorded."
Critical response
Upon release, Eugene Archer of The New York Times wrote "Flaming youth may be passé, but you'd never know it from Surf Party...It's only the attitudes that seem archaic, as they bounce into passionless love affairs, take reckless surfboard risks in pointless tests of courage and display an alarming lack of inhibitions and not a trace of social responsibility."
Tom Lisanti writes "Surf Party is [a] realistic, albeit melodramatic, look at the surfing craze and Malibu surfers in particular. It is also an obvious rip-off of Beach Party without the zaniness. There is some neat surfing footage featuring pros like Mickey Dora and Johnny Fain but the flat black-and-white photography doesn’t do it justice though it buoys the story. The female leads all do well but Bobby Vinton and Kenny Miller fail miserably trying to pass themselves off as surfers."
Pop Surf Culture states "It was in the crummy-but-perfectly-named Surf Party that true beach crud reached its peak".
Lisanti later wrote that "with a bigger budget and more convincing male leads, Surf Party could have been considered one of the best Hollywood surf movies of the time, instead of just a middling cheap knock off of Beach Party."
Notes
References
McParland, Stephen J. (1994). It's Party Time – A Musical Appreciation of the Beach Party Film Genre. USA: PTB Productions. p. 39.
Chidester, Brian & Priore, Domenic (2008). Pop Surf Culture: Music, Design, Film, and Fashion from the Bohemian Surf Boom. USA: Santa Monica Press. pp. 171–172.
Archer, Eugene. "Surf Party Opens", The New York Times, March 12, 1964.
Lisanti, Tom. Sixties Cinema.
External links
Surf Party at Letterbox DVD
1964 films
1960s teen films
Beach party films
American films
Films directed by Maury Dexter
Films scored by Jimmie Haskell
Films set in Malibu, California
American surfing films
American teen films
20th Century Fox films
Teensploitation
1960s English-language films |
782723 | https://en.wikipedia.org/wiki/A%20for%20Andromeda | A for Andromeda | A for Andromeda is a British television science fiction drama serial first made and broadcast by the BBC in seven parts in 1961. Written by cosmologist Fred Hoyle, in conjunction with author and television producer John Elliot, it concerns a group of scientists who detect a radio signal from another galaxy that contains instructions for the design of an advanced computer. When the computer is built, it gives the scientists instructions for the creation of a living organism named Andromeda, but one of the scientists, John Fleming, fears that Andromeda's purpose is to subjugate humanity.
The serial was the first major role for the actress Julie Christie. Only one episode of the original production survives, along with a few short extracts from other episodes. A for Andromeda has been remade twice: first by the Italian state television RAI in 1972 and by the BBC in 2006. A sequel, The Andromeda Breakthrough, was made by the BBC in 1962.
Plot
The opening titles of each episode are prefaced by a television interview in which Professor Ernst Reinhart (Esmond Knight) looks back on the events of the serial.
"The Message"
Great Britain, 1970 – a new radio telescope, designed by the young scientists John Fleming (Peter Halliday) and Dennis Bridger (Frank Windsor) under the supervision of Professor Reinhart (Esmond Knight), has been built at Bouldershaw Fell. Shortly before its official opening, the telescope picks up a signal from the distant Andromeda Nebula. Examining the signal, Fleming realises that the signal is a computer program.
"The Machine"
Fleming is permitted to use the computer facilities at the London Institute of Electronics, where he is aided by Christine (Julie Christie). Using the computer to decode the message, Fleming realises that it contains a set of instructions for the construction of another, more advanced computer. The message also contains a program for the new computer to run, and data for it to process. Bridger, meanwhile, has sold out to an international conglomerate called Intel, represented by the sinister Kaufmann (John Hollis). The British government decides to build the computer at a military establishment at Thorness in Scotland. The computer is switched on and begins to output its first set of instructions.
"The Miracle"
The team at Thorness is joined by the biologist Madeline Dawnay (Mary Morris). The computer is outputting instructions for the creation of living cells. Fleming becomes nervous, worried that whatever life-form they are creating may not have humanity's best interests at heart. Dawnay proceeds with the experiment, however, synthesising a primitive protoplasmic life-form. In the meantime, Bridger's leaking of Thorness' secrets has been discovered. Bridger is confronted by Ministry of Defence agent Judy Adamson (Patricia Kneale); fleeing, he tumbles over a cliff to his death.
"The Monster"
It is now 1971 and the protoplasmic lifeform, now nicknamed "Cyclops" on account of its giant eye, continues to grow. Fleming has become ever more sceptical about the project, certain that the computer has its own agenda. He comes to realise that two terminals positioned on either side of the computer's main display have the ability to affect the brainwaves of those who stand near them. His warnings are not heeded, however, and Christine, mesmerised by Cyclops and by the machine, is compelled to grasp the two terminals – she falls to the floor, killed by a massive electric shock.
"The Murderer"
Following Christine's death, the computer outputs a new set of instructions – this time for the creation of a complete human embryo. Fleming is horrified and demands that it be killed. He is ignored. The embryo rapidly grows to maturity; everyone is stunned when it is revealed to be a clone of the deceased Christine. The creature – which they name "Andromeda" – quickly learns to communicate and is brought before the computer. The computer, realising its instructions have been carried out, destroys Cyclops as it has been superseded by Andromeda.
"The Face of the Tiger"
Andromeda is put to work developing a program to enable Britain to intercept orbital missiles which a foreign power is firing over British airspace as a demonstration of power. Using the missiles designed by Andromeda, the British succeed in destroying one of the intruding missiles. The Government is now determined to make full use of Andromeda, not just for defence but also to aid industry. Fleming continues to make trouble and has his access to the computer revoked. He is horrified to discover that the Government has made a trade deal with Kaufmann and Intel for the rights to a new enzyme, developed by Andromeda, that heals injured cells. By this stage, Dawnay is also beginning to have doubts about Andromeda – she agrees to aid Fleming by entering a program into the computer to convince it that Andromeda is dead. The program is quickly discovered and reversed by Andromeda. However, the computer soon exacts its revenge – it corrupts the formula for the enzyme, making Dawnay and her assistants sick.
"The Last Mystery"
It is 1972 and the message from the Andromeda Nebula has stopped transmitting. Fleming has been able to determine the correct formula to counteract the effects of the enzyme and save Dawnay. Fleming, Dawnay, Reinhart, and Judy now agree that Andromeda must be stopped – however, the military now has control over the project. Andromeda tries to kill Fleming but fails; she confesses to Fleming that she is a slave of the computer, which is working to take over humanity. Fleming gains entry to the computer room where he takes an axe to the machine, destroying it. Now free of the machine Andromeda is able to access the safe that contains the copies of the original message with the instructions for building the computer which she burns so that the machine cannot be rebuilt. She flees with Fleming to one of the islands near the base. Pursued by soldiers, they hide in a series of caves on the island. However, Andromeda is apparently killed when she falls into a deep pool. The dejected Fleming is brought back to Thorness by the soldiers.
Background
Origins
Fred Hoyle was an astronomer best known for his work on the understanding of the creation of the elements through stellar nucleosynthesis, for developing the steady state theory of the universe and for coining the term "Big Bang" for the steady state theory's rival dynamic evolving model of the universe. Hoyle also had a taste for science fiction, having written a novel, The Black Cloud (1957), about a cloud of interstellar gas that threatens the Earth; this was adapted for radio and broadcast on 14 December 1957 by the BBC Home Service. The BBC were also interested in adapting The Black Cloud for television but Hoyle had already signed away the movie rights. Hoyle followed The Black Cloud with another science fiction novel, Ossian's Ride (1958); this attracted the interest of Norman James, a BBC designer keen to move into television production, who contacted Hoyle with a view to obtaining the rights to the novel. In discussion with the writer, James learned that Hoyle was interested in writing an original story for television. James contacted John Elliot, the assistant head of the BBC Script Department, who was interested in making a science fiction serial. Elliot, along with James and BBC script editor Donald Bull, met with Hoyle who outlined a potential story for an eight-part serial; this was what would eventually become A for Andromeda. Hoyle drew his inspiration for the serial from the work of astronomer Frank Drake who at that time had begun "Project Ozma", one of the first experiments in the Search for Extra-Terrestrial Intelligence (SETI). In late June 1960, the BBC made an offer of 250 guineas to Hoyle for the idea, which would be dramatised for television, as a serial in seven 30-minute parts, by John Elliot. Hoyle replied, "It has seemed to me that the overall sum of 250 gns is unsatisfactorily low. My own computations would suggest 1000 gns. I estimate that such a sum would still be only 2–3 percent of the receipts by the BBC from licences. Such a percentage still seems low – it is considerably less than rates available in the US". Eventually, a fee of 700 guineas was agreed. Elliot delivered his draft scripts between March and April 1961; at this point it was decided that each episode should run for 45 minutes and so Elliot had to work to expand each script.
Casting
The title role of Andromeda was played by Julie Christie. Hoyle originally saw Andromeda as an androgynous character but Elliot changed this to a young woman. The production team were keen to cast a young, unknown actress. While searching for a suitable candidate, co-producer and director Michael Hayes met an agent who suggested Julie Christie, then a student at the Central School of Speech and Drama, recommending her as "the new Bardot". In playing the part, Christie wanted to give the character of Andromeda more emotion but Hayes directed her to act more impassively, using his camera to define the character. Mindful that a sequel was under consideration, Hayes advised the BBC to sign her up before she became a big star; he was ignored and the role had to be recast for the sequel, The Andromeda Breakthrough. Christie went on to have a highly successful film career, her breakthrough role occurring in Doctor Zhivago (1965).
Peter Halliday played John Fleming. He had trained at the Royal Academy of Dramatic Art before joining the Royal Shakespeare Company where he met and befriended Michael Hayes. Halliday had a reputation for playing angry young men.
Because the serial was set in the near future, both Hoyle and Michael Hayes felt that women would occupy more progressive roles in the years to come. This was reflected in the writing and casting; appearing as the security services agent Judy Adamson was Patricia Kneale. Kneale found the character "a rather prissy sort of person, not really the sort of person I usually played at all." A late change to the script altered the sex of the biologist character from George Dawnay to Madeline Dawnay; writing to Mary Morris to offer her the part, Hayes said, "don't be put off by the fact that the lady appears to smoke a pipe half way through and give vent to some rather strange utterances."
Production
Norman James had hoped to produce the serial himself but the BBC felt it was too complex an undertaking for a novice producer. However, James was given the role of co-producer and designer; he also received an additional payment in acknowledgement of his role in developing the serial. Assigned as co-producer and director was Michael Hayes who had directed the Shakespearean serial An Age of Kings (1960). Location filming took place in July 1961 around London, including at IBM's offices on Wigmore Street, and in the vicinity of Tenby in Pembrokeshire, Wales where the Manorbier Army Base stood in for the Thorness research centre. The army assisted the production by providing a helicopter for scenes of personnel arriving at Thorness and for aerial shots of the base and environs. They also supplied a pursuit launch for the chase scene in the final episode. A number of pre-filmed inserts were also shot at Ealing Studios. The production then went into studio at BBC Television Centre with each episode recorded every Wednesday between 1 August 1961 and 13 September 1961. For editing purposes, the output of the electronic studio cameras was recorded onto 35mm film rather than videotape. A last minute addition to the serial were the pre-credits sequences at the start of each episode depicting Reinhart recalling the events of the serial in a television interview. These were made in the style of the well-known interview programme Face to Face hosted by John Freeman and were shot at Television Centre on 22 September 1961.
Broadcast and critical reception
The debut episode of A for Andromeda was promoted on the cover of listings magazine Radio Times. The accompanying article said, "A new science fiction series is an exciting prospect at any time. When it is backed by the authority of a scientist with the international reputation of Fred Hoyle, it ranks as a major television event". A for Andromeda was broadcast on Tuesday nights at 8:30pm from 3 October 1961. The opening episode was watched by 7.5 million viewers; however, by the end of the serial this had risen to 12.9 million viewers and the serial averaged 9.6 million viewers over its seven-week run.
A for Andromeda met with a varied critical reception. "Science fiction serial starts well", said The Times after the broadcast of the first episode, adding, "Although it is encouraging to have the authority of Professor Fred Hoyle for the scientific credibility of [A for Andromeda]... it is the skill of Mr Hoyle the novelist that will mainly be called upon to hold our attention". The Evening News, meanwhile declared the serial to be "a jolly good successor to Quatermass". Not so impressed was L. Marsland Gander in The Daily Telegraph who wrote, "As a devotee of Prof. Hoyle and a keen student of disembodied intelligence I felt impatient... I am too well acquainted with [his] work to be disappointed, but the temptation is great". A harsher verdict came from Philip Phillips of the Daily Herald who said, "The next six episodes might be brilliant. But I won't be watching them" while The Sunday Times said, "I cannot with the best will find anything in the least exciting about A for Andromeda".
The BBC produced an Audience Research report for episodes one, five and seven. Many respondents criticised the serial for being slow and full of scientific terminology. However, as the serial progressed, viewers became more enthusiastic; after episode five, one viewer said, "The serial, like Andromeda herself, suddenly came alive. This episode was spine-chilling". Another commented "I didn't like the way Andromeda was created – it is absolutely against Christian belief". J.A.K. Fraser of Dornock, Scotland, wrote to the BBC's correspondence programme Points of View, saying, "Enough surely has been seen of Prof. Fleming's overacted hysterical outbursts". Writing to the Radio Times, B.W. Wolfe of Basingstoke said, "Congratulations on the recent BBC-tv science-fiction serial A for Andromeda. Bug-eyed monsters we have seen before, but never a creature so radiantly beautiful as Andromeda herself. I was completely captivated". Other letter writers to the Radio Times discussed the scientific accuracy of the serial including one correspondent, C.W. Bartlett of Watford, who wrote to inform readers that the reference to DNA (then newly discovered) was not a fictional substance but really existed.
Archive status
As was common practice at the time, the BBC's copies of the serial were junked after broadcast and the bulk of the serial still remains missing. In 2005, a 16mm film print of the sixth episode, "The Face of the Tiger", was donated to the BBC archives by a private collector; this copy is missing the pre-credits sequence of Reinhart's interview. A number of film clips from episodes one, two, three, and seven also exist, as does a full audio-only copy of episode seven, taken from an off-air recording. A complete set of off-air photographs, known as tele-snaps, were taken of all seven episodes and were held in the collection of Michael Hayes prior to his death.
Remakes
A come Andromeda (1971)
A version of the serial entitled A come Andromeda, still set in Britain ("in the following year") but filmed at Italian locations, was made for Italian television (RAI) in 1971. It was adapted by Inisero Cremaschi and directed by Vittorio Cottafavi. This version still exists and has been repeated on Italian TV. It has been released on VHS, and latterly on DVD but without English subtitles. The cast includes Nicoletta Rizzi as Andromeda, Paola Pitagora as Judy Adamson, Luigi Vannucchi as Fleming, and Tino Carraro as Reinhart.
A for Andromeda (2006)
A second remake of A for Andromeda was made by BBC Fictionlab for BBC Four in early 2006. It was produced by Richard Fell, who the previous year had overseen a remake, performed live, of The Quatermass Experiment, another classic BBC science fiction production largely absent from the BBC archives.
Novelisation
The prospect of novelising A for Andromeda arose early in the serial's production when Souvenir Press contacted the BBC in May 1961 indicating their interest in publishing a tie-in novel. John Elliot responded stating that while the concept had been Hoyle's, the characterisation, dialogue and plot structure was his. Elliot sent copies of the shooting scripts to Souvenir and was formally commissioned to write the novelisation in July 1961. The terms of the contract concerned Hoyle as they gave Souvenir first call on the sequel; he insisted that this could only be permitted if the sequel's novelisation was largely written by Elliot. Elliot delivered his manuscript on 28 September 1961. The novelisation was closer to the original 30-minute scripts and had much of the material required to pad each episode to 45 minutes removed. Promoting it as the story that would "out-Quatermass Quatermass", the book was published by Souvenir in February 1962.
Weekly Science Diary said, "It is a brightly written, really exciting tale with the added inducement of scientific accuracy". It has since been translated into several languages. There have also been two alternative versions; the first – issued by Macmillan in 1964 – rewritten by Elliot in simpler English as a study aid for English language students and the second a children's version published in 1969.
A major theme of the novel is that information for generating new life can be transmitted by radio signal over galactic distances. Hoyle and Elliot published the novel “A for Andromeda” in 1962 at a time when the fundamental importance of the biological information encoded in DNA was just starting to be understood. The fictional biochemist in the novel, Professor Dawnay, was able to create life forms, initially a bacterial form and later a human female, using what is referred to in the novel as a “D.N.A. synthesizer” and the DNA sequence information transmitted from the Andromeda Galaxy. In the real world of today, whole genomes can actually be built from chemically synthesized DNA sequences, and when inserted into a receptive cellular environment can be brought to life to create a novel organism (see for example Hutchinson et al.). Thus the fictional syntheses of life forms described in the novel anticipated what, today, is starting to be realized.
In other media
Several film studios, including MGM, the Associated British Picture Corporation and Hammer Films, made enquiries regarding the film rights to A for Andromeda. However, no film version was ever made.
In 2006, BBC Worldwide released a DVD box set, The Andromeda Anthology, comprising the original A for Andromeda and its sequel The Andromeda Breakthrough. A for Andromeda was reconstructed using tele-snaps with on-screen captions to describe the plot set to a soundtrack of music from the serial. The surviving film sequences were placed in the narrative where appropriate and the surviving episode "The Face of the Tiger" was presented in its entirety. Extra features included a commentary on the surviving material by Michael Hayes, Peter Halliday and Frank Windsor; a specially made making-of documentary, Andromeda Memories; an excerpt from Points of View as well as a photo gallery, PDFs of the shooting scripts and the Radio Times articles and detailed production notes by television historian Andrew Pixley. Both the Italian and BBC remakes of A for Andromeda have also been released on DVD.
In 2007, footage of Julie Christie and Peter Halliday from the serial was seen in the Torchwood episode "Random Shoes".
See also
Species (1995), another film featuring an attempted invasion through embodied malware from space
Trojan horse, the trope forming the central plot device
References
Further reading
External links
A for Andromeda at Action TV
A for Andromeda at the British Film Institute's Screenonline
Andromeda book series at Goodreads
1961 British television series debuts
1961 British television series endings
1960s British drama television series
1960s British television miniseries
BBC television dramas
1960s British science fiction television series
Fictional computers
Lost BBC episodes
Lost television shows
Works by Fred Hoyle
British science fiction television shows
Television shows written by John Elliot (author)
English-language television shows
Black-and-white British television shows |
3718533 | https://en.wikipedia.org/wiki/Chandraseniya%20Kayastha%20Prabhu | Chandraseniya Kayastha Prabhu | Chandraseniya Kayastha Prabhu (CKP) is an ethno-religious caste of South Asia.
Traditionally, the CKPs have the upanayana (thread ceremony) and have been granted the rights to study the vedas and perform vedic rituals along with the Brahmins.
Ritually ranked very high, the caste may be considered socially proximate to the Maharashtrian Brahmin community. They have traditionally been an elite, literate but numerically small community.
'Prabhu' means a person who holds a high position in the government. Historically, they made equally good warriors, statesmen and writers. They held posts such as Deshpande and Gadkari, and according to the historian B.R. Sunthankar, produced some of the best warriors in Maharashtrian history. The CKP also performed the three Vedic karmas, or duties, which in Sanskrit are called: adhyayana – studying of the Vedas; yajna – a ritual performed in front of a sacred fire, often with mantras; and dāna – alms or charity.
Traditionally, in Maharashtra, the caste structure was headed by the Brahmin castes – the Deshasthas, Chitpawans, Karhades and Saraswats – and the CKPs. Other than the Brahmins, the Prabhus (CKPs and Pathare Prabhus) were the communities advanced in education.
They are mainly concentrated in Maharashtra.
More formally, in Maharashtra, they are one of the Prabhu Communities and a sister caste of the Pathare Prabhu.
The CKP followed the Advaita Vedanta tradition propounded by Adi Shankara, the first Shankaracharya whereas the Pathare Prabhu followed the Smartha tradition.
History
The CKP claim descent from Chandrasen, an ancient kshatriya king of Ayodhya and of the Haihaya family of the lunar Kshatriya Dynasty.
The name Chandraseniya may be a corruption of the word Chandrashreniya, meaning from the valley of the Chenab River (also known as "Chandra"). This theory states that the word Kayastha originates from the term Kaya Desha, an ancient name for the region around Ayodhya.
During the times of the Shilahara dynasty of Konkan (around the 10th century), the Shilahara kings were known to invite Brahmins and Kshatriyas from the northern Indo-Gangetic valley to settle in their lands. These were the Goud Saraswat Brahmins and the CKPs.
In fact, epigraphical evidence, i.e. engravings, from Shilahara times has been found in the Deccan showing that many CKPs held high posts and controlled the civilian and military administration. For example, a Shilahara inscription of around A.D. 1088 mentions a certain Velgi Prabhu. Lakshmana Prabhu is mentioned as a MahaDandanayaka (head of the military) and MahaPradhana (prime minister); Ananta-Prabhu is mentioned as a MahaPradhana (prime minister), Kosadhikari (head of the treasury) and Mahasandhivigrahika (in charge of the foreign department). According to the historian and researcher S. Muley, these epigraphs might be the earliest available evidence of the existence of the CKP in Maharashtra.
Kayastha chiefs claiming Kshatriya varna ruled over vast swathes of land in Andhra country, and they are recorded in Andhra history dating back to the 13th century CE.
The CKPs have traditionally been placed in the Kshatriya varna and have also followed Brahmin rituals, like the sacred thread (janeu) ceremony. As another example of similarity with Brahmin rituals, the CKPs have traditionally observed the period of mourning and seclusion by persons of a deceased's lineage for 10 days, although Kshatriyas generally observe it for 12 days.
According to a letter written by the Shankaracharya, who confirmed the 'Vedadhikar' of the CKPs, the title Prabhu, which means high official, must have been given to the CKPs by the Shilahar kings of Konkan.
The Shankaracharya also formally endorsed their Kshatriya status by citing various Sanskrit scriptures, especially one scripture that explicitly called them Chandraseniya Kshatriyas. He also cited documents from Banaras and Pune Brahmins, ratified by Bajirao II himself, that proved their rights over the Vedas. His letter is addressed to all Brahmins.
According to Christian Lee Novetzke, the American Indologist and scholar of religious studies and South Asian studies who is Professor of International Studies and Comparative Religion at the University of Washington:
The CKPs, described as a traditionally well-educated and intellectual group, came into conflict with Marathi Brahmins at least 350 years ago over their rights to be teachers and scholars. As such, they competed with the Brahmins in the 18th and 19th centuries for government jobs. They even demanded privileges of the Brahmin order – the rights to conduct the Vedic rituals (all by themselves) and satkarma (all six karmas of the Brahmin order) – for which they were opposed especially by the Chitpawans. University of Toronto historians and Professors Emeriti Milton Israel and N. K. Wagle opine about this as follows in their analysis:
Deccan sultanates and Maratha era
The CKP community became more prominent during the Deccan sultanates and the Maratha rule era. During the Adilshahi and Nizamshahi periods, the CKPs, the Brahmins and high-status Marathas were part of the elites. Given their training, CKPs served as both civilian and military officers. Several of the Maratha Chhatrapati Shivaji's generals and ministers, such as Murarbaji Deshpande and Baji Prabhu Deshpande, were CKPs.
In 17th-century Maharashtra, during Shivaji's time, the so-called higher classes, i.e. the Marathi Brahmins, CKPs and Saraswat Brahmins, were, due to social and religious restrictions, the only communities that had a system of education for males. Apart from these three castes, education for all other castes and communities was very limited, consisting of listening to stories from religious texts like the Puranas or to kirtans, and thus the common masses remained illiterate and backward. Hence Shivaji was compelled to use people from these three educated communities – Marathi Brahmins, CKPs and Saraswat Brahmins – for civilian posts, as these required education and intellectual maturity. However, in this period these three, as well as other communities, depending on caste, also contributed their share to Shivaji's "Swaraj" (self-rule) by serving as cavalry soldiers, commanders, mountaineers, seafarers, etc.
During this period, some prominent CKPs, like Pilaji Prabhu Deshpande (the son of Baji Prabhu Deshpande) and Shamji Kulkarni (the son of Raoji Narao Kulkarni), were converted to Islam after being taken prisoner in war campaigns. After their escape, they were converted back to Hinduism using Brahminical rituals performed with the authorization of the Brahmins, under the minister "Panditrao". Thus, they were accepted back not only into Hinduism but also into the CKP community.
During the Peshwa era, the CKPs' main preceptor or Vedic guru was a Brahmin by the name of Abashastri Takle, who was referred to by the CKP community as "Gurubaba". The Brahmin administrators banned the sale of liquor to Brahmins, CKPs, Pathare Prabhus and Saraswat Brahmins, but there was no objection to other castes drinking it, or even to castes such as the Bhandaris manufacturing it. Gramanyas, i.e. "dispute[s] involving the supposed violation of the Brahmanical ritual code of behavior", were very common in that era, and some Chitpawans at times initiated gramanyas against other communities – the Prabhu communities (CKP, Pathare Prabhu), the Saraswats and the Shukla Yajurvedis. These did not come to fruition, however. The gramanyas against the CKP were analysed in depth by historians from the University of Toronto. Modern scholars quote statements showing that they were due to political malice – especially given that one gramanya was started by a certain Yamaji Pant, who had sent an assassin to murder a rival CKP. This was noted by Gangadharshastri Dikshit, who gave his verdict in favour of the CKPs. Abashastri Takle had used the scriptures to establish their "Vedokta". Similarly, the famous jurist Ramshastri Prabhune also supported the CKPs' Vedokta. Modern scholars conclude that the high-ranking positions the CKPs held in the administration and the military, and as statesmen, were a "double-edged sword". Historians, while analysing the gramanyas, state: "As statesmen, they were engulfed in the court intrigues and factions, and, as a result, were prone to persecution by opposing factions. On the other hand, their influence in the court meant that they could wield enough political clout to effect settlements in favor of their caste." The gramanyas during the Peshwa era finally culminated in favour of the CKPs, as the Vedokta had support from the Shastras, affirmed by two letters from Brahmins of Varanasi as well as one from Pune Brahmins ratified by Bajirao II himself. The late Indian professor of sociology Govind Sadashiv Ghurye commented on the strictness of the caste system during Peshwa rule in Maharashtra by noting that even an advanced caste such as the Prabhus had to establish its rights to carry on with the Vedic rituals.
As the Maratha empire/confederacy expanded in the 18th century, and given the nepotism of the Peshwas of Pune towards their own Chitpavan Brahmin caste, CKPs and other literate castes migrated for administration jobs to the new Maratha ruling states, such as the Bhosales of Nagpur, the Gaekwads, the Scindias and the Holkars.
The Gaekwads of Baroda and the Bhosale of Nagpur gave preference to CKPs in their administration.
In 1801–1802 CE (1858 Samvat), a Pune-based council of 626 Brahmins from Maharashtra, Karnataka and other areas made a formal declaration that the CKPs are twice-born (upper-caste) people who are expected to follow the thread ceremony (munja).
British era and later
During the British colonial era, the two literate communities of Maharashtra, namely the Brahmins and the CKP were the first to adopt western education with enthusiasm and prospered with opportunities in the colonial administration. A number of CKP families also served the semi-independent princely states in Maharashtra and other regions of India, such as Baroda.
The British era of the 1800s and 1900s saw publications dedicated to finding sources of CKP history.
The book 'Prabhu Kul Deepika' gives the gotras (rishi names), pravaras, etc. of the CKP caste.
Another publication, "Kayastha-mitra"(Volume 1, No.9. Dec 1930) gives a list of north Indian princely families that belonged to the CKP caste.
Rango Bapuji Gupte, the CKP representative of the deposed Raja Pratapsinh Bhosale of Satara, spent 13 years in London in the 1840s and 1850s pleading, without success, for the restoration of the ruler. At the time of the Indian rebellion of 1857, Rango tried to raise a rebel force to fight the British, but the plan was thwarted and most of the conspirators were executed. Rango Bapuji, however, escaped from his captivity and was never found.
At times there were gramanyas, also known as "Vedokta disputes", initiated by certain individuals who tried to stop the CKPs' rights to upanayana. These individuals based their opinion on the belief that no true Kshatriyas existed in the Kali Yuga; however, the upanayana for CKPs was supported by prominent Brahmin arbitrators like Gaga Bhatt and Ramshastri Prabhune, who gave decisions in favour of the community. In the final gramanya, started by Neelkanthashastri and his relative Balaji Pant Natu, a rival of the CKP Vedic scholar V. S. Parasnis at the court of Satara, the Shankaracharya himself intervened as arbiter and gave his verdict by fully endorsing the rights of the CKP over the Vedas. The Shankaracharya's letter is addressed to all Brahmins, and he refers to various Shastras, earlier verdicts in favour of the CKPs, as well as letters about the lineage of the CKP, to make his decision and void the dispute started by Natu.
When the prominent Marathi historian Vishwanath Kashinath Rajwade contested their claimed Kshatriya status in a 1916 essay, the CKP writer Prabodhankar Thackeray wrote a text outlining the identity of the caste, and its contributions to the Maratha empire. In this text, Gramanyachya Sadhyant Itihas, he wrote that the CKPs "provided the cement" for Shivaji's swaraj (self-rule) "with their blood".
Gail Omvedt concludes that during the British era, the overall literacy of Brahmins and CKPs was overwhelmingly high, as opposed to that of others, such as the Kunbis and Marathas, for whom it was strikingly low.
In 1902, all communities other than Marathi Brahmins, Saraswat Brahmins, Prabhus (Chandraseniya Kayastha Prabhus, Pathare Prabhus) and Parsis were considered backward, and 50% reservation was provided for them by the princely state of Kolhapur. In 1925, the only communities that were not considered backward by the British Government in the Bombay Presidency were the Brahmins, CKPs, Pathare Prabhus, Marwaris, Parsis, Banias and Christians.
According to studies by D. L. Sheth, the former director of the Centre for the Study of Developing Societies (CSDS) in India, educated upper castes and communities – Punjabi Khatris, Kashmiri Pandits, CKPs, Chitpawans, Nagar Brahmins, South Indian Brahmins, Bhadralok Bengalis, etc., along with the Parsis and the upper crusts of Muslim and Christian society – were among the Indian communities that, at the time of Indian independence in 1947, constituted the middle class and were traditionally "urban and professional" (following professions like those of doctors, lawyers, teachers and engineers). According to P. K. Varma, "education was a common thread that bound together this pan Indian elite", and almost all members of these communities could read and write English and were educated "beyond school".
Culture
The mother tongue of most of the community is now Marathi, though in Gujarat they also communicate with their neighbours in Gujarati, and use the Gujarati script, while those in Maharashtra speak English and Hindi with outsiders, and use the Devanagari script.
The CKP historically performed three "Vedic karmas" (studying the Vedas, performing fire sacrifice and giving alms), as opposed to full ("Shatkarmi") Brahmins, who performed six Vedic duties, which also include accepting gifts, teaching the Vedas to others and performing Vedic rites for others.
They have Vedic thread ceremonies ("munja" in Marathi) for male children and a death pollution period of 10 days. Educationally and professionally, 20th-century research showed that the Saraswats, CKPs, Deshasthas and Chitpawans were quite similar. Researcher and professor Dr. Neela Dabir sums it up as follows: "In Maharashtra, for instance, the family norms among the Saraswat Brahmins and CKPs were similar to those of the Marathi Brahmins". However, she also criticizes these communities by concluding that until the 20th century, the Marathi Brahmin, CKP and Saraswat Brahmin communities, due to their upper-caste ritualistic norms, traditionally discouraged widow remarriage. This resulted in distress in the lives of widows from these castes, as opposed to widows from other Marathi Hindu castes.
They worship Ganesh, Vishnu and other Hindu gods. Many are devotees of Sai Baba of Shirdi. Some CKPs may also be devotees of religious swamis from their own caste – among them "Gajanan Maharaj (Gupte)" – who took samadhis at Kalyan (in 1919) and Nasik (in 1946). Many CKP clans have the Ekvira temple at Karle as their family deity, whereas others worship Vinzai, Kadapkarin or Janani as their family deity.
The CKPs share with the upper-caste communities many common rituals and the study of the Vedas and Sanskrit. Unlike most upper-caste Marathi communities, however, the CKPs, through their interaction with Muslims and their residence in the coastal Konkan region, have adopted a diet which includes meat, fish, poultry and eggs.
CKPs have had a progressive attitude regarding female education compared to other communities. For example, Dr. Christine Dobbin's research concludes that the educationally advanced communities of the 1850s – the CKPs, Pathare Prabhus, Saraswats, Daivadnya Brahmins and the Parsis – were the first communities in the Bombay Presidency that allowed female education.
Notable people
Baji Prabhu Deshpande (1615–1660), commander of Shivaji's forces who along with his brother died defending Vishalgad in 1660
Murarbaji Deshpande (?–1665), commander of Shivaji's forces who died defending the fort of Purandar against the Mughals in 1665
Sakharam Hari Gupte (1735–1779), a general of Raghunathrao Peshwa, responsible for conquering Attock on the banks of the Indus and repelling the Durrani ruler Ahmad Shah Abdali from India in the 1750s. He was later involved in the plot against Peshwa Narayanrao.
Vithal Sakharam Parasnis (17xx–18xx), Sanskrit, Vedic and Persian scholar; consultant to the British historian James Grant Duff; author of the Sanskrit "Karma Kalpadrum" (a manual for Hindu rituals); first head of the school opened by Pratapsimha to teach Sanskrit to the boys of the Maratha caste
Lakshman Jagannath Vaidya, Dewan Bahadur of the princely state of Baroda during the British Raj era.
Narayan Jagannath Vaidya (18xx–1874), introduced educational reforms in Mysore and Sindh (now in Pakistan). The Narayan Jagannath High School (popularly known as NJV School in Karachi) is named after him to acknowledge his contributions to education in the region.
Rango Bapuji Gupte (1800 – missing 5 July 1857), lawyer for Pratapsingh of Satara, tried to organise a rebellion against the British in 1857.
Mahadev Bhaskar Chaubal (1857–1933), Indian-origin British-era Chief Justice of the Bombay High Court. Member of the Executive Council of the Governor of Bombay in 1912 and Member of the Royal Commission on Public Services in India.
Ram Ganesh Gadkari (1885–1919), playwright and poet who was presented the Kalpana Kuber and Bhasha Prabhu awards
Shankar Abaji Bhise (1867–1935), scientist and inventor with 200 inventions and 40 patents. The American scientific community referred to him as the "Indian Edison".
Narayan Murlidhar Gupte (1872–1947), Marathi poet and a scholar of Sanskrit and English.
Prabodhankar Thackeray (1885–1973), anti-dowry, anti-untouchability social activist, politician and author. Father of Bal Thackeray
Shankar Ramchandra Bhise (1894–1971), popularly known as "Acharya Bhise" or "Bhise Guruji", was a social reformer, educationalist and novelist devoted to the education and upliftment of the Adivasi community in the early 20th century.
Vasudeo Sitaram Bendrey (1894–1986), historian, credited for discovering the true portrait of Shivaji and creating records called "Bendrey's indices". He won the Maharashtra state award for his biography on Sambhaji.
Gangadhar Adhikari (1898–1981), Indian communist leader, former general secretary of the Communist Party of India and prominent scientist.
Surendranath Tipnis, social reformer and the chairman of the Mahad Municipality in the early 1900s. Helped Ambedkar during the Mahad Satyagraha by declaring its public spaces open to untouchables. Awarded the titles 'Dalitmitra' (friend of the Dalits) and 'Nanasaheb'.
C. D. Deshmukh (1896–1982), first recipient of the Jagannath Shankarseth Sanskrit Scholarship, awardee of the Frank Smart Prize from the University of Cambridge, topper of the ICS Examination held in London, first Indian Governor of the RBI, first Finance Minister of independent India and tenth vice-chancellor of the University of Delhi.
Dattatreya Balakrishna Tamhane (1912–2014), a Gandhian freedom fighter, litterateur and social reformer. He won the Maharashtra State government's award for literature.
B. T. Ranadive (1904–1990), popularly known as BTR, an Indian communist politician and trade union leader.
Kusumavati Deshpande (1904–1961), Marathi writer and first female president of the Marathi Sahitya Sammelan. Wife of the Marathi poet Atmaram Ravaji Deshpande.
Kumarsen Samarth, film director, his biggest success being the 1955 Marathi film Shirdi che Saibaba. Father of actresses Nutan and Tanuja and husband of Shobhana Samarth.
Shobhna Samarth (1916–2000), film actress of the 1940s. She was the mother of actresses Nutan and Tanuja.
Kamal Ranadive (1917–2001), prominent Indian biologist, well known for her work on the relationship between viruses and cancer.
Ahilya Rangnekar (1922–2009), founder of the Maharashtra state unit of the All India Democratic Women's Association. Leader of the Communist Party of India and B. T. Ranadive's younger sister.
Nalini Jaywant (1926–2010), film actress of the 1940s and 1950s. She was the first cousin of Shobhna Samarth
Vijaya Mehta, actor and director on Marathi stage, television and film
Bal Thackeray (1926–2012), founder of Shiv Sena and founder-editor of the Saamana newspaper
Anant Damodar Raje (1929–2009), Indian architect and academic.
Kushabhau Thakre (1922–2003), politician and former party president of the Bharatiya Janata Party.
Arun Shridhar Vaidya (1926–1986), 13th Chief of Army Staff of the Indian Army.
Mrinal Gore (1928–2012), Socialist Party leader of India. She earned the sobriquet Paaniwali Bai (water lady) for her efforts to bring a drinking water supply to Goregaon, a North Mumbai suburb.
Bhalachandra Vaidya (1928–2018), known as "Bhai" Vaidya (bhai means brother in Hindi), was Mayor of Pune city, a freedom fighter and a reformer who went to jail 28 times fighting for the cause of Dalits, farmers and backward classes. He was known for his honesty and non-corrupt attitude.
Bhalchandra Gopal Deshmukh (1929–2011), ex-cabinet secretary of India and author of several books
Nutan (1936–1991), held the record of five Filmfare Best Actress awards for over 30 years, until it was matched by her niece Kajol Mukherjee in 2011.
Shashikumar Madhusudan Chitre (1936–2021), Indian astrophysicist and mathematician, best known for his work on solar physics and gravitational lensing. He was awarded the Padma Bhushan in 2012.
Shrinivas Khale (1926–2011), Padma Bhushan awardee, music composer in five languages: Marathi, Hindi, Sanskrit, Bengali and Gujarati.
Dilip Purushottam Chitre (1938–2009), well-known Marathi-English writer and poet, recipient of the Sahitya Akademi Award.
References
Notes
Citations
Prabhu Communities of Maharashtra
Kayastha
Social groups of Maharashtra
Ethnoreligious groups in Asia
Marathi people |
1730437 | https://en.wikipedia.org/wiki/Mathcad | Mathcad | Mathcad is computer software for the verification, validation, documentation and re-use of mathematical calculations in engineering and science, notably mechanical, chemical, electrical, and civil engineering. Released in 1986 on DOS, it introduced live editing (WYSIWYG) of typeset mathematical notation in an interactive notebook, combined with automatic computations. It was originally developed by Mathsoft, and since 2006 has been a product of Parametric Technology Corporation.
History
Mathcad was conceived and developed by Allen Razdow at his company Mathsoft. It was released in 1986. It was the first system to support WYSIWYG editing and recalculation of mathematical calculations mixed with text. It was also the first to check the consistency of engineering units through the full calculation. Other equation solving systems existed at the time, but did not provide a notebook interface: Software Arts' TK Solver was released in 1982, and Borland's Eureka: The Solver was released in 1987.
Mathcad was acquired by Parametric Technology in April 2006.
Mathcad was named "Best of '87" and "Best of '88" by PC Magazine's editors.
Overview
Mathcad's central interface is an interactive notebook in which equations and expressions are created and manipulated in the same graphical format in which they are presented (WYSIWYG). This approach was adopted by systems such as Mathematica, Maple, Macsyma, MATLAB, and Jupyter.
Mathcad today includes some of the capabilities of a computer algebra system, but remains oriented towards ease of use and documentation of numerical engineering applications.
Mathcad is part of a broader product development system developed by PTC, addressing analytical steps in systems engineering. It integrates with PTC's Creo Elements/Pro, Windchill, and Creo Elements/View. Its live feature-level integration with Creo Elements/Pro enables Mathcad analytical models to be directly used in driving CAD geometry, and its structural awareness within Windchill allows live calculations to be re-used and re-applied toward multiple design models.
Summary of capabilities
The Mathcad interface allows users to combine a variety of different elements (mathematics, descriptive text, and supporting imagery) into a worksheet, in which dependent calculations are dynamically recalculated as inputs change. This allows for simple manipulation of input variables, assumptions, and expressions. Mathcad's functionality includes:
Numerous numeric functions for statistics, data analysis, image processing, and signal processing;
Ubiquitous dimensionality checking and simplification (illustrated in the sketch after this list);
Solution of systems of equations, such as ODEs and PDEs using several methods;
Root finding for polynomials and other functions;
Symbolic manipulation of mathematical expressions;
Parametric 2D and 3D plotting and discrete data plotting;
Use of standard, readable mathematical expressions within embedded program constructs;
Vector and matrix operations, including eigenvalues and eigenvectors;
Curve fitting and regression analysis;
Statistical and design of experiments functions and plot types, and evaluation of probability distributions;
Import from and export to other applications and file types, such as Microsoft Excel and MathML;
Cross references to other Mathcad worksheets;
Integration with other engineering applications, such as CAD, FEM, BIM and simulation tools (e.g., AutoCAD, Ansys, Revit), to aid in product design.
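The dimensionality checking noted in the list above can be illustrated with a short sketch. The following Python fragment uses the third-party pint library purely as an analogy for the behaviour Mathcad provides natively; it is not Mathcad's own engine or worksheet language, and the quantities are invented for the example.

import pint

ureg = pint.UnitRegistry()

force = 12.0 * ureg.newton
area = 3.0 * ureg.centimeter ** 2

# Units propagate and simplify through the calculation automatically.
pressure = (force / area).to(ureg.kilopascal)
print(pressure)  # 40.0 kilopascal

# Dimensionally inconsistent expressions are rejected rather than
# silently evaluated, which is the point of ubiquitous unit checking.
try:
    force + area
except pint.DimensionalityError as err:
    print("rejected:", err)

In Mathcad itself the equivalent check happens transparently in the worksheet as expressions are entered, with units re-simplified on every recalculation.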
Although Mathcad is mostly oriented to non-programmers, it is also used in more complex projects to visualize results of mathematical modeling by using distributed computing and coupling with programs written using more traditional languages such as C++.
Current releases
As of 2021, the latest release from PTC is Mathcad Prime 7.0.0.0. This release is a freemium variant: if the software is not activated after the 30-day Mathcad Prime trial, it is possible to continue using PTC Mathcad Express for an unlimited time as "PTC Mathcad Express Free-for-Life Engineering Calculations Software". This freemium pilot is a new marketing approach for PTC. Review and markup of engineering notes can now be done directly by team members without all of them requiring a full Mathcad Prime license.
The last release of the traditional (pre "Prime") product line, Mathcad 15.0, came out in June 2010 and shares the same worksheet file structure as Mathcad 14.0. The last service release, Mathcad 15.0 M050, which added support for Windows 10, was released in 2017. Mathcad 15.0 is no longer actively developed but in "sustained support".
Computer operating system platforms
Mathcad only runs on Microsoft Windows. Mathcad Prime 6.0 requires a 64-bit version of Windows 7, Windows 8.1 or Windows 10. Until 1998, Mathcad also supported Mac OS.
Support
Starting in 2011 (Mathcad 15.0), the first year of maintenance and support has been included in the purchase or upgrade price.
Release history
Screen captures of previous Mathcad versions
See also
Comparison of computer algebra systems
Comparison of numerical-analysis software
TK Solver
PTC:Creo
PTC:Windchill
SMath Studio, a freeware program similar to Mathcad
References
External links
Mathcad blogs
Free trial of Mathcad Prime – Mathcad Express
Computer-related introductions in 1986
Array programming languages
Computer algebra system software for Windows
Computer algebra systems
Computer vision software
Data visualization software
Dynamically typed programming languages
High-level programming languages
Linear algebra
Mathematical optimization software
Numerical analysis software for Windows
Numerical linear algebra
Numerical programming languages
Numerical software
Plotting software
Proprietary software
Regression and curve fitting software
Statistical programming languages
Time series software
Windows-only software |
2308057 | https://en.wikipedia.org/wiki/List%20of%20universities%20in%20Myanmar | List of universities in Myanmar | The following is a comprehensive list of universities in Myanmar, categorised by state and region. Nearly all major and national universities in Myanmar are in Yangon Region and Mandalay Region. The Burmese higher education system is entirely state-run, and its universities and colleges are organised along their fields of study. The country's 150-plus universities and colleges are administered by various government ministries. For example, liberal arts and science universities such as Yangon University and Mandalay University, as well as the technological universities, are run by the Ministry of Education, while the medical schools are run by the Ministry of Health. Private colleges offer international joint diplomas to residents in some fields, such as engineering, computing and business administration.
By state/division
Ayeyarwady Region
Javana Buddhist Academy (JBA), Kyaiklat
Bogalay Education College
Computer University, Hinthada
University of Computer Studies (Maubin)
Computer University, Pathein
Government Technical Institute, Wakema
Hinthada University
Maubin University
Myanmar Union Adventist Seminary
Myaungmya Education College
Pathein Education College
Pathein University
Technological University, Hinthada
Technological University, Maubin
Technological University, Pathein
Bago Region
Bago Degree College
Computer University, Pyay
Computer University, Taungoo
Paku Divinity School
Pyay Education College
Pyay Technological University
Pyay University
Taungoo Educational College
Taungoo University
Technological University, Taungoo
Chin State
Chin Christian College
Union Theological College
Zomi Theological College
Hakha College
Kachin State
Bhamo University
Computer University, Bhamo
Computer University, Myitkyina
Government Technical College, Mohnyin
Kachin Theological College
Mohnyin Degree College
Myitkyina Education College
Myitkyina University
Technological University, Bhamo
Technological University, Myitkyina
Kayah State
Computer University, Loikaw
Loikaw University
Technological University, Loikaw
Loikaw Education College
Kayin State
Computer University, Hpa-An
Hpa-An Education College
Hpa-An University
Technological University, Hpa-An
Magway Region
University of Computer Studies, Magway
University of Computer Studies (Pakokku)
Yenangyaung Government Technical Institute
Magway Education College
Magway University
Pakokku Education Degree College
Pakokku University
Technological University, Magway
Technological University, Pakokku
University of Community Health, Magway
University of Medicine, Magway
Pali University of Buddhism, Pakokku
Yenangyaung University
Government Technical Institute, Chauk
Government Technical Institute, Thayet
Government Technical Institute, Magway
Pakokku Nursing College
Mandalay Region
University of Mandalay
Computer University, Mandalay
Computer University, Meiktila
Government Technical Institute, Kyaukpadaung
Government Technical Institute, Pyinoolwin
Government Technical Institute, Yamethin
Defence Services Academy
Defence Services Technological Academy
Kyaukse University
London Business University
Mandalay Education College
Mandalay Institute of Nursing
Mandalay Regional Co-operative College
Mandalay Technological University
Mandalar Degree College
Meiktila Education College
Meiktila Institute of Economics
Meiktila University
Myanmar Aerospace Engineering University
Myanmar Institute of Information Technology
Myanmar Theological College, Mandalay
Myingyan Degree College
Nationalities Youth Resource Development Degree College, Mandalay
State Pariyatti Sasana University, Mandalay
Technological University, Kyaukse
Technological University, Mandalay
Technological University, Meiktila
University of Computer Studies, Mandalay
University of Culture, Mandalay
University of Dental Medicine, Mandalay
University of Distance Education, Mandalay
University of Foreign Languages, Mandalay
University of Forestry, Yezin
University of Medical Technology, Mandalay
University of Medicine, Mandalay
University of Paramedical Science, Mandalay
University of Pharmacy, Mandalay
University of Technology, Yadanabon Cyber City
University of Traditional Medicine, Mandalay
University of Veterinary Science, Yezin
Yadanabon University
Yezin Agricultural University
Myanmar Commercial Management Institute (MCMI)
Mandalay Business School
Mon State
Computer University, Thaton
Thaton Institute of Agriculture
Mawlamyine Education College
Mawlamyine Institute of Education
Mawlamyine University
Technological University (Mawlamyine)
Government Technical Institute (Mawlamyine)
Rakhine State
Computer University, Sittwe
Government Technical Institute, Kyaukphyu
Government Technical Institute, Thandwe
Kyaukphyu Education College
Sittwe University
Taunggup College
Technological University, Sittwe
Sagaing Region
Computer University, Kalay
Computer University, Monywa
Technological University, Sagaing
Sitagu World Buddhist University, Sagaing
Government Technical College, Shwebo
McNeilus Maranatha Christian College, Kalay
Monywa Education College
Monywa Institute of Economics
Monywa University
Sagaing Institute of Education
Co-operative University, Sagaing
Shwebo University
Technological University, Monywa
University for the Development of the National Races of the Union
University of Kalay
Technological University (Kalay)
Nationalities Youth Resource Development Degree College, Sagaing
Sagaing University
Sagaing University of Education
Shan State
Computer University, Lashio
Computer University, Kyaingtong
Computer University, Panglong
Computer University, Taunggyi
Kyaingtong University
Lashio University
Lashio Education College
Panglong University
Taunggyi Education College
Taunggyi University
Technological University, Kyaingtong
Technological University, Lashio
Technological University, Panglong
Technological University, Taunggyi
University of Medicine, Taunggyi
University of Computer Studies (Taunggyi)
Shan State Buddhist University, Taunggyi, Shan State
Tanintharyi Region
Computer University, Dawei
Computer University, Myeik
Dawei Education College
Dawei University
Myeik University
Technological University, Dawei
Technological University, Myeik
Yangon Region
Academy of Management and Technology (AMT)
Central Co-operative College, Phaunggyi
Co-operative University, Thanlyin
City University, Yangon
Dagon University
Defence Services Institute of Nursing and Paramedical Science
Defence Services Medical Academy
Hlegu Education College
Info Myanmar University
International Theravada Buddhist Missionary University
Karen Baptist Theological Seminary
Lorrain Theological College
Myanmar Institute of Theology
Myanmar Maritime University
National Management University of Myanmar
Nationalities Youth Resource Development Degree College, Yangon
State Pariyatti Sasana University, Yangon
Strategy First University
STI Myanmar University
Technological University, Hmawbi
Technological University, Thanlyin
Thingangyun Education College
University of Computer Studies, Yangon
University of Culture, Yangon
University of Dental Medicine, Yangon
University of Distance Education, Yangon
University of East Yangon
University of Foreign Languages, Yangon
University of Information Technology, Yangon
University of Medical Technology, Yangon
University of Medicine 1, Yangon
University of Medicine 2, Yangon
University of Paramedical Science, Yangon
University of Pharmacy, Yangon
University of Public Health, Yangon
University of West Yangon
Victoria University College
West Yangon Technological University
Yangon Institute of Economics
Yangon Institute of Education
Yangon Institute of Marine Technology
Yangon Institute of Nursing
Yangon Technological University
Yangon University
Yankin Education College
Gallery
See also
Education in Myanmar
References
Higher education in Myanmar
Universities
Myanmar |
21222062 | https://en.wikipedia.org/wiki/Brian%20Eno | Brian Eno | Brian Peter George St John le Baptiste de la Salle Eno (; born Brian Peter George Eno, 15 May 1948) is a British musician, composer, record producer and visual artist best known for his pioneering work in ambient music and contributions to rock, pop and electronica. A self-described "non-musician", Eno has helped introduce unique conceptual approaches and recording techniques to contemporary music. He has been described as one of popular music's most influential and innovative figures.
Born in Suffolk, Eno studied painting and experimental music at the art school of Ipswich Civic College in the mid 1960s, and then at Winchester School of Art. He joined glam rock group Roxy Music as its synthesiser player in 1971, recording two albums with the group then departing in 1973 amidst tensions with the group's frontman Bryan Ferry. Eno recorded a number of solo albums beginning with Here Come the Warm Jets (1974). In the mid-1970s, he began exploring a minimalist direction on releases such as Discreet Music (1975) and Ambient 1: Music for Airports (1978), coining the term "ambient music" with the latter.
Alongside his solo work, Eno collaborated frequently with other musicians in the 1970s, including Robert Fripp, Harmonia, Cluster, Harold Budd, David Bowie, and David Byrne. He also established himself as a sought-after producer, working on albums by John Cale, Jon Hassell, Laraaji, Talking Heads, Ultravox, and Devo, as well as the no wave compilation No New York (1978). In subsequent decades, Eno continued to record solo albums and produce for other artists, most prominently U2 and Coldplay, alongside work with artists such as Daniel Lanois, Laurie Anderson, Grace Jones, Slowdive, Karl Hyde, James, Kevin Shields, and Damon Albarn.
Dating back to his time as a student, Eno has also worked in other media, including sound installations, film, and writing. In the mid-1970s, he co-developed Oblique Strategies, a deck of cards featuring aphorisms intended to spur creative thinking. From the 1970s onwards, Eno's installations have included the sails of the Sydney Opera House in 2009 and the Lovell Telescope at Jodrell Bank in 2016. An advocate of a range of humanitarian causes, Eno writes on a variety of subjects and is a founding member of the Long Now Foundation. In 2019, Eno was inducted into the Rock and Roll Hall of Fame as a member of Roxy Music.
Early life
Brian Peter George Eno was born on 15 May 1948 in the village of Melton, Suffolk, the son of William Arnold Eno (1916–1988), a postal worker and clock and watch repairer, and Maria Alphonsine Eno (née Buslot; 1922–2005), a Belgian national. Eno is the eldest of their three children; he has a brother, Roger, and sister Arlette. They have a half-sister, Rita, from their mother's previous relationship. The surname Eno is derived from the French Huguenot surname Hennot.
In 1959, Eno attended St Joseph's College in Ipswich, a Catholic grammar school of the De La Salle Brothers order. His confirmation name is derived from the school, giving him the full name Brian Peter George St. John le Baptiste de la Salle Eno. In 1964, after earning four O-levels, including ones in art and maths, Eno had developed an interest in art and music and had no interest in a "conventional job". He enrolled at the Ipswich School of Art, taking the newly established Groundcourse foundation art degree created by the new media artist Roy Ascott. Here, one of Eno's teachers was the artist Tom Phillips, who became a lifelong friend and encouraged his musical ability. Phillips recalled the pair playing "piano tennis", in which, after collecting pianos, the two stripped and aligned them in a hall and struck them with tennis balls. In 1966, Eno began studying for a diploma in fine arts at the Winchester School of Art, from which he graduated in 1969. It was at Winchester that Eno attended a lecture by the future Who guitarist Pete Townshend, also a former student of Ascott's; Eno cites that moment as when he realised he could make music without formal training.
Whilst at school, Eno used a tape recorder as a musical instrument, and in 1964 he joined his first group, the Black Aces, a four-piece with Eno on drums that he formed with three friends he had met at the youth club he visited in Melton. In late 1967, Eno pursued music once more, forming the Merchant Taylor's Simultaneous Cabinet, an avant-garde music, art and performance trio with two Winchester undergraduates. This was followed by short stints in multiple avant-garde and college groups, including The Maxwell Demon and Dandelion and The War Damage, in which Eno featured as a frontman who adopted a theatrical persona on stage and later played the guitar.
Career
1970s
In 1969, after separating from his wife, Eno moved to London, where his professional music career began. He became involved with the Scratch Orchestra and the Portsmouth Sinfonia; Eno's first appearance on a commercially released recording is the Deutsche Grammophon edition of The Great Learning (1971) by Cornelius Cardew and the Scratch Orchestra, which features Eno as one of the voices on the track "Paragraph 7". Another early recording was the soundtrack to Berlin Horse (1970), a nine-minute avant-garde art film by Malcolm Le Grice. At one point, Eno earned money for three months as a paste-up assistant for the advertisement section of a local paper. He then quit and became an electronics dealer, buying old speakers and building new cabinets for them before selling them to friends.
In 1971, Eno co-formed the glam and art rock band Roxy Music. He had a chance meeting with saxophonist Andy Mackay at a train station, which led to him joining the band. Eno later said: "If I'd walked ten yards further on the platform, or missed that train, or been in the next carriage, I probably would have been an art teacher now". Eno played on their first two albums, Roxy Music (1972) and For Your Pleasure (1973), and is credited as "Eno" with playing the VCS 3 synthesiser, tape effects, backing vocals, and co-producer. Initially Eno did not appear on stage at their live shows, but operated the group's mixing desk at the centre of the concert venue where he had a microphone to sing backup vocals. After the group secured a record deal, Eno joined them on stage playing the synthesiser and became known for his flamboyant costumes and makeup, partly stealing the spotlight from lead singer Bryan Ferry. After touring For Your Pleasure ended in mid-1973, Eno quit the band. He cited disagreements with Ferry and the frontman's insistence on being in command of the group, which affected Eno's ability to incorporate his own ideas.
Almost immediately after his exit from Roxy Music, Eno embarked on his solo career. He released four albums of electronically inflected art pop: Here Come the Warm Jets (1973), Taking Tiger Mountain (By Strategy) (1974), Another Green World (1975) and Before and After Science (1977). Tiger Mountain contains "Third Uncle", which became one of Eno's best-known songs, owing in part to its later covers by Bauhaus and 801. Critic Dave Thompson writes that the song is "a near punk attack of riffing guitars and clattering percussion, 'Third Uncle' could, in other hands, be a heavy metal anthem, albeit one whose lyrical content would tongue-tie the most slavish air guitarist." During this period, Eno also played three dates with Phil Manzanera in 801, a supergroup that performed more or less reworked selections from albums by Eno, Manzanera and Quiet Sun, as well as covers of songs by The Beatles and The Kinks.
In 1973, King Crimson founder and guitarist Robert Fripp collaborated with Eno and his tape delay system to make experimental, ambient, and drone music. The result was (No Pussyfooting) (1973), released as their duo name of Fripp & Eno. Fripp subsequently referred to the tape delay recording method as Frippertronics. The pair followed their debut with a second album Evening Star (1975), and completed a European tour. Eno produced the albums The Portsmouth Sinfonia Plays the Popular Classics (1974) and Hallelujah! The Portsmouth Sinfonia Live at the Royal Albert Hall (1974) by the Portsmouth Sinfonia, both of which feature Eno playing the clarinet. He also deployed the orchestra's dissonant string section on Taking Tiger Mountain (By Strategy). Eno went on to work with several performers in the orchestra on his Obscure label, including Gavin Bryars and Michael Nyman. Later in 1974, Eno and Kevin Ayers contributed music for the experimental/spoken word album Lady June's Linguistic Leprosy (1974) by poet June Campbell Cramer.
Ambient music
Eno released a number of eclectic ambient electronic and acoustic albums. He coined the term "ambient music", which is designed to modify the listener's perception of the surrounding environment. In the liner notes accompanying Ambient 1: Music for Airports, Eno wrote: "Ambient music must be able to accommodate many levels of listening attention without enforcing one in particular, it must be as ignorable as it is interesting."
Eno was hit by a taxi while crossing the street in January 1975 and spent several weeks recuperating at home. His girlfriend brought him an old record of harp music, which he lay down to listen to. He realised that he had set the amplifier to a very low volume and that one channel of the stereo was not working, but he lacked the energy to get up and correct it. "This presented what was for me a new way of hearing music – as part of the ambience of the environment just as the colour of the light and sound of the rain were parts of the ambience."
Eno's first work of ambient music was Discreet Music (1975), again created with an elaborate tape-delay methodology, which he diagrammed on the back cover of the LP; it is considered the landmark album of the genre. This was followed by his Ambient series: Music for Airports (Ambient 1), The Plateaux of Mirror (Ambient 2, featuring Harold Budd on keyboard), Day of Radiance (Ambient 3, with the American composer Laraaji playing zither and hammered dulcimer) and On Land (Ambient 4).
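The long tape-delay loop behind Discreet Music and the Fripp collaborations can be approximated digitally. The Python sketch below is only an illustration of the principle – the signal is re-recorded onto the loop at a reduced level on each pass – and its delay time, feedback gain and test signal are invented for the example rather than taken from Eno's actual setup.

import numpy as np

SR = 44100        # sample rate in Hz
DELAY_S = 4.0     # seconds between the record and playback heads (illustrative)
FEEDBACK = 0.6    # fraction of the delayed signal re-recorded (illustrative)

def tape_delay(dry, delay_s=DELAY_S, feedback=FEEDBACK, echoes=8):
    """Layer a phrase with progressively quieter delayed copies of itself,
    emulating two tape machines joined in a long feedback loop. This simple
    superposition is exact when the phrase is shorter than the delay."""
    d = int(delay_s * SR)
    out = np.zeros(len(dry) + d * echoes)
    gain = 1.0
    for k in range(echoes + 1):              # each pass of the loop is one echo
        out[k * d : k * d + len(dry)] += gain * dry
        gain *= feedback                     # the copy fades a little every pass
    return out

# A one-second synthesised tone stands in for the live input.
t = np.linspace(0.0, 1.0, SR, endpoint=False)
phrase = 0.3 * np.sin(2 * np.pi * 220.0 * t) * np.hanning(SR)
result = tape_delay(phrase)
print(f"{len(result) / SR:.1f} s of output from {len(phrase) / SR:.1f} s of input")

Each delayed copy re-enters the mix quieter than the last, so a second of playing decays slowly over half a minute of output – the behaviour Eno diagrammed on the album's back cover.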
1980s
Eno provided a film score for Herbert Vesely's Egon Schiele – Exzess und Bestrafung (1980), also known as Egon Schiele – Excess and Punishment. The ambient-style score was an unusual choice for an historical piece, but it worked effectively with the film's themes of sexual obsession and death.
Before Eno made On Land, Robert Quine played him Miles Davis' "He Loved Him Madly" (1974). Eno stated in the liner notes for On Land, "Teo Macero's revolutionary production on that piece seemed to me to have the 'spacious' quality I was after, and like Federico Fellini's 1973 film Amarcord, it too became a touchstone to which I returned frequently."
From 1980 to 1981, during which time Eno travelled to Ghana for a festival of West African music, he collaborated with David Byrne of Talking Heads. Their album My Life in the Bush of Ghosts was built around radio broadcasts Eno had collected whilst living in the United States, along with sampled music recordings from around the world, transposed over music predominantly inspired by African and Middle Eastern rhythms.
In 1983, Eno collaborated with his brother, Roger Eno, and Daniel Lanois on the album Apollo: Atmospheres and Soundtracks that had been commissioned by Al Reinert for his film For All Mankind (1989). Tracks from the album were subsequently used in several other films, including Trainspotting.
1990s
In September 1992, Eno released Nerve Net, an album utilising heavily syncopated rhythms, with contributions from several former collaborators including Fripp, Benmont Tench, Robert Quine and John Paul Jones. This album was a last-minute substitution for My Squelchy Life, which contained more pop-oriented material, with Eno on vocals. Several tracks from My Squelchy Life later appeared on 1993's retrospective box set Eno Box II: Vocals, and the entire album was eventually released in 2014 as part of an expanded re-release of Nerve Net. Eno also released The Shutov Assembly in 1992, recorded between 1985 and 1990. This album embraces atonality and abandons most conventional concepts of modes, scales and pitch. Emancipated from the constant attraction towards the tonic that underpins the Western tonal tradition, the gradually shifting music originally eschewed any conventional instrumentation, save for treated keyboards.
During the 1990s, Eno worked increasingly with self-generating musical systems, the results of which he called generative music. This allows the listener to hear music that slowly unfolds in almost infinite non-repeating combinations of sound. In one instance of generative music, Eno calculated that it would take almost 10,000 years to hear the entire possibilities of one individual piece. Eno achieves this through the blending of several independent musical tracks of varying length. Each track features different musical elements and in some cases, silence. When each individual track concludes, it starts again re-configuring differently with the other tracks. He has presented this music in his own art and sound installations and those in collaboration with other artists, including I Dormienti (The Sleepers), Lightness: Music for the Marble Palace, Music for Civic Recovery Centre, The Quiet Room, and Music for Prague.
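The scale of these combinations is easy to demonstrate. In the hypothetical Python sketch below, the loop lengths are invented for illustration and are not drawn from any actual Eno piece; because the durations share no common factor, the combined texture only repeats after the least common multiple of all of them.

from math import lcm

# Hypothetical loop lengths in seconds; mutually prime values keep the
# combined texture from realigning for as long as possible.
loop_lengths = [23, 29, 31, 37, 41]

period = lcm(*loop_lengths)
years = period / (365.25 * 24 * 3600)
print(f"the full combination repeats only every {period:,} s (~{years:.2f} years)")

def state_at(t):
    """Phase of each loop at time t, i.e. which point of its cycle is playing."""
    return [t % length for length in loop_lengths]

print(state_at(0))     # all loops start together...
print(state_at(1000))  # ...then drift into ever-new alignments

Five short loops already take almost a year to realign; longer recordings, or more of them, push the period towards the enormous durations Eno describes.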
In 1993, Eno worked with the Manchester rock band James to produce two albums, Laid and Wah Wah. Laid was met with notable critical and commercial success both in the UK and the United States after its release in 1993. Wah Wah, in comparison, received a more lukewarm response after its release in 1994.
One of Eno's better-known collaborations was with the members of U2, Luciano Pavarotti and several other artists in a group called Passengers. They produced the 1995 album Original Soundtracks 1, which reached No. 76 on the US Billboard charts and No. 12 in the UK Albums Chart. It featured a single, "Miss Sarajevo", which reached number 6 in the UK Singles Chart. This collaboration is chronicled in Eno's book A Year with Swollen Appendices, a diary published in 1996.
In 1996, Eno scored the six-part fantasy television series Neverwhere.
2000s
In 2004, Fripp and Eno recorded another ambient music collaboration album, The Equatorial Stars.
Eno returned in June 2005 with Another Day on Earth, his first major album since Wrong Way Up (with John Cale) to prominently feature vocals (a trend he continued with Everything That Happens Will Happen Today). The album differs from his 1970s solo work due to the impact of technological advances on musical production, evident in its semi-electronic production.
In early 2006, Eno collaborated with David Byrne again for the reissue of My Life in the Bush of Ghosts, in celebration of the influential album's 25th anniversary. Eight previously unreleased tracks, recorded during the initial sessions in 1980/81, were added to the album.
An unusual interactive marketing strategy was employed for the re-release: the album's promotional website offered anyone the ability to officially and legally download the multi-tracks of two songs from the album, "A Secret Life" and "Help Me Somebody". This allowed listeners to remix and upload new mixes of these tracks to the website for others to listen to and rate.
In late 2006, Eno released 77 Million Paintings, a program of generative video and music specifically for home computers. As its title suggests, there are 77 million possible combinations of the video slides prepared by Eno, so the viewer will see different combinations each time the program is launched. Likewise, the accompanying music is generated by the program so that it is almost certain the listener will never hear the same arrangement twice. The second edition of 77 Million Paintings, featuring improved morphing and a further two layers of sound, was released on 14 January 2008. In June 2007, on commission for the Yerba Buena Center for the Arts in San Francisco, California, Annabeth Robinson (AngryBeth Shortbread) recreated 77 Million Paintings in Second Life.
The Nokia 8800 Sirocco Edition mobile phone, released in late 2006, features exclusive ringtones and sounds composed by Eno. Although he was previously uninterested in composing ringtones due to the limited sound palette of monophonic ringtones, phones at this point primarily used audio files. Between 8 January 2007 and 12 February 2007, ten units of Nokia 8800 Sirocco Brian Eno Signature Edition mobile phones, individually numbered and engraved with Eno's signature, were auctioned off. All proceeds went to two charities chosen by Eno: the Keiskamma AIDS treatment program and the World Land Trust.
In 2007, Eno's music was featured in a movie adaptation of Irvine Welsh's best-selling collection Ecstasy: Three Tales of Chemical Romance. He also appeared playing keyboards on Voila, Belinda Carlisle's solo album sung entirely in French.
Also in 2007, Eno contributed a composition titled "Grafton Street" to Dido's third album, Safe Trip Home, released in November 2008.
In 2008, he released Everything That Happens Will Happen Today with David Byrne, designed the sound for the video game Spore and wrote a chapter to Sound Unbound: Sampling Digital Music and Culture, edited by Paul D. Miller (a.k.a. DJ Spooky).
In June 2009, Eno curated the Luminous Festival at Sydney Opera House, culminating in his first live appearance in many years. "Pure Scenius" consisted of three live improvised performances on the same day, featuring Eno, Australian improvisation trio The Necks, Karl Hyde from Underworld, electronic artist Jon Hopkins and guitarist Leo Abrahams.
Eno scored the music for Peter Jackson's film adaptation of The Lovely Bones, released in December 2009.
2010s
Eno released another solo album on Warp in late 2010. Small Craft on a Milk Sea, made in association with long-time collaborators Leo Abrahams and Jon Hopkins, was released on 2 November in the United States and 15 November in the UK. The album included five compositions adapted from tracks Eno wrote for The Lovely Bones.
He later released Drums Between the Bells, a collaboration with poet Rick Holland, on 4 July 2011.
In November 2012, Eno released Lux, a 76-minute composition in four sections, through Warp.
Eno worked with French–Algerian Raï singer Rachid Taha on Taha's Tékitoi (2004) and Zoom (2013) albums, contributing percussion, bass, brass and vocals. Eno also performed with Taha at the Stop the War Coalition concert in London in 2005.
In April 2014, Eno sang on, co-wrote, and co-produced Damon Albarn's Heavy Seas of Love, from his solo debut album Everyday Robots.
In May 2014, Eno and Underworld's Karl Hyde released Someday World, featuring various guest musicians: from Coldplay's Will Champion and Roxy Music's Andy Mackay to newer names such as 22-year-old Fred Gibson, who helped produce the record with Eno. Within weeks of that release, a second full-length album was announced titled High Life. This was released on 30 June 2014.
In January 2016, a new Eno ambient soundscape was premiered as part of Michael Benson's planetary photography exhibition "Otherworlds" in the Jerwood Gallery of London's Natural History Museum. In a statement Eno commented on the unnamed half-hour piece:
The Ship, an album with music from Eno's installation of the same name was released on 29 April 2016 on Warp.
In September 2016, the Portuguese synthpop band The Gift released a single entitled "Love Without Violins". As well as singing on the track, Eno co-wrote and produced it. The single was released on the band's own record label, La Folie Records, on 30 September.
Eno's Reflection, an album of ambient, generative music, was released on Warp Records on 1 January 2017. It was nominated for a Grammy Award at the 60th Annual Grammy Awards in 2018.
In April 2018, Eno released The Weight Of History / Only Once Away My Son, a collaborative double A-side with Kevin Shields, for Record Store Day.
In 2019, Eno participated in DAU, an immersive art and cultural installation in Paris by Russian film director Ilya Khrzhanovsky evoking life under Soviet authoritarian rule. Eno contributed six auditory ambiances.
2020s
In March 2020, Eno and his brother, Roger Eno, released their collaborative album Mixing Colours.
Eno provided original music for Ben Lawrence's 2021 documentary Ithaka about John Shipton's battle to save his son, Julian Assange.
Record producer
From the beginning of his solo career in 1973, Eno was in demand as a record producer. The first album with Eno credited as producer was Lucky Leif and the Longships by Robert Calvert. Eno's lengthy string of producer credits includes albums for Talking Heads, U2, Devo, Ultravox and James. He also produced part of the 1993 album When I Was a Boy by Jane Siberry. He won the best producer award at the 1994 and 1996 BRIT Awards.
Eno describes himself as a "non-musician", using the term "treatments" to describe his modification of the sound of musical instruments and to separate his role from that of the traditional instrumentalist. His skill in using the studio as a compositional tool led in part to his career as a producer. His methods were recognised at the time (the mid-1970s) as unique: on Genesis's The Lamb Lies Down on Broadway he is credited with 'Enossification', on Robert Wyatt's Ruth Is Stranger Than Richard with 'a Direct inject anti-jazz raygun', and on John Cale's Island albums as simply being "Eno".
Eno has contributed to recordings by artists as varied as Nico, Robert Calvert, Genesis, David Bowie, and Zvuki Mu, in various capacities such as use of his studio/synthesiser/electronic treatments, vocals, guitar, bass guitar, and as just being 'Eno'. In 1984, he (amongst others) composed and performed the "Prophecy Theme" for the David Lynch film Dune; the rest of the soundtrack was composed and performed by the group Toto. Eno produced performance artist Laurie Anderson's Bright Red album, and also composed for it. The work is avant-garde spoken word with haunting and magnifying sounds. Eno played on David Byrne's musical score for The Catherine Wheel, a project commissioned by Twyla Tharp to accompany her Broadway dance project of the same name.
He worked with Bowie as a writer and musician on Bowie's influential 1977–79 'Berlin Trilogy' of albums, Low, "Heroes" and Lodger, on Bowie's later album Outside, and on the song "I'm Afraid of Americans". Recorded in France and Germany, the spacey effects on Low were largely created by Eno, who played a portable EMS Synthi A synthesiser. Producer Tony Visconti used an Eventide Harmonizer to alter the sound of the drums, claiming that the audio processor "f–s with the fabric of time." After Bowie died in early 2016, Eno said that he and Bowie had been talking about taking Outside, the last album they'd worked on together, "somewhere new", and expressed regret that they wouldn't be able to pursue the project.
Eno co-produced The Unforgettable Fire (1984), The Joshua Tree (1987), Achtung Baby (1991), and All That You Can't Leave Behind (2000) for U2 with his frequent collaborator Daniel Lanois, and produced 1993's Zooropa with Mark "Flood" Ellis. In 1995, U2 and Eno joined forces to create the album Original Soundtracks 1 under the group name Passengers; songs from it included "Your Blue Room" and "Miss Sarajevo". Even though films are listed and described for each song, all but three are bogus. Eno also produced Laid (1993), Wah Wah (1994), Millionaires (1999) and Pleased to Meet You (2001) for James, performing as an extra musician on all four. He is credited for "frequent interference and occasional co-production" on their 1997 album Whiplash.
Eno played on the 1986 album Measure for Measure by Australian band Icehouse. He remixed two tracks for Depeche Mode, "I Feel You" and "In Your Room", both single releases from the album Songs of Faith and Devotion in 1993. In 1995, Eno provided one of several remixes of "Protection" by Massive Attack (originally from their Protection album) for release as a single.
In 2007, he produced the fourth studio album by Coldplay, Viva la Vida or Death and All His Friends, released in 2008. Also in 2008, he worked with Grace Jones on her album Hurricane, credited for "production consultation" and as a member of the band, playing keyboards, treatments and background vocals. He worked on the twelfth studio album by U2, again with Lanois, titled No Line on the Horizon. It was recorded in Morocco, the South of France and Dublin and released in Europe on 27 February 2009.
In 2011, Eno and Coldplay reunited and Eno contributed "enoxification" and additional composition on Coldplay's fifth studio album Mylo Xyloto, released on 24 October of that year.
The Microsoft Sound
In 1994, Microsoft designers Mark Malamud and Erik Gavriluk approached Eno to compose music for the Windows 95 project. The result was "The Microsoft Sound", the six-second start-up sound of the Windows 95 operating system, which Eno discussed in an interview with Joel Selvin in the San Francisco Chronicle.
Eno shed further light on the composition of the sound on the BBC Radio 4 show The Museum of Curiosity, admitting that he created it using a Macintosh computer, stating "I wrote it on a Mac. I've never used a PC in my life; I don't like them."
Video work
Eno has spoken of an early and ongoing interest in exploring light in a similar way to his work with sound. He started experimenting with the medium of video in 1978. Eno describes the first video camera he received, which initially became his main tool for creating ambient video and light installations:
"One afternoon while I was working in the studio with Talking Heads, the roadie from Foreigner, working in an adjacent studio, came in and asked whether anyone wanted to buy some video equipment. I'd never really thought much about video, and found most 'video art' completely unmemorable, but the prospect of actually owning a video camera was, at that time, quite exotic."
The Panasonic industrial camera Eno received had a design flaw that prevented it from sitting upright without the assistance of a tripod. This led to his works being filmed in vertical format, requiring the television set to be turned on its side for viewing in the proper orientation. The pieces Eno produced with this method, such as Mistaken Memories of Mediaeval Manhattan (1980) and Thursday Afternoon (1984) (accompanied by the album of the same title), were labelled "video paintings". He explained the genre title in the music magazine NME:
"I was delighted to find this other way of using video because at last here's video which draws from another source, which is painting ... I call them 'video paintings' because if you say to people 'I make videos', they think of Sting's new rock video or some really boring, grimy 'Video Art'. It's just a way of saying, 'I make videos that don't move very fast."
These works presented Eno with the opportunity to expand his ambient aesthetic into a visual form, manipulating the medium of video to produce something not present in the normal television experience. His video works were shown around the world in exhibitions in New York and Tokyo, as well as released on the compilation 14 Video Paintings in 2005.
Eno continued his video experimentation through the 80s, 90s and 2000s, leading to further experimentation with the television as a malleable light source and informing his generative works such as 77 Million Paintings in 2006.
Generative music
Generative music is Eno's term for music produced by a system that, once set in motion, creates ever-different output; he gives wind chimes as a simple example. He says that such systems and their creation have been a focus of his since he was a student: "I got interested in the idea of music that could make itself, in a sense, in the mid 1960s really, when I first heard composers like Terry Riley, and when I first started playing with tape recorders."
Initially Eno experimented with tape loops to create generative music systems. With the advent of CDs, he developed systems to make music of indeterminate duration using several discs of material he had recorded specifically so that they would work together musically when driven by random playback.
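The principle behind these loop-based systems can be illustrated with a short simulation. The following Python sketch is illustrative only, not Eno's actual system, and the loop lengths are invented; because the looping parts have deliberately mismatched lengths, the combination of parts sounding at any given moment drifts continuously and, on human timescales, effectively never repeats:

# Invented loop lengths in seconds, mutually mismatched so that the
# layered texture realigns only after a very long time.
loop_lengths = [23.8, 25.9, 29.4, 31.7]

def sounding(t, lengths, duty=0.2):
    """Return indices of the loops sounding at time t, assuming each
    loop sounds for the first 20% of its cycle and is silent after."""
    return [i for i, length in enumerate(lengths)
            if (t % length) / length < duty]

# Sample the texture every 10 seconds for two minutes: the pattern of
# entries and silences keeps shifting.
for t in range(0, 120, 10):
    print(f"t={t:3d}s  loops sounding: {sounding(t, loop_lengths)}")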
In 1995, he began working with SSEYO, the company of Pete Cole and Tim Cole (who later formed Intermorphic), to create generative music using programmed algorithms. The collaboration led Eno to release Generative Music 1, which requires SSEYO's Koan Player software for PC. The Koan software made it possible for generative music to be experienced in the domestic environment for the first time.
Generative Music 1
In 1996, Eno collaborated in developing the SSEYO Koan generative music software system (by Pete Cole and Tim Cole, later of Intermorphic), which he used in composing Generative Music 1, playable only on the Koan generative music system. Further music releases using Koan software include Wander (2001) and Dark Symphony (2007), both of which include works by Eno alongside those of other artists (including SSEYO's Tim Cole).
Released excerpts
Eno started to release excerpts of the output of his 'generative music' systems as early as 1975, with the album Discreet Music, and again in 1978 with Ambient 1: Music for Airports.
The list below consists of albums, soundtracks and downloadable files that contain excerpts from some of Eno's generative music explorations:
1970 – Berlin Horse [Film Short]
1975 – Discreet Music
1975 – Evening Star (Fripp & Eno)
1978 – Ambient 1: Music for Airports
1981 – Mistaken Memories of Mediaeval Manhattan [Installation Video]
1982 – Ambient 4: On Land
1983 – Apollo: Atmospheres and Soundtracks (Eno, Lanois & R Eno)
1983 – Music for Films II (Eno, Lanois & R Eno) [exclusive to Working Backwards Box Set]
1984 – Thursday Afternoon [Installation Video]
1985 – Thursday Afternoon
1988 – Music for Films III (Various Artists)
1989 – Textures (Eno, Lanois & R Eno)
1992 – The Shutov Assembly
1993 – Neroli (Thinking Music Part IV)
1994 – Glitterbug [Original Soundtrack]
1996 – Neverwhere [BBC TV Mini-Series Soundtrack]
1997 – Contra 1.2
1997 – Lightness
1998 – Music for Prague
1999 – I Dormienti
1999 – Kite Stories
2000 – Music for Civic Recovery Centre
2001 – Compact Forest Proposal
2003 – Curiosities – Volume I
2004 – Curiosities – Volume II
2012 – Lux
2013 – CAM [Web – the book Brian Eno: Visual Music includes a download code]
2014 – The Shutov Bonus Material [Shutov Assembly reissue bonus CD]
2014 – New Space Music [Neroli reissue bonus CD]
2016 – The Ship
2016 – Reflection
2017 – Sisters [Web Download]
2018 – Music for Installations [Box Set]
Several of the released excerpts listed above originated as, or are derived from, soundtracks Eno created for art installations, most notably The Shutov Assembly, the titles from Contra 1.2 through Compact Forest Proposal, Lux, CAM, and The Ship.
Installations
Eno has created installations combining artworks and sound that have shown across the world since 1979, beginning with 2 Fifth Avenue and White Fence, in the Kitchen Centre, New York, NY. Typically Eno's installations feature light as a medium explored in multi-screen configurations, and music that is created to blur the boundaries between itself and its surroundings:
"There is a sharp distinction between "music" and "noise", just as there is a distinction between the musician and the audience. I like blurring those distinctions – I like to work with all the complex sounds on the way out to the horizon, to pure noise, like the hum of London." With each installation Eno's music and artworks interrogate the visitors' perception of space and time within a seductive, immersive environment.
Since his experiments with sound as an art student using reel-to-reel tape recorders, and with light as an artistic medium, Eno has used breakthroughs in technology to develop 'processes rather than final objects': processes that in themselves have to "jolt your senses" and have "got to be seductive." Once set in motion, these processes produce potentially unending, continuous, non-repeating music and artworks that Eno, though the artist, could not have imagined; with them he creates the slowly unfolding immersive environments of his installations.
David A. Ross writes in the programme notes to Matrix 44 in 1981: "In a series of painterly video installations first shown in 1979, Eno explored the notion of environmental ambiance. Eno proposes a use for music and video that is antithetical to behavior control-oriented "Muzak" in that it induces and invites the viewer to enter a meditative, detached state, rather than serve as an operant conditioner for work-force efficiency. His underlying strategy is to create works which provide natural levels of variety and redundancy which bring attention to, rather than mimic, essential characteristics of the natural environment. Eno echoes Matisse's stated desire that his art serve as an armchair for the weary businessman."
Early installations benefitted from breakthroughs in video technology that inspired Eno to use the TV screen as a monitor and enabled him to experiment with the opposite of the fast-moving narratives typical of TV to create evolving images with an almost imperceptible rate of change. "2 Fifth Avenue", ("a linear four-screen installation with music from Music For Airports") resulted from Eno shooting "the view from his apartment window: without ... intervention," recording "what was in front of the camera for an unspecified period of time ... In a simple but crude form of experimental post production, the colour controls of the monitors on which the work was shown were adjusted to wash out the picture, producing a high-contrast black and white image in which colour appeared only in the darkest areas. ... Eno manipulated colour as though painting, observing: 'video for me is a way of configuring light, just as painting is a way of configuring paint.'"
From the outset, Eno's video works were "more in the sphere of paintings than of cinema". The author and artist John Coulthart called Mistaken Memories of Medieval Manhattan (1980–81), which incorporated music from Ambient 4: On Land, "the first ambient film." He explains: "Eno filmed several static views of New York and its drifting cloudscape from his thirteenth-floor apartment in 1980–81. The low-grade equipment ... give[s] the images a hazy, impressionistic quality. Lack of a tripod meant filming with the camera lying on its side so the tape had to be re-viewed with a television monitor also turned on its side." And turning the TV on its side, says David A. Ross, "recontextualize[d] the television set, and ... subliminally shift[ed] the way the video image represents recognizable realities ... Natural phenomena like rain look quite different in this orientation; less familiar but curiously more real."
Thursday Afternoon was a return to using figurative form, for Eno had by now begun "to think that I could use my TVs as light sources rather than as image sources. ... TV was actually the most controllable light source that had ever been invented – because you could precisely specify the movement and behaviour of several million points of coloured light on a surface. The fact that this prodigious possibility had almost exclusively been used to reproduce figurative images in the service of narratives pointed to the evolution of the medium from the theatre and cinema. What I thought was that this machine, which pumped out highly controllable light, was actually the first synthesiser, and that its use as an image-retailer represented a subset of its possible range."
Turning the TV on its back, Eno played video colour fields of differing durations that would slowly recombine in different configurations. On top of the screens he placed ziggurats (three-dimensional constructions) of different heights and sizes, which defined each separate colour field and projected the internal light source upward. "The light from it was tangible as though caught in a cloud of vapour. Its slowly changing hues and striking colour collisions were addictive. We sat watching for ages, transfixed by this totally new experience of light as a physical presence."
Calling these light sculptures Crystals (first shown in Boston in 1983), Eno developed them further for the Pictures of Venice exhibition at Gabriella Cardazzo's Cavallino Gallery (Venice, 1985). Placing plexiglass on top of the structures, he found that it further diffused the light, so that the shapes outlined through the surface appeared differently described in the slowly changing fields of light.
By positioning sound sources in different places and at different heights in the exhibition room, Eno intended the music to be something listened to from the inside rather than the outside. For the I Dormienti show in 1999, which featured sculptures of sleeping figures by Mimmo Paladino in the middle of a circular room, Eno placed speakers in each of the 12 tunnels running from it.
Envisioning the speakers themselves as instruments led to Eno's 'speaker flowers' becoming a feature of many installations, including at the Museo dell'Ara Pacis (Rome, 2008), again with Mimmo Paladino, and 'Speaker Flowers and Lightboxes' at Castello Svevo in Trani (Italy, 2017). Re-imagining the speaker as a flower with a voice that could be heard as it moved in the breeze, he made 'bunches' of them, "sculptural objects [that] ... consist of tiny chassis speakers attached to tall metal stands that sway in response to the sound they emit." The first versions of these were shown at the Stedelijk Museum in Amsterdam (1984).
Since On Land (1982), Eno has sought to blur the boundary between music and non-music, incorporating environmental sounds into his work and treating synthesised and recorded sounds to achieve specific effects.
In antithesis to 20th-century shock art, Eno's works create environments "envisioned as extensions of everyday life while offering a refuge from its stresses." Creating a space to reflect was a stated aim of Eno's Quiet Club series of installations, which have been shown across the world and include Music for Civic Recovery Centre at the David Toop-curated Sonic Boom festival at the Hayward Gallery in 2000.
The Quiet Club series (1986–2001) grew from Eno's site-specific installations, which included the Place series (1985–1989). These also featured light sculptures and audio, with the addition of conventional materials such as "tree trunks, fish bowls, ladders, rocks". Eno used these in unconventional ways to create new and unexpected experiences and modes of engagement, offering both an extension of and a refuge from everyday life.
The continually flowing, non-repeating music and art of Eno's installations guard against habituation to the work and maintain the visitors' engagement with it. "One of the things I enjoy about my shows is...lots of people sitting quietly watching something that has no story, few recognisable images and changes very slowly. It's somewhere between the experience of painting, cinema, music and meditation...I dispute the assumption that everyone's attention span is getting shorter: I find people are begging for experiences that are longer and slower, less "dramatic" and more sensual." Tanya Zimbardo, writing on New Urban Spaces Series 4, "Compact Forest Proposal" for SFMOMA (2001), confirms: "During the first presentation of this work, as part of the exhibition 010101: Art in Technological Times at SFMOMA in 2001, visitors often spent considerable time in this dreamlike space."
In Eno's work, both art and music are released from their normal constraints. The music set up to randomly reconfigure is modal and abstract rather than tonal, and so the listener is freed from expectations set up by Western tonal harmonic conventions. The artworks in their continual slowly shifting combinations of colour (and in the case of 77 Million Paintings image re-configurations) themselves offer a continually engaging immersive experience through their unfolding fields of light.
77 Million Paintings
Developments in computer technology meant that the experience of Eno's unending non-repeatable generative art and music was no longer only possible in the public spaces of his exhibitions. With software developer and programmer Jake Dowie, Eno created a generative art/music installation 77 Million Paintings for the domestic environment. Developed for both PC and Mac, the process is explained by Nick Robertson in the accompanying booklet. "One way to approach this idea is to imagine that you have a large box full of painted components and you are allowed to blindly take out between one and four of these at any time and overlay them to make a complete painting. The selection of the elements and their duration in the painting is variable and arbitrarily determined…"
Nearly all of the visual 'elements' were hand-painted by Eno onto glass slides, creating an organic heart to the work. Some of the slides had formed his earlier Natural Selections exhibition (1990), projected onto the windows of the Triennale in Milan. That exhibition marked the beginning of Eno's site-specific installations, which re-defined spaces on a large scale.
For the Triennale exhibition, Eno, with Rolf Engel and Roland Blum at Atelier Markgraph, used new technology by Dataton that could be programmed to control the fade-up and fade-out times of the light sources. But, unlike the software later developed for 77 Million Paintings, it was clumsy and limited the practical realisation of Eno's vision.
With the computer programmed to randomly select a combination of up to four images of different durations, the on screen painting continually reconfigures as each image slowly dissolves whilst another appears. The painting will be different for every viewer in every situation, uniquely defining each moment. Eno likens his role in creating this piece to one of a gardener planting seeds. And like a gardener he watches to see how they grow, waiting to see if further intervention is necessary. In the liner notes Nick Robertson explains: "Every user will buy exactly the same pack of 'seeds' but they will all grow in different ways and into distinct paintings, the vast majority of which, the artist himself has not even seen. …The original in art is no longer solely bound up in the physical object, but rather in the way the piece lives and grows."
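The selection process Robertson describes can be sketched in a few lines of code. The Python fragment below is an illustrative simplification, not the actual 77 Million Paintings software; the element names, library size and duration range are invented:

import random

# Stand-in for Eno's library of hand-painted slide 'elements'.
elements = [f"slide_{n:03d}" for n in range(300)]  # size is illustrative

def new_configuration():
    """Blindly draw between one and four elements, giving each an
    arbitrary duration before it dissolves and is replaced."""
    chosen = random.sample(elements, k=random.randint(1, 4))
    return [(e, random.uniform(30, 300)) for e in chosen]  # seconds

for element, seconds in new_configuration():
    print(f"{element} overlaid for {seconds:.0f}s before dissolving")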
Although designed for the domestic environment, 77 Million Paintings has been, and continues to be, exhibited in multi-screen installations across the world. It has also been projected onto architectural structures, including the sails of the Sydney Opera House (2009), the Carioca Aqueduct (the Arcos da Lapa) in Brazil (2012) and the giant Lovell Telescope at the Jodrell Bank Observatory (2016). During an exhibition at Fabrica, Brighton (2010), the orthopaedic surgeon Robin Turner noticed the calming effect the work had on visitors and asked Eno to provide a version for the Montefiore Hospital in Hove. Since then, 77 Million Paintings and Eno's latest "Light Boxes" have been commissioned for use in hospitals.
Montefiore Hospital Installations
In 2013, Eno created two permanent light and sound installations at the Montefiore Hospital in Hove, East Sussex, England. In the hospital's reception area, "77 Million Paintings for Montefiore" consists of eight plasma monitors mounted on the wall in a diagonally radiating, flower-like pattern. They display an evolving collage of coloured patterns and shapes while Eno's generative ambient music plays discreetly in the background. The other installation, the aptly named "Quiet Room for Montefiore" (available to patients, visitors and staff), is a space set apart for meditative reflection: a moderately sized room with three large panels displaying dissolves of subtle colours in patterns reminiscent of Mondrian paintings. The environment brings Eno's ambient music into focus and facilitates the visitors' cognitive drift, freeing them to contemplate or relax.
Spore
Eno composed most of the music for the Electronic Arts video game Spore (2008), assisted by his long-term collaborator, the musician and programmer Peter Chilvers. Much of the music is generative and responsive to the player's position within the game.
iOS apps
Inspired by the possibilities presented to them while working together on the generative soundtrack for the video game Spore (2008), Eno and Chilvers began to release generative music in app form. They set up the website generativemusic.com and created generative music applications for the iPhone, iPod Touch, and iPad:
Bloom (2008)
Trope (2009)
Scape (2012)
Reflection (2016)
In 2009, Chilvers and Sandra O'Neill also created an app entitled Air (likewise released through generativemusic.com), based on concepts developed by Eno on his Ambient 1: Music for Airports album.
Reflection
The generative version of Reflection is the fourth iOS app created by Brian Eno and Peter Chilvers of generativemusic.com. Unlike the pair's other apps, Reflection offers few options beyond play and pause; AirPlay and a sleep timer were added in its first update. After Apple raised the prices of apps sold in the UK, Eno and Chilvers lowered the app's price, and provided those who had bought it at the higher price with links to a free download of a four-track album called Sisters (each track lasting 15:14).
The version of Reflection available in fixed formats (CD, vinyl and download) consists of two joined excerpts from the Reflection app, as Eno revealed in an interview with Philip Sherburne.
Artworks: Light Boxes
Eno's "light boxes" utilise advances in LED technology that has enabled him to re-imagine his ziggurat light paintings - and early light boxes as featured in Kite Stories (1999) - for the domestic environment. The light boxes feature slowly changing combinations of colour fields that draw attention differently to the shapes outlined by delineating structures within. As the paintings slowly evolve each passing moment is defined differently, drawing the viewer's focus into the present moment. The writer and cultural essayist Michael Bracewell writes that the viewer "is also encouraged to engage with a generative sensor/aesthetic experience that reflects the ever-changing moods and randomness of life itself". He likens Eno's art to "Matisse or Rothko at their most enfolding."
First shown commercially at the Paul Stolper Gallery in London (in the 2016 Light Music exhibition, which also included lenticular paintings by Eno), light boxes have since been shown across the world and remain on permanent display in both private and public spaces. Recognised for their therapeutic, contemplative benefits, Eno's light paintings have been commissioned for specially dedicated places of reflection, including Chelsea and Westminster Hospital, the Montefiore Hospital in Hove and a three-and-a-half-metre light box for the sanctuary room in the Macmillan Horizon Centre in Brighton.
Obscure Records
Eno started the Obscure Records label in Britain in 1975 to release works by lesser-known composers. The first group of three releases included his own composition Discreet Music and the now-famous The Sinking of the Titanic (1969) and Jesus' Blood Never Failed Me Yet (1971) by Gavin Bryars. The second side of Discreet Music consisted of several versions of the German baroque composer Johann Pachelbel's Canon, the composition Eno had previously chosen to precede Roxy Music's appearances on stage, to which he applied various algorithmic transformations, rendering it almost unrecognisable. Side one consisted of a tape-loop system for generating music from relatively sparse input. These tapes had previously been used as backgrounds in some of his collaborations with Robert Fripp, most notably on Evening Star. Ten albums were released on Obscure, including works by John Adams, Michael Nyman, and John Cage.
Other work
In 1995, Eno travelled with Edinburgh University's Professor Nigel Osborne to Bosnia in the aftermath of the Bosnian War to work with war-traumatised children, many of whom had been orphaned in the conflict. Osborne and Eno led music therapy projects run by War Child at the Pavarotti centre in Mostar.
Eno appeared as Father Brian Eno at the "It's Great Being a Priest!" convention, in "Going to America", the final episode of the television sitcom Father Ted, which originally aired on 1 May 1998 on Channel 4.
In March 2008, Eno collaborated with the Italian artist Mimmo Paladino on a show of the latter's works with Eno's soundscapes at Ara Pacis in Rome, and in 2011, he joined Stephen Deazley and Edinburgh University music lecturer Martin Parker in an Icebreaker concert at Glasgow City Halls, heralded as a "long-awaited clash".
In 2013, Eno sold limited edition prints of artwork from his 2012 album Lux from his website.
In 2016, Eno was added to Edinburgh University's roll of honour and in 2017, he delivered the Andrew Carnegie Lecture at the university.
Eno continues to be active in other artistic fields. His sound installations have been exhibited in many prestigious venues around the world, including the Walker Art Center, Minneapolis; Contemporary Arts Museum Houston; New Museum of Contemporary Art, New York; Vancouver Art Gallery, Stedelijk Museum, Amsterdam, Centre Pompidou, Paris, Institute of Contemporary Arts, London, Baltic Art Centre, Gateshead, and the Sydney, São Paulo, and Venice Biennials.
In 2020–2021, Eno worked with a group of developers on audio-video conferencing software and a service intended to address the shortcomings of corporate video-conferencing software (such as Zoom) when used for other purposes.
Awards and honors
Asteroid 81948 Eno, discovered by Marc Buie at Cerro Tololo in 2000, was named in his honour. The official naming citation was published by the Minor Planet Center on 18 May 2019.
In 2019, he was awarded Starmus Festival's Stephen Hawking Medal for Science Communication for Music & Arts.
Influence and legacy
Eno is frequently referred to as one of popular music's most influential artists. Producer and film composer Jon Brion has said: "I think he's the most influential artist since the Beatles." Critic Jason Ankeny at AllMusic argues that Eno "forever altered the ways in which music is approached, composed, performed, and perceived, and everything from punk to techno to new age bears his unmistakable influence." Eno has spread his techniques and theories primarily through his production; his distinctive style informed a number of projects in which he has been involved, including Bowie's "Berlin Trilogy" (helping to popularize minimalism) and the albums he produced for Talking Heads (incorporating, on Eno's advice, African music and polyrhythms), Devo, and other groups. Eno's first collaboration with David Byrne, 1981's My Life in the Bush of Ghosts, utilised sampling techniques and broke ground by incorporating world music into popular Western music forms. Eno and Peter Schmidt's Oblique Strategies have been used by many bands, and Eno's production style has proven influential in several general respects: "his recording techniques have helped change the way that modern musicians, particularly electronic musicians, view the studio. No longer is it just a passive medium through which they communicate their ideas but itself a new instrument with seemingly endless possibilities." According to Vinyl Me, Please writer Jack Riedy, Eno's peak as an artist coincided with the album era, a period in popular music during which the album surpassed the single as the dominant recorded-music format, "and Eno took full advantage of the format to pursue all his musical ideas on wax."
Though inspired by the ideas of minimalist composers including John Cage, Terry Riley and Erik Satie, Eno coined and defined the term ambient music to describe his own work. The Ambient Music Guide states that he has brought from "relative obscurity into the popular consciousness" fundamental ideas about ambient music, including "the idea of modern music as subtle atmosphere, as chill-out, as impressionistic, as something that creates space for quiet reflection or relaxation." His groundbreaking work in electronic music has been said to have brought widespread attention to and innovations in the role of electronic technology in recording. Pink Floyd keyboardist Rick Wright said he "often eulogised" Eno's abilities.
Eno's "unconventional studio predilections", in common with those of Peter Gabriel, were an influence on the recording of "In the Air Tonight", the single which launched the solo career of Eno's former drummer Phil Collins. Collins said he "learned a lot" from working with Eno. Both Half Man Half Biscuit (in the song "Eno Collaboration" on the EP of the same name) and MGMT have written songs about Eno. LCD Soundsystem has frequently cited Eno as a key influence. The Icelandic singer Björk also credited Eno as a major influence.
Mora sti Fotia (Babies on Fire), one of the most influential Greek rock bands, was named after Eno's song "Baby's on Fire" from the 1973 album Here Come the Warm Jets.
In 2011, Belgian academics from the Royal Museum for Central Africa named a species of Afrotropical spider Pseudocorinna brianeno in his honour.
In September 2016, asked by the website Just Six Degrees to name a currently influential artist, Eno cited the conceptual, video and installation artist Jeremy Deller as a source of current inspiration: "Deller's work is often technically very ambitious, involving organising large groups of volunteers and helpers, but he himself is almost invisible in the end result. I'm inspired by this quietly subversive way of being an artist, setting up situations and then letting them play out. To me it's a form of social generative art where the 'generators' are people and their experiences, and where the role of the artist is to create a context within which they collide and create."
Personal life
Eno has been married twice. In March 1967, at the age of 18, Eno married Sarah Grenville. The couple had a daughter, Hannah Louise (b. 1967), before their divorce. In 1988, Eno married his then-manager Anthea Norman-Taylor. They have two daughters, Irial Violet (b. 1990) and Darla Joy (b. 1991).
In an interview with Michael Bonner, published in the May 2020 issue of Uncut, Eno referred to Ray Hearn as his current manager, and also referred to his girlfriend.
Eno has referred to himself as "kind of an evangelical atheist" but has also professed an interest in religion. In 1996, Eno and others started the Long Now Foundation to educate the public about the very long-term future of society and to encourage long-term thinking in the exploration of enduring solutions to global issues. In 2005, through the Long Now Foundation's Long Bets, he won a $500 bet by challenging someone who predicted a Democrat would be president of the United States in 2005.
In 1991, Eno appeared on BBC Radio 4's Desert Island Discs. His chosen book was Contingency, Irony, and Solidarity by Richard Rorty and his luxury item was a radio telescope.
Politics
In 2007, Eno joined the Liberal Democrats as youth adviser under Nick Clegg.
Eno is now a member of the Labour Party. In August 2015, he endorsed Jeremy Corbyn's campaign in the Labour Party leadership election. He said at a rally in Camden Town Hall: "I don't think electability really is the most important thing. What's important is that someone changes the conversation and moves us off this small-minded agenda." He later wrote in The Guardian: "He's [Corbyn] been doing this with courage and integrity and with very little publicity. This already distinguishes him from at least half the people in Westminster, whose strongest motivation seems to have been to get elected, whatever it takes."
In 2006, Eno was one of more than 100 artists and writers who signed an open letter calling for an international boycott of Israeli political and cultural institutions, and in January 2009 he spoke out against Israel's military action on the Gaza Strip by writing an opinion piece for CounterPunch and participating in a large-scale protest in London. In 2014, Eno again protested publicly against what he called a "one-sided exercise in ethnic cleansing" and a "war [with] no moral justification," in reference to the 2014 military operation of Israel into Gaza. He was also a co-signatory, along with Archbishop Desmond Tutu, Noam Chomsky, Alice Walker and others, to a letter published in The Guardian that labelled the conflict as an "inhumane and illegal act of military aggression" and called for "a comprehensive and legally binding military embargo on Israel, similar to that imposed on South Africa during apartheid."
In 2013, Eno became a patron of Videre Est Credere (Latin for "to see is to believe"), a UK human rights charity. Videre describes itself as "give[ing] local activists the equipment, training and support needed to safely capture compelling video evidence of human rights violations. This captured footage is verified, analysed and then distributed to those who can create change." He participates alongside film producer Uri Fruchtmann and director Terry Gilliam, along with the executive director of Greenpeace UK, John Sauven.
Eno was appointed President of Stop the War Coalition in 2017. He has had a long involvement with the organisation since it was set up in 2001.
He is also a trustee of the environmental law firm ClientEarth, Somerset House, and the Institute for Innovation and Public Purpose, set up by Mariana Mazzucato.
Eno opposes the United Kingdom's withdrawal from the European Union. Following the June 2016 referendum, in which the British public voted to leave, Eno was among a group of British musicians who signed a letter to Prime Minister Theresa May calling for a second referendum.
In November 2019, along with other public figures, Eno signed a letter supporting Labour Party leader Jeremy Corbyn, describing him as "a beacon of hope in the struggle against emergent far-right nationalism, xenophobia and racism in much of the democratic world" and endorsing him in the 2019 UK general election. In December 2019, along with 42 other leading cultural figures, he signed a letter endorsing the Labour Party under Corbyn's leadership in that election. The letter stated that "Labour's election manifesto under Jeremy Corbyn's leadership offers a transformative plan that prioritises the needs of people and the planet over private profit and the vested interests of a few."
Brian Eno is an early and prominent member of the Democracy in Europe Movement 2025 (DiEM25), to which he contributes by issuing statements and taking part in media events and discussions.
Selected discography
This is an incomplete list.
Solo studio albums
Here Come the Warm Jets (1974), Island
Taking Tiger Mountain (By Strategy) (1974), Island
Another Green World (1975), Island
Discreet Music (1975), Obscure
Before and After Science (1977), Polydor
Ambient 1: Music for Airports (1978), Polydor
Music for Films (1978), Polydor
Ambient 4: On Land (1982), E.G.
Apollo: Atmospheres and Soundtracks (1983), E.G.
More Music for Films (1983), E.G.
Thursday Afternoon (1985), E.G.
Nerve Net (1992), Opal, All Saints
The Shutov Assembly (1992), Opal, All Saints
Neroli (1993), All Saints
Headcandy (1994), BMG
The Drop (1997), All Saints
Another Day on Earth (2005), Hannibal
Small Craft on a Milk Sea (2010), Warp (with Leo Abrahams and Jon Hopkins)
Lux (2012), Warp
The Ship (2016), Warp
Reflection (2017), Warp
Ambient installation albums
Extracts from Music for White Cube, London 1997 (1997), Opal
Lightness: Music for the Marble Palace (1997), Opal
I Dormienti (1999), Opal
Kite Stories (1999), Opal
Music for Civic Recovery Centre (2000), Opal
Compact Forest Proposal (2001), Opal
January 07003: Bell Studies for the Clock of the Long Now (2003), Opal
Making Space (2010), Opal
See also
List of ambient music artists
External links
Eno's work in sound and light, past and present
Paul Morley interviews Eno in The Guardian, 17 January 2010
Interview with Brian Eno from The Guardian, 19 May 2006
Brian Eno: The Philosophy of Surrender interview November 2008
MoreDarkThanShark.org's webpage "Brian Eno – Installations".
356777 | https://en.wikipedia.org/wiki/Role-playing%20video%20game | Role-playing video game | A role-playing video game (commonly referred to as simply a role-playing game or RPG, as well as a computer role-playing game or CRPG) is a video game genre where the player controls the actions of a character (or several party members) immersed in some well-defined world, usually involving some form of character development by way of recording statistics. Many role-playing video games have origins in tabletop role-playing games and use much of the same terminology, settings and game mechanics. Other major similarities with pen-and-paper games include developed story-telling and narrative elements, player character development, complexity, as well as replay value and immersion. The electronic medium removes the necessity for a gamemaster and increases combat resolution speed. RPGs have evolved from simple text-based console-window games into visually rich 3D experiences.
Characteristics
Role-playing video games use much of the same terminology, settings and game mechanics as early tabletop role-playing games such as Dungeons & Dragons.
Players control a central game character, or multiple game characters, usually called a party, and attain victory by completing a series of quests or reaching the conclusion of a central storyline. Players explore a game world, while solving puzzles and engaging in combat. A key feature of the genre is that characters grow in power and abilities, and characters are typically designed by the player. RPGs rarely challenge a player's physical coordination or reaction time, with the exception of action role-playing games.
Role-playing video games typically rely on a highly developed story and setting, which is divided into a number of quests. Players control one or several characters by issuing commands, which are performed by the character at an effectiveness determined by that character's numeric attributes. Often these attributes increase each time a character gains a level, and a character's level goes up each time the player accumulates a certain amount of experience.
Role-playing video games also typically attempt to offer more complex and dynamic character interaction than what is found in other video game genres. This usually involves additional focus on the artificial intelligence and scripted behavior of computer-controlled non-player characters.
Story and setting
The premise of many role-playing games tasks the player with saving the world, or whichever level of society is threatened. There are often twists and turns as the story progresses, such as the surprise appearance of estranged relatives, or enemies who become friends or vice versa. The game world is often rooted in speculative fiction (i.e. fantasy or science fiction), which allows players to do things they cannot do in real life and helps players suspend their disbelief about the rapid character growth. To a lesser extent, settings closer to the present day or near future are possible.
The story often provides much of the entertainment in the game. Because these games have strong storylines, they can often make effective use of recorded dialog and voiceover narration. Players of these games tend to appreciate long cutscenes more than players of faster action games. While most games advance the plot when the player defeats an enemy or completes a level, role-playing games often progress the plot based on other important decisions. For example, a player may make the decision to join a guild, thus triggering a progression in the storyline that is usually irreversible. New elements in the story may also be triggered by mere arrival in an area, rather than completing a specific challenge. The plot is usually divided so that each game location is an opportunity to reveal a new chapter in the story.
Pen-and-paper role-playing games typically involve a player called the gamemaster (or GM for short) who can dynamically create the story, setting, and rules, and react to a player's choices. In role-playing video games, the computer performs the function of the gamemaster. This offers the player a smaller set of possible actions, since computers cannot engage in imaginative acting comparable to a skilled human gamemaster. In exchange, the typical role-playing video game may have storyline branches, user interfaces, and stylized cutscenes and gameplay to offer a more direct storytelling mechanism. Characterization of non-player characters in video games is often handled using a dialog tree. Saying the right things to the right non-player characters will elicit useful information for the player, and may even result in other rewards such as items or experience, as well as opening up possible storyline branches. Multiplayer online role-playing games can offer an exception to this contrast by allowing human interaction among multiple players and in some cases enabling a player to perform the role of a gamemaster.
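A dialog tree is straightforward to represent as a data structure. The Python sketch below is a minimal, hypothetical example (the lines and choices are invented, not drawn from any particular game): each node pairs a piece of NPC speech with the player choices that lead to further nodes, and a node with no choices ends the conversation.

# Each node: (NPC line, [(choice text, next node), ...]).
dialog = {
    "start": ("Welcome, traveller. What do you seek?",
              [("Work.", "quest"), ("Nothing.", "end")]),
    "quest": ("Clear the cellar of rats and I'll pay you 10 gold.",
              [("Agreed.", "end"), ("Too dangerous.", "end")]),
    "end": ("Safe travels.", []),
}

script = iter([0, 0])  # scripted player picks, standing in for real input
node = "start"
while True:
    line, choices = dialog[node]
    print("NPC:", line)
    if not choices:
        break  # leaf node: the conversation is over
    text, node = choices[next(script)]
    print("Player:", text)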
Exploration and quests
Exploring the world is an important aspect of many RPGs. Players walk through the game world, talking to non-player characters, picking up objects, and avoiding traps. Some games such as NetHack, Diablo, and the FATE series randomize the structure of individual levels, increasing the game's variety and replay value. Role-playing games where players complete quests by exploring randomly generated dungeons and which include permadeath are called roguelikes, named after the 1980 video game Rogue.
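Random level generation of the kind roguelikes use can be demonstrated with a toy generator. The Python sketch below only scatters non-overlapping rectangular rooms on a character grid; real roguelikes go on to carve corridors and place monsters, items, and stairs:

import random

W, H = 40, 16
grid = [["#"] * W for _ in range(H)]  # '#' = rock, '.' = floor

rooms = []
for _ in range(30):  # placement attempts
    w, h = random.randint(4, 8), random.randint(3, 5)
    x, y = random.randint(1, W - w - 1), random.randint(1, H - h - 1)
    # Reject rooms that would overlap one already placed.
    if any(x < rx + rw and rx < x + w and y < ry + rh and ry < y + h
           for rx, ry, rw, rh in rooms):
        continue
    rooms.append((x, y, w, h))
    for j in range(y, y + h):
        for i in range(x, x + w):
            grid[j][i] = "."

print("\n".join("".join(row) for row in grid))  # a new map every run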
The game's story is often mapped onto exploration, where each chapter of the story is mapped onto a different location. RPGs usually allow players to return to previously visited locations. Usually, there is nothing left to do there, although some locations change throughout the story and offer the player new things to do in response. Players must acquire enough power to overcome a major challenge in order to progress to the next area, and this structure can be compared to the boss characters at the end of levels in action games.
The player typically must complete a linear sequence of certain quests in order to reach the end of the game's story. Many RPGs also often allow the player to seek out optional side-quests and character interactions. Quests of this sort can be found by talking to a non-player character, and there may be no penalty for abandoning or ignoring these quests other than a missed opportunity or reward.
Items and inventory
Players can find loot (such as clothing, weapons, and armor) throughout the game world and collect it. Players can trade items for currency and better equipment. Trade takes place while interacting with certain friendly non-player characters, such as shopkeepers, and often uses a specialized trading screen. Purchased items go into the player's inventory. Some games turn inventory management into a logistical challenge by limiting the size of the player's inventory, thus forcing the player to decide what they must carry at the time. This can be done by limiting the maximum weight that a player can carry, by employing a system of arranging items in a virtual space, or by simply limiting the number of items that can be held.
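A weight-limited inventory reduces to a simple bookkeeping problem. The Python sketch below illustrates the weight-cap variant described above; the item names and numbers are invented for illustration:

class Inventory:
    def __init__(self, max_weight):
        self.max_weight = max_weight
        self.items = []  # list of (name, weight) pairs

    def carried(self):
        return sum(weight for _, weight in self.items)

    def add(self, name, weight):
        """Pick an item up only if it fits under the weight cap."""
        if self.carried() + weight > self.max_weight:
            return False  # the player must drop something first
        self.items.append((name, weight))
        return True

pack = Inventory(max_weight=50)
print(pack.add("longsword", 4))      # True
print(pack.add("plate armour", 48))  # False: would exceed the cap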
Character actions and abilities
Most of the actions in an RPG are performed indirectly, with the player selecting an action and the character performing it of its own accord. Success at that action depends on the character's numeric attributes. Role-playing video games often simulate dice-rolling mechanics from non-electronic role-playing games to determine success or failure. As a character's attributes improve, their chances of succeeding at a particular action will increase.
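Such a check is usually a die roll modified by the relevant attribute and compared against a difficulty threshold. The Python sketch below uses a D&D-style modifier formula purely as an example; the numbers are illustrative:

import random

def skill_check(attribute, difficulty):
    """Roll a d20, add an attribute modifier, compare to a threshold."""
    modifier = (attribute - 10) // 2  # D&D-style, for illustration
    return random.randint(1, 20) + modifier >= difficulty

# A character with strength 16 forcing a stuck door (difficulty 12):
trials = 1000
successes = sum(skill_check(16, 12) for _ in range(trials))
print(f"{100 * successes / trials:.0f}% success")  # higher attributes raise the odds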
Many role-playing games allow players to play as an evil character. Although robbing and murdering indiscriminately may make it easier to get money, there are usually consequences in that other characters will become uncooperative or even hostile towards the player. Thus, these games allow players to make moral choices, but force players to live with the consequences of their actions. Games often let the player control an entire party of characters. However, if winning is contingent upon the survival of a single character, then that character effectively becomes the player's avatar. An example of this would be in Baldur's Gate, where if the character created by the player dies, the game ends and a previous save needs to be loaded.
Although some single-player role-playing games give the player an avatar that is largely predefined for the sake of telling a specific story, many role-playing games make use of a character creation screen. This allows players to choose their character's sex, their race or species, and their character class. Although many of these traits are cosmetic, there are functional aspects as well. Character classes will have different abilities and strengths. Common classes include fighters, spellcasters, thieves with stealth abilities, and clerics with healing abilities, or a mixed class, such as a fighter who can cast simple spells. Characters will also have a range of physical attributes such as dexterity and strength, which affect a player's performance in combat. Mental attributes such as intelligence may affect a player's ability to perform and learn spells, while social attributes such as charisma may limit the player's choices while conversing with non-player characters. These attribute systems often strongly resemble the Dungeons & Dragons ruleset.
Some role-playing games make use of magical powers, or equivalents such as psychic powers or advanced technology. These abilities are confined to specific characters such as mages, spellcasters, or magic-users. In games where the player controls multiple characters, these magic-users usually complement the physical strength of other classes. Magic can be used to attack, to defend, or to temporarily change an enemy or ally's attributes. While some games allow players to gradually consume a spell, as ammunition is consumed by a gun, most games offer players a finite amount of mana which can be spent on any spell. Mana is restored by resting or by consuming potions. Characters can also gain other non-magical skills, which stay with the character for as long as they live.
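The shared mana pool amounts to a single resource that every spell draws on. The Python sketch below is a generic illustration of the mechanic; the spell names and costs are invented:

SPELL_COSTS = {"firebolt": 5, "heal": 8, "teleport": 20}

class Caster:
    def __init__(self, max_mana):
        self.max_mana = self.mana = max_mana

    def cast(self, spell):
        cost = SPELL_COSTS[spell]
        if self.mana < cost:
            return False  # not enough mana: rest or drink a potion
        self.mana -= cost
        return True

    def rest(self):
        self.mana = self.max_mana  # resting restores the pool

mage = Caster(max_mana=30)
print(mage.cast("teleport"), mage.mana)  # True 10
print(mage.cast("teleport"))             # False: only 10 mana left
mage.rest()
print(mage.cast("teleport"))             # True again after resting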
Experience and levels
Although the characterization of the game's avatar will develop through storytelling, characters may also become more functionally powerful by gaining new skills, weapons, and magic. This creates a positive-feedback cycle that is central to most role-playing games: The player grows in power, allowing them to overcome more difficult challenges, and gain even more power. This is part of the appeal of the genre, where players experience growing from an ordinary person into a superhero with amazing powers. Whereas other games give the player these powers immediately, the player in a role-playing game will choose their powers and skills as they gain experience.
Role-playing games usually measure progress by counting experience points and character levels. Experience is usually earned by defeating enemies in combat, with some games offering experience for completing certain quests or conversations. Experience becomes a form of score, and accumulating a certain amount of experience will cause the character's level to go up. This is called "levelling up", and gives the player an opportunity to raise one or more of their character's attributes. Many RPGs allow players to choose how to improve their character, by allocating a finite number of points into the attributes of their choice. Gaining experience will also unlock new magic spells for characters that use magic.
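The experience-point loop can be captured in a few lines. In the Python sketch below, the threshold curve and the three attribute points per level are invented conventions, not any specific game's values:

def xp_needed(level):
    return 100 * level * level  # rising thresholds, chosen arbitrarily

class Character:
    def __init__(self):
        self.level, self.xp, self.unspent_points = 1, 0, 0

    def gain_xp(self, amount):
        self.xp += amount
        while self.xp >= xp_needed(self.level):  # "levelling up"
            self.level += 1
            self.unspent_points += 3  # to allocate among attributes

hero = Character()
hero.gain_xp(250)  # past the level-1 threshold (100), short of level 3 (400)
print(hero.level, hero.unspent_points)  # 2 3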
Some role-playing games also give the player specific skill points, which can be used to unlock a new skill or improve an existing one. This may sometimes be implemented as a skill tree. As with the technology trees seen in strategy video games, learning a particular skill in the tree will unlock more powerful skills deeper in the tree.
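Under the hood, a skill tree is a dependency graph: a node can be bought with skill points only once its prerequisite is owned. A minimal Python sketch, with invented skill names:

# skill -> prerequisite (None marks a root of the tree)
TREE = {"slash": None, "power_slash": "slash", "whirlwind": "power_slash"}

learned = set()

def learn(skill, points):
    """Spend one point on a skill if it is affordable and unlocked."""
    prerequisite = TREE[skill]
    if points < 1 or (prerequisite and prerequisite not in learned):
        return points  # locked or unaffordable: nothing happens
    learned.add(skill)
    return points - 1

points = 2
points = learn("power_slash", points)  # fails: "slash" not yet learned
points = learn("slash", points)        # succeeds
points = learn("power_slash", points)  # now unlocked deeper in the tree
print(sorted(learned), points)         # ['power_slash', 'slash'] 0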
Three different systems of rewarding the player characters for solving the tasks in the game can be distinguished: the experience system (also known as the "level-based" system), the training system (also known as the "skill-based" system) and the skill-point system (also known as the "level-free" system).
The experience system, by far the most common, was inherited from pen-and-paper role-playing games and emphasizes receiving "experience points" (often abbreviated "XP" or "EXP") by winning battles, performing class-specific activities, and completing quests. Once a certain amount of experience is gained, the character advances a level. In some games, level-up occurs automatically when the required amount of experience is reached; in others, the player can choose when and where to advance a level. Likewise, abilities and attributes may increase automatically or manually.
The training system is similar to the way the Basic Role-Playing system works. The first notable video game to use this was Dungeon Master, which emphasized developing the character's skills by using them—meaning that if a character wields a sword for some time, he or she will become proficient with it.
Finally, in the skill-point system (as used in Vampire: The Masquerade – Bloodlines for example) the character is rewarded with "skill points" for completing quests, which then can be directly used to buy skills and attributes without having to wait until the next level up.
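The training system described above rewards use rather than accumulated experience. A minimal Python sketch in the spirit of Dungeon Master's skill-by-use mechanic (the thresholds are invented):

class Skill:
    def __init__(self, name):
        self.name, self.rank, self.practice = name, 0, 0

    def use(self):
        """Each use adds practice; enough practice raises the rank."""
        self.practice += 1
        if self.practice >= 10 * (self.rank + 1):  # thresholds rise
            self.rank += 1
            self.practice = 0

swords = Skill("swords")
for _ in range(30):  # swing a sword thirty times
    swords.use()
print(swords.rank)   # 2: proficiency grew purely through use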
Combat
Older games often separated combat into its own mode of gameplay, distinct from exploring the game world. More recent games tend to maintain a consistent perspective for exploration and combat. Some games, especially earlier video games, generate battles from random encounters; more modern RPGs are more likely to have persistent wandering monsters that move about the game world independently of the player. Most RPGs also use stationary boss monsters in key positions, and automatically trigger battles with them when the PCs enter these locations or perform certain actions. Combat options typically involve positioning characters, selecting which enemy to attack, and exercising special skills such as casting spells.
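The classic random-encounter mechanic is simply a per-step probability check. A hypothetical Python sketch, with an invented encounter rate and monster table:

import random

ENCOUNTER_CHANCE = 0.08  # chance of a battle per step in the wild
MONSTERS = ["slime", "bat", "skeleton"]

def walk(steps):
    for step in range(1, steps + 1):
        if random.random() < ENCOUNTER_CHANCE:
            print(f"step {step}: ambushed by a {random.choice(MONSTERS)}!")

walk(40)  # roughly three encounters expected over forty steps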
In a classical turn-based system, only one character may act at a time; all other characters remain still, with a few exceptions that may involve the use of special abilities. The order in which the characters act is usually dependent on their attributes, such as speed or agility. This system rewards strategic planning more than quickness. It also points to the fact that realism in games is a means to the end of immersion in the game world, not an end in itself. A turn-based system makes it possible, for example, to run within range of an opponent and kill him before he gets a chance to act, or to duck out from behind hard cover, fire, and retreat back without the opponent being able to fire, both of which would be impossible in reality. However, this unreality creates tactical possibilities that did not exist before; the player determines whether the loss of immersion in the reality of the game is worth the satisfaction gained from the development of the tactic and its successful execution. Fallout has been praised as being "the shining example of a good turn-based combat system".
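Attribute-driven turn order reduces to sorting the combatants. A small Python sketch with invented stats:

# Combatants act one at a time, in descending order of agility.
combatants = [("thief", 17), ("goblin", 12), ("fighter", 9), ("wizard", 8)]

for name, agility in sorted(combatants, key=lambda c: c[1], reverse=True):
    print(f"{name} (agility {agility}) takes a turn")
# The thief always acts before the goblin, so it can close the distance
# and strike before the goblin gets a chance to act, as noted above.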
Real-time combat can import features from action games, creating a hybrid action RPG genre. Other RPG battle systems, such as the Final Fantasy battle systems, have imported real-time choices without emphasizing coordination or reflexes. Still other systems combine real-time combat with the ability to pause the game and issue orders to all characters under the player's control; when the game is unpaused, all characters follow the orders they were given. This "real-time with pause" system (RTwP) has been particularly popular in games designed by BioWare. The most famous RTwP engine is the Infinity Engine. Other names for "real-time with pause" include "active pause" and "semi real-time". Tactical RPG maker Apeiron named their system Smart Pause Mode (SPM) because it would automatically pause based on a number of user-configurable settings. Fallout Tactics: Brotherhood of Steel and Arcanum: Of Steamworks and Magick Obscura offered players the option to play in either turn-based or RTwP mode via a configuration setting. The latter also offered a "fast turn-based" mode, though all three of the game's modes were criticized for being poorly balanced and oversimplified.
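The real-time-with-pause pattern can be reduced to a loop that advances the simulation only while unpaused, letting queued orders accumulate during a pause. A minimal Python sketch, not modelled on any particular engine:

from collections import deque

class Battle:
    def __init__(self):
        self.paused = False
        self.clock = 0.0
        self.orders = deque()

    def tick(self, dt=0.1):
        if self.paused:
            return  # the world is frozen; the player may queue orders
        self.clock += dt
        while self.orders:
            print(f"t={self.clock:.1f}: executing '{self.orders.popleft()}'")

b = Battle()
b.paused = True
b.orders.extend(["fighter: attack ogre", "wizard: cast sleep"])
b.tick()          # nothing happens while paused
b.paused = False
b.tick()          # both queued orders fire once the game resumes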
Early Ultima games featured timed turns: they were strictly turn-based, but if the player waited more than a second or so to issue a command, the game would automatically issue a pass command, allowing the monsters to take a turn while the PCs did nothing.
There is a further subdivision by the structure of the battle system; in many early games, such as Wizardry, monsters and the party are arrayed into ranks, and can only attack enemies in the front rank with melee weapons. Other games, such as most of the Ultima series, employed duplicates of the miniatures combat system traditionally used in the early role-playing games. Representations of the player characters and monsters would move around an arena modeled after the surrounding terrain, attacking any enemies that are sufficiently near.
Interface and graphics
Players typically navigate the game world from a first or third-person perspective in 3D RPGs. However, an isometric or aerial top-down perspective is common in party-based RPGs, in order to give the player a clear view of their entire party and their surroundings. Role-playing games require the player to manage a large amount of information, and frequently make use of a windowed interface. For example, spell-casting characters will often have a menu of spells they can use. On the PC, players typically use the mouse to click on icons and menu options, while console games have the player navigate through menus using a game controller.
History and classification
The role-playing video game genre began in the mid-1970s on mainframe computers, inspired by pen-and-paper role-playing games such as Dungeons & Dragons. Several other sources of inspiration for early role-playing video games also included tabletop wargames, sports simulation games, adventure games such as Colossal Cave Adventure, fantasy writings by authors such as J. R. R. Tolkien, traditional strategy games such as chess, and ancient epic literature dating back to Epic of Gilgamesh which followed the same basic structure of setting off in various quests in order to accomplish goals.
After the success of role-playing video games such as Ultima and Wizardry, which in turn served as the blueprint for Dragon Quest and Final Fantasy, the role-playing genre eventually diverged into two styles, Eastern role-playing games and Western role-playing games, due to cultural differences, though roughly mirroring the platform divide between consoles and computers, respectively. Finally, while the first RPGs offered a strictly single-player experience, the popularity of multiplayer modes rose sharply during the early to mid-1990s with action role-playing games such as Secret of Mana and Diablo. With the advent of the Internet, multiplayer games have grown into massively multiplayer online role-playing games (MMORPGs), including Lineage, Final Fantasy XI, and World of Warcraft.
Mainframe computers
The role-playing video game genre began in the mid-1970s, as an offshoot of early university mainframe text-based RPGs on PDP-10 and Unix-based computers, such as Dungeon, pedit5 and dnd. In 1980, a very popular dungeon crawler, Rogue, was released. Featuring ASCII graphics, in which the setting, monsters and items were represented by letters, together with a deep system of gameplay, it inspired a whole genre of similar clones on mainframe and home computers, known as "roguelikes".
Personal computers
One of the earliest role-playing video games on a microcomputer was Dungeon n Dragons, written by Peter Trefonas and published by CLOAD (1980). This early game, published for a TRS-80 Model 1, is just 16K long and includes a limited command-line word parser, character generation, a store to purchase equipment, combat, traps to solve, and a dungeon to explore. Other contemporaneous CRPGs (Computer Role Playing Games) were Temple of Apshai, Odyssey: The Compleat Apventure and Akalabeth: World of Doom, the precursor to Ultima. Some early microcomputer RPGs (such as Telengard (1982) or Sword of Fargoal) were based on their mainframe counterparts, while others (such as Ultima or Wizardry, the most successful of the early CRPGs) were loose adaptations of D&D. They also include both first-person displays and overhead views, sometimes in the same game (Akalabeth, for example, uses both perspectives). Most of the key features of RPGs were developed in this early period, prior to the release of Ultima III: Exodus, one of the prime influences on both computer and console RPG development. For example, Wizardry features menu-driven combat, Tunnels of Doom features tactical combat on a special "combat screen", and Dungeons of Daggorath features real-time combat which takes place on the main dungeon map.
Starting in 1984 with Questron and 50 Mission Crush, SSI produced many series of CRPGs. Their 1985 game Phantasie is notable for introducing automapping and in-game scrolls providing hints and background information. They also released Pool of Radiance in 1988, the first of several "Gold Box" CRPGs based on the Advanced Dungeons & Dragons rules. These games feature a first-person display for movement, combined with an overhead tactical display for combat. One common feature of RPGs from this era, which Matt Barton calls the "Golden Age" of computer RPGs, is the use of numbered "paragraphs" printed in the manual or adjunct booklets, containing the game's lengthier texts; the player can be directed to read a certain paragraph, instead of being shown the text on screen. The ultimate exemplar of this approach is Sir-Tech's Star Saga trilogy (of which only two games were released); the first game contains 888 "textlets" (usually much longer than a single paragraph) spread across 13 booklets, while the second contains 50,000 paragraphs spread across 14 booklets. Most of the games from this era are turn-based, although Dungeon Master and its imitators have real-time combat. Other classic titles from this era include The Bard's Tale (1985), Wasteland (1988), the start of the Might and Magic (1986-2014) series and the continuing Ultima (1981-1999) series.
Later, in the middle to late 1990s, isometric, sprite-based RPGs became commonplace, with video game publishers Interplay Entertainment and Blizzard North playing a lead role with such titles as the Baldur's Gate, Icewind Dale and the action-RPG Diablo series, as well as the dialogue-heavy Planescape: Torment and cult classics Fallout and Fallout 2. This era also saw a move toward 3D game engines with such games as Might and Magic VI: The Mandate of Heaven and The Elder Scrolls: Arena. TSR, dissatisfied with SSI's later products, such as Dark Sun: Wake of the Ravager and Menzoberranzan, transferred the AD&D license to several different developers, and eventually gave it to BioWare, who used it in Baldur's Gate (1998) and several later games. By the 2000s, 3D engines had become dominant.
Video game consoles
The earliest RPG on a console was Dragonstomper on the Atari 2600 in 1982. Another early RPG on a console was Bokosuka Wars, originally released for the Sharp X1 computer in 1983 and later ported to the MSX in 1984, the NES in 1985 and the Sharp X68000 as New Bokosuka Wars. The game laid the foundations for the tactical role-playing game genre, or "simulation RPG" genre as it is known in Japan. It was also an early example of a real-time action role-playing game. In 1986, Chunsoft created the NES title Dragon Quest (called Dragon Warrior in North America until the eighth game), which drew inspiration from the computer RPGs Ultima and Wizardry and is regarded as the template for subsequent Japanese role-playing video games.
Also in 1986, The Legend of Zelda was released for the NES.
In 1987, the genre came into its own with the release of several highly influential console RPGs distinguishing themselves from computer RPGs, including the genre-defining Phantasy Star, released for the Master System. Shigeru Miyamoto's Zelda II: The Adventure of Link for the Famicom Disk System was one of the earliest action role-playing games, combining the action-adventure game framework of its predecessor The Legend of Zelda with the statistical elements of turn-based RPGs. Most RPGs at this time were turn-based. Faxanadu was another early action RPG for the NES, released as a side-story to the computer action RPG Dragon Slayer II: Xanadu. Square's Final Fantasy for the NES introduced side-view battles, with the player characters on the right and the enemies on the left, which soon became the norm for numerous console RPGs. In 1988, Dragon Warrior III introduced a character progression system allowing the player to change the party's character classes during the course of the game. Another "major innovation was the introduction of day/night cycles; certain items, characters, and quests are only accessible at certain times of day." In 1989, Phantasy Star II for the Genesis established many conventions of the genre, including an epic, dramatic, character-driven storyline dealing with serious themes and subject matter.
Console RPGs distinguished themselves from computer RPGs to a greater degree in the early 1990s. As console RPGs became more heavily story-based than their computer counterparts, one of the major differences that emerged during this time was in the portrayal of the characters. Console RPGs often featured intricately related characters who had distinctive personalities and traits, with players assuming the roles of people who cared about each other, fell in love or even had families. Romance in particular was a theme that was common in most console RPGs at the time but absent from most computer RPGs. During the 1990s, console RPGs had become increasingly dominant, exerting a greater influence on computer RPGs than the other way around. Console RPGs had eclipsed computer RPGs for some time, though computer RPGs began making a comeback towards the end of the decade with interactive choice-filled adventures.
The next major revolution came in the late 1990s, which saw the rise of optical disks in fifth generation consoles. The implications for RPGs were enormous—longer, more involved quests, better audio, and full-motion video. This was first clearly demonstrated in 1997 by the phenomenal success of Final Fantasy VII, which is considered one of the most influential games of all time. With a record-breaking production budget of around $45 million, the ambitious scope of Final Fantasy VII raised the possibilities for the genre, with its dozens of minigames and much higher production values. The latter includes innovations such as the use of 3D characters on pre-rendered backgrounds, battles viewed from multiple different angles rather than a single angle, and for the first time full-motion CGI video seamlessly blended into the gameplay, effectively integrated throughout the game. The game was soon ported to the PC and gained much success there, as did several other originally console RPGs, blurring the line between the console and computer platforms.
Cultural differences
Computer-driven role-playing games had their start in Western markets, with games generally geared to be played on home computers. By 1985, series like Wizardry and Ultima represented the state of role-playing games. In Japan, home computers had yet to take as great a hold as they had in the West due to their cost; there was little market for Western-developed games, though a few Japanese-developed games for personal computers during this time, such as The Black Onyx (1984), followed the Wizardry/Ultima format. With the release of the low-cost Famicom console (called the Nintendo Entertainment System overseas), a new opportunity arose to bring role-playing games to Japan. Dragon Quest (1986) was the first such attempt to recreate a role-playing game for a console, and required several simplifications to fit within the more limited memory and capabilities of the Famicom compared to computers: players in Dragon Quest controlled only a single character, the amount of control over this character was limited by the simplicity of the Famicom controller, and a less realistic art style was chosen to better visualize the characters within a tile-based graphics system. Dragon Quest was highly successful in Japan, leading to further entries in the series and to other titles, such as Final Fantasy, that followed the same simplifications.
Because of these differences, the role-playing genre eventually began to be classified into two fairly distinct styles: computer RPG and console RPG. In the early 2000s, however, as the platform differences began to blur, computer RPGs and console RPGs were eventually classified as Western role-playing games (or WRPGs) and Japanese role-playing games (or JRPGs), respectively.
Though sharing fundamental premises, Western RPGs tend to feature darker graphics, older characters, and a greater focus on roaming freedom, realism, and the underlying game mechanics (e.g. "rules-based" or "system-based"); whereas Eastern RPGs tend to feature brighter, anime-like or chibi graphics, younger characters, turn-based or faster-paced action gameplay, and a greater focus on tightly-orchestrated, linear storylines with intricate plots (e.g. "action-based" or "story-based"). Further, Western RPGs are more likely to allow players to create and customize characters from scratch, and since the late 1990s have had a stronger focus on extensive dialog tree systems (e.g. Planescape: Torment). On the other hand, Japanese RPGs tend to limit players to developing pre-defined player characters, and often do not allow the option to create or choose one's own playable characters or make decisions that alter the plot. In the early 1990s, Japanese RPGs were seen as being much closer to fantasy novels, but by the late 1990s had become more cinematic in style (e.g. Final Fantasy series). At the same time, Western RPGs started becoming more novelistic in style (e.g. Planescape: Torment), but by the late 2000s had also adopted a more cinematic style (e.g. Mass Effect series).
One reason given for these differences is that many early Japanese console RPGs can be seen as forms of interactive manga (Japanese comics) or anime wrapped around Western rule systems at the time, in addition to the influence of visual novel adventure games. As a result, Japanese console RPGs differentiated themselves with a stronger focus on scripted narratives and character drama, alongside streamlined gameplay. In recent years, these trends have in turn been adopted by Western RPGs, which have begun moving more towards tightly structured narratives, in addition to moving away from "numbers and rules" in favor of streamlined combat systems similar to action games. In addition, a large number of Western independent games are modelled after Japanese RPGs, especially those of the 16-bit era, partly due to the RPG Maker game development tools.
Another oft-cited difference is the prominence or absence of kawaisa, or "cuteness", in Japanese culture, and different approaches with respect to character aesthetics. Western RPGs tend to maintain a serious and gritty tone, whereas JRPG protagonists tend to be designed with an emphasis on aesthetic beauty, and even male characters are often young, androgynous, shōnen or bishōnen in appearance. JRPGs often have cute (and even comic-relief type) characters or animals, juxtaposed (or clashing) with more mature themes and situations; and many modern JRPGs feature characters designed in the same style as those in manga and anime. The stylistic differences are often due to differing target audiences: Western RPGs are usually geared primarily towards teenage to adult males, whereas Japanese RPGs are usually intended for a much larger demographic, including female audiences, who, for example, accounted for nearly a third of Final Fantasy XIII's fanbase.
Modern Japanese RPGs are more likely to feature turn-based battles; while modern Western RPGs are more likely to feature real-time combat. In the past, the reverse was often true: real-time action role-playing games were far more common among Japanese console RPGs than Western computer RPGs up until the late 1990s, due to gamepads usually being better suited to real-time action than the keyboard and mouse. There are of course exceptions, such as Final Fantasy XII (2006) and Devil Summoner: Raidou Kuzunoha vs. the Soulless Army (2006), two modern Eastern RPGs that feature real-time combat; and The Temple of Elemental Evil (2003), a modern Western RPG that features turn-based combat.
Some journalists and video game designers have questioned this cultural classification, arguing that the differences between Eastern and Western games have been exaggerated. In an interview held at the American Electronic Entertainment Expo, Japanese video game developer Tetsuya Nomura (who worked on Final Fantasy and Kingdom Hearts) emphasized that RPGs should not be classified by country-of-origin, but rather described simply for what they are: role-playing games. Hironobu Sakaguchi, creator of Final Fantasy and The Last Story, noted that, while "users like to categorise" Japanese RPGs as "turn-based, traditional styles" and Western RPGs as "born from first-person shooters," there "are titles that don't fit the category," pointing to Chrono Trigger (which he also worked on) and the Mana games. He further noted that there have been "other games similar to the style of Chrono Trigger," but that "it's probably because the games weren't localised and didn't reach the Western audience." Xeno series director Tetsuya Takahashi, in reference to Xenoblade Chronicles, stated that "I don’t know when exactly people started using the term 'JRPG,' but if this game makes people rethink the meaning of this term, I’ll be satisfied." The writer Jeremy Parish of 1UP.com states that "Xenoblade throws into high relief the sheer artificiality of the gaming community's obsession over the differences between" Western and Japanese RPGs, pointing out that it "does things that don't really fit into either genre. Gamers do love their boundaries and barriers and neat little rules, I know, but just because you cram something into a little box doesn't mean it belongs there." Nick Doerr of Joystiq criticizes the claim that Japanese RPGs are "too linear," pointing out that non-linear Japanese RPGs are not uncommon—for instance, the Romancing SaGa series. Likewise, Rowan Kaiser of Joystiq points out that linear Western RPGs were common in the 1990s, and argues that many of the often-mentioned differences between Eastern and Western games are stereotypes that are generally "not true" and "never was", pointing to classic examples like Lands of Lore and Betrayal at Krondor that were more narrative-focused than the typical Western-style RPGs of the time. In 2015, IGN noted in an interview with Xenoblade Chronicles X's development team that the label "JRPG" is most commonly used to refer to RPGs "whose presentation mimics the design sensibilities" of anime and manga, and that it's "typically the presentation and character archetypes" that signal "this is a JRPG."
Criticisms
Due to the cultural differences between Western and Japanese variations of role-playing games, both have often been compared and critiqued by those within the video games industry and press.
In the late 1980s, when traditional American computer RPGs such as Ultima and Defender of the Crown were ported to consoles, they received mixed reviews from console gamers, as they were "not perceived, by many of the players, to be as exciting as the Japanese imports," and lacked the arcade and action-adventure elements commonly found in Japanese console RPGs at the time. In the early 1990s, American computer RPGs also began facing criticism for their plots, where "the party sticks together through thick and thin" and always "act together as a group" rather than as individuals, and where non-player characters are "one-dimensional characters," in comparison to the more fantasy novel approach of Squaresoft console RPGs such as Final Fantasy IV. However, in 1994, game designer Sandy Petersen noted that, among computer gamers, there was criticism against cartridge-based console JRPGs being "not role-playing at all" due to popular examples such as Secret of Mana and especially The Legend of Zelda using "direct" arcade-style action combat systems instead of the more "abstract" turn-based battle systems associated with computer RPGs. In response, he pointed out that not all console RPGs are action-based, pointing to Final Fantasy and Lufia. Another early criticism, dating back to the Phantasy Star games in the late 1980s, was the frequent use of defined player characters, in contrast to the Wizardry and Gold Box games where the player's avatars (such as knights, clerics, or thieves) were blank slates.
As Japanese console RPGs became increasingly dominant in the 1990s, and became known for being more heavily story- and character-based, American computer RPGs began to face criticism for having characters devoid of personality or background, due to representing avatars which the player uses to interact with the world, in contrast to Japanese console RPGs, which depicted characters with distinctive personalities. American computer RPGs were thus criticized for lacking "more of the traditional role-playing" offered by Japanese console RPGs, which instead emphasized character interactions. In response, North American computer RPGs began making a comeback towards the end of the 1990s with interactive choice-filled adventures.
In more recent years, several writers have criticized JRPGs as not being "true" RPGs, for heavy usage of scripted cutscenes and dialogue, and a frequent lack of branching outcomes.[Turner] Japanese RPGs are also sometimes criticized for having relatively simple battle systems in which players are able to win by repetitively mashing buttons.[Turner] As a result, Japanese-style role-playing games are held in disdain by some Western gamers, leading to the term "JRPG" being used as a pejorative. Some observers have also speculated that Japanese RPGs are stagnating or declining in both quality and popularity, including remarks by BioWare co-founder Greg Zeschuk and writing director Daniel Erickson that JRPGs are stagnating—and that Final Fantasy XIII is not even really an RPG; criticisms regarding seemingly nebulous justifications by some Japanese designers for newly changed (or, alternately, newly un-changed) features of recent titles; calls among some gaming journalists to "fix" JRPGs' problems; as well as claims that some recent titles such as Front Mission Evolved are beginning to attempt—and failing—to imitate Western titles. In an article for PSM3, Brittany Vincent of RPGFan.com felt that "developers have mired the modern JRPG in unoriginality", citing Square Enix CEO Yoichi Wada, who stated that "they’re strictly catering to a particular audience"; the article noted the difference in game sales between Japan and North America before going on to suggest JRPGs may need to "move forward". This criticism has also occurred in the wider media: an advertisement for Fallout: New Vegas (Obsidian Entertainment) in Japan openly mocked Japanese RPGs' traditional characteristics in favor of the game's own approach. Nick Doerr of Joystiq noted that Bethesda felt that Japanese RPGs "are all the same" and "too linear," to which he responded that "[f]or the most part, it's true" but noted there are also non-linear Japanese RPGs such as the Romancing SaGa series. Such criticisms have produced responses from Japanese video game developers Shinji Mikami and Yuji Horii, to the effect that JRPGs were never as popular in the West to begin with, and that Western reviewers are biased against turn-based systems. Jeff Fleming of Gamasutra also states that Japanese RPGs on home consoles are generally showing signs of staleness, but notes that handheld consoles such as the Nintendo DS have had more original and experimental Japanese RPGs released in recent years.
Western RPGs have also received criticism in recent years. They remain less popular in Japan, where, until recently, Western games in general had a negative reputation. In Japan, where the vast majority of early console role-playing video games originate, Western RPGs remain largely unknown. The developer Motomu Toriyama criticized Western RPGs, stating that they "dump you in a big open world, and let you do whatever you like [which makes it] difficult to tell a compelling story." Hironobu Sakaguchi noted that "users like to categorise" Western RPGs as "a sort of different style, born from first person shooters." In recent years, some have also criticized Western RPGs for becoming less RPG-like, placing greater emphasis on action instead. Christian Nutt of GameSpy states that, in contrast to Japanese RPGs, Western RPGs' greater control over the development and customization of playable characters has come at the expense of plot and gameplay, resulting in what he felt was generic dialogue, lack of character development within the narrative and weaker battle systems.[Nutt] He also states that Western RPGs tend to focus more on the underlying rules governing the battle system rather than on the experience itself.[Nutt] Tom Battey of Edge Magazine noted that the problems often cited against Japanese RPGs (mentioned above) also often apply to many Western RPGs as well as games outside of the RPG genre. BioWare games have been criticized for "lack of innovation, repetitive structure and lack of real choice." Western RPGs, such as Bethesda games, have also been criticized for lacking in "narrative strength" or "mechanical intricacy" due to the open-ended, sandbox structure of their games.
Despite the criticisms leveled at both variations, Rowan Kaiser of Joystiq argued that many of the often-mentioned differences between Eastern and Western games are stereotypes that are generally not true, noting various similarities between several Western titles (such as Lands of Lore, Betrayal at Krondor, and Dragon Age) and several classic Eastern titles (such as Final Fantasy and Phantasy Star): both these Western and Japanese titles share a similar emphasis on linear storytelling, pre-defined characters and "bright-colored" graphics. The developer Hironobu Sakaguchi also observed that many games from both traditions don't fit such categorizations, such as his own Chrono Trigger as well as the Mana games, and that many other such Japanese role-playing games were never released in Western markets.
Controversy
The largely secular nature of Japanese culture has resulted in heavy usage of themes, symbols, and characters taken from a variety of religions, including Christianity and Japanese Shinto. This tends to be problematic when JRPGs are exported to Western countries where the topics of religion and blasphemy remain sensitive, such as the United States. A JRPG can exhibit elements that would be controversial in the West, such as Xenogears or Final Fantasy Tactics featuring antagonists that bear similarities to the Abrahamic God and the Catholic Church, respectively; and Nintendo has made efforts in the past to remove such references prior to introducing its games into the North American market.
Subgenres
Action RPGs
Typically, action RPGs feature each player directly controlling a single character in real time, with a strong focus on combat and action; plot and character interaction are kept to a minimum. Early action RPGs tended to follow the template set by 1980s Nihon Falcom titles such as the Dragon Slayer and Ys series, which feature hack and slash combat where the player character's movements and actions are controlled directly, using a keyboard or game controller, rather than using menus. This formula was refined by the action-adventure game The Legend of Zelda (1986), which set the template used by many subsequent action RPGs, including innovations such as an open world, nonlinear gameplay, battery backup saving, and an attack button that animates a sword swing or projectile attack on the screen. The game was largely responsible for the surge of action-oriented RPGs released since the late 1980s, both in Japan and North America. The Legend of Zelda series would continue to exert an influence on the transition of both console and computer RPGs from stat-heavy, turn-based combat towards real-time action combat in the following decades.
A different variation of the action RPG formula was popularized by Diablo (1996), where the majority of commands—such as moving and attacking—are executed using mouse clicks rather than via menus, though learned spells can also be assigned to hotkeys. In many action RPGs, non-player characters serve only one purpose, be it to buy or sell items or upgrade the player's abilities, or issue them with combat-centric quests. Problems players face also often have an action-based solution, such as breaking a wooden door open with an axe rather than finding the key needed to unlock it, though some games place greater emphasis on character attributes such as a "lockpicking" skill and puzzle-solving.
One common challenge in developing action RPGs is including content beyond that of killing enemies. With the sheer number of items, locations and monsters found in many such games, it can be difficult to create the needed depth to offer players a unique experience tailored to their beliefs, choices or actions. This is doubly true if a game makes use of randomization, as is common. One notable example of a game which went beyond this is Deus Ex (2000), which offered multiple solutions to problems using intricately layered story options and individually constructed environments. Instead of simply bashing their way through levels, players were challenged to act in character by choosing dialog options appropriately, and by using the surrounding environment intelligently. This produced an experience that was unique and tailored to each situation as opposed to one that repeated itself endlessly.
At one time, action RPGs were much more common on consoles than on computers. Though there had been attempts at creating action-oriented computer RPGs during the late 1980s and early 1990s, often in the vein of Zelda, very few saw any success, with the 1992 game Ultima VII being one of the more successful exceptions in North America. On the PC, Diablo's effect on the market was significant: it had many imitators and its style of combat went on to be used by many games that came after. For many years afterwards, games that closely mimicked the Diablo formula were referred to as "Diablo clones". Three of the four titles in the series were still sold together as part of the Diablo Battle Chest over a decade after Diablo's release. Other examples of action RPGs for the PC include Dungeon Siege, Sacred, Torchlight and Hellgate: London—the last of which was developed by a team headed by former Blizzard employees, some of whom had participated in the creation of the Diablo series. Like Diablo and Rogue before them, Torchlight and Hellgate: London made use of procedural generation to generate game levels.
Also included within this subgenre are role-playing shooters—games that incorporate elements of role-playing games and shooter games (including first-person and third-person). Recent examples include the Mass Effect series, Borderlands 2 and The 3rd Birthday.
First-person party-based RPGs
This subgenre consists of RPGs where the player leads a party of adventurers in first-person perspective, typically through a dungeon or labyrinth in a grid-based environment. Examples include the aforementioned Wizardry, Might and Magic and Bard's Tale series; as well as the Etrian Odyssey and Elminage series. Games of this type are sometimes called "blobbers", since the player moves the entire party around the playing field as a single unit, or "blob".
Most "blobbers" are turn-based, but some titles such as the Dungeon Master, Legend of Grimrock and Eye of the Beholder series are played in real-time. Early games in this genre lacked an automap feature, forcing players to draw their own maps in order to keep track of their progress. Environmental and spatial puzzles are common, meaning players may need to, for instance, move a stone in one part of the level in order to open a gate in another part of the level.
MMORPGs
Though many of the original RPGs for the PLATO mainframe system in the late 1970s also supported multiple, simultaneous players, the popularity of multiplayer modes in mainstream RPGs did not begin to rise sharply until the early to mid-1990s. For instance, Secret of Mana (1993), an early action role-playing game by Square, was one of the first commercial RPGs to feature cooperative multiplayer gameplay, offering two-player and three-player action once the main character had acquired his party members. Later, Diablo (1996) would combine CRPG and action game elements with an Internet multiplayer mode that allowed up to four players to enter the same world and fight monsters, trade items, or fight against each other.
Also during this time period, the MUD genre that had been spawned by MUD1 in 1978 was undergoing a tremendous expansion phase due to the release and spread of LPMud (1989) and DikuMUD (1991). Soon, driven by the mainstream adoption of the Internet, these parallel trends merged in the popularization of graphical MUDs, which would soon become known as massively multiplayer online role-playing games or MMORPGs, beginning with games like Meridian 59 (1995), Nexus: The Kingdom of the Winds (1996), Ultima Online (1997), Lineage (1998), and EverQuest (1999), and leading to more modern phenomena such as RuneScape (2001), Ragnarok Online (2002), Final Fantasy XI (2003), Eve Online (2003), Disney's Toontown Online (2003) and World of Warcraft (2004).
Although superficially similar to single-player RPGs, MMORPGs lend their appeal more to the socializing influences of being online with hundreds or even thousands of other players at a time, and trace their origins more from MUDs than from CRPGs like Ultima and Wizardry. Rather than focusing on the "old school" considerations of memorizing huge numbers of stats and esoterica and battling it out in complex, tactical environments, players instead spend much of their time forming and maintaining guilds and clans. The distinction between CRPGs on the one hand and MMORPGs and MUDs on the other can as a result be very sharp, likened to the difference between "attending a renaissance fair and reading a good fantasy novel".
Further, MMORPGs have been criticized for diluting the "epic" feeling of single-player RPGs and related media among thousands of concurrent adventurers. Stated simply: every player wants to be "The Hero", slay "The Monster", rescue "The Princess", or obtain "The Magic Sword". But when there are thousands of players all playing the same game, clearly not everyone can be the hero. This problem became obvious to some in the game EverQuest, where groups of players would compete and sometimes harass each other in order to get monsters in the same dungeon to drop valuable items, leading to several undesirable behaviors such as kill stealing, spawn camping, and ninja looting. In response—for instance by Richard Garriott in Tabula Rasa (2007)—developers began turning to instance dungeons as a means of reducing competition over limited resources, as well as preserving the gaming experience—though this mechanic has its own set of detractors.
Lastly, there exist markets such as Korea and China that, while saturated with MMORPGs, have so far proved relatively unreceptive to single-player RPGs. For instance, Internet-connected personal computers are relatively common in Korea when compared to other regions—particularly in the numerous "PC bangs" scattered around the country, where patrons are able to pay to play multiplayer video games—possibly due to historical bans on Japanese imports, as well as a culture that traditionally sees video games as "frivolous toys" and computers as educational. As a result, some have wondered whether the stand-alone, single-player RPG is still viable commercially—especially on the personal computer—when there are competing pressures such as big-name publishers' marketing needs, video game piracy, a change in culture, and the competitive price-point-to-processing-power ratio (at least initially) of modern console systems.
Roguelikes and roguelites
Roguelike is a subgenre of role-playing video games characterized by procedural generation of game levels, turn-based gameplay, tile-based graphics, permanent death of the player-character, and, typically, a high fantasy narrative setting. Roguelikes descend from the 1980 game Rogue, particularly mirroring Rogue's character- or sprite-based graphics. These games were popularized among college students and computer programmers of the 1980s and 1990s, leading to a large number of variants that nonetheless adhere to these common gameplay elements. Some of the more well-known variants include Hack, NetHack, Ancient Domains of Mystery, Moria, Angband, and Tales of Maj'Eyal. The Japanese series of Mystery Dungeon games by Chunsoft, inspired by Rogue, also falls within the concept of roguelike games.
More recently, with more powerful home computers and gaming systems, new variations of roguelikes incorporating other gameplay genres, thematic elements and graphical styles have become popular, typically retaining the notion of procedural generation. These titles are sometimes labeled as "roguelike-like", "rogue-lite", or "procedural death labyrinths" to reflect the variation from titles which mimic the gameplay of traditional roguelikes more faithfully. Other games, like Diablo and UnReal World, took inspiration from roguelikes.
Sandbox RPGs
Sandbox RPGs, or open world RPGs, allow the player a great amount of freedom and usually feature a more open free-roaming world (meaning the player is not confined to a single path restricted by rocks or fences etc.). Sandbox RPGs possess similarities to other sandbox games, such as the Grand Theft Auto series, with a large number of interactable NPCs, large amount of content and typically some of the largest worlds to explore and longest play-times of all RPGs due to an impressive amount of secondary content not critical to the game's main storyline. Sandbox RPGs often attempt to emulate an entire region of their setting. Popular examples of this subgenre include the Dragon Slayer series by Nihon Falcom, the early Dragon Quest games by Chunsoft, The Legend of Zelda by Nintendo, Wasteland by Interplay Entertainment, the SaGa and Mana series by Squaresoft, System Shock and System Shock 2 by Looking Glass Studios and Irrational Games, Deus Ex by Ion Storm, The Elder Scrolls and Fallout series by Bethesda Softworks and Interplay Entertainment, Fable by Lionhead Studios, the Gothic series by Piranha Bytes, the Xenoblade Chronicles series by Monolith Soft, and the Souls series by From Software.
Tactical RPGs
This subgenre of turn-based role-playing games principally refers to games which incorporate elements from strategy games as an alternative to traditional role-playing game (RPG) systems. Tactical RPGs are descendants of traditional strategy games, such as chess, and table-top role-playing and strategic war games, such as Chainmail, which were mainly tactical in their original form. The format of a tactical CRPG is also like a traditional RPG in its appearance, pacing and rule structure. Like standard RPGs, the player controls a finite party and battles a similar number of enemies. And like other RPGs, death is usually temporary, although some feature permanent death of party members. This genre, however, incorporates strategic gameplay such as tactical movement on an isometric grid. Tactical RPGs tend not to feature multiplayer play.
A number of early Western role-playing video games used a highly tactical form of combat, including parts of the Ultima series, which introduced party-based, tiled combat in Ultima III: Exodus (1983). Ultima III would go on to be ported to many other platforms and influence the development of later titles, as would Bokosuka Wars (1983), considered a pioneer in the strategy/simulation RPG genre, according to Nintendo. Conventionally, however, the term tactical RPG (known as simulation RPG in Japan) refers to the distinct subgenre that was born in Japan, as the early origins of tactical RPGs are difficult to trace from the American side of the Pacific, where much of the early RPG genre developed.
Many tactical RPGs can be both extremely time-consuming and extremely difficult. Hence, the appeal of most tactical RPGs is to the hardcore, not casual, computer and video game player. Traditionally, tactical RPGs have been quite popular in Japan but have not enjoyed the same degree of success in North America and elsewhere. However, the audience for Japanese tactical RPGs has grown substantially since the mid-90s, with PS1 and PS2 titles such as Final Fantasy Tactics, Suikoden Tactics, Vanguard Bandits, and Disgaea enjoying a surprising measure of popularity, as well as hand-held war games like Fire Emblem. (Final Fantasy Tactics for the PS1 is often considered the breakthrough title outside Japan.) Older TRPGs are also being re-released via software emulation—such as on the Wii Virtual Console—and on handheld game consoles, giving games a new lease on life and exposure to new audiences. Japanese video games such as these are as a result no longer nearly as rare a commodity in North America as they were during the 1990s.
Western video games have utilized similar mechanics for years, as well, and were largely defined by X-COM: UFO Defense (1994) in much the same way as Eastern video games were by Fire Emblem. Titles such as X-COM have generally allowed greater freedom of movement when interacting with the surrounding environment than their Eastern counterparts. Other similar examples include the Jagged Alliance (1994–2013) and Silent Storm (2003–2005) series. According to a few developers, it became increasingly difficult during the 2000s to develop games of this type for the PC in the West (though several had been developed in Eastern Europe with mixed results); and even some Japanese console RPG developers began to complain about a bias against turn-based systems. Reasons cited include Western publishers' focus on developing real-time and action-oriented games instead.
Lastly, there are a number of "full-fledged" CRPGs which could be described as having "tactical combat". Examples from the classic era of CRPGs include parts of the aforementioned Ultima series; SSI's Wizard's Crown (1985) and The Eternal Dagger (1987); the Gold Box games of the late '80s and early '90s, many of which were later ported to Japanese video game systems; and the Realms of Arkania (1992-1996) series based on the German The Dark Eye pen-and-paper system. More recent examples include Wasteland 2, Shadowrun: Dragonfall and Divinity: Original Sin—all released in 2014. Partly due to the release of these games, 2014 has been called "the first year of the CRPG renaissance".
Hybrid genres
Finally, a steadily increasing number of other non-RPG video games have adopted aspects traditionally seen in RPGs, such as experience point systems, equipment management, and choices in dialogue, as developers push to fill the demand for role-playing elements in non-RPGs. The blending of these elements with a number of different game engines and gameplay styles has created a myriad of hybrid game categories formed by mixing popular gameplay elements featured in other genres such as first-person shooters, platformers, and turn-based and real-time strategy games. Examples include first-person shooters such as parts of the Deus Ex (starting in 2000) and S.T.A.L.K.E.R. (starting in 2007) series; real-time strategy games such as SpellForce: The Order of Dawn (2003) and Warhammer 40,000: Dawn of War II (2009); puzzle video games such as Castlevania Puzzle (2010) and Puzzle Quest: Challenge of the Warlords (2007); and turn-based strategy games like the Steel Panthers (1995–2006) series, which combined tactical military combat with RPG-derived unit advancement. As a group, hybrid games have been both praised and criticized; being referred to by one critic as the "poor man's" RPG for omitting the dialogue choices and story-driven character development of major AAA titles; and by another critic as "promising" for shedding the conventions of more established franchises in an attempt to innovate.
Relationship to other genres
RPGs seldom test a player's physical skill. Combat is typically a tactical challenge rather than a physical one, and games involve other non-action gameplay such as choosing dialog options, inventory management, or buying and selling items.
Although RPGs share some combat rules with wargames, RPGs are often about a small group of individual characters. Wargames tend to have large groups of identical units, as well as non-humanoid units such as tanks and airplanes. Role-playing games do not normally allow the player to produce more units. However, the Heroes of Might and Magic series crosses these genres by combining individual heroes with large numbers of troops in large battles.
RPGs rival adventure games in terms of their rich storylines, in contrast to genres that do not rely upon storytelling, such as sports games or puzzle games. Both genres also feature highly detailed characters and a great deal of exploration. However, adventure games usually have a well-defined character, whereas RPGs, though they may do so as well, often allow the player to design their characters. Adventure games usually focus on one character, whereas RPGs often feature an entire party. RPGs also feature a combat system, which adventure games usually lack. Whereas both adventure games and RPGs may focus on the personal or psychological growth of characters, RPGs tend to emphasize a complex internal economy where characters are defined by increasing numerical attributes.
Gameplay elements strongly associated with this genre, such as statistical character development, have been widely adapted to other video game genres. For example, Grand Theft Auto: San Andreas, an action-adventure game, uses resource statistics (abbreviated as "stats") to define a wide range of attributes including stamina, weapon proficiency, driving, lung capacity, and muscle tone, and uses numerous cutscenes and quests to advance the story.
Warcraft III: Reign of Chaos, a real-time strategy game, features heroes that can complete quests, obtain new equipment, and "learn" new abilities as they advance in level. A community-created mod based on Warcraft III, Defense of the Ancients (DotA), served as significant inspiration for the multiplayer online battle arena (MOBA) genre. Owing to its Warcraft III origins, the MOBA genre is a fusion of role-playing games, real-time strategy games, and action games, with RPG elements built into its core gameplay. Key features such as control over a single character in a party, growth in power over the course of a match, learning new thematic abilities, use of mana, leveling and accumulation of experience points, equipment and inventory management, quest completion, and fights against stationary boss monsters all resemble role-playing games.
According to Satoru Iwata, former president of Nintendo, turn-based RPGs have been unfairly criticized as being outdated, and action-based RPGs can frustrate players who are unable to keep up with the battles. According to Yuji Horii, creator of the popular Dragon Quest series, and Ryutaro Ichimura, a producer at Square Enix, turn-based RPGs allow the player time to make decisions without feeling rushed or worrying about real-life distractions.
Popularity
The best-selling RPG series worldwide is Pokémon, which has sold over 300 million units as of November 2017. The second and third best-selling RPG franchises worldwide are Square Enix's Final Fantasy and Dragon Quest series, with over 110 million units and over 64 million units sold as of March 31, 2014, respectively. Pokémon Red, Blue, and Green alone sold approximately 23.64 million copies (10.23 million in Japan, 9.85 million in US, 3.56 million in UK). Nearly all the games in the main Final Fantasy series and all the games in the main Dragon Quest series (as well as many of the spin-off games) have sold over a million copies each, with some games selling more than four million copies. Square Enix's best-selling title is Final Fantasy VII, which has sold over 10 million copies worldwide as of 2010.
Among the best-selling PC RPGs overall is the massively multiplayer online game World of Warcraft, with 11.5 million subscribers as of May 2010. Among single-player PC RPGs, Diablo II has sold the most, with the most recently cited figure being over 4 million copies as of 2001. However, copies of the Diablo: Battle Chest continued to be sold in retail stores, with the compilation appearing on the NPD Group's top 10 PC game sales list as recently as 2010. Further, Diablo: Battle Chest was the 19th best-selling PC game of 2008—a full seven years after the game's initial release; and 11 million users still played Diablo II and StarCraft over Battle.net in 2010. As a franchise, the Diablo series has sold over 20 million copies, not including Diablo III, which was released for Windows and OS X in 2012.
The Dragon Quest series was awarded six world records in the 2008 Gamer's Edition of the Guinness Book of World Records, including "Best Selling Role Playing Game on the Super Famicom", "Fastest Selling Game in Japan", and "First Video Game Series to Inspire a Ballet". Likewise, the Pokémon series received eight records, including "Most Successful RPG Series of All Time". Diablo II was recognized in the 2000 standard edition of the Guinness Book of World Records for being the fastest-selling computer game ever, with more than 1 million units sold in the first two weeks of availability; though this number has been surpassed several times since. A number of RPGs have also been exhibited in the Barbican Art Gallery's "Game On" exhibition (starting in 2002) and the Smithsonian's "The Art of Video Games" exhibit (starting in 2012); and video game developers are now able to apply for grants from the US National Endowment for the Arts.
According to Metacritic, as of May 2011, the highest-rated video game by reviewers is the Xbox 360 version of Mass Effect 2, with an average metascore of 96 out of 100. According to GameRankings, the four top-rated video game RPGs, as of May 2010, are Mass Effect 2 with an average rating of 95.70% for the Xbox 360 version and 94.24% for the PC version; Fallout 3: Game of the Year Edition with an average rating of 95.40% for the PlayStation 3 version; Chrono Trigger with an average rating of 95.10%; and Star Wars: Knights of the Old Republic with an average rating of 94.18% for the Xbox version. Sales numbers for these six aforementioned titles are 10 million units sold worldwide for Final Fantasy VII as of May 2010; 161,161 units of Xenoblade Chronicles sold in Japan as of December 2010; 1.6 million units sold worldwide for Mass Effect 2 as of March 2010, just three months after release; 4.7 million units for Fallout 3 on all three platforms as of November 2008, also only a few months after publication; 3 million units for both the Xbox and PC versions of Star Wars: Knights of the Old Republic as of November 2004; and more than 2.65 million units for the SNES and PlayStation versions of Chrono Trigger as of March 2003, along with 790,000 copies for the Nintendo DS version as of March 31, 2009. Among these titles, none were PC-exclusives, three were North American multi-platform titles released for consoles like the Xbox and Xbox 360, and three were Japanese titles released for consoles like the SNES, PlayStation and Wii.
Final Fantasy VII topped GamePro's "26 Best RPGs of All Time" list in 2008, IGN's 2000 "Reader's Choice Game of the Century" poll, and the GameFAQs "Best Game Ever" audience polls in 2004 and 2005. It was also selected in Empire magazine's "100 Greatest Games of All Time" list as the highest-ranking RPG, at #2 on the list. On IGN's "Top 100 Games Of All Time" list in 2007, the highest-ranking RPG is Final Fantasy VI, at 9th place; and in both the 2006 and 2008 IGN Readers' Choice polls, Chrono Trigger is the top-ranked RPG, in 2nd place. Final Fantasy VI is also the top-ranked RPG in Game Informer's list of the 200 best games of all time, in 8th place, and is one of the eight games to get a cover for the magazine's 200th issue. The 2006 Famitsu readers' poll is dominated by RPGs, with nearly a dozen titles appearing in the top twenty; while most were Japanese, a few Western titles also made a showing. The highest-ranking games on the list were Final Fantasy X, followed by Final Fantasy VII and Dragon Warrior III. For the past decade, the Megami Tensei series has topped several "RPGs of the Decade" lists. RPGFan's "Top 20 RPGs of the Past Decade" list was topped by Shin Megami Tensei: Digital Devil Saga & Digital Devil Saga 2, followed by Persona 3, while RPGamer's "Top RPGs of the Decade" list was topped by Persona 3, followed by Final Fantasy X and World of Warcraft.
Lastly, while in recent years Western RPGs have consistently been released on consoles such as the Xbox and Xbox 360, these systems have not shown as much market dominance in Eastern markets such as Japan, and only a few Western RPG titles have been localized to Japanese. Further, RPGs were not the dominant genre on the most popular of the seventh generation video game consoles, the Nintendo Wii, although their presence among handheld systems such as the Nintendo DS is considerably greater.
Notable developers
Notable early RPG developers include Don Daglow for creating the first role-playing video game, Dungeon, in 1975; Yuji Horii for creating the Dragon Quest series; Hironobu Sakaguchi for creating the Final Fantasy series; Richard Garriott for creating the Ultima series; and Brenda Romero for writing and design work on the Wizardry series. Other notable RPG developers include Bethesda Game Studios, creators of Fallout 3, Fallout 4, and The Elder Scrolls series; Ray Muzyka and Greg Zeschuk for founding BioWare; and CD Projekt, creators of The Witcher series and Cyberpunk 2077. Finally, Ryozo Tsujimoto (Monster Hunter series) and Katsura Hashino (Persona series) were cited as "Japanese Game Developers You Should Know" by 1UP.com in 2010.
Crowdfunding
Since 2009, there has been a trend of crowdfunding video games using services such as Kickstarter. Role-playing games that have been successfully crowdfunded include Serpent in the Staglands (2015), The Banner Saga series (2015-2018), Dead State (2014), Wasteland 2 (2014), Shadowrun Returns and its sequels (2012-2015), the Pillars of Eternity series (2015-2018), the Divinity: Original Sin series (2014-2017) and Torment: Tides of Numenera (2017). Due to the release of Wasteland 2, Divinity: Original Sin, The Banner Saga and Dead State (as well as some more traditionally funded titles such as Might and Magic X, Lords of Xulima and The Dark Eye: Blackguards), 2014 was called "the first year of the CRPG renaissance" by PC Gamer. However, it has been speculated that the spike in funded projects at around this time was the result of a "Kickstarter bubble", and that a subsequent slump in project funding was due to "Kickstarter fatigue".
The highest crowdfunded CRPG as of May 2017 is Torment: Tides of Numenera, with $4,188,927 raised via Kickstarter. Kickstarted games have been released for personal computer, video game console, and mobile platforms.
Footnotes
References
External links
The History of Computer Role-Playing Games at Gamasutra
Role-playing
Video game terminology
Articles containing video clips |
3254510 | https://en.wikipedia.org/wiki/Scala%20%28programming%20language%29 | Scala (programming language) | Scala is a strongly statically typed general-purpose programming language that supports both object-oriented programming and functional programming. Designed to be concise, many of Scala's design decisions were aimed at addressing criticisms of Java.
Scala source code can be compiled to Java bytecode and run on a Java virtual machine (JVM). Scala provides language interoperability with Java so that libraries written in either language may be referenced directly in Scala or Java code. Like Java, Scala is object-oriented, and uses a syntax termed curly-brace which is similar to the language C. Since Scala 3, there is also an option to use the off-side rule (indenting) to structure blocks, and its use is advised. Martin Odersky has said that this turned out to be the most productive change introduced in Scala 3.
Unlike Java, Scala has many features of functional programming languages like Scheme, Standard ML, and Haskell, including currying, immutability, lazy evaluation, and pattern matching. It also has an advanced type system supporting algebraic data types, covariance and contravariance, higher-order types (but not higher-rank types), and anonymous types. Other features of Scala not present in Java include operator overloading, optional parameters, named parameters, and raw strings. Conversely, a feature of Java not in Scala is checked exceptions, which has proved controversial.
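For illustration, the following sketch shows several of these features together in Scala 2 syntax; the names used (FunctionalFeaturesDemo, Shape, Circle, Rectangle, add) are invented for the example:
object FunctionalFeaturesDemo extends App {
  // Immutability: a val binding cannot be reassigned
  val numbers = List(1, 2, 3, 4)

  // An algebraic data type, consumed with pattern matching
  sealed trait Shape
  case class Circle(radius: Double) extends Shape
  case class Rectangle(width: Double, height: Double) extends Shape

  def area(shape: Shape): Double = shape match {
    case Circle(r)       => math.Pi * r * r
    case Rectangle(w, h) => w * h
  }

  // Currying: a method with multiple parameter lists, partially applied
  def add(x: Int)(y: Int): Int = x + y
  val addFive = add(5) _   // a function of type Int => Int

  // Lazy evaluation: the right-hand side runs only on first access
  lazy val expensive = { println("computed"); 42 }

  println(numbers.map(addFive))   // List(6, 7, 8, 9)
  println(area(Circle(1.0)))      // 3.141592653589793
  println(expensive)              // prints "computed", then 42
}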
The name Scala is a portmanteau of scalable and language, signifying that it is designed to grow with the demands of its users.
History
The design of Scala started in 2001 at the École Polytechnique Fédérale de Lausanne (EPFL) (in Lausanne, Switzerland) by Martin Odersky. It followed on from work on Funnel, a programming language combining ideas from functional programming and Petri nets. Odersky formerly worked on Generic Java, and javac, Sun's Java compiler.
After an internal release in late 2003, Scala was released publicly in early 2004 on the Java platform. A second version (v2.0) followed in March 2006.
On 17 January 2011, the Scala team won a five-year research grant of over €2.3 million from the European Research Council. On 12 May 2011, Odersky and collaborators launched Typesafe Inc. (later renamed Lightbend Inc.), a company to provide commercial support, training, and services for Scala. Typesafe received a $3 million investment in 2011 from Greylock Partners.
Platforms and license
Scala runs on the Java platform (Java virtual machine) and is compatible with existing Java programs. As Android applications are typically written in Java and translated from Java bytecode into Dalvik bytecode (which may be further translated to native machine code during installation) when packaged, Scala's Java compatibility makes it well-suited to Android development, more so when a functional approach is preferred.
The reference Scala software distribution, including compiler and libraries, is released under the Apache license.
Other compilers and targets
Scala.js is a Scala compiler that compiles to JavaScript, making it possible to write Scala programs that can run in web browsers or on Node.js. The compiler, in development since 2013, was announced as no longer experimental in 2015 (v0.6). Version v1.0.0-M1 was released in June 2018 and version 1.1.1 in September 2020.
Scala Native is a Scala compiler that targets the LLVM compiler infrastructure to create executable code that uses a lightweight managed runtime, which uses the Boehm garbage collector. The project is led by Denys Shabalin and had its first release, 0.1, on 14 March 2017. Development of Scala Native began in 2015 with a goal of being faster than just-in-time compilation for the JVM by eliminating the initial runtime compilation of code and also providing the ability to call native routines directly.
A reference Scala compiler targeting the .NET Framework and its Common Language Runtime was released in June 2004, but was officially dropped in 2012.
Examples
"Hello World" example
The Hello World program written in Scala has this form:
object HelloWorld extends App {
println("Hello, World!")
}
Unlike the stand-alone Hello World application for Java, there is no class declaration and nothing is declared to be static; a singleton object created with the object keyword is used instead.
When the program is stored in file HelloWorld.scala, the user compiles it with the command:
$ scalac HelloWorld.scala
and runs it with
$ scala HelloWorld
This is analogous to the process for compiling and running Java code. Indeed, Scala's compiling and executing model is identical to that of Java, making it compatible with Java build tools such as Apache Ant.
A shorter version of the "Hello World" Scala program is:
println("Hello, World!")
Scala includes an interactive shell and scripting support. Saved in a file named HelloWorld2.scala, this can be run as a script using the command:
$ scala HelloWorld2.scala
Commands can also be entered directly into the Scala interpreter, using the -e option:
$ scala -e 'println("Hello, World!")'
Expressions can be entered interactively in the REPL:
$ scala
Welcome to Scala 2.12.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131).
Type in expressions for evaluation. Or try :help.
scala> List(1, 2, 3).map(x => x * x)
res0: List[Int] = List(1, 4, 9)
scala>
Basic example
The following example shows the differences between Java and Scala syntax. The function mathFunction takes an integer, squares it, and then adds the cube root of that number to the natural log of that number, returning the result (i.e., f(num) = ∛(num²) + ln(num²)):
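The side-by-side listing from the original comparison does not survive here; a minimal Scala version consistent with the description above, deliberately written to use the constructs discussed below, might look like:
def mathFunction(num: Int): Int = {
  var numSquare: Int = num * num   // var marks a mutable local variable
  // cube root of the square plus its natural log, cast back to an Int
  return (math.cbrt(numSquare) + math.log(numSquare)).asInstanceOf[Int]
}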
Some syntactic differences in this code are:
Scala does not require semicolons to end statements.
Value types are capitalized: Int, Double, Boolean instead of int, double, boolean.
Parameter and return types follow, as in Pascal, rather than precede as in C.
Methods must be preceded by def.
Local or class variables must be preceded by val (indicates an immutable variable) or var (indicates a mutable variable).
The return operator is unnecessary in a function (although allowed); the value of the last executed statement or expression is normally the function's value.
Instead of the Java cast operator (Type) foo, Scala uses foo.asInstanceOf[Type], or a specialized function such as toDouble or toInt.
Instead of Java's import foo.*;, Scala uses import foo._.
Function or method foo() can also be called as just foo; method thread.send(signo) can also be called as just thread send signo; and method foo.toString() can also be called as just foo toString.
These syntactic relaxations are designed to allow support for domain-specific languages.
Some other basic syntactic differences:
Array references are written like function calls, e.g. array(i) rather than array[i]. (Internally in Scala, the former expands into array.apply(i) which returns the reference)
Generic types are written as e.g. List[String] rather than Java's List<String>.
Instead of the pseudo-type void, Scala has the actual singleton class Unit (see below).
Example with classes
The following example contrasts the definition of classes in Java and Scala.
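The original side-by-side listings are likewise not reproduced here; the sketch below (an assumed reconstruction, with illustrative names) exercises the points discussed next: class parameters, class-level initialization code, an auxiliary constructor, and a companion object:
class Point(val x: Double, val y: Double) {
  // code at class level becomes part of the default constructor
  println("created point (" + x + ", " + y + ")")

  // an alternative (auxiliary) constructor, as in Java
  def this() = this(0.0, 0.0)

  def distanceTo(other: Point): Double =
    Point.distanceBetween(x, y, other.x, other.y)
}

// the companion object holds what Java would declare static
object Point {
  def distanceBetween(x1: Double, y1: Double, x2: Double, y2: Double): Double =
    math.hypot(x1 - x2, y1 - y2)
}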
The code above shows some of the conceptual differences between Java and Scala's handling of classes:
Scala has no static variables or methods. Instead, it has singleton objects, which are essentially classes with only one instance. Singleton objects are declared using object instead of class. It is common to place static variables and methods in a singleton object with the same name as the class name, which is then known as a companion object. (The underlying class for the singleton object has a $ appended. Hence, for class Foo with companion object object Foo, under the hood there's a class Foo$ containing the companion object's code, and one object of this class is created, using the singleton pattern.)
In place of constructor parameters, Scala has class parameters, which are placed on the class, similar to parameters to a function. When declared with a val or var modifier, fields are also defined with the same name, and automatically initialized from the class parameters. (Under the hood, external access to public fields always goes through accessor (getter) and mutator (setter) methods, which are automatically created. The accessor function has the same name as the field, which is why it's unnecessary in the above example to explicitly declare accessor methods.) Note that alternative constructors can also be declared, as in Java. Code that would go into the default constructor (other than initializing the member variables) goes directly at class level.
Default visibility in Scala is public.
Features (with reference to Java)
Scala has the same compiling model as Java and C#, namely separate compiling and dynamic class loading, so that Scala code can call Java libraries.
Scala's operational characteristics are the same as Java's. The Scala compiler generates byte code that is nearly identical to that generated by the Java compiler. In fact, Scala code can be decompiled to readable Java code, with the exception of certain constructor operations. To the Java virtual machine (JVM), Scala code and Java code are indistinguishable. The only difference is one extra runtime library, scala-library.jar.
Scala adds a large number of features compared with Java, and has some fundamental differences in its underlying model of expressions and types, which make the language theoretically cleaner and eliminate several corner cases in Java. Several of the features Scala adds are also available in C#.
Syntactic flexibility
As mentioned above, Scala has a good deal of syntactic flexibility, compared with Java. The following are some examples:
Semicolons are unnecessary; lines are automatically joined if they begin or end with a token that cannot normally come in this position, or if there are unclosed parentheses or brackets.
Any method can be used as an infix operator, e.g. "%d apples".format(num) and "%d apples" format num are equivalent. In fact, arithmetic operators like + and << are treated just like any other methods, since function names are allowed to consist of sequences of arbitrary symbols (with a few exceptions made for things like parens, brackets and braces that must be handled specially); the only special treatment that such symbol-named methods undergo concerns the handling of precedence.
Methods apply and update have syntactic short forms. foo()—where foo is a value (singleton object or class instance)—is short for foo.apply(), and foo() = 42 is short for foo.update(42). Similarly, foo(42) is short for foo.apply(42), and foo(4) = 2 is short for foo.update(4, 2). This is used for collection classes and extends to many other cases, such as STM cells.
Scala distinguishes between no-parens (def foo = 42) and empty-parens (def foo() = 42) methods. When calling an empty-parens method, the parentheses may be omitted, which is useful when calling into Java libraries that do not know this distinction, e.g., using foo.toString instead of foo.toString(). By convention, a method should be defined with empty-parens when it performs side effects.
Method names ending in colon (:) expect the argument on the left-hand-side and the receiver on the right-hand-side. For example, the expression 4 :: 2 :: Nil is the same as Nil.::(2).::(4), the first form corresponding visually to the result (a list with first element 4 and second element 2).
Class body variables can be transparently implemented as separate getter and setter methods. For trait FooLike { var bar: Int }, an implementation may back the variable with a private var x and define the accessor def bar = x and the mutator def bar_=(value: Int) { x = value }. The call site will still be able to use a concise foo.bar = 42.
The use of curly braces instead of parentheses is allowed in method calls. This allows pure library implementations of new control structures. For example, breakable { ... if (...) break() ... } looks as if breakable was a language defined keyword, but really is just a method taking a thunk argument. Methods that take thunks or functions often place these in a second parameter list, allowing to mix parentheses and curly braces syntax: Vector.fill(4) { math.random } is the same as Vector.fill(4)(math.random). The curly braces variant allows the expression to span multiple lines.
For-expressions (explained further down) can accommodate any type that defines monadic methods such as map, flatMap and filter.
By themselves, these may seem like questionable choices, but collectively they serve the purpose of allowing domain-specific languages to be defined in Scala without needing to extend the compiler. For example, Erlang's special syntax for sending a message to an actor, i.e. actor ! message can be (and is) implemented in a Scala library without needing language extensions.
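As a sketch of this point, the ! operator below is an ordinary method on a hypothetical Mailbox class (not taken from any actor library), invoked with infix syntax:
import scala.collection.mutable

class Mailbox {
  private val messages = mutable.Queue.empty[String]
  def !(message: String): Unit = messages.enqueue(message)  // `!` is just a method name
  def drain(): Unit = while (messages.nonEmpty) println(messages.dequeue())
}

val inbox = new Mailbox
inbox ! "hello"   // infix call, equivalent to inbox.!("hello")
inbox.drain()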
Unified type system
Java makes a sharp distinction between primitive types (e.g. int and boolean) and reference types (any class). Only reference types are part of the inheritance scheme, deriving from java.lang.Object. In Scala, all types inherit from a top-level class Any, whose immediate children are AnyVal (value types, such as Int and Boolean) and AnyRef (reference types, as in Java). This means that the Java distinction between primitive types and boxed types (e.g. int vs. Integer) is not present in Scala; boxing and unboxing is completely transparent to the user. Scala 2.10 allows for new value types to be defined by the user.
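A short sketch of the unified hierarchy and of a user-defined value type (the name Meters and the variables are illustrative):
val a: Any = 42        // an Int can be viewed as Any
val v: AnyVal = true   // Boolean is a value type under AnyVal
val r: AnyRef = "text" // String is a reference type under AnyRef

// a user-defined value type (Scala 2.10+); avoids boxing in most uses
class Meters(val value: Double) extends AnyVal {
  def +(other: Meters): Meters = new Meters(value + other.value)
}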
For-expressions
Instead of the Java "foreach" loops for looping through an iterator, Scala has for-expressions, which are similar to list comprehensions in languages such as Haskell, or a combination of list comprehensions and generator expressions in Python. For-expressions using the yield keyword allow a new collection to be generated by iterating over an existing one, returning a new collection of the same type. They are translated by the compiler into a series of map, flatMap and filter calls. Where yield is not used, the code approximates to an imperative-style loop, by translating to foreach.
A simple example is:
val s = for (x <- 1 to 25 if x*x > 50) yield 2*x
The result of running it is the following vector:
Vector(16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50)
(Note that the expression 1 to 25 is not special syntax. The method to is rather defined in the standard Scala library as an extension method on integers, using a technique known as implicit conversions that allows new methods to be added to existing types.)
A more complex example of iterating over a map is:
// Given a map specifying Twitter users mentioned in a set of tweets,
// and number of times each user was mentioned, look up the users
// in a map of known politicians, and return a new map giving only the
// Democratic politicians (as objects, rather than strings).
val dem_mentions = for {
(mention, times) <- mentions
account <- accounts.get(mention)
if account.party == "Democratic"
} yield (account, times)
Expression (mention, times) <- mentions is an example of pattern matching (see below). Iterating over a map returns a set of key-value tuples, and pattern-matching allows the tuples to easily be destructured into separate variables for the key and value. Similarly, the result of the comprehension also returns key-value tuples, which are automatically built back up into a map because the source object (from the variable mentions) is a map. Note that if mentions instead held a list, set, array or other collection of tuples, exactly the same code above would yield a new collection of the same type.
Functional tendencies
While supporting all of the object-oriented features available in Java (and in fact, augmenting them in various ways), Scala also provides a large number of capabilities that are normally found only in functional programming languages. Together, these features allow Scala programs to be written in an almost completely functional style and also allow functional and object-oriented styles to be mixed.
Examples are:
No distinction between statements and expressions
Type inference
Anonymous functions with capturing semantics (i.e., closures)
Immutable variables and objects
Lazy evaluation
Delimited continuations (since 2.8)
Higher-order functions
Nested functions
Currying
Pattern matching
Algebraic data types (through case classes)
Tuples
Everything is an expression
Unlike C or Java, but similar to languages such as Lisp, Scala makes no distinction between statements and expressions. All statements are in fact expressions that evaluate to some value. Functions that would be declared as returning void in C or Java, and statements like while that logically do not return a value, are in Scala considered to return the type Unit, which is a singleton type, with only one object of that type. Functions and operators that never return at all (e.g. the throw operator or a function that always exits non-locally using an exception) logically have return type Nothing, a special type containing no objects; that is, a bottom type, i.e. a subclass of every possible type. (This in turn makes type Nothing compatible with every type, allowing type inference to function correctly.)
Similarly, an if-then-else "statement" is actually an expression, which produces a value, i.e. the result of evaluating one of the two branches. This means that such a block of code can be inserted wherever an expression is desired, obviating the need for a ternary operator in Scala:
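For instance (an assumed reconstruction of the kind of example intended here):
// if/else is an expression yielding a value, so no ternary operator is needed
def toHexDigit(x: Int): Char =
  if (x >= 10) ('A' + x - 10).toChar
  else ('0' + x).toChar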
For similar reasons, return statements are unnecessary in Scala, and in fact are discouraged. As in Lisp, the last expression in a block of code is the value of that block of code, and if the block of code is the body of a function, it will be returned by the function.
To make it clear that all functions are expressions, even methods that return Unit are written with an equals sign
def printValue(x: String): Unit = {
println("I ate a %s".format(x))
}
or equivalently (with type inference, and omitting the unnecessary braces):
def printValue(x: String) = println("I ate a %s" format x)
Type inference
Due to type inference, the type of variables, function return values, and many other expressions can typically be omitted, as the compiler can deduce it. Examples are val x = "foo" (for an immutable constant or immutable object) or var x = 1.5 (for a variable whose value can later be changed). Type inference in Scala is essentially local, in contrast to the more global Hindley-Milner algorithm used in Haskell, ML and other more purely functional languages. This is done to facilitate object-oriented programming. The result is that certain types still need to be declared (most notably, function parameters, and the return types of recursive functions), e.g.
def formatApples(x: Int) = "I ate %d apples".format(x)
or (with a return type declared for a recursive function)
def factorial(x: Int): Int =
if (x == 0)
1
else
x*factorial(x - 1)
Anonymous functions
In Scala, functions are objects, and a convenient syntax exists for specifying anonymous functions. An example is the expression x => x < 2, which specifies a function with one parameter, that compares its argument to see if it is less than 2. It is equivalent to the Lisp form (lambda (x) (< x 2)). Note that neither the type of x nor the return type need be explicitly specified, and can generally be inferred by type inference; but they can be explicitly specified, e.g. as (x: Int) => x < 2 or even (x: Int) => (x < 2): Boolean.
Anonymous functions behave as true closures in that they automatically capture any variables that are lexically available in the environment of the enclosing function. Those variables will be available even after the enclosing function returns, and unlike in the case of Java's anonymous inner classes do not need to be declared as final. (It is even possible to modify such variables if they are mutable, and the modified value will be available the next time the anonymous function is called.)
An even shorter form of anonymous function uses placeholder variables. For example, the following:
list map { x => sqrt(x) }
can be written more concisely as
list map { sqrt(_) }
or even
list map sqrt
Immutability
Scala enforces a distinction between immutable and mutable variables. Mutable variables are declared using the var keyword and immutable values are declared using the val keyword.
A variable declared using the val keyword cannot be reassigned, just as a variable declared using the final keyword cannot be reassigned in Java. A val is only shallowly immutable; that is, an object referenced by a val is not guaranteed to itself be immutable.
By convention, however, immutable classes are encouraged, and the Scala standard library provides a rich set of immutable collection classes.
Scala provides mutable and immutable variants of most collection classes, and the immutable version is always used unless the mutable version is explicitly imported.
The immutable variants are persistent data structures that always return an updated copy of an old object instead of updating the old object destructively in place.
An example of this is immutable linked lists where prepending an element to a list is done by returning a new list node consisting of the element and a reference to the list tail.
Appending an element to a list can only be done by prepending all elements in the old list to a new list with only the new element.
In the same way, inserting an element in the middle of a list will copy the first half of the list, but keep a reference to the second half of the list. This is called structural sharing.
This allows for very easy concurrency — no locks are needed as no shared objects are ever modified.
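A minimal sketch of the structural sharing described above:
val tail = List(2, 3)
val a = 1 :: tail       // a new cons cell pointing at the existing tail
val b = 0 :: tail       // shares the very same tail; nothing is copied
assert(a.tail eq tail)  // reference equality confirms the sharing
assert(b.tail eq tail)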
Lazy (non-strict) evaluation
Evaluation is strict ("eager") by default. In other words, Scala evaluates expressions as soon as they are available, rather than as needed. However, it is possible to declare a variable non-strict ("lazy") with the lazy keyword, meaning that the code to produce the variable's value will not be evaluated until the first time the variable is referenced. Non-strict collections of various types also exist (such as the type Stream, a non-strict linked list), and any collection can be made non-strict with the view method. Non-strict collections provide a good semantic fit to things like server-produced data, where the evaluation of the code to generate later elements of a list (that in turn triggers a request to a server, possibly located somewhere else on the web) only happens when the elements are actually needed.
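A minimal sketch of both mechanisms, lazy values and non-strict collection views:
lazy val answer: Int = {
  println("computing...")  // runs only on first access
  42
}
println(answer)  // prints "computing..." and then 42
println(answer)  // cached: prints only 42

// any collection can be made non-strict with `view`;
// only the three needed elements are ever computed
val firstThree = (1 to 1000000).view.map(_ * 2).take(3).toList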
Tail recursion
Functional programming languages commonly provide tail call optimization to allow for extensive use of recursion without stack overflow problems. Limitations in Java bytecode complicate tail call optimization on the JVM. In general, a function that calls itself with a tail call can be optimized, but mutually recursive functions cannot. Trampolines have been suggested as a workaround. Trampoline support has been provided by the Scala library with the object scala.util.control.TailCalls since Scala 2.8.0 (released 14 July 2010). A function may optionally be annotated with @tailrec, in which case it will not compile unless it is tail recursive.
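A minimal sketch of the @tailrec annotation (the gcd example is illustrative):
import scala.annotation.tailrec

object Gcd {
  // refuses to compile unless the recursive call is in tail position
  @tailrec
  def gcd(a: Int, b: Int): Int =
    if (b == 0) a else gcd(b, a % b)
}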
Case classes and pattern matching
Scala has built-in support for pattern matching, which can be thought of as a more sophisticated, extensible version of a switch statement, where arbitrary data types can be matched (rather than just simple types like integers, booleans and strings), including arbitrary nesting. A special type of class known as a case class is provided, which includes automatic support for pattern matching and can be used to model the algebraic data types used in many functional programming languages. (From the perspective of Scala, a case class is simply a normal class for which the compiler automatically adds certain behaviors that could also be provided manually, e.g., definitions of methods providing for deep comparisons and hashing, and destructuring a case class on its constructor parameters during pattern matching.)
An example of a definition of the quicksort algorithm using pattern matching is this:
def qsort(list: List[Int]): List[Int] = list match {
case Nil => Nil
case pivot :: tail =>
val (smaller, rest) = tail.partition(_ < pivot)
qsort(smaller) ::: pivot :: qsort(rest)
}
The idea here is that we partition a list into the elements less than a pivot and the elements not less, recursively sort each part, and paste the results together with the pivot in between. This uses the same divide-and-conquer strategy of mergesort and other fast sorting algorithms.
The match operator is used to do pattern matching on the object stored in list. Each case expression is tried in turn to see if it will match, and the first match determines the result. In this case, Nil only matches the literal object Nil, but pivot :: tail matches a non-empty list, and simultaneously destructures the list according to the pattern given. In this case, the associated code will have access to a local variable named pivot holding the head of the list, and another variable tail holding the tail of the list. Note that these variables are read-only, and are semantically very similar to variable bindings established using the let operator in Lisp and Scheme.
Pattern matching also happens in local variable declarations. In this case, the return value of the call to tail.partition is a tuple — in this case, two lists. (Tuples differ from other types of containers, e.g. lists, in that they are always of fixed size and the elements can be of differing types — although here they are both the same.) Pattern matching is the easiest way of fetching the two parts of the tuple.
The form _ < pivot is a declaration of an anonymous function with a placeholder variable; see the section above on anonymous functions.
The list operators :: (which adds an element onto the beginning of a list, similar to cons in Lisp and Scheme) and ::: (which appends two lists together, similar to append in Lisp and Scheme) both appear. Despite appearances, there is nothing "built-in" about either of these operators. As specified above, any string of symbols can serve as function name, and a method applied to an object can be written "infix"-style without the period or parentheses. The line above as written:
qsort(smaller) ::: pivot :: qsort(rest)
could also be written thus:
qsort(rest).::(pivot).:::(qsort(smaller))
in more standard method-call notation. (Methods that end with a colon are right-associative and bind to the object to the right.)
Partial functions
In the pattern-matching example above, the body of the match operator is a partial function, which consists of a series of case expressions, with the first matching expression prevailing, similar to the body of a switch statement. Partial functions are also used in the exception-handling portion of a try statement:
try {
...
} catch {
case nfe: NumberFormatException => { println(nfe); List(0) }
case _ => Nil
}
Finally, a partial function can be used alone, and the result of calling it is equivalent to doing a match over it. For example, the prior code for quicksort can be written thus:
val qsort: List[Int] => List[Int] = {
case Nil => Nil
case pivot :: tail =>
val (smaller, rest) = tail.partition(_ < pivot)
qsort(smaller) ::: pivot :: qsort(rest)
}
Here a read-only variable is declared whose type is a function from lists of integers to lists of integers, and it is bound to a partial function. (Note that the single parameter of the partial function is never explicitly declared or named.) However, we can still call this variable exactly as if it were a normal function:
scala> qsort(List(6,2,5,9))
res32: List[Int] = List(2, 5, 6, 9)
Object-oriented extensions
Scala is a pure object-oriented language in the sense that every value is an object. Data types and behaviors of objects are described by classes and traits. Class abstractions are extended by subclassing and by a flexible mixin-based composition mechanism to avoid the problems of multiple inheritance.
Traits are Scala's replacement for Java's interfaces. Interfaces in Java versions prior to 8 are highly restricted, able only to contain abstract function declarations. This has led to criticism that providing convenience methods in interfaces is awkward (the same methods must be reimplemented in every implementation), and extending a published interface in a backwards-compatible way is impossible. Traits are similar to mixin classes in that they have nearly all the power of a regular abstract class, lacking only class parameters (Scala's equivalent to Java's constructor parameters), since traits are always mixed in with a class. The super operator behaves specially in traits, allowing traits to be chained using composition in addition to inheritance. The following example is a simple window system:
abstract class Window {
// abstract
def draw()
}
class SimpleWindow extends Window {
def draw() {
println("in SimpleWindow")
// draw a basic window
}
}
trait WindowDecoration extends Window { }
trait HorizontalScrollbarDecoration extends WindowDecoration {
// "abstract override" is needed here for "super()" to work because the parent
// function is abstract. If it were concrete, regular "override" would be enough.
abstract override def draw() {
println("in HorizontalScrollbarDecoration")
super.draw()
// now draw a horizontal scrollbar
}
}
trait VerticalScrollbarDecoration extends WindowDecoration {
abstract override def draw() {
println("in VerticalScrollbarDecoration")
super.draw()
// now draw a vertical scrollbar
}
}
trait TitleDecoration extends WindowDecoration {
abstract override def draw() {
println("in TitleDecoration")
super.draw()
// now draw the title bar
}
}
A variable may be declared thus:
val mywin = new SimpleWindow with VerticalScrollbarDecoration with HorizontalScrollbarDecoration with TitleDecoration
The result of calling mywin.draw() is:
in TitleDecoration
in HorizontalScrollbarDecoration
in VerticalScrollbarDecoration
in SimpleWindow
In other words, the call to draw first executed the code in TitleDecoration (the last trait mixed in), then (through the super() calls) threaded back through the other mixed-in traits and eventually to the code in Window, even though none of the traits inherited from one another. This is similar to the decorator pattern, but is more concise and less error-prone, as it doesn't require explicitly encapsulating the parent window, explicitly forwarding functions whose implementation isn't changed, or relying on run-time initialization of entity relationships. In other languages, a similar effect could be achieved at compile-time with a long linear chain of implementation inheritance, but with the disadvantage compared to Scala that one linear inheritance chain would have to be declared for each possible combination of the mix-ins.
Expressive type system
Scala is equipped with an expressive static type system that mostly enforces the safe and coherent use of abstractions. The type system is, however, not sound. In particular, the type system supports:
Classes and abstract types as object members
Structural types
Path-dependent types
Compound types
Explicitly typed self references
Generic classes
Polymorphic methods
Upper and lower type bounds
Variance
Annotation
Views
Scala is able to infer types by use. This makes most static type declarations optional. Static types need not be explicitly declared unless a compiler error indicates the need. In practice, some static type declarations are included for the sake of code clarity.
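As a brief sketch of a few of the features listed above (generic classes, variance, and type bounds, with inference supplying the rest; all names are illustrative):
class Animal
class Cat extends Animal

// a covariant generic class; the lower bound B >: A lets replacement widen the type
class Box[+A](val value: A) {
  def replaceWith[B >: A](b: B): Box[B] = new Box(b)
}

val cats: Box[Cat] = new Box(new Cat)
val animals: Box[Animal] = cats             // allowed because Box is covariant in A
val widened = cats.replaceWith(new Animal)  // inferred type: Box[Animal]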
Type enrichment
A common technique in Scala, known as "enrich my library" (originally termed "pimp my library" by Martin Odersky in 2006; concerns were raised about this phrasing due to its negative connotations and immaturity), allows new methods to be used as if they were added to existing types. This is similar to the C# concept of extension methods but more powerful, because the technique is not limited to adding methods and can, for instance, be used to implement new interfaces. In Scala, this technique involves declaring an implicit conversion from the type "receiving" the method to a new type (typically, a class) that wraps the original type and provides the additional method. If a method cannot be found for a given type, the compiler automatically searches for any applicable implicit conversions to types that provide the method in question.
This technique allows new methods to be added to an existing class using an add-on library such that only code that imports the add-on library gets the new functionality, and all other code is unaffected.
The following example shows the enrichment of type Int with methods isEven and isOdd:
object MyExtensions {
implicit class IntPredicates(i: Int) {
def isEven = i % 2 == 0
def isOdd = !isEven
}
}
import MyExtensions._ // bring implicit enrichment into scope
4.isEven // -> true
Importing the members of MyExtensions brings the implicit conversion to extension class IntPredicates into scope.
Concurrency
Scala's standard library includes support for futures and promises, in addition to the standard Java concurrency APIs. Originally, it also included support for the actor model, which is now available as a separate open source platform Akka created by Lightbend Inc. Akka actors may be distributed or combined with software transactional memory (transactors). Alternative communicating sequential processes (CSP) implementations for channel-based message passing include Communicating Scala Objects and JCSP.
An Actor is like a thread instance with a mailbox. It can be created by system.actorOf, overriding the receive method to receive messages and using the ! (exclamation point) method to send a message.
The following example shows an EchoServer that can receive messages and then print them.
// this uses Akka's ActorDSL (present in older Akka releases, since removed)
// and assumes an implicit ActorSystem in scope:
import akka.actor.ActorDSL._
import akka.actor.ActorSystem

implicit val system = ActorSystem("echo")

val echoServer = actor(new Act {
  become {
    case msg => println("echo " + msg)
  }
})
echoServer ! "hi"
Scala also comes with built-in support for data-parallel programming in the form of Parallel Collections integrated into its Standard Library since version 2.9.0.
The following example shows how to use Parallel Collections to improve performance.
val urls = List("https://scala-lang.org", "https://github.com/scala/scala")
def fromURL(url: String) = scala.io.Source.fromURL(url)
.getLines().mkString("\n")
val t = System.currentTimeMillis()
urls.par.map(fromURL(_)) // par returns parallel implementation of a collection
println("time: " + (System.currentTimeMillis - t) + "ms")
Besides futures and promises, actor support, and data parallelism, Scala also supports asynchronous programming with software transactional memory and event streams.
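A minimal sketch of the standard-library futures mentioned above:
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val f: Future[Int] = Future { 21 * 2 }  // runs asynchronously on the global pool
val g: Future[Int] = f.map(_ + 1)       // non-blocking composition
println(Await.result(g, 1.second))      // blocks here only for demonstration: 43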
Cluster computing
The most well-known open-source cluster-computing solution written in Scala is Apache Spark. Additionally, Apache Kafka, the publish–subscribe message queue popular with Spark and other stream processing technologies, is written in Scala.
Testing
There are several ways to test code in Scala. ScalaTest supports multiple testing styles and can integrate with Java-based testing frameworks. ScalaCheck is a library similar to Haskell's QuickCheck. specs2 is a library for writing executable software specifications. ScalaMock provides support for testing higher-order and curried functions. JUnit and TestNG are popular testing frameworks written in Java.
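As a sketch of the property-based style, a ScalaCheck property in the QuickCheck tradition (assuming ScalaCheck is on the classpath):
import org.scalacheck.Prop.forAll

// the library generates random lists and checks the property for each
val reverseTwice = forAll { (xs: List[Int]) => xs.reverse.reverse == xs }
reverseTwice.check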
Versions
Comparison with other JVM languages
Scala is often compared with Groovy and Clojure, two other programming languages also using the JVM. Substantial differences between these languages exist in the type system, in the extent to which each language supports object-oriented and functional programming, and in the similarity of their syntax to that of Java.
Scala is statically typed, while both Groovy and Clojure are dynamically typed. This makes the type system more complex and difficult to understand, but allows almost all type errors to be caught at compile-time and can result in significantly faster execution. By contrast, dynamic typing requires more testing to ensure program correctness and generally runs more slowly, in exchange for greater programming flexibility and simplicity. Regarding speed differences, current versions of Groovy and Clojure allow optional type annotations to help programs avoid the overhead of dynamic typing in cases where types are practically static. This overhead is further reduced when using recent versions of the JVM, which has been enhanced with an invokedynamic instruction for methods that are defined with dynamically typed arguments. These advances reduce the speed gap between static and dynamic typing, although a statically typed language, like Scala, is still the preferred choice when execution efficiency is very important.
Regarding programming paradigms, Scala inherits the object-oriented model of Java and extends it in various ways. Groovy, while also strongly object-oriented, is more focused on reducing verbosity. In Clojure, object-oriented programming is deemphasised, with functional programming being the main strength of the language. Scala also has many functional programming facilities, including features found in advanced functional languages like Haskell, and tries to be agnostic between the two paradigms, letting the developer choose between them or, more frequently, some combination thereof.
Regarding syntax similarity with Java, Scala inherits much of Java's syntax, as is the case with Groovy. Clojure, on the other hand, follows the Lisp syntax, which is different in both appearance and philosophy. However, learning Scala is also considered difficult because of its many advanced features. This is not the case with Groovy, despite its also being a feature-rich language, largely because Groovy was designed primarily as a scripting language.
Adoption
Language rankings
JVM-based languages such as Clojure, Groovy, Kotlin, and Scala are significantly less popular than the original Java language, which is usually ranked in the top three places and which also continues to evolve over time.
The Popularity of Programming Language Index, which tracks searches for language tutorials, ranked Scala 15th in April 2018 with a small downward trend, and 17th in January 2021. This makes Scala the third most popular JVM-based language, after Java and Kotlin (ranked 12th).
The TIOBE index of programming language popularity employs internet search engine rankings and similar publication-counting to determine language popularity. It has shown Scala in 31st place. In this ranking, Scala is ahead of Haskell (38th) and Erlang, but below Go (14th), Swift (15th), and Perl (19th).
The RedMonk Programming Language Rankings, which establishes rankings based on the number of GitHub projects and questions asked on Stack Overflow, ranks Scala 14th. Here, Scala is placed inside a second-tier group of languages, ahead of Go, PowerShell, and Haskell, and behind Swift, Objective-C, TypeScript, and R.
In the 2018 edition of the State of Java survey, which collected data from 5160 developers on various Java-related topics, Scala places third in terms of use of alternative languages on the JVM. Relative to the prior year's edition of the survey, Scala's use among alternative JVM languages fell from 28.4% to 21.5%, overtaken by Kotlin, which rose from 11.4% in 2017 to 28.8% in 2018.
In 2013, when Scala was at version 2.10, the ThoughtWorks Technology Radar, an opinion-based biannual report of a group of senior technologists, recommended Scala adoption in its languages and frameworks category. In July 2014, this assessment was made more specific, referring to “Scala, the good parts”: “To successfully use Scala, you need to research the language and have a very strong opinion on which parts are right for you, creating your own definition of Scala, the good parts.”
Companies
In April 2009, Twitter announced that it had switched large portions of its backend from Ruby to Scala and intended to convert the rest.
Gilt uses Scala and Play Framework.
Foursquare uses Scala and Lift.
Coursera uses Scala and Play Framework.
Apple Inc. uses Scala in certain teams, along with Java and the Play framework.
The Guardian newspaper's high-traffic website guardian.co.uk announced in April 2011 that it was switching from Java to Scala.
The New York Times revealed in 2014 that its internal content management system Blackbeard is built using Scala, Akka, and Play.
The Huffington Post newspaper started to employ Scala as part of its content delivery system Athena in 2013.
Swiss bank UBS approved Scala for general production use.
LinkedIn uses the Scalatra microframework to power its Signal API.
Meetup uses Unfiltered toolkit for real-time APIs.
Remember the Milk uses Unfiltered toolkit, Scala and Akka for public API and real-time updates.
Verizon is seeking to make "a next-generation framework" using Scala.
Airbnb develops open-source machine-learning software "Aerosolve", written in Java and Scala.
Zalando moved its technology stack from Java to Scala and Play.
SoundCloud uses Scala for its back-end, employing technologies such as Finagle (micro services), Scalding and Spark (data processing).
Databricks uses Scala for the Apache Spark Big Data platform.
Morgan Stanley uses Scala extensively in their finance and asset-related projects.
There are teams within Google and Alphabet Inc. that use Scala, mostly due to acquisitions such as Firebase and Nest.
Walmart Canada uses Scala for their back-end platform.
Duolingo uses Scala for their back-end module that generates lessons.
HMRC uses Scala for many UK Government tax applications.
M1 Finance uses Scala for their back-end platform.
Criticism
In March 2015, Raffi Krikorian, former VP of the Platform Engineering group at Twitter, stated that he would not have chosen Scala in 2011 due to its learning curve. The same month, LinkedIn SVP Kevin Scott stated their decision to "minimize [their] dependence on Scala". In November 2011, Yammer moved away from Scala for reasons that included the learning curve for new team members and incompatibility from one version of the Scala compiler to the next.
See also
sbt, a widely used build tool for Scala projects
Play!, an open-source Web application framework that supports Scala
Akka, an open-source toolkit for building concurrent and distributed applications
Chisel, an open-source language built on Scala that is used for hardware design and generation