id | url | title | text
---|---|---|---|
63801995 | https://en.wikipedia.org/wiki/Control%20Z | Control Z | Control Z is a Mexican teen drama streaming television series created by Carlos Quintanilla Sakar, Adriana Pelusi and Miguel García Moreno and developed by Lemon Studios for Netflix, that premiered on Netflix on 22 May 2020. The show stars Ana Valeria Becerril, Michael Ronda, Yankel Stevan and Zión Moreno. Shortly after its release, the series was renewed for a second season on 29 May 2020. Season 2 was released on 4 August 2021 and later that month the series was renewed for a third season set to be released in 2022.
Premise
During an assembly at Colegio Nacional (National School), a hacker exposes a huge secret of one of the students. This causes panic and humiliates a closeted transgender student. The hacker continues revealing students' secrets which causes numerous students to turn on one another. Sofía Herrera, an introverted teenager, tries to uncover who this hacker is before the dirt on her becomes public.
Cast and characters
Main
Ana Valeria Becerril as Sofía Herrera; her love interest was Raúl and is now Javier. She is a very observant and quiet student, often seen as an outsider by her peers, and when the secrets are revealed she sets out to investigate who the hacker is. She is smart and very justice-driven, but usually has to keep the peace between Raúl and Javier, and she suffers from mental health problems due to her family situation.
Michael Ronda as Javier Williams, the new student and a love interest of Sofía. He quickly makes friends with Sofía on his first day at National School. He expresses romantic feelings for Sofía throughout the season and has a rivalry with Raúl, usually over the investigation and Sofía. He is the son of a retired footballer, Damian Williams, but refuses the coach's requests for him to play for the school team.
Yankel Stevan as Raúl León, a love interest of Sofía. Just like Javier he expresses romantic feelings for Sofía throughout the season. He comes from quite a wealthy family and is initially one of the popular kids but later helps Sofía track down the hacker. He strongly dislikes Javier.
Zión Moreno as Isabela de la Fuente (season 1), a transgender student and former girlfriend of Pablo. She was the most popular girl in the school and was liked by many boys, but after it was revealed that she was trans, she got harassed by multiple students and dumped by Pablo.
Luis Curiel as Luis Navarro (season 1; guest season 2), a quiet student who enjoys drawing. He is often bullied by Gerry, Ernesto and Dario, but is helped by Sofía and Javier.
Samantha Acuña as Alex, an openly yet socially distant lesbian student who is in a forbidden relationship with her Biology teacher Gabriela and is talented with computers.
Macarena García Romero as Natalia Alexander, one of the popular students, twin sister of María and one of Isabela's former best friends. She was in charge of organizing the NONA event until the secret about her using money raised from it in order to buy expensive items for herself was exposed. She is very self-centred, focused, and goes to extreme lengths to achieve what she wants.
Fiona Paloma as María Alexander, the twin sister of Natalia and another of Isabela's former best friends, who is secretly having a fling with Pablo behind Isabela's back. She is often her sister's sidekick, but is nicer to people and always tries to make her friends happy. She feels guilty about being with Pablo while he was dating Isabela.
Andrés Baida as Pablo García, one of the popular students and ex-boyfriend of Isabela. He dumped Isabela in front of the school after she was exposed for being trans, but tries to reconcile with her multiple times later, despite cheating with María. He is loyal to his friends and encourages Gerry to stop bullying Luis, but is revealed to be an unfaithful boyfriend. In season 2, he is the member of the student body least willing to forgive Raúl when he returns to the school after his actions as the hacker, and he repeatedly uses violence to make Raúl pay for everything he did, even after finding out that Raúl has been hiding Gerry, who is wanted by the police for killing Luis. He also tries unsuccessfully to reconcile with María after acting indifferently about her pregnancy.
Patricio Gallardo as Gerardo "Gerry" Granda, a former student at National School and ex-boyfriend of Rosita. He is a closeted, sexually curious boy who presents as heterosexual and is exposed for watching gay pornography. He constantly bullied Luis along with his minions, Darío and Ernesto, and crosses the line when he puts Luis into a coma. He is very aggressive and bottles up a lot of anger and insecurity, but is shown to be empathetic and regrets his mistreatment of Luis.
Iván Aragón as Darío, one of Gerry's minions
Xabiani Ponce de León as Ernesto, another of Gerry's minions
Paty Maqueo as Rosa "Rosita" Restrepo, Gerry's ex-girlfriend who breaks up with him after his secret is exposed. She takes over as the organizer of the NONA after Natalia is revealed to be stealing the money.
Rodrigo Cachero as Miguel Quintanilla, the former principal of National School. He is dating Sofía's mother and proposes, but is rejected. He has sex with Susana out of spite. In season 2, he is engaged to Nora until she breaks it off after his affair with Susana comes to light.
Rocío Verdejo as Nora, Sofía's mother.
Mauro Sánchez Navarro as Bruno (season 1), the person in charge of technology at the school who assists the hacker in revealing the secrets of the students.
Lidia San José as Gabriela, a Biology teacher who is fired from National School after her relationship with Alex is exposed
Thanya López as Susana, one of the teachers at National School. She later becomes the new principal in season 2.
Renata del Castillo as Lulú, Quintanilla's secretary at National School
Arturo Barba as Fernando Herrera, Sofía's father and Nora's husband who fakes his own death
Kariam Castro as Valeria
Ariana Saavedra as Regina
Ana Sofía Gatica as Claudia (season 2–present)
Recurring
Alexander Holtmann as Lalo de la Fuente (season 1), Isabela's father
Nastassia Villasana as Bety de la Fuente (season 1), Isabela's mother
Marco Zunino as Damián Williams, Javier's father and a retired footballer
Susana Lozano as Gerry's mother
Ricardo Crespo as Gerry's father
Daniela Zavala as Alondra de León (season 1), Raúl's mother
Alejandro Ávila as Roberto de León (season 1), Raúl's father
Rodrigo Mejía as Natalia's father (seasons 1–2)
Citlali Galindo as Natalia's mother
Cristian Santin as Güero
Fabián Mejía as Salvador (season 1)
Cuitlahuac Santoyo as Gibrán
Sandra Burgos as Marta, Luis's mother
Pablo de la Rosa as Jordi
David Montalvo as Joaquín (season 1)
Pierre Louis as Felipe "Pipe" (season 2)
Episodes
Season 1 (2020)
Season 2 (2021)
Notes
References
External links
2020 Mexican television series debuts
2020s high school television series
2020s LGBT-related drama television series
2020s teen drama television series
Mexican drama television series
Mexican LGBT-related television shows
Spanish-language Netflix original programming
Television series about bullying
Television series about teenagers
Television series produced by Lemon Films
Transgender-related television shows
Works about computer hacking |
33579817 | https://en.wikipedia.org/wiki/ArchBang | ArchBang | ArchBang Linux is a simple, lightweight, rolling-release Linux distribution based on a minimal Arch Linux operating system with the i3 window manager; it previously used the Openbox window manager. ArchBang is especially suitable for high performance on old or low-end hardware with limited resources. ArchBang's aim is to provide a simple out-of-the-box Arch-based Linux distribution with a pre-configured i3 desktop suite, adhering to Arch principles.
ArchBang has also been recommended as a fast installation method for people who have experience installing Arch Linux but want to avoid the more demanding default installation of Arch Linux when reinstalling it on another PC.
History
Inspired by CrunchBang Linux (which was derived from Debian), ArchBang was originally conceived and founded in a forum thread posted on the CrunchBang Forums by Willensky Aristide (a.k.a. Will X TrEmE). Aristide wanted a rolling release with the Openbox setup that Crunchbang came with. Arch Linux provided the light configurable rolling release system that was needed as a base for the Openbox desktop. With the encouragement and help of many in the CrunchBang community, and the addition of developer Pritam Dasgupta (a.k.a. sHyLoCk), the project began to take form. The goal was to make Arch Linux look like CrunchBang.
As of April 16, 2012, the new project leader is Stan McLaren.
Installation
ArchBang is available as an x86-64 ISO file for use as a live CD or for installation on a USB flash drive. The live CD is designed to allow the user to test the operating system prior to installation.
ArchBang comes with a modified Arch Linux graphical installation script for installation and also provides a simple, easy to follow, step-by-step installation guide.
Reception
Jesse Smith reviewed the ArchBang 2011 release for DistroWatch Weekly:
Smith also reviewed ArchBang 2013.09.01.
Whitson Gordon of Lifehacker wrote a review of ArchBang in 2011:
References
External links
ArchBang on DistroWatch
ArchBang on OpenSourceFeed Gallery
Arch-based Linux distributions
Linux distributions without systemd
Operating system distributions bootable from read-only media
Pacman-based Linux distributions
Rolling Release Linux distributions
X86-64 Linux distributions
Linux distributions |
32813335 | https://en.wikipedia.org/wiki/Quod%20Libet%20%28software%29 | Quod Libet (software) | Quod Libet is a cross-platform free and open-source audio player, tag editor and library organizer. The main design philosophy is that the user knows how they want to organize their music best; the software is therefore built to be fully customizable and extensible using regular expressions and boolean logic. Quod Libet is based on GTK and written in Python, and uses the Mutagen tagging library. Ex Falso is the stand-alone tag-editing app (no audio) based on the same code and libraries.
Quod Libet is very scalable, able to handle libraries with tens of thousands of songs with ease. It provides a full feature set including support for Unicode, regular expression searching, key bindings to multimedia keys, fast but powerful tag editing, and a variety of plugins.
Quod Libet is available on most Linux distributions, macOS and Windows, requiring only PyGObject, Python, and an Open Sound System (OSS), ALSA or JACK compatible audio device. The XFCE desktop ISO image provided by the Debian project installs Quod Libet as the default audio player.
Features
Audio playback
Can deal with various audio back-ends via the plug-in architecture of GStreamer
Supports ReplayGain with smart selection based on either single track or full album, based on current view and play order
'Real' shuffle mode: the entire playlist is played before repeating
Ratings-weighted random playback setting
Configurable play queue
Tag editing
Complete Unicode support
Changes to multiple files at once, even if files are in different formats
Ability to tag files based on filenames with fully configurable formats
Customizable renaming of files based on their tags and a user-supplied format
Human-readable tag references, e.g. <artist> or <title> rather than %a or %t, with support for "if not-null x else y" logic (e.g. <albumartist|albumartist|artist>); an example renaming pattern is shown after this list
Fast track renumbering
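For illustration, a renaming pattern built from the tag references above might look like the following; the tag names used here (<albumartist>, <artist>, <album>, <tracknumber>, <title>) are plausible examples, and the exact set of recognised names should be checked against the Quod Libet documentation.
```
<albumartist|albumartist|artist>/<album>/<tracknumber>. <title>
```
A pattern like this files each track under its album artist (falling back to the plain artist tag when no album artist is set), then the album, then a track-numbered file name.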
Audio library
Audio Feeds / Podcast support
Authenticated SoundCloud support
Can save play counts
Can download and save lyrics
Fast refreshing of entire library based on changed files
Internet Radio / SHOUTcast support
Configurable song rating
User interface
Configurable interface to suit user preferences; Pango markup is used so that user can display tags in any way desired in the player
Launch additional "browsers" to keep different or multiple views on the library
Drag-and-drop support throughout the interface
Tray icon with full player control
Automatic recognition and display of many uncommon tags
Customisable aggregation across albums or playlists (min, max, average, sum, Bayesian average)
Multiple ways to browse library:
Progressive search: the library is filtered as the search is typed
Queries support boolean logic, numerical and date-based expressions, regular expressions, and synthetic tags that are derived internally (e.g. play count, rating, inclusion in a playlist); example queries are shown after this list
Playlists with integration throughout the player
Paned browser, using any fully customizable tags (e.g. genre, date, album artist...), allowing the user to drill down through their library as they prefer
View by album list with cover art
View by file-system directory, which includes songs not in your library
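As an illustration of the query language described above, searches might look like the following: a plain tag match, a boolean AND that combines a tag match with a numeric rating expression, and a numeric synthetic-tag query. These are hypothetical examples; the exact operator syntax should be verified against the Quod Libet search documentation.
```
artist = "Kraftwerk"
&(genre = jazz, #(rating >= 0.8))
#(playcount > 10)
```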
File formats
Supported formats include MP3, Ogg Vorbis, Opus, FLAC, ALAC, Musepack, MOD/XM/IT, WMA, WavPack and MPEG-4 AAC
Unix-like control and query mechanisms
Status information available from the command line
Control of the player using a named pipe (FIFO) is possible (a minimal example follows this list)
Text-based files available with current song information
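As a sketch of the named-pipe control mentioned above, the following C program writes a single command to the player's control FIFO. Both the FIFO location (~/.quodlibet/control) and the command name ("next") are assumptions made for illustration; check the Quod Libet documentation for the actual path and command set.
```c
/* Sketch: send one command to a running Quod Libet instance through its
 * control FIFO.  The FIFO path and the command string are assumptions;
 * verify both against the Quod Libet documentation. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *home = getenv("HOME");
    if (home == NULL)
        return 1;

    char path[512];
    snprintf(path, sizeof(path), "%s/.quodlibet/control", home); /* assumed location */

    FILE *fifo = fopen(path, "w");   /* blocks until the player holds the read end */
    if (fifo == NULL) {
        perror("fopen");
        return 1;
    }
    fputs("next\n", fifo);           /* assumed command: skip to the next song */
    fclose(fifo);
    return 0;
}
```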
Plugins
Quod Libet is currently bundled with over 80 Python-based plugins, including:
Automatic tagging via MusicBrainz and CDDB
Download and preview album art from a variety of online sources
On-screen display pop-ups
Last.fm/AudioScrobbler submission
Tag character encoding conversion
Intelligent title-casing of tags
Finding duplicate or near-duplicate songs across the entire library
Scan and save ReplayGain values across multiple albums at once (using GStreamer)
D-Bus-based Multimedia Shortcut Keys
Integrate with Sonos systems and Logitech Squeezebox
Export playlists to common formats (PLS, M3U, XSPF)
Publish to MQTT queues
See also
Comparison of free software for audio#Players
Exaile
DeaDBeeF
References
External links
Quod Libet on Bitbucket
Debian Package information page
2004 software
Applications using D-Bus
Audio software with JACK support
Audio player software that uses GTK
Cross-platform free software
Free audio software
Free media players
Free software programmed in Python
Linux media players
MacOS multimedia software
Software that uses PyGObject
Tag editors for Linux
Windows multimedia software |
342977 | https://en.wikipedia.org/wiki/Business%20process | Business process | A business process, business method or business function is a collection of related, structured activities or tasks by people or equipment in which a specific sequence produces a service or product (serves a particular business goal) for a particular customer or customers. Business processes occur at all organizational levels and may or may not be visible to the customers. A business process may often be visualized (modeled) as a flowchart of a sequence of activities with interleaving decision points or as a process matrix of a sequence of activities with relevance rules based on data in the process. The benefits of using business processes include improved customer satisfaction and improved agility for reacting to rapid market change. Process-oriented organizations break down the barriers of structural departments and try to avoid functional silos.
Overview
A business process begins with a mission objective (an external event) and ends with achievement of the business objective of providing a result that provides customer value. Additionally, a process may be divided into subprocesses (process decomposition), the particular inner functions of the process. Business processes may also have a process owner, a responsible party for ensuring the process runs smoothly from start to finish.
Broadly speaking, business processes can be organized into three types, according to von Rosing et al.:
Operational processes, which constitute the core business and create the primary value stream, e.g., taking orders from customers, opening an account, and manufacturing a component
Management processes, the processes that oversee operational processes, including corporate governance, budgetary oversight, and employee oversight
Supporting processes, which support the core operational processes, e.g., accounting, recruitment, call center, technical support, and safety training
A slightly different approach to these three types is offered by Kirchmer:
Operational processes, which focus on properly executing the operational tasks of an entity; this is where personnel "get the things done"
Management processes, which ensure that the operational processes are conducted appropriately; this is where managers "ensure efficient and effective work processes"
Governance processes, which ensure the entity is operating in full compliance with necessary legal regulations, guidelines, and shareholder expectations; this is where executives ensure the "rules and guidelines for business success" are followed
A complex business process may be decomposed into several subprocesses, which have their own attributes but also contribute to achieving the overall goal of the business. The analysis of business processes typically includes the mapping or modeling of processes and sub-processes down to activity/task level. Processes can be modeled through a large number of methods and techniques. For instance, the Business Process Modeling Notation is a business process modeling technique that can be used for drawing business processes in a visualized workflow. While decomposing processes into process types and categories can be useful, care must be taken in doing so as there may be crossover. In the end, all processes are part of a largely unified outcome, one of "customer value creation." This goal is expedited with business process management, which aims to analyze, improve, and enact business processes.
History
Adam Smith
An important early (1776) description of processes was that of economist Adam Smith in his famous example of a pin factory. Inspired by an article in Diderot's Encyclopédie, Smith described the production of a pin in the following way:
“One man draws out the wire; another straights it; a third cuts it; a fourth points it; a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on is a peculiar business; to whiten the pins is another ... and the important business of making a pin is, in this manner, divided into about eighteen distinct operations, which, in some manufactories, are all performed by distinct hands, though in others the same man will sometimes perform two or three of them.”
Smith also first recognized how output could be increased through the division of labor. Previously, in a society where production was dominated by handcrafted goods, one man would perform all the activities required during the production process, whereas Smith described how the work was divided into a set of simple tasks performed by specialized workers. In Smith's example, the division of labor resulted in productivity increasing by 24,000 percent (sic), i.e. the same number of workers made 240 times as many pins as they had been producing before the introduction of the division of labor.
It is worth noting that Smith did not advocate the division of labor at any price or as an end in itself. The appropriate level of task division was defined through experimental design of the production process. In contrast to Smith's view, which was limited to the same functional domain and comprised activities that are in direct sequence in the manufacturing process, today's process concept includes cross-functionality as an important characteristic. Following his ideas, the division of labor was adopted widely, while the integration of tasks into a functional, or cross-functional, process was not considered as an alternative option until much later.
Frederick Winslow Taylor
American engineer Frederick Winslow Taylor greatly influenced and improved the quality of industrial processes in the early twentieth century. His Principles of Scientific Management focused on standardization of processes, systematic training and clearly defining the roles of management and employees. His methods were widely adopted in the United States, Russia and parts of Europe and led to further developments such as “time and motion study” and visual task optimization techniques, such as Gantt charts.
Peter Drucker
In the latter part of the twentieth century, management guru Peter Drucker focused much of his work on simplification and decentralization of processes, which led to the concept of outsourcing. He also coined the concept of the "knowledge worker", as differentiated from manual workers, and described how knowledge management would become part of an entity's processes.
Other definitions
Davenport (1993) defines a (business) process as:
“a structured, measured set of activities designed to produce a specific output for a particular customer or market. It implies a strong emphasis on how work is done within an organization, in contrast to a product focus’s emphasis on what. A process is thus a specific ordering of work activities across time and space, with a beginning and an end, and clearly defined inputs and outputs: a structure for action. ... Taking a process approach implies adopting the customer’s point of view. Processes are the structure by which an organization does what is necessary to produce value for its customers.”
This definition contains certain characteristics a process must possess. These characteristics are achieved by a focus on the business logic of the process (how work is done), instead of taking a product perspective (what is done). Following Davenport's definition of a process, we can conclude that a process must have clearly defined boundaries, input and output; that it consists of smaller parts, activities, which are ordered in time and space; that there must be a receiver of the process outcome, a customer; and that the transformation taking place within the process must add customer value.
Hammer & Champy’s (1993) definition can be considered as a subset of Davenport’s. They define a process as:
“a collection of activities that takes one or more kinds of input and creates an output that is of value to the customer.”
As we can note, Hammer & Champy have a more transformation oriented perception, and put less emphasis on the structural component – process boundaries and the order of activities in time and space.
Rummler & Brache (1995) use a definition that clearly encompasses a focus on the organization’s external customers, when stating that
“a business process is a series of steps designed to produce a product or service. Most processes (...) are cross-functional, spanning the ‘white space’ between the boxes on the organization chart. Some processes result in a product or service that is received by an organization's external customer. We call these primary processes. Other processes produce products that are invisible to the external customer but essential to the effective management of the business. We call these support processes.”
The above definition distinguishes two types of processes, primary and support processes, depending on whether a process is directly involved in the creation of customer value, or concerned with the organization’s internal activities. In this sense, Rummler and Brache's definition follows Porter's value chain model, which also builds on a division of primary and secondary activities. According to Rummler and Brache, a typical characteristic of a successful process-based organization is the absence of secondary activities in the primary value flow that is created in the customer oriented primary processes. The characteristic of processes as spanning the white space on the organization chart indicates that processes are embedded in some form of organizational structure. Also, a process can be cross-functional, i.e. it ranges over several business functions.
Johansson et al. (1993) define a process as:
“a set of linked activities that take an input and transform it to create an output. Ideally, the transformation that occurs in the process should add value to the input and create an output that is more useful and effective to the recipient either upstream or downstream.”
This definition also emphasizes the constitution of links between activities and the transformation that takes place within the process. Johansson et al. also include the upstream part of the value chain as a possible recipient of the process output. Summarizing the four definitions above, we can compile the following list of characteristics for a business process:
Definability: It must have clearly defined boundaries, input and output.
Order: It must consist of activities that are ordered according to their position in time and space (a sequence).
Customer: There must be a recipient of the process' outcome, a customer.
Value-adding: The transformation taking place within the process must add value to the recipient, either upstream or downstream.
Embeddedness: A process cannot exist in itself; it must be embedded in an organizational structure.
Cross-functionality: A process regularly can, but not necessarily must, span several functions.
Frequently, identifying a process owner (i.e., the person responsible for the continuous improvement of the process) is considered a prerequisite. Sometimes the process owner is the same person who is performing the process.
Related concepts
Workflow
Workflow is the procedural movement of information, material, and tasks from one participant to another. Workflow includes the procedures, people and tools involved in each step of a business process. A single workflow may either be sequential, with each step contingent upon completion of the previous one, or parallel, with multiple steps occurring simultaneously. Multiple combinations of single workflows may be connected to achieve a resulting overall process.
Business process re-engineering
Business process re-engineering (BPR) was originally conceptualized by Hammer and Davenport as a means to improve organizational effectiveness and productivity. It can involve starting from a "blank slate" and completely recreating major business processes, or involve comparing the "as-is" process and the "to-be" process and mapping the path for change from one to the other. Often BPR will involve the use of information technology to secure significant performance improvement. The term unfortunately became associated with corporate "downsizing" in the mid-1990s.
Business process management (BPM)
Though the term has been used contextually to mixed effect, "business process management" (BPM) can generally be defined as a discipline involving a combination of a wide variety of business activity flows (e.g., business process automation, modeling, and optimization) that strives to support the goals of an enterprise within and beyond multiple boundaries, involving many people, from employees to customers and external partners. A major part of BPM's enterprise support involves the continuous evaluation of existing processes and the identification of ways to improve upon them, resulting in a cycle of overall organizational improvement.
Knowledge management
Knowledge management is the definition of the knowledge that employees and systems use to perform their functions, and the maintenance of that knowledge in a format that can be accessed by others. Duhon and the Gartner Group have defined it as "a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise's information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers."
Customer Service
Customer service is a key component of an effective business and business plan. Customer service in the 21st century is always evolving, and it is important for businesses to grow with their customer base. A social media presence matters, as do clear communication, clear expectation setting, speed, and accuracy. If the customer service provided by a business is not effective, it can be detrimental to the business's success.
Total quality management
Total quality management (TQM) emerged in the early 1980s as organizations sought to improve the quality of their products and services. It was followed by the Six Sigma methodology in the mid-1980s, first introduced by Motorola. Six Sigma consists of statistical methods to improve business processes and thus reduce defects in outputs. The "lean approach" to quality management was introduced by the Toyota Motor Company in the 1990s and focused on customer needs and reduction of wastage.
Creating a Strong Brand Presence through Social Media
Creating a strong brand presence through social media is an important component to running a successful business. Companies can market, gain consumer insights, and advertise through social media. “According to a Salesforce survey, 85% of consumers conduct research before they make a purchase online, and among the most used channels for research are websites (74%) and social media (38%). Consequently, businesses need to have an effective online strategy to increase brand awareness and grow.” (Paun, 2020)
Customers engage and interact through social media, and businesses that are an effective part of social media tend to be more successful. The most common social media sites used for business are Facebook, Instagram, and Twitter. Businesses with the strongest brand recognition and consumer engagement build social presences on all of these platforms.
Resources:
Paun, Goran (2020). Building A Brand: Why A Strong Digital Presence Matters. Forbes. Sourced from: https://www.forbes.com/sites/forbesagencycouncil/2020/07/02/building-a-brand-why-a-strong-digital-presence-matters/?sh=13cfa8747f02
Information technology as an enabler for business process management
Advances in information technology over the years have changed business processes within and between business enterprises. In the 1960s, operating systems had limited functionality, and any workflow management systems that were in use were tailor-made for the specific organization. The 1970s and 1980s saw the development of data-driven approaches as data storage and retrieval technologies improved. Data modeling, rather than process modeling, was the starting point for building an information system. Business processes had to adapt to information technology because process modeling was neglected. The shift towards process-oriented management occurred in the 1990s. Enterprise resource planning software with workflow management components such as SAP, Baan, PeopleSoft, Oracle and JD Edwards emerged, as did business process management systems (BPMS) later.
The world of e-business created a need to automate business processes across organizations, which in turn raised the need for standardized protocols and web services composition languages that can be understood across the industry. The Business Process Modeling Notation (BPMN) and Business Motivation Model (BMM) are widely used standards for business modeling. The Business Modeling and Integration Domain Task Force (BMI DTF) is a consortium of vendors and user companies that continues to work together to develop standards and specifications to promote collaboration and integration of people, systems, processes and information within and across enterprises.
The most recent trends in BPM are influenced by the emergence of cloud technology, the prevalence of social media, mobile technology, and the development of analytical techniques. Cloud-based technologies allow companies to purchase resources quickly and as required independent of their location. Social media, websites and smart phones are the newest channels through which organizations reach and support their customers. The abundance of customer data collected through these channels as well as through call center interactions, emails, voice calls, and customer surveys has led to a huge growth in data analytics which in turn is utilized for performance management and improving the ways in which the company services its customers.
Importance of the process chain
Business processes comprise a set of sequential sub-processes or tasks with alternative paths, depending on certain conditions as applicable, performed to achieve a given objective or produce given outputs. Each process has one or more needed inputs. The inputs and outputs may be received from, or sent to other business processes, other organizational units, or internal or external stakeholders.
Business processes are designed to be operated by one or more business functional units, and emphasize the importance of the “process chain” rather than the individual units.
In general, the various tasks of a business process can be performed in one of two ways:
manually
by means of business data processing systems such as ERP systems
Typically, some process tasks will be manual, while some will be computer-based, and these tasks may be sequenced in many ways. In other words, the data and information that are being handled through the process may pass through manual or computer tasks in any given order.
Policies, processes and procedures
The above improvement areas are equally applicable to policies, processes, detailed procedures (sub-processes/tasks) and work instructions. There is a cascading effect of improvements made at a higher level on those made at a lower level.
For example, if a recommendation to replace a given policy with a better one is made with proper justification and accepted in principle by business process owners, then corresponding changes in the consequent processes and procedures will follow naturally in order to enable implementation of the policies.
Reporting as an essential base for execution
Business processes must include up-to-date and accurate reports to ensure effective action. An example of this is the availability of purchase order status reports for supplier delivery follow-up as described in the section on effectiveness above. There are numerous examples of this in every possible business process.
Another example from production is the process of analysis of line rejections occurring on the shop floor. This process should include systematic periodical analysis of rejections by reason, and present the results in a suitable information report that pinpoints the major reasons, and trends in these reasons, for management to take corrective actions to control rejections and keep them within acceptable limits. Such a process of analysis and summarisation of line rejection events is clearly superior to a process which merely inquires into each individual rejection as it occurs.
Business process owners and operatives should realise that process improvement often occurs with introduction of appropriate transaction, operational, highlight, exception or M.I.S. reports, provided these are consciously used for day-to-day or periodical decision-making. With this understanding would hopefully come the willingness to invest time and other resources in business process improvement by introduction of useful and relevant reporting systems.
Supporting theories and concepts
Span of control
The span of control is the number of subordinates a supervisor manages within a structural organization. Introducing a business process concept has a considerable impact on the structural elements of the organization and thus also on the span of control.
Large organizations that are not organized as markets need to be organized in smaller units – departments – which can be defined according to different principles.
Information management concepts
Information management, and the organization infrastructure strategies related to it, are a theoretical cornerstone of the business process concept, requiring "a framework for measuring the level of IT support for business processes."
See also
Business analysis
Business method patent
Business process automation
Business Process Definition Metamodel
Business process mapping
Business process outsourcing
References
Further reading
Paul Harmon (2007). Business Process Change: 2nd Ed, A Guide for Business Managers and BPM and Six Sigma Professionals. Morgan Kaufmann
E. Obeng and S. Crainer (1993). Making Re-engineering Happen. Financial Times Prentice Hall
Howard Smith and Peter Fingar (2003). Business Process Management: The Third Wave. MK Press
Slack et al. (2000). Understanding Business: Processes. Edited by David Barnes. The Open University
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
Enterprise modelling |
37801448 | https://en.wikipedia.org/wiki/Ilan%20Sadeh | Ilan Sadeh | Ilan Sadeh (born June 1, 1953) is an Israeli IT theoretician, entrepreneur, and human rights activist. He holds the position of Associate Professor of Computer Sciences and Mathematics at the University for Information Science and Technology "St. Paul The Apostole" in Ohrid, North Macedonia.
Biography
Background and activities
Sadeh was the first to claim publicly in the Israeli media that Israel has no right to be called the "heir" to Holocaust victims and no right to represent Holocaust survivors. According to him, Zionist leaders have little cause for pride in their actions during the Second World War – Zionist financiers withheld funds, while the JDC refused to help save Europe's Jewry, instead prioritizing the needs of the Yishuv in Palestine.
The situation in Israel brought Sadeh to the conclusion that the political system must be replaced. He entered politics and led a movement on behalf of Holocaust survivors. He published a few articles in Israeli newspapers and had a public impact. Sadeh was elected a representative of that community and ran in the preliminary election of the Labour Party for the Knesset, or Israeli Parliament (1996), but was not elected. Following these activities, Sadeh was threatened and accused of being a traitor; he took libel action over the accusations in an Israeli court (2011).
Mathematical background and Sadeh's contribution
The asymptotic equipartition property (AEP), or Shannon–McMillan–Breiman theorem, is a general property of the output samples of a stochastic source and is the basis of information theory. It is fundamental to the concept of typical sequences used in coding theory. The AEP was first introduced by Shannon (1948), proved in weak convergence by McMillan (1953) and later refined to strong convergence by Breiman (1957, 1960).
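For reference, in its standard form (stated here independently of Sadeh's work) the theorem says that for a stationary ergodic source $X_1, X_2, \ldots$ with entropy rate $H$,

$$-\frac{1}{n}\log p(X_1, X_2, \ldots, X_n) \;\longrightarrow\; H \quad \text{almost surely as } n \to \infty,$$

so that almost every long output sequence has probability close to $2^{-nH}$ (with logarithms taken to base 2).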
Shannon's theorems are based on the AEP. In 1959 Shannon provided the first source-compression coding theorems, but neither he nor his successors could present an algorithm that attains the Shannon bound.
Only in 1990 did Ornstein and Shields propose an algorithm that attains the Shannon bound, proving convergence to the bound known as the rate-distortion function. Their algorithm, however, is far from practical and assumes a priori knowledge of the source distribution.
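For reference, for a memoryless source $X$ with distortion measure $d$, the rate-distortion function mentioned here is the standard

$$R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\; \mathbb{E}\left[d(X,\hat{X})\right] \le D} I(X;\hat{X}),$$

the smallest coding rate at which the source can be reproduced with expected distortion at most $D$; for stationary ergodic sources it is defined through a limit of such expressions over blocks.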
In his Ph.D. research (1990–1992), Sadeh proposed a universal algorithm that attains the Shannon bound. That is, it does not require a priori knowledge of the source distribution and asymptotically has some computational advantages. The algorithm is a generalization and merging of the Ornstein–Shields algorithm and the Wyner–Ziv algorithm (1989).
When he tried to prove convergence to the Shannon bound, also known as the rate-distortion function, he realized that he could not rely on the AEP or the Shannon–McMillan–Breiman theorem.
So in 1992 he presented and proved a new limit theorem, which he named the "Lossy AEP" or "Extended Shannon–McMillan–Breiman Theorem".
This means that the basis of information theory has been extended and generalized.
From that moment he has had many clashes with Israeli academics: two of them submitted very negative reviews to the School of Mathematical Sciences at Tel Aviv University, stating that the limit theorem was wrong, and held up the granting of his PhD from 1993 until 1996. Only after a long fight did he receive his PhD, almost three years after the submission of his dissertation.
Sadeh applied for patents in Israel (1992) and the USA (1993) and was granted both Israeli and US patents.
The Israeli-American clique influenced the systematic rejection of Sadeh's papers by the IEEE.
His papers were accepted and published in various mathematical journals.
He has been invited to present his results at several conferences around the world, including the IEEE Information Theory conference in Vancouver, Canada, in 1995.
Research and development activities
Ilan Sadeh has had pioneering results in several research and development fields: a "smart camera" long before the September 11 attacks, "homeland security" projects, new video compression, military applications for surveillance, seismic data processing, and others.
Sadeh has established three start-up companies: Meitav, Israel (1982), Visnet (1996) and Vipeg (2000). He has been intensively involved in establishing and running R&D at new start-up companies, building infrastructure, dealing with intellectual property issues, managing all activities, raising funding, and coordinating consortia in the EU FP5 and FP6 programmes.
However, being unable to compete with the "fat cat" companies, bureaucracy, civil industry and the military establishment in Israel, as well as with the European companies that were promoting only MPEG-4, Sadeh could not raise government support or obtain the support of the Israeli Army; he temporarily left Israel in 2006 and moved to North Macedonia in 2011.
Scientific achievements
He found and proved important limit theorems which are extensions of the Shannon–McMillan–Breiman theorem (1992), one of the fundamental theorems of information theory. He applied compression algorithms based on approximate string matching.
He presented performance analysis based on large deviations theory (LDT) and presented the trade-off between compression rate, distortion level, and probability of error.
He proposed a new universal coding scheme ("Sadeh Algorithm") based on approximate string matching, the Wyner–Ziv algorithm and the Ornstein–Shields block-coding algorithm (1992).
Publications
I. Sadeh – "On Approximate String Matching"
IEEE Computer Society Data Compression Committee on Computer Communications 3, pp. 148–158 (1993). Universal algorithms for data compression.
I. Sadeh – "Operational rate distortion theory"
Journal of Applied Mathematics and Computer Science 5 (1), pp. 139–169 (1995).
He presented performance analysis based on LDT (Large Deviations Theory) and presented the trade-off between compression rate, distortion level and probability of error.
I. Sadeh – "Universal data compression based on approximate string matching,"
Journal of Applied Mathematics and Computer Science 5 (4), pp. 717–742 (1995).
Convergence Theorems of Universal algorithms for data compression.
I. Sadeh – "The rate distortion region for coding in stationary systems,"
Journal of Applied Mathematics and Computer Science 6 (1), pp. 101–114 (1996).
The exact bound relations between rates, distortion levels in multiple description system. The results are expansions of Shannon's bounds for multiterminal network.
I. Sadeh, A. Kazelman, M. Zak, "Universal voice compression algorithms based on approximate string matching,"
Journal of Applied Mathematics and Computer Science December 1995.
Presented sub-optimal universal coding schemes for voice coding.
I. Sadeh, "Bounds on Data Compression Ratio with a given Error Probability,"
Probability in the Engineering and Informational Sciences
Editor: Sheldon Ross, Cambridge University Press, 12 1998 pp. 189–210.
Presented the first application of Large Deviation Theory approach to the asymptotic expansions of Shannon's bounds.
I. Sadeh, "Universal-algorithm and theorems on approximate String matching,"
Probability in the Engineering and Informational Sciences
Editor: Sheldon Ross, Cambridge University Press,
He was the first to generalize Shannon McMillan Breiman Theorem (Lossy AEP).
He found important Limit Theorems. These theorems were "re-invented" by a member of the "Israeli Clique".
I. Sadeh, P. Novikov, M. Kaufman, "Gray scale movie compression based on approximate string matching,"
Image Processing and Communications, March 1996.
Presented sub-optimal universal coding schemes for video coding.
I. Sadeh, "Polynomial approximation of images,Computers and Mathematics with Applications, February 1996.
Presented a novel method for Image Coding based on Polynomial approximation of images. Theoretical and practical results were presented.
I. Sadeh, "Properties of image coding by polynomial representation,"Image Processing and Communications, March 1996.
More theoretical and practical results about Image Coding based on Polynomial approximation of images.
I. Sadeh, "Digital Data Compression in Computer Networks,"
Ph.D. Dissertation, School of Mathematical Sciences, Tel Aviv University, June 1993.
I. Sadeh, A. Averbuch "Bounds on parallel computation of multivariate polynomials" Proceedings on Theory of computing and systems. Published Springer-Verlag London, UK 1992, pages: 147–153
He found theoretical bounds on parallel computation of multivariate polynomial.
I. Sadeh "Optimal Data Compression Algorithm"Computers and Mathematics with Applications, September 1996, pages 57–72
He found important Limit Theorems for Approximate String Matching for data compression and practical sub optimal results.
I. Sadeh "On digital data compression – the asymptotic large deviations approach " Proceedings of the conference on Information Sciences and Systems 1992 Princeton university.
Presentation of Large Deviation Theory approach to the asymptotic expansions of Shannon's data compression bounds.
I. Sadeh "The rate distortion region for coding in stationary systems,"Journal of Applied Mathematics and Computer Science 1996 pp. 123–136
He presented new limit theorems for multiterminal systems and presented a new approach to the degraded diversity system problem.
I. Sadeh, "Polynomial approximation of images,"Computers and Mathematics with Applications'', February 1996
New theoretical and practical results about Image Coding based on Polynomial approximation of images.
I. Sadeh "Image encoding by polynomial approximation"
Proceedings of the conference on Information Sciences and Systems 1992 Princeton University
Conference paper – New theoretical and practical results about Image Coding based on Polynomial approximation of images.
I. Sadeh "Universal compression algorithms based on approximate string matching". Proceedings of the IEEE Information Theory Conference 1995 Vancouver Canada p. 84
Conference paper – he showed by using the extended Kac's Lemma, that the compression rate, asymptotically achieved by the "Sadeh Algorithm", converges in probability to Shannon's bound. The algorithm has been patented in the USA and Israel.
I. Sadeh "Operational rate distortion theory"
Proceedings of the IEEE Information Theory Conference 1995 Vancouver Canada, 196.
Presentation in Conference of First Large Deviation Theory approach to the asymptotic expansions of Shannon's data compression bounds.
I. Sadeh, "Approximate String Matching with applications to Universal Compression". Proceedings of the Conference on Control and Information at Hong Kong. Chinese University Press. 1995 pp 311 – 316
Conference paper – he showed that the compression rate asymptotically achieved by the "Sadeh Algorithm" converges in probability to Shannon's bound.
I. Sadeh, "Operational Rate Distortion Theory"
Proceedings of the Conference on Control and Information at Hong Kong Chinese University Press. 1995 pp. 305–310
Presentation in Conference of Large Deviation Theory approach to the asymptotic expansions of Shannon's theoretical bounds.
I. Sadeh "Methods and means for image and voice compression".
US patent 5836003
He showed that the compression rate asymptotically achieved by the "Sadeh Algorithm" converges in probability to Shannon's bound and presented suboptimal applications.
I. Sadeh, US Patent 6018303
He showed that the compression rate asymptotically achieved by the "Sadeh Algorithm" converges in probability to Shannon's bound and presented suboptimal applications.
I. Sadeh, Israel Patent no. 103080.
Video and Voice coding algorithms.
I. Sadeh "Vehicle Navigation System" US Patent 4,593,359, 1986
A method and means for Tank Navigation. The method is operational even in severe electromagnetic environments, based on Sadeh's experience as Armored Forces Officer in Israel Army.
References
Jewish peace activists
Tel Aviv University alumni
Israeli computer scientists
Historians of mathematics
Historians of the Holocaust
Writers on Zionism
Technion – Israel Institute of Technology faculty
Israeli bioinformaticians
1953 births
Living people
Israeli expatriates in North Macedonia |
3612337 | https://en.wikipedia.org/wiki/Jess%20Mortensen | Jess Mortensen | Jesse Philo Mortensen (April 16, 1907 in Thatcher, Arizona – February 19, 1962) was an NCAA champion track athlete and coach. Mortensen is one of only three men to win Division I Men's Outdoor Track and Field Championship team titles as both an athlete and coach.
Biography
Mortensen enrolled at the University of Southern California (USC) in 1928. While at USC, he won eight varsity letters, three each in basketball and track and field and two in football. In basketball, he was selected as an All-Pacific Coast Conference player in 1928 and 1930. In football, he played at the left halfback position and was a member of the 1929 USC Trojans football team that defeated Pittsburgh in the 1930 Rose Bowl. In track and field, Mortensen was captain of the 1930 NCAA championship track team. He won the 1929 NCAA javelin title and set a world record in the decathlon in 1931.
After graduating from USC, Mortensen held coaching positions at Riverside Junior College, with the United States Navy during World War II, and after the war at the University of Denver and the United States Military Academy. He returned to USC as coach of the track and field team in 1951. He led the USC Trojans to seven NCAA titles in his 11 years as coach (1951–1961). His teams never lost a dual meet (64–0) and never finished worse than second in the conference meet. He was an assistant U.S. men's track coach at the 1956 Olympics. He also served as an assistant football coach at USC from 1951 to 1955.
Mortensen is a member of the University of Southern California Athletic Hall of Fame, the National Track and Field Hall of Fame and the U.S. Track & Field and Cross Country Coaches Association Hall of Fame.
References
External links
Track and Field Hall of Fame bio and photo
1907 births
1962 deaths
All-American college men's basketball players
American men's basketball players
Basketball players from Arizona
Junior college men's track and field athletes in the United States
People from Thatcher, Arizona
Track and field athletes from California
USC Trojans football coaches
USC Trojans football players
USC Trojans men's basketball players
USC Trojans track and field coaches |
460698 | https://en.wikipedia.org/wiki/Renoise | Renoise | Renoise is a digital audio workstation (DAW) based upon the heritage and development of tracker software. Its primary use is the composition of music using sound samples, soft synths, and effects plug-ins. It is also able to interface with MIDI and OSC equipment. The main difference between Renoise and other music software is the characteristic vertical timeline sequencer used by tracking software.
History
Renoise was originally based on the code of another tracker called NoiseTrekker, made by Juan Antonio Arguelles Rius (Arguru). The then-unnamed Renoise project was initiated by Eduard Müller (Taktik) and Zvonko Tesic (Phazze) in December 2000. The development team planned to take tracking software to a new standard of quality, enabling tracking-scene composers to make audio of the same quality as other existing professional packages, while still keeping the proven interface that originated with Soundtracker in 1987. Version 1.0 was released in June 2002. Over the years the development team has grown to distribute the tasks of testing, administration, support and web duties among several people.
Features
Renoise currently runs under recent versions of Windows (DirectSound or ASIO), Mac OS X (Core Audio) and Linux (ALSA or JACK).
Renoise has full MIDI and MIDI sync support, VST 3 plugin support, ASIO multi I/O cards support, integrated sampler and sample editor, internal real-time DSP effects with unlimited number of effects per track, master and send tracks, full automation of all commands, Hi-Fi wav/aiff rendering (up to 32-bit, 192 kHz), Rewire support, etc.
Supported sample formats
WAV, AIFF, FLAC, Ogg, MP3, CAF
Supported effects standards
VSTi, AU, LADSPA, DSSI
Renoise also features a Signal Follower and cross-track routing. The Signal Follower analyzes the audio output of a track and automates user-specified parameters based on the values it generates. Cross-track routing sends the automation of any Meta Device to any track. Computer Music magazine considered the combination of these two features to "open up some incredibly powerful control possibilities", and demonstrated how the signal triggered by a drum loop could control the filter cutoff frequency on a bass sound.
Renoise includes an arranging tool called the "pattern matrix", full cross-track modulation routing, built-in effects including a signal-follower metadevice that allows sidechain functionality, automatic softsynth-to-sample instrument rendering, and improved MIDI mapping.
Versions
Renoise is available as either a demo or a commercial version. The demo version excludes rendering to .WAV, ASIO support in Windows (DirectSound only) and a few other features. Also, the demo version has nag screens. The commercial version includes high quality WAV rendering (up to 32 bit 192 kHz) and ASIO support.
Development
With the introduction of Lua scripting in version 2.6, users can expand Renoise. They are encouraged to share their work on the centralized Renoise Tools web page.
XRNS file format
The XRNS file format is native to Renoise. It is based on the XML standard and is readable in a normal text editor. This open XML-based file format also makes it possible for anyone to develop 3rd party applications and other systems in order to manipulate file content.
3rd party tools
A project for creating PHP script utilities for advanced editing tasks has been set up at SourceForge: the XRNS-PHP project.
In August 2007, a functional XRNS2MIDI script was published by Renoise team member Bantai. It enables Renoise users, via an external frontend, to convert native songs into regular MIDI files (.mid) and thus export their work for use in conventional piano-roll sequencers such as Cubase or Reason.
Since version 2.6, it is possible to extend Renoise capabilities by writing plugins in the Lua programming language. A specific tools mini site has been created to showcase these. Almost any aspect of the program, except realtime audio data mangling, can be scripted using the native Renoise Lua API.
See also
List of music software
References
External links
Renoise Homepage
Audio editing software for Linux
Linux
Audio software with JACK support
Audio trackers
Digital audio editors for Linux
Digital audio workstation software
Linux software
Lua (programming language)-scriptable software
MacOS audio editors
Proprietary commercial software for Linux
Windows multimedia software |
546183 | https://en.wikipedia.org/wiki/The%20Art%20of%20Unix%20Programming | The Art of Unix Programming | The Art of Unix Programming by Eric S. Raymond is a book about the history and culture of Unix programming from its earliest days in 1969 to 2003 when it was published, covering both genetic derivations such as BSD and conceptual ones such as Linux.
The author utilizes a comparative approach to explaining Unix by contrasting it to other operating systems including desktop-oriented ones such as Microsoft Windows and the classic Mac OS to ones with research roots such as EROS and Plan 9 from Bell Labs.
The book was published by Addison-Wesley, September 17, 2003, and is also available online, under a Creative Commons license with additional clauses.
Contributors
The book contains many contributions, quotations and comments from UNIX gurus past and present. These include:
Ken Arnold (author of curses and Rogue)
Steve Bellovin
Stuart Feldman
Jim Gettys
Stephen C. Johnson
Brian Kernighan
David Korn
Mike Lesk
Doug McIlroy
Marshall Kirk McKusick
Keith Packard
Henry Spencer
Ken Thompson
See also
Unix philosophy
The Hacker Ethic and the Spirit of the Information Age
References
External links
Online book (HTML edition)
The Art of Unix Programming at FAQs
2003 non-fiction books
Books by Eric S. Raymond
Computer programming books
Creative Commons-licensed books
Unix books |
44960962 | https://en.wikipedia.org/wiki/2015%20Pac-12%20Football%20Championship%20Game | 2015 Pac-12 Football Championship Game | The 2015 Pac-12 Football Championship Game was played on Saturday, December 5, 2015 at Levi's Stadium in Santa Clara, California to determine the champion of the Pac-12 Conference in football for the 2015 season. It was the fifth championship game in Pac-12 Conference history. The game featured the South Division Co-champion USC Trojans against the North Division champion Stanford Cardinal. Stanford defeated USC 41–22 to win their third conference championship game.
History
The game was the fifth football conference championship for the Pac-12 Conference (or any of its predecessors). In the previous season, the Oregon Ducks defeated the Arizona Wildcats 51–13 for the conference championship and represented the conference in the 2015 Rose Bowl in Pasadena, California. The Ducks then advanced to play the Ohio State Buckeyes in the first CFP National Championship on January 12, 2015.
Teams
The Stanford Cardinal faced the USC Trojans for the second time in the 2015 season, having won by a score of 41–31 at Los Angeles Memorial Coliseum earlier in the year.
Stanford
Stanford returned to the Pac-12 Championship for the third time since its inception. Stanford was led by fifth-year head coach David Shaw, QB Kevin Hogan, and RB Christian McCaffrey. McCaffrey was averaging 255 all-purpose yards per game heading into the contest.
USC
USC competed in the championship game for the first time since the conference expanded to 12 teams. USC and Utah tied for first place in the South Division, but USC won the tiebreaker by beating the Utes in the head-to-head matchup that season. Leading the team were senior QB Cody Kessler, freshman TB Ronald Jones II, sophomore WR JuJu Smith-Schuster, and sophomore CB-WR-RET Adoree’ Jackson.
Game summary
Scoring summary
Statistics
See also
List of Pac-12 Conference football champions
Stanford–USC football rivalry
Notes
John Ondrasik of Five for Fighting performed during the game.
References
Championship
Pac-12 Football Championship Game
Stanford Cardinal football games
USC Trojans football games
Pac-12 Football
Sports in Santa Clara, California |
284371 | https://en.wikipedia.org/wiki/Read-copy-update | Read-copy-update | In computer science, read-copy-update (RCU) is a synchronization mechanism that avoids the use of lock primitives while multiple threads concurrently read and update elements that are linked through pointers and that belong to shared data structures (e.g., linked lists, trees, hash tables).
Whenever a thread is inserting or deleting elements of data structures in shared memory, all readers are guaranteed to see and traverse either the older or the new structure, therefore avoiding inconsistencies (e.g., dereferencing null pointers).
It is used when performance of reads is crucial and is an example of space–time tradeoff, enabling fast operations at the cost of more space. All readers proceed as if there were no synchronization involved, so they are fast, but updates become more difficult.
Name and overview
The name comes from the way that RCU is used to update a linked structure in place.
A thread wishing to do this uses the following steps:
create a new structure,
copy the data from the old structure into the new one, and save a pointer to the old structure,
modify the new, copied, structure,
update the global pointer to refer to the new structure,
sleep until the operating system kernel determines that there are no readers left using the old structure, for example, in the Linux kernel, by using synchronize_rcu(),
once awakened by the kernel, deallocate the old structure.
So the structure is read concurrently with a thread copying in order to do an update, hence the name "read-copy update". The abbreviation "RCU" was one of many contributions by the Linux community. Other names for similar techniques include passive serialization and MP defer by VM/XA programmers and generations by K42 and Tornado programmers.
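A minimal C sketch of this sequence, written in the style of the Linux kernel API described later in this article, might look as follows; the structure, its fields, and the assumption of a single updater (or external update-side locking) are illustrative only, and error handling is omitted.

struct foo {
    int a;
    int b;
};

struct foo *gp;   /* RCU-protected global pointer, assumed to already point at a valid structure */

/* Updater: publish a modified copy of *gp, then reclaim the old version. */
void update_foo(int new_a, int new_b)
{
    struct foo *new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);  /* 1. create a new structure        */
    struct foo *old_fp = gp;                                    /* 2. save a pointer to the old one */

    *new_fp = *old_fp;               /*    ... and copy the old contents    */
    new_fp->a = new_a;               /* 3. modify the new, copied structure */
    new_fp->b = new_b;
    rcu_assign_pointer(gp, new_fp);  /* 4. update the global pointer        */
    synchronize_rcu();               /* 5. wait until no readers remain     */
    kfree(old_fp);                   /* 6. deallocate the old structure     */
}

The call to synchronize_rcu() is what makes the final kfree() safe: it returns only after every reader that might still hold a reference to the old structure has finished with it.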
Detailed description
A key property of RCU is that readers can access a data structure even when it is in the process of being updated: RCU updaters cannot block readers or force them to retry their accesses. This overview starts by showing how data can be safely inserted into and deleted from linked structures despite concurrent readers. The first diagram on the right depicts a four-state insertion procedure, with time advancing from left to right.
The first state shows a global pointer that is initially NULL, colored red to indicate that it might be accessed by a reader at any time, thus requiring updaters to take care. Allocating memory for a new structure transitions to the second state. This structure has indeterminate state (indicated by the question marks) but is inaccessible to readers (indicated by the green color). Because the structure is inaccessible to readers, the updater may carry out any desired operation without fear of disrupting concurrent readers. Initializing this new structure transitions to the third state, which shows the initialized values of the structure's fields. Assigning a reference to this new structure to the global pointer transitions to the fourth and final state. In this state, the structure is accessible to readers, and is therefore colored red. The rcu_assign_pointer() primitive is used to carry out this assignment, and ensures that the assignment is atomic in the sense that concurrent readers will either see a NULL pointer or a valid pointer to the new structure, but not some mash-up of the two values. Additional properties of rcu_assign_pointer() are described later in this article.
This procedure demonstrates how new data may be inserted into a linked data structure even though readers are concurrently traversing the data structure before, during, and after the insertion. The second diagram on the right depicts a four-state deletion procedure, again with time advancing from left to right.
The first state shows a linked list containing elements A, B, and C. All three elements are colored red to indicate that an RCU reader might reference any of them at any time. Using list_del_rcu() to remove element B from this list transitions to the second state. Note that the link from element B to C is left intact in order to allow readers currently referencing element B to traverse the remainder of the list. Readers accessing the link from element A will either obtain a reference to element B or element C, but either way, each reader will see a valid and correctly formatted linked list. Element B is now colored yellow to indicate that while pre-existing readers might still have a reference to element B, new readers have no way to obtain a reference. A wait-for-readers operation transitions to the third state. Note that this wait-for-readers operation need only wait for pre-existing readers, but not new readers. Element B is now colored green to indicate that readers can no longer be referencing it. Therefore, it is now safe for the updater to free element B, thus transitioning to the fourth and final state.
It is important to reiterate that in the second state different readers can see two different versions of the list, either with or without element . In other words, RCU provides coordination in space (different versions of the list) as well as in time (different states in the deletion procedures). This is in stark contrast with more traditional synchronization primitives such as locking or transactions that coordinate in time, but not in space.
This procedure demonstrates how old data may be removed from a linked data structure even though readers are concurrently traversing the data structure before, during, and after the deletion. Given insertion and deletion, a wide variety of data structures can be implemented using RCU.
RCU's readers execute within read-side critical sections, which are normally delimited by rcu_read_lock() and rcu_read_unlock(). Any statement that is not within an RCU read-side critical section is said to be in a quiescent state, and such statements are not permitted to hold references to RCU-protected data structures, nor is the wait-for-readers operation required to wait for threads in quiescent states. Any time period during which each thread resides at least once in a quiescent state is called a grace period. By definition, any RCU read-side critical section in existence at the beginning of a given grace period must complete before the end of that grace period, which constitutes the fundamental guarantee provided by RCU. In addition, the wait-for-readers operation must wait for at least one grace period to elapse. It turns out that this guarantee can be provided with extremely small read-side overheads; in fact, in the limiting case that is actually realized by server-class Linux-kernel builds, the read-side overhead is exactly zero.
RCU's fundamental guarantee may be used by splitting updates into removal and reclamation phases. The removal phase removes references to data items within a data structure (possibly by replacing them with references to new versions of these data items), and can run concurrently with RCU read-side critical sections. The reason that it is safe to run the removal phase concurrently with RCU readers is that the semantics of modern CPUs guarantee that readers will see either the old or the new version of the data structure rather than a partially updated reference. Once a grace period has elapsed, there can no longer be any readers referencing the old version, so it is then safe for the reclamation phase to free (reclaim) the data items that made up that old version.
Splitting an update into removal and reclamation phases allows the updater to perform the removal phase immediately, and to defer the reclamation phase until all readers active during the removal phase have completed, in other words, until a grace period has elapsed.
So the typical RCU update sequence goes something like the following:
Ensure that all readers accessing RCU-protected data structures carry out their references from within an RCU read-side critical section.
Remove pointers to a data structure, so that subsequent readers cannot gain a reference to it.
Wait for a grace period to elapse, so that all previous readers (which might still have pointers to the data structure removed in the prior step) will have completed their RCU read-side critical sections.
At this point, there cannot be any readers still holding references to the data structure, so it now may safely be reclaimed (e.g., freed).
In the above procedure (which matches the earlier diagram), the updater is performing both the removal and the reclamation step, but it is often helpful for an entirely different thread to do the reclamation. Reference counting can be used to let the reader perform removal, so even if the same thread performs both the update step (step (2) above) and the reclamation step (step (4) above), it is often helpful to think of them separately.
RCU is perhaps the most common non-blocking algorithm for a shared data structure. RCU is completely wait-free for any number of readers.
Single-writer implementations of RCU are also lock-free for the writer.
Some multi-writer implementations of RCU are lock-free.
Other multi-writer implementations of RCU serialize writers with a lock.
Uses
By early 2008, there were almost 2,000 uses of the RCU API within the Linux kernel including the networking protocol stacks and the memory-management system.
That number has since grown to more than 9,000.
Since 2006, researchers have applied RCU and similar techniques to a number of problems, including management of metadata used in dynamic analysis, managing the lifetime of clustered objects, managing object lifetime in the K42 research operating system, and optimizing software transactional memory implementations. DragonFly BSD uses a technique similar to RCU that most closely resembles Linux's Sleepable RCU (SRCU) implementation.
Advantages and disadvantages
The ability to wait until all readers are done allows RCU readers to use much lighter-weight synchronization—in some cases, absolutely no synchronization at all. In contrast, in more conventional lock-based schemes, readers must use heavy-weight synchronization in order to prevent an updater from deleting the data structure out from under them. The reason is that lock-based updaters typically update data in place, and must therefore exclude readers. In contrast, RCU-based updaters typically take advantage of the fact that writes to single aligned pointers are atomic on modern CPUs, allowing atomic insertion, removal, and replacement of data in a linked structure without disrupting readers. Concurrent RCU readers can then continue accessing the old versions, and can dispense with the atomic read-modify-write instructions, memory barriers, and cache misses that are so expensive on modern SMP computer systems, even in the absence of lock contention. The lightweight nature of RCU's read-side primitives provides additional advantages beyond excellent performance, scalability, and real-time response. For example, they provide immunity to most deadlock and livelock conditions.
Of course, RCU also has disadvantages. For example, RCU is a specialized technique that works best in situations with mostly reads and few updates, but is often less applicable to update-only workloads. For another example, although the fact that RCU readers and updaters may execute concurrently is what enables the lightweight nature of RCU's read-side primitives, some algorithms may not be amenable to read/update concurrency.
Despite well over a decade of experience with RCU, the exact extent of its applicability is still a research topic.
Patents
The technique is covered by U.S. software patent 5,442,758, issued August 15, 1995 and assigned to Sequent Computer Systems, as well as by 5,608,893 (expired 2009-03-30), 5,727,209 (expired 2010-04-05), 6,219,690 (expired 2009-05-18), and 6,886,162 (expired 2009-05-25). The now-expired US Patent 4,809,168 covers a closely related technique. RCU is also the topic of one claim in the SCO v. IBM lawsuit.
Sample RCU interface
RCU is available in a number of operating systems, and was added to the Linux kernel in October 2002. User-level implementations such as liburcu are also available.
The implementation of RCU in version 2.6 of the Linux kernel is among the better-known RCU implementations, and will be used as an inspiration for the RCU API in the remainder of this article. The core API (Application Programming Interface) is quite small:
rcu_read_lock(): Used by a reader to mark the beginning of an RCU read-side critical section; any RCU-protected data structure accessed within that critical section is guaranteed not to be reclaimed for its full duration.
rcu_read_unlock(): Used by a reader to inform the reclaimer that the reader is exiting an RCU read-side critical section. Note that RCU read-side critical sections may be nested and/or overlapping.
synchronize_rcu(): Blocks until all pre-existing RCU read-side critical sections on all CPUs have completed. Note that synchronize_rcu will not necessarily wait for any subsequent RCU read-side critical sections to complete. For example, consider the following sequence of events:
CPU 0 CPU 1 CPU 2
----------------- ------------------------- ---------------
1. rcu_read_lock()
2. enters synchronize_rcu()
3. rcu_read_lock()
4. rcu_read_unlock()
5. exits synchronize_rcu()
6. rcu_read_unlock()
Since synchronize_rcu is the API that must figure out when readers are done, its implementation is key to RCU. For RCU to be useful in all but the most read-intensive situations, synchronize_rcu's overhead must also be quite small.
Alternatively, instead of blocking, synchronize_rcu may register a callback to be invoked after all ongoing RCU read-side critical sections have completed. This callback variant is called call_rcu in the Linux kernel.
rcu_assign_pointer(): The updater uses this function to assign a new value to an RCU-protected pointer, in order to safely communicate the change in value from the updater to the reader. This function returns the new value, and also executes any memory barrier instructions required for a given CPU architecture. Perhaps more importantly, it serves to document which pointers are protected by RCU.
rcu_dereference(): The reader uses rcu_dereference to fetch an RCU-protected pointer, which returns a value that may then be safely dereferenced. It also executes any directives required by the compiler or the CPU, for example, a volatile cast for gcc, a memory_order_consume load for C/C++11 or the memory-barrier instruction required by the old DEC Alpha CPU. The value returned by rcu_dereference is valid only within the enclosing RCU read-side critical section. As with rcu_assign_pointer, an important function of rcu_dereference is to document which pointers are protected by RCU.
The diagram on the right shows how each API communicates among the reader, updater, and reclaimer.
The RCU infrastructure observes the time sequence of rcu_read_lock, rcu_read_unlock, synchronize_rcu, and call_rcu invocations in order to determine when (1) synchronize_rcu invocations may return to their callers and (2) call_rcu callbacks may be invoked. Efficient implementations of the RCU infrastructure make heavy use of batching in order to amortize their overhead over many uses of the corresponding APIs.
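To make the read side of this API concrete, the following is a minimal sketch of a reader; the structure, its fields, and the global pointer gp are hypothetical, and gp is assumed to be published elsewhere with rcu_assign_pointer() as in the earlier sketch.

struct foo {
    int a;
    int b;
};

struct foo *gp;   /* RCU-protected pointer, updated elsewhere */

int read_foo_a(void)
{
    struct foo *p;
    int ret = -1;

    rcu_read_lock();            /* enter the read-side critical section        */
    p = rcu_dereference(gp);    /* safely fetch the protected pointer          */
    if (p != NULL)
        ret = p->a;             /* p may be dereferenced only in this section  */
    rcu_read_unlock();          /* exit; p must not be used after this point   */
    return ret;
}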
Simple implementation
RCU has extremely simple "toy" implementations that can aid understanding of RCU. This section presents one such "toy" implementation that works in a non-preemptive environment.
void rcu_read_lock(void) { }
void rcu_read_unlock(void) { }
void call_rcu(void (*callback) (void *), void *arg)
{
// add callback/arg pair to a list
}
void synchronize_rcu(void)
{
int cpu, ncpus = 0;
for each_cpu(cpu)
schedule_current_task_to(cpu);
for each entry in the call_rcu list
entry->callback (entry->arg);
}
In the code sample, rcu_assign_pointer and rcu_dereference can be ignored without missing much. However, they are needed in order to suppress harmful compiler optimization and to prevent CPUs from reordering accesses.
#define rcu_assign_pointer(p, v) ({ \
smp_wmb(); /* Order previous writes. */ \
ACCESS_ONCE(p) = (v); \
})
#define rcu_dereference(p) ({ \
typeof(p) _value = ACCESS_ONCE(p); \
smp_read_barrier_depends(); /* nop on most architectures */ \
(_value); \
})
Note that rcu_read_lock and rcu_read_unlock do nothing. This is the great strength of classic RCU in a non-preemptive kernel: read-side overhead is precisely zero, as smp_read_barrier_depends() is an empty macro on all but DEC Alpha CPUs; such memory barriers are not needed on modern CPUs. The ACCESS_ONCE() macro is a volatile cast that generates no additional code in most cases. And there is no way that rcu_read_lock can participate in a deadlock cycle, cause a realtime process to miss its scheduling deadline, precipitate priority inversion, or result in high lock contention. However, in this toy RCU implementation, blocking within an RCU read-side critical section is illegal, just as is blocking while holding a pure spinlock.
The implementation of synchronize_rcu moves the caller of synchronize_rcu to each CPU, thus blocking until all CPUs have been able to perform the context switch. Recall that this is a non-preemptive environment and that blocking within an RCU read-side critical section is illegal, which together imply that there can be no preemption points within an RCU read-side critical section. Therefore, if a given CPU executes a context switch (to schedule another process), we know that this CPU must have completed all preceding RCU read-side critical sections. Once all CPUs have executed a context switch, then all preceding RCU read-side critical sections will have completed.
Analogy with reader–writer locking
Although RCU can be used in many different ways, a very common use of RCU is analogous to reader–writer locking. The following side-by-side code display shows how closely related reader–writer locking and RCU can be.
/* reader-writer locking */ /* RCU */
1 struct el { 1 struct el {
2 struct list_head lp; 2 struct list_head lp;
3 long key; 3 long key;
4 spinlock_t mutex; 4 spinlock_t mutex;
5 int data; 5 int data;
6 /* Other data fields */ 6 /* Other data fields */
7 }; 7 };
8 DEFINE_RWLOCK(listmutex); 8 DEFINE_SPINLOCK(listmutex);
9 LIST_HEAD(head); 9 LIST_HEAD(head);
1 int search(long key, int *result) 1 int search(long key, int *result)
2 { 2 {
3 struct el *p; 3 struct el *p;
4 4
5 read_lock(&listmutex); 5 rcu_read_lock();
6 list_for_each_entry(p, &head, lp) { 6 list_for_each_entry_rcu(p, &head, lp) {
7 if (p->key == key) { 7 if (p->key == key) {
8 *result = p->data; 8 *result = p->data;
9 read_unlock(&listmutex); 9 rcu_read_unlock();
10 return 1; 10 return 1;
11 } 11 }
12 } 12 }
13 read_unlock(&listmutex); 13 rcu_read_unlock();
14 return 0; 14 return 0;
15 } 15 }
1 int delete(long key) 1 int delete(long key)
2 { 2 {
3 struct el *p; 3 struct el *p;
4 4
5 write_lock(&listmutex); 5 spin_lock(&listmutex);
6 list_for_each_entry(p, &head, lp) { 6 list_for_each_entry(p, &head, lp) {
7 if (p->key == key) { 7 if (p->key == key) {
8 list_del(&p->lp); 8 list_del_rcu(&p->lp);
9 write_unlock(&listmutex); 9 spin_unlock(&listmutex);
10 synchronize_rcu();
10 kfree(p); 11 kfree(p);
11 return 1; 12 return 1;
12 } 13 }
13 } 14 }
14 write_unlock(&listmutex); 15 spin_unlock(&listmutex);
15 return 0; 16 return 0;
16 } 17 }
The differences between the two approaches are quite small. Read-side locking moves to rcu_read_lock and rcu_read_unlock, update-side locking moves from a reader-writer lock to a simple spinlock, and a synchronize_rcu precedes the kfree.
However, there is one potential catch: the read-side and update-side critical sections can now run concurrently. In many cases, this will not be a problem, but it is necessary to check carefully regardless. For example, if multiple independent list updates must be seen as a single atomic update, converting to RCU will require special care.
Also, the presence of synchronize_rcu means that the RCU version of delete can now block. If this is a problem, call_rcu could be used like call_rcu(kfree, p) in place of synchronize_rcu. This is especially useful in combination with reference counting.
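A hedged sketch of that non-blocking variant, reusing the names from the listing above and the two-argument call_rcu() convention used in this article (the production Linux kernel instead passes an rcu_head embedded in the structure), could look like this:

int delete(long key)
{
    struct el *p;

    spin_lock(&listmutex);
    list_for_each_entry(p, &head, lp) {
        if (p->key == key) {
            list_del_rcu(&p->lp);
            spin_unlock(&listmutex);
            call_rcu(kfree, p);   /* reclamation is deferred to a later grace period; the caller never blocks */
            return 1;
        }
    }
    spin_unlock(&listmutex);
    return 0;
}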
History
Techniques and mechanisms resembling RCU have been independently invented multiple times:
H. T. Kung and Q. Lehman described use of garbage collectors to implement RCU-like access to a binary search tree.
Udi Manber and Richard Ladner extended Kung's and Lehman's work to non-garbage-collected environments by deferring reclamation until all threads running at removal time have terminated, which works in environments that do not have long-lived threads.
Richard Rashid et al. described a lazy translation lookaside buffer (TLB) implementation that deferred reclaiming virtual-address space until all CPUs flushed their TLB, which is similar in spirit to some RCU implementations.
James P. Hennessy, Damian L. Osisek, and Joseph W. Seigh, II were granted US Patent 4,809,168 in 1989 (since lapsed). This patent describes an RCU-like mechanism that was apparently used in VM/XA on IBM mainframes.
William Pugh described an RCU-like mechanism that relied on explicit flag-setting by readers.
Aju John proposed an RCU-like implementation where updaters simply wait for a fixed period of time, under the assumption that readers would all complete within that fixed time, as might be appropriate in a hard real-time system. Van Jacobson proposed a similar scheme in 1993 (verbal communication).
J. Slingwine and P. E. McKenney received US Patent 5,442,758 in August 1995, which describes RCU as implemented in DYNIX/ptx and later in the Linux kernel.
B. Gamsa, O. Krieger, J. Appavoo, and M. Stumm described an RCU-like mechanism used in the University of Toronto Tornado research operating system and the closely related IBM Research K42 research operating systems.
Rusty Russell and Phil Rumpf described RCU-like techniques for handling unloading of Linux kernel modules.
D. Sarma added RCU to version 2.5.43 of the Linux kernel in October 2002.
Robert Colvin et al. formally verified a lazy concurrent list-based set algorithm that resembles RCU.
M. Desnoyers et al. published a description of user-space RCU.
A. Gotsman et al. derived formal semantics for RCU based on separation logic.
Ilan Frenkel, Roman Geller, Yoram Ramberg, and Yoram Snir were granted US Patent 7,099,932 in 2006. This patent describes an RCU-like mechanism for retrieving and storing quality of service policy management information using a directory service in a manner that enforces read/write consistency and enables read/write concurrency.
See also
Concurrency control
Copy on write
Lock (software engineering)
Lock-free and wait-free algorithms
Multiversion concurrency control
Pre-emptive multitasking
Real-time computing
Resource contention
Resource starvation
Synchronization
Notes
References
Bauer, R.T., (June 2009), "Operational Verification of a Relativistic Program" PSU Tech Report TR-09-04 (http://www.pdx.edu/sites/www.pdx.edu.computer-science/files/tr0904.pdf)
External links
Paul E. McKenney, Mathieu Desnoyers, and Lai Jiangshan: User-space RCU. Linux Weekly News.
Paul E. McKenney and Jonathan Walpole: What is RCU, Fundamentally?, What is RCU? Part 2: Usage, and RCU part 3: the RCU API. Linux Weekly News.
Paul E. McKenney's RCU web page
Hart, McKenney, and Demke Brown (2006). Making Lockless Synchronization Fast: Performance Implications of Memory Reclamation An IPDPS 2006 Best Paper comparing RCU's performance to that of other lockless synchronization mechanisms. Journal version (including Walpole as author).
(1995) "Apparatus and method for achieving reduced overhead mutual exclusion and maintaining coherency in a multiprocessor system utilizing execution history and thread monitoring"
Paul McKenney: Sleepable RCU. Linux Weekly News.
Operating system technology
Concurrency control |
10139301 | https://en.wikipedia.org/wiki/Friedrich%20Staphylus | Friedrich Staphylus | Friedrich Staphylus (27 August 1512 – 5 March 1564) was a German theologian, at first a Protestant and then a Catholic convert.
Biography
Staphylus was born at Osnabrück. His father, Ludeke Stapellage, was an official of the Bishop of Osnabrück. Left an orphan at an early age, he came under the care of an uncle at Danzig, then went to Lithuania and studied at Cracow, after which he studied theology and philosophy at Padua.
About 1536 he went to Wittenberg, obtained the Degree of magister artium in 1541 and at Melanchthon's recommendation became a tutor in the family of the Count of Eberstein. In 1546 Duke Albert of Prussia appointed Staphylus professor of theology at the new University of Königsberg, which the duke had founded in 1544.
At this time Staphylus was still under the influence of Martin Luther's opinions, as is shown by his academic disputation upon the doctrine of justification, "De justificationis articulo". However, at his installation as professor he obtained the assurance that he need not remain if the duke tolerated errors which "might be contrary to the Holy Scriptures and the primitivæ apostolicæ et catholicæ ecclesiæ consensum". This shows that even then he regarded with suspicion the development of Protestantism.
At Königsberg he had a violent theological dispute with Wilhelm Gnapheus. In 1547–48 he was the first rector elected by the university, but in 1548 he resigned his professorship, because he met with enmity, and was dissatisfied with religious conditions in Prussia. Still he continued to be one of the councillors of the duke. In 1549 he married at Breslau the daughter of John Hess, a reformer of that place.
Returning to Königsberg, a new dispute broke out between him and Osiander. The dogmatic dissension, which seemed to him to make everything uncertain, drove him continually more and more to the Catholic idea of Tradition and to the demand for the authoritative exposition of the Scriptures by the Church. He expressed these views in the treatise "Synodus sanctorum patrum antiquorum contra nova dogmata Andreæ Osiandri", which he wrote at Danzig in 1552. A severe illness hastened his conversion, which took place at Breslau at the end of 1552.
After this he first entered the service of the Bishop of Breslau, for whom he established a school at Neisse. In 1555 the Emperor Ferdinand I appointed him a member of the imperial council. At the Disputation of Worms in 1557 he opposed, as one of the Catholic collocutors, the once venerated Melanchthon. In his "Theologiæ Martini Lutheri trimembris epitome" (1558) he severely attacked the lack of union in Protestantism, the worship of Luther, and religious subjectivism. The treatise called forth a number of answers.
In 1560 Duke Albert of Bavaria, at the request of Canisius, appointed Staphylus professor of theology at the Bavarian University of Ingolstadt after Staphylus had received the Degree of Doctor of Theology and Canon Law in virtue of a papal dispensation, as he was married. As superintendent (curator) he reformed the university.
After this he took an active part in the Catholic restoration in Bavaria and Austria. He drew up several opinions on reform for the Council of Trent, as the "Counsel to Pius IV", while he declined to go to the council personally. In 1562 the pope sent him a gift of one hundred gulden, and the emperor raised him to the nobility. He died at Ingolstadt, aged 51.
His learning and eloquence are frankly acknowledged by his Lutheran fellow-countryman Hermann Hamelmann.
Works
Diodori Siculi fragmenta ex Graeco in Latinum versa.
Historia et Apologia Utriusque Partis, Catholicae Et Confessionariae, de dissolutione Colloquii nuper Wormatiae institu ad omnes Catholicae fidei Protectores. Vienna 1558.
Theologiae Martini Lutheri Trimembris Epitome. Worms 1558.
Aigentliche und warhaffte Beschreibung (a true account of the solemn obsequies held at Augsburg on 24 and 25 February 1559 in memory of Emperor Charles V, of most laudable memory, dear brother of His Roman Imperial Majesty Emperor Ferdinand). Dillingen, 1559.
Historiam de vita, morte et gestis Caroli V. Augsburg, 1559.
Defensio Pro Trimembri Theologia M. Lvtheri, Contra Aedificatores Babylonicae Tvrris. Phil Melanthonem, Shvvenckfeldianum Longinum, And. Musculum, Mat FLACC. Illyricum, Iacobum Andream Shmidelinum. Dillingen 1561.
A preliminary defence (Vortrab) of his book on the right understanding of the divine word, the German translation of the Bible, and the unity of the Lutheran preachers. Ingolstadt 1561.
A Christian report to the godly common laity. Ingolstadt 1561.
Prodromus D. Friderici Staphyli, in Defensionem Apologiae suae, de vero germanoque scripturae sacrae intellectu etc., rendered into Latin by the Carthusian Laurentius Surius. Cologne, 1562.
Hysterodromum.
Lucubrationes super plurimas sessions ad Concilium cum libris II De republica Christiana.
Oratio de bone litteris, 1550.
Synodus Patrum contra Osiandrum, 1553.
On the last and great apostasy that shall take place before the coming of the Antichrist. 1565.
Literature
Staphylus, Friedrich. In: Zedler's Universal-Lexicon. Volume 39, Leipzig, 1744, columns 1228–1230.
Paul Tschackert: Staphylus, Friedrich. In: Allgemeine Deutsche Biographie (ADB). Volume 35, Duncker & Humblot, Leipzig, 1893, pp. 457–461.
Paul Tschackert: Staphylus, Friedrich. In: Realencyklopädie für protestantische Theologie und Kirche (RE). 3rd edition. Volume 18, Hinrichs, Leipzig, 1906, pp. 776–771.
Ute Mennecke-Haustein: Staphylus, Friedrich. In: Theologische Realenzyklopädie (TRE). Volume 32, de Gruyter, Berlin / New York, 2001, pp. 113–115.
References
Staphylus, In causa religionis sparsim editi libri in unum volumen digesti (Ingolstadt, 1613)
Tschackert, Urkundenbuch zur Reformationsgeschichte des Herzogtums Preussen, I and III (Leipzig, 1890), passim
Soffner, Friedrich Staphylus (Breslau, 1904)
External links
http://www.adwmainz.de/index.php?id=1103
http://de.wikisource.org/wiki/ADB:Staphylus,_Friedrich
1512 births
1564 deaths
German Christian theologians
16th-century German theologians
Converts to Roman Catholicism
Converts to Roman Catholicism from Lutheranism
German Roman Catholics
German male non-fiction writers
Clergy from Osnabrück
16th-century German writers
16th-century German male writers
Writers from Osnabrück |
22085107 | https://en.wikipedia.org/wiki/MyInfo | MyInfo | MyInfo is a personal information manager developed by Milenix Software. MyInfo collects, organizes, edits, stores, and retrieves personal-reference information such as text documents, web snippets, e-mails, notes, and files from other applications.
The latest major version adds speed improvements, perspectives, an updated user interface, multiple attachments per note, multiple sections per notebook, and more.
MyInfo uses both hierarchical and folder-like structures, along with tags, categories and other meta-information, for organizing its content. It is one of several PIM applications for Windows to do so.
MyInfo imports data from different third-party applications, most notably AskSam.
The software is used as a free-form personal information manager, personal wiki, outliner, personal knowledge base, game master tool, GTD filing system and others.
Awards
Nominated for Best Business Application by the Software Industry Conference for its Shareware Industry Awards in 2003.
References
Further reading
External links
1999 software
C++ software
Note-taking software
Outliners
Personal information managers
Portable software
Task management software
Windows-only software |
13148291 | https://en.wikipedia.org/wiki/Computer%20trespass | Computer trespass | Computer trespass is a computer crime in the United States involving unlawful access to computers. It is defined under the Computer Fraud and Abuse Act (18 U.S.C. § 1030).
Definition
A computer trespass is defined as accessing a computer without proper authorization and gaining financial information, information from a department or agency, or information from any protected computer. Each state has its own laws regarding computer trespassing, but they all echo the federal act in some manner.
Examples of state legislation
New York
To be found guilty of computer trespass in New York one must knowingly use a computer, computer service, or computer network without authorization and commit (or attempt) some further crime.
Ohio
(A) No person shall knowingly use or operate the property of another without the consent of the owner or person authorized to give consent.
(B) No person, in any manner and by any means, including, but not limited to, computer hacking, shall knowingly gain access to, attempt to gain access to, or cause access to be gained to any computer, computer system, computer network, cable service, cable system, telecommunications device, telecommunications service, or information service without the consent of, or beyond the scope of the express or implied consent of, the owner of the computer, computer system, computer network, cable service, cable system, telecommunications device, telecommunications service, or information service or other person authorized to give consent.
Punishment
Under federal law, the punishment for committing a computer trespass is imprisonment for no more than 10 or 20 years, depending on the severity of the crime committed. (subsection (a) (b) (c) (1) (A) (B))
Criticism of the Computer Fraud and Abuse Act
Years after the CFAA was put into law, many have become uncomfortable with the law's language because of the drastic difference between today's technology and the technology of the 1980s. Legal scholars such as Orin Kerr and Tiffany Curtis have expressed such concerns. In her essay "Computer Fraud and Abuse Act Enforcement: Cruel, Unusual, and Due for Reform", Curtis argues that, with the passage of time, the CFAA becomes a crueler law under its current language, and suggests that the United States Congress should review the act against the Eighth Amendment and update its language to better fit modern law and society. Kerr wrote in his essay "Trespass, Not Fraud: The Need for New Sentencing Guidelines in CFAA Cases" that because the act's language is so vague, a defendant could be punished in a way that does not reflect the crime committed. He stresses that the language emphasizes financial fraud more than any other part of the law, leaving the rest vague in meaning.
Notable computer breaches
2013 Yahoo! Data Breach
2014 eBay Data Breach
2013 Target Data breach
2017 Equifax data breach
See also
Computer Fraud and Abuse Act
Computer security
Cybercrime
List of data breaches
National Information Infrastructure Protection Act
References
Computer law |
920548 | https://en.wikipedia.org/wiki/Mission%20critical | Mission critical | A mission critical factor of a system is any factor (component, equipment, personnel, process, procedure, software, etc.) that is essential to business operation or to an organization. Failure or disruption of mission critical factors seriously impacts business operations or the organization, and can even cause social turmoil and catastrophes.
Mission critical systems
A mission critical system is a system that is essential to the survival of a business or organization. When a mission critical system fails or is interrupted, business operations are significantly impacted. Mission-essential equipment and mission critical applications are also collectively referred to as mission critical systems.
Examples of mission critical systems are: an online banking system, railway/aircraft operating and control systems, electric power systems, and many other computer systems that will adversely affect business and society when they fail.
A good example of a mission critical system is a navigational system for a spacecraft. The difference between mission critical and business critical lies in the major adverse impact and the very real possibilities of loss of life, serious injury and/or financial loss.
There are four different types of critical systems: mission critical, business critical, safety critical and security critical. The key difference between a safety critical system and a mission critical system is that the failure of a safety critical system may result in serious environmental damage, injury, or loss of life, while the failure of a mission critical system results in the failure of goal-directed activity. An example of a safety critical system is a chemical manufacturing plant control system. Mission critical and business critical are similar terms, but a business critical system fault affects only a single company or organization and may halt its activity only partially and temporarily (for hours or days); within that business, however, such a system can also be regarded as mission critical, since its failure causes very high losses for the business. A security critical system failure may lead to loss of sensitive data through theft or accidental loss. All four types are generalized as critical systems.
As a rule in crisis management, if a triage-type decision is made in which certain components must be eliminated or delayed, e.g. because of resource or personnel constraints, mission critical ones must not be among them.
Examples
Every functioning business or organization has mission critical systems. A downed filtration system will cause a water filtration company to malfunction; in this case, the water filtration system is a mission critical system. If the gas supply goes down, many restaurants and bakeries have to shut down until it functions properly again; in this case, the gas system is a mission critical system. There are various other mission critical systems that, if they malfunction, will have serious impacts on other industries or organizations.
Navigating system of an aircraft
Aircraft are highly dependent on their navigation systems. Air navigation is accomplished with many methods. Dead reckoning utilizes visual checkpoints along with distance and time calculations. The flight computer system aids pilots in calculating the time and distance of the checkpoints that they set. Radio navigation aids (NAVAIDS) enable pilots to navigate more accurately than dead reckoning alone, and radio navigation is handy in conditions of low visibility. GPS is also used by pilots; it relies on 24 U.S. Department of Defense satellites to provide precise locational data, including speed, position, and track.
If two-way radio communication malfunctions, the pilots have to follow the steps in Title 14 of the Code of Federal Regulations (14 CFR) part 91. Pneumatic system failure, the associated loss of altitude, and various unfamiliar situations may cause stress and loss of situational awareness. In such cases, the pilot should rely on instruments such as the navigation system to obtain more situational data. A malfunction of the navigation system would therefore be mission critical and would cause serious consequences.
Nuclear reactor safety system
A nuclear reactor is a system that controls and contains a sustained nuclear chain reaction. It is usually used for generating electricity, but can also be used for conducting research and producing medical isotopes. Nuclear reactors have been among the greatest public safety concerns worldwide because a malfunctioning reactor can cause a serious disaster. The nuclear reactor is controlled by stopping, decreasing, or increasing the chain reaction inside it. Varying the water level in the vertical cylinder and moving adjuster rods are the methods of controlling the chain reaction while the reactor is operating. Temperatures, reactor power levels, and pressure are constantly monitored by sensitive detectors.
History
Mission critical systems are at the core of a business, and their failure causes serious financial and reputational damage. Today, as companies develop and the world becomes a more web-based community, the range of what is mission critical has expanded, but mission critical computing has been evolving since the pre-Web era (before 1995). On the entirely text-based pre-Web internet, Gopher was one of the ASCII-based end-user programs. Mission critical systems in this era were basically transactional applications: business process management software, ERP, and airline reservation systems were usually mission critical. These applications ran on dedicated systems in the data center, served a limited number of end users, and were usually accessed via terminals and personal computers.
The pre-Web era was followed by the Web era (1995–2010). The range of mission critical systems grew to include electronic devices and web applications. More users had access to the internet and electronic devices, so a larger number of end users could reach a growing set of mission critical applications, and customers came to expect near-limitless availability and stronger security in the devices they use. Businesses also became more web-based, which correspondingly increased financial crime and fraud. This broadening of what is mission critical drove stronger security measures and the growth of the security industry. Between 1995 and 2010, the number of web users globally increased from 16 million to 1.7 billion, showing the increase in global reliance on web systems.
The Web era was followed by the consumerization era (2010 and beyond). The range of mission critical systems has increased further due to the growth of social, mobile, and customer-facing applications. The consumerization of IT became greater, organizations grew, and web and IT availability to the public increased. Social business, customer service, and customer support applications have grown greatly, so the scope of mission critical expanded further. According to Gartner, native PC projects will be outnumbered by mobile development projects by a ratio of 4:1. Today's mission critical therefore encompasses everything crucial to customer-based service, business operation, employee productivity, and finance. Customers' expectations have risen, and a small disruption can cause tremendous loss to a business. It was estimated that Amazon could have lost as much as $1,100 per second in net sales when it suffered an outage, and a five-minute outage cost Google more than $545,000. Failure of a mission critical system, even a short outage, can carry a high price of downtime due to reputational damage, and longer periods of downtime can result in even more serious problems for industries or organizations.
Safety & Security
Mission critical systems must remain highly secure in every industry or organization that uses them, so industries employ various security systems to avoid mission critical failures. Companies based on mainframes or workstations depend on databases and process control, so databases and process control are mission critical for them. Hospital patient records, call centers, stock exchanges, data storage centers, flight control towers, and many other operations that depend on communication systems and computers must be protected from system shutdowns and are considered mission critical. No company or industry can avoid the unexpected or extraordinary problems that can shut down mission critical systems, so the use of safety systems is considered a very important part of the business.
Transport Layer Security (TLS)
Transport Layer Security (TLS; formerly Secure Sockets Layer, SSL) is the standard security technology, implemented as a networking protocol, that manages client and server authentication and encrypts communication. It is typically used by online transaction websites such as PayPal and Bank of America, whose systems, if downed or hacked, would cause serious problems for society and for the companies themselves. In TLS, public-key and symmetric-key encryption are used together to secure the connection between two machines, typically a mail service or other server and the client machines that communicate with it over the internet. To use this technology, the web server requires a digital certificate, which can be obtained by answering several questions about the identity of the website and receiving public and private keys (cryptographic keys). The organizations using this technology may also be required to pay an annual fee.
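As a rough illustration of how an application program might set up such an encrypted, authenticated connection, the following minimal client-side sketch uses the open-source OpenSSL library; the library choice, the host name, and the omission of most error handling and of hostname verification are simplifying assumptions made only for this example.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int tls_client_demo(const char *host)   /* e.g. "example.com" (hypothetical) */
{
    /* Ordinary TCP connection to port 443. */
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "443", &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
        return -1;
    freeaddrinfo(res);

    /* Layer TLS on top of the socket. */
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());  /* negotiate the highest shared TLS version   */
    SSL_CTX_set_default_verify_paths(ctx);            /* trust the system's certificate authorities */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);   /* reject servers whose certificates fail     */
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    if (SSL_connect(ssl) != 1) {                      /* handshake: authentication and key exchange */
        ERR_print_errors_fp(stderr);
        return -1;
    }

    /* From here on, application data is encrypted with the negotiated symmetric keys. */
    const char request[] = "GET / HTTP/1.0\r\n\r\n";
    SSL_write(ssl, request, (int)strlen(request));
    char buf[256];
    SSL_read(ssl, buf, sizeof buf);

    SSL_shutdown(ssl);
    SSL_free(ssl);
    SSL_CTX_free(ctx);
    close(fd);
    return 0;
}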
Shutdown systems
Nuclear power plants need safety systems to avoid mission critical failures. The worst possible consequence is leakage of radioactive materials (U-235 or Pu-239). One of the systems used to avoid mission critical failures in nuclear power plants is the shutdown system, which takes two different forms: rod control and safety injection control. When a problem occurs in the plant, the rod control shutdown system drops the rods automatically and stops the chain reaction. The safety injection control injects liquid immediately when a problem arises in the nuclear reactor, likewise stopping the chain reaction. Both systems usually operate automatically, but can also be activated manually.
Real time and mission critical
Real time and mission critical are often confused, but they are not the same concept.
Real time
Real time refers to the responsiveness of a computer that must continually keep up with external processes and complete its processing within a specified time, or serious consequences could result. Video games are an example of real time, since they are rendered by the computer so rapidly that it is hard for the user to notice any delay. Each frame must be rendered in a short time to maintain the experience of interactivity. The speed of rendering graphics may vary according to the computer system.
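As a rough illustration of the deadline idea, the short C sketch below measures whether a unit of work finished within its allotted time; the 10-millisecond budget is an arbitrary, hypothetical figure chosen only for the example.

#include <stdio.h>
#include <time.h>

#define DEADLINE_NS 10000000LL   /* hypothetical 10 ms budget for one unit of work */

static long long elapsed_ns(struct timespec start, struct timespec end)
{
    return (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
           + (end.tv_nsec - start.tv_nsec);
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... one unit of time-critical work would run here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    if (elapsed_ns(start, end) > DEADLINE_NS)
        printf("deadline missed\n");   /* a hard real-time system treats this as outright failure */
    else
        printf("deadline met\n");
    return 0;
}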
Types of real time systems
A hard real-time system must not miss its specified deadline, or serious consequences can result. Its timing is non-negotiable: an answer that arrives after the deadline is treated as a wrong answer. An example of a hard real-time system is a car's airbag controller.
A soft real-time system has looser deadlines. It can handle problems and function normally even if a deadline is missed, but its usefulness depends on fast-paced processing. An example of soft real time is typing: if the display of characters is delayed, people get annoyed, but the answer is still correct.
A non-real-time system has no fixed or absolute deadlines. However, the throughput of the activity being performed can still be very important.
Differences
A real-time system is one that fails if its timing requirements are not met, whereas a mission critical system is one whose failure results in catastrophic consequences. The two often go hand in hand, since a real-time system can be mission critical, but they are distinct, though associated, concepts.
Mission critical personnel and mission critical systems planning
Social survival
From the perspective of social function (that is, keeping society's life-support structure and overall structure intact), the mission critical aspects of social function are those that provide for society's basic needs. Such basic needs are often said to include food (including food production and distribution), water, clothing (not an immediate need in an emergency), sanitation (sewage is an immediate need, but disposal of physical waste, garbage and rubbish is not), housing and shelter, energy (not immediate), and health needs (not immediate in a healthy population); the list is not exhaustive. Longer-term needs might include communications and transport in a developed population. This list of needs maps clearly onto mission critical personnel: food production requires farmers, food distribution requires transportation personnel, water requires water-infrastructure maintenance personnel (a long-term requirement if existing water infrastructure has been maintained to a high standard), clothing requires people to maintain clothes-production infrastructure, and similarly for sanitation. In emergencies, housing and shelter require someone to build the shelter and maintain it over the long term if necessary, and health needs are met by doctors, nurses and surgeons. Implicit use of infrastructure requires personnel to maintain that infrastructure as well: food transportation needs not only drivers for food trucks but also, over the long term, highways maintenance personnel who can maintain the roads, traffic infrastructure and signs, which in turn requires power supply personnel to keep traffic lights working, and so on. Seen in this light, mission critical systems have a complex dependency network, which allows analysis of the interconnected dependencies between different aspects of a mission critical system; this can be useful in planning, or simply for gaining a truthful picture of how mission criticality is organized in complex systems, and it enables the identification of choke points at which a mission critical system (or set of systems) is vulnerable in one way or another. Ideas relating to human resources and human resources planning (making use of Gantt charts for project management, and so on) are also relevant.
Mission criticality depends upon the timescale associated with basic needs or other factors deemed mission critical. Over the medium term (often taken to be 10 years) and the long term (which can extend to 50-60 year timescales), planning for mission critical systems will clearly differ from short-term mission critical systems planning. Mission critical personnel can be considered part of the mission critical systems planning paradigm, but they require a different approach than the technological or mechanical aspects of mission critical systems; that is, they require human resources planning.
Attributes of mission critical personnel
Psychometrics enables the determination and characterisation of various psychological aspects of mission critical personnel (for example, their IQ in the case of highly skilled work such as nuclear physics). Some jobs require physical standards (for instance, in the army) or physical dexterity (for example, surgeons). There exist methods of characterising the skills, qualities and other attributes that certain mission critical job roles require, and these can be used as benchmarks for determining whether particular individuals are well suited to a given mission critical job role, or what assistance a less qualified (or less capable) individual would need to perform a mission critical job role that might be beyond their abilities (such measures might have to be taken in emergency situations).
See also
Life-critical system
Critical Infrastructure Protection
Downtime
References
Risk management
Engineering failures
Maintenance |
70924 | https://en.wikipedia.org/wiki/Music%20technology%20%28electronic%20and%20digital%29 | Music technology (electronic and digital) | Digital music technology encompasses digital instruments, computers, electronic effects units, software, or digital audio equipment used by a performer, composer, sound engineer, DJ, or record producer to produce, perform or record music. The term refers to electronic devices, instruments, computer hardware, and software used in performance, playback, recording, composition, mixing, analysis, and editing of music.
Education
Professional training
Courses in music technology are offered at many different universities as part of degree programs focusing on performance, composition, and music research at the undergraduate and graduate levels. The study of music technology is usually concerned with the creative use of technology for creating new sounds, performing, recording, programming sequencers or other music-related electronic devices, and manipulating, mixing and reproducing music. Music technology programs train students for careers in "...sound engineering, computer music, audio-visual production and post-production, mastering, scoring for film and multimedia, audio for games, software development, and multimedia production." Those wishing to develop new music technologies often train to become audio engineers working in R&D. Due to the increasing role of interdisciplinary work in music technology, individuals developing new music technologies may also have backgrounds or training in computer programming, computer hardware design, acoustics, record producing or other fields.
Use of music technology in education
Digital music technologies are widely used to assist in music education for training students in the home, elementary school, middle school, high school, college and university music programs. Electronic keyboard labs are used for cost-effective beginner group piano instruction in high schools, colleges, and universities. Courses in music notation software and basic manipulation of audio and MIDI can be part of a student's core requirements for a music degree. Mobile and desktop applications are available to aid the study of music theory and ear training. Digital pianos, such as those offered by Roland, provide interactive lessons and games using the built-in features of the instrument to teach music fundamentals.
History
Development of digital musical technologies can be traced back to the analog music technologies of the early 20th century, such as the electromechanical Hammond organ, which was invented in 1929. In the 2010s, the ontological range of music technology has greatly increased, and it may now be electronic, digital, software-based or indeed even purely conceptual.
Early pioneers included Luigi Russolo, Halim El-Dabh, Pierre Schaeffer, Pierre Henry, Edgard Varèse, Karlheinz Stockhausen, Ikutaro Kakehashi, King Tubby, and others who manipulated sounds using tape machines, splicing tape and changing its playback speed to alter pre-recorded samples. Pierre Schaeffer is credited with inventing this method of composition, known as musique concrète, in 1948 in Paris, France. In this style of composition, existing material is manipulated to create new timbres. Musique concrète contrasts with a later style that emerged in the mid-1950s in Cologne, Germany, known as elektronische Musik. This style, invented by Karlheinz Stockhausen, involves creating new sounds without the use of pre-existing material. Unlike musique concrète, which primarily focuses on timbre, elektronische Musik focuses on structure. The influence of these two styles still prevails in today's music and music technology. The concept of the software digital audio workstation is the emulation of a traditional recording studio. Colored strips, known as regions, can be spliced, stretched, and re-ordered, analogous to tape. Similarly, software representations of classic synthesizers emulate their analog counterparts.
Digital synthesizer history
Through the 1970s and 1980s, Japanese manufacturers such as Yamaha Corporation, Roland Corporation, Korg and Kawai produced synthesizers that were more affordable than those made in America. Yamaha's DX7 was one of the first mass-market, relatively inexpensive synthesizer keyboards. The DX7 is an FM synthesis based digital synthesizer manufactured from 1983 to 1989, and it was the first commercially successful digital synthesizer. Its distinctive sound can be heard on many recordings, especially pop music from the 1980s. The monotimbral, 16-note polyphonic DX7 was the moderately priced model of the DX series of keyboard synthesizers. Over 200,000 of the original DX7 were made, and it remains one of the best-selling synthesizers of all time. The most iconic bass synthesizer is the Roland TB-303, widely used in acid house music. Other classic synthesizers include the Moog Minimoog, ARP Odyssey, Yamaha CS-80, Korg MS-20, Sequential Circuits Prophet-5, Fairlight CMI, PPG Wave, Roland Alpha Juno, Nord Modular and Korg M1.
MIDI history
MIDI was unveiled at the 1983 NAMM show in Los Angeles. A demonstration at the convention showed two previously incompatible analog synthesizers, the Sequential Circuits Prophet-600 and the Roland Jupiter-6, communicating with each other, enabling a player to play one keyboard while getting output from both. This was a major breakthrough, as it allowed synthesizers to be accurately layered in live shows and studio recordings. MIDI enables different electronic instruments and electronic music devices to communicate with each other and with computers. The advent of MIDI spurred a rapid expansion of the sales and production of electronic instruments and music software.
In 1985, several of the top keyboard manufacturers created the MIDI Manufacturers Association (MMA). This newly founded association standardized the MIDI protocol by generating and disseminating all the documents about it. With the development of the MIDI File Format Specification by Opcode, every music software company's MIDI sequencer software could read and write each other's files.
Since the 1980s, personal computers have developed into the ideal system for exploiting the vast potential of MIDI. This created a large consumer market for MIDI-equipped electronic keyboards and for software such as MIDI sequencers and digital audio workstations. With universal MIDI protocols, electronic keyboards, sequencers, and drum machines can all be connected together.
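At the lowest level, MIDI messages are short sequences of bytes. The following C sketch is illustrative only (the channel, note, and velocity values are arbitrary, and it is not taken from any device's firmware); it assembles a three-byte Note On message, whose status byte combines the message type with a channel number and whose two data bytes carry the note number and velocity.

#include <stdio.h>

/* Illustrative sketch: assemble a MIDI Note On message (three bytes).
   The status byte is 0x90 plus the channel (0-15); the two data bytes
   carry the note number and the velocity (each 0-127). */
int main(void) {
    unsigned char channel  = 0;   /* MIDI channel 1 */
    unsigned char note     = 60;  /* middle C */
    unsigned char velocity = 100; /* how hard the key was struck */

    unsigned char message[3];
    message[0] = 0x90 | (channel & 0x0F); /* Note On status byte */
    message[1] = note & 0x7F;             /* data bytes keep bit 7 clear */
    message[2] = velocity & 0x7F;

    printf("Note On: %02X %02X %02X\n", message[0], message[1], message[2]);
    return 0;
}

Writing these three bytes to a MIDI output port is enough to make a connected synthesizer, sequencer, or drum machine sound a note on that channel.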
Computer music history
The joining of computer and synthesizer technology changed the way music is made and is one of the fastest-changing aspects of music technology today. Max Mathews, a telecommunications engineer at Bell Telephone Laboratories' Acoustic and Behavioural Research Department, was responsible for some of the first digital music technology in the 1950s. Mathews also pioneered a cornerstone of music technology: analog-to-digital conversion.
At Bell Laboratories, Mathews conducted research to improve telecommunications quality for long-distance phone calls. Because of long distances and limited bandwidth, audio quality over phone calls across the United States was poor. Mathews therefore devised a method in which sound was synthesized by computer at the receiving end rather than transmitted. Mathews was an amateur violinist, and during a conversation with his superior at Bell Labs, John Pierce, Pierce raised the idea of synthesizing music with a computer, since Mathews had already synthesized speech. Mathews agreed, and beginning in the 1950s he wrote a series of programs known as MUSIC. MUSIC used two files: an orchestra file containing data telling the computer how to synthesize sound, and a score file instructing the program what notes to play using the instruments defined in the orchestra file. Mathews wrote five iterations of MUSIC, called MUSIC I through MUSIC V. As the program was adapted and expanded to run on various platforms, its name changed to reflect those revisions, and the series of programs became known as the MUSIC-N paradigm. The concept behind MUSIC lives on today in the form of Csound.
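The orchestra/score division at the heart of the MUSIC-N paradigm can be sketched in ordinary C rather than in an actual MUSIC-N language. In the illustration below, all names and values are invented for the example: the "orchestra" is a single sine-wave instrument, and the "score" is a short list of note events rendered into a sample buffer.

#include <math.h>
#include <stdio.h>

#define SR 44100  /* assumed sample rate */

static const double PI = 3.14159265358979323846;

/* "Orchestra": one instrument definition, a plain sine oscillator. */
static float sine_instrument(double freq, double t) {
    return (float)sin(2.0 * PI * freq * t);
}

/* "Score": what the instrument should play. */
struct note { double start, dur, freq; };

int main(void) {
    struct note score[] = {          /* hypothetical three-note score */
        {0.0, 0.5, 261.63},          /* C4 */
        {0.5, 0.5, 329.63},          /* E4 */
        {1.0, 1.0, 392.00},          /* G4 */
    };
    static float out[2 * SR];        /* two seconds of audio */
    int total = 2 * SR;

    for (size_t n = 0; n < sizeof score / sizeof score[0]; n++) {
        int first = (int)(score[n].start * SR);
        int last  = first + (int)(score[n].dur * SR);
        for (int i = first; i < last && i < total; i++)
            out[i] += 0.3f * sine_instrument(score[n].freq, (double)(i - first) / SR);
    }
    printf("rendered %d samples\n", total);
    return 0;
}

Csound keeps the same division: an orchestra file defines the instruments, and a score file lists the events that play them.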
In the late 1980s, Mathews worked as an advisor to IRCAM (Institut de Recherche et Coordination Acoustique/Musique, or Institute for Research and Coordination in Acoustics/Music). There he taught Miller Puckette, a researcher who developed a program in which music could be programmed graphically. The program could transmit and receive MIDI messages to generate interactive music in real time. Inspired by Mathews, Puckette named the program Max. Later, a researcher named David Zicarelli visited IRCAM, saw the capabilities of Max and felt it could be developed further. He took a copy of Max with him when he left and eventually added capabilities for processing audio signals. Zicarelli named this new part of the program MSP after Miller Puckette. Zicarelli developed the commercial version of Max/MSP and sold it through his company, Cycling '74, beginning in 1997. The company has since been acquired by Ableton.
The first generation of professional, commercially available computer music instruments, or workstations as some companies later called them, were sophisticated, elaborate systems that cost a great deal of money when they first appeared, ranging from about $25,000 to $200,000. The two most popular were the Fairlight and the Synclavier.
It was not until the advent of MIDI that general-purpose computers started to play a role in music production. Following the widespread adoption of MIDI, computer-based MIDI editors and sequencers were developed. MIDI-to-CV/Gate converters were then used to enable analogue synthesizers to be controlled by a MIDI sequencer.
Falling personal computer prices drew many musicians away from the more expensive workstations, while advances in processing speed and memory capacity enabled powerful programs for sequencing, recording, notating, and mastering music.
Vocal synthesis history
Coinciding with the history of computer music is the history of vocal synthesis. Before Max Mathews synthesized speech with a computer, analog devices were used to recreate speech. In the 1930s, an engineer named Homer Dudley invented the VODER (Voice Operated Demonstrator), an electro-mechanical device that generated a sawtooth wave and white noise. Various parts of the frequency spectrum of these waveforms could be filtered to generate the sounds of speech, and pitch was modulated via a bar on a wrist strap worn by the operator. In the 1940s, Dudley invented the VOCODER (Voice Operated Coder). Rather than synthesizing speech from scratch, this machine accepted incoming speech and broke it into its spectral components. In the late 1960s and early 1970s, bands and solo artists began using the vocoder to blend speech with notes played on a synthesizer.
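The source-filter principle behind the VODER, a broadband excitation shaped by resonances, can be illustrated with a short C sketch. It is an approximation for illustration only, not Dudley's circuitry, and the sample rate, pitch, and formant values are arbitrary: a sawtooth "buzz" is passed through a two-pole resonator tuned to a single formant frequency.

#include <math.h>
#include <stdio.h>

/* Illustrative source-filter sketch: a sawtooth excitation shaped by a
   two-pole resonator centred on one formant frequency. */
int main(void) {
    const double SR = 8000.0;     /* assumed sample rate */
    const double f0 = 110.0;      /* pitch of the buzz source */
    const double formant = 700.0; /* centre of the resonance */
    const double r = 0.97;        /* pole radius: closer to 1 = sharper formant */
    const double PI = 3.14159265358979323846;

    double a1 = 2.0 * r * cos(2.0 * PI * formant / SR);
    double a2 = -r * r;
    double y1 = 0.0, y2 = 0.0, phase = 0.0;

    for (int n = 0; n < 800; n++) {
        double saw = 2.0 * phase - 1.0;      /* sawtooth in [-1, 1] */
        phase += f0 / SR;
        if (phase >= 1.0) phase -= 1.0;

        double y = saw + a1 * y1 + a2 * y2;  /* resonator difference equation */
        y2 = y1;
        y1 = y;
        if (n % 100 == 0) printf("%d\t%f\n", n, y);
    }
    return 0;
}

A speech synthesizer of this kind uses several such resonators, one per formant, and switches to a noise source for unvoiced sounds.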
Meanwhile, at Bell Laboratories, Max Mathews worked with the researchers Kelly and Lochbaum to develop a model of the vocal tract and to study how its properties contributed to speech generation. Using the vocal tract model, Mathews applied linear predictive coding (LPC), a method in which a computer estimates the formants and spectral content of each word based on information about the vocal model, including various applied filters representing the vocal tract, to make a computer (an IBM 704) sing for the first time in 1962. The computer performed a rendition of "Bicycle Built for Two."
In the 1970s at IRCAM in France, researchers developed a piece of software called CHANT (French for "singing"). CHANT was based on FOF (Fonction d'Onde Formantique) synthesis, in which the peak frequencies of a sound are created and shaped using granular synthesis, as opposed to filtering frequencies to create speech.
Through the 1980s and 1990s, as MIDI devices became commercially available, speech was generated by mapping MIDI data to samples of the components of speech stored in sample libraries.
Synthesizers and drum machines
A synthesizer is an electronic musical instrument that generates electric signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may either imitate existing sounds (instruments, vocal, natural sounds, etc.), or generate new electronic timbres or sounds that did not exist before. They are often played with an electronic musical keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled using a controller device.
Synthesizers use various methods to generate a signal. Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis. Other less common synthesis types include subharmonic synthesis, a form of additive synthesis via subharmonics (used by the Mixtur-Trautonium), and granular synthesis, a sample-based synthesis based on grains of sound, generally resulting in soundscapes or clouds. In the 2010s, synthesizers are used in many genres of pop, rock and dance music. Contemporary classical music composers from the 20th and 21st centuries write compositions for synthesizer.
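Frequency modulation synthesis, the technique used in instruments such as the DX7, can be reduced to a single expression in which a modulator oscillator varies the phase of a carrier. The C sketch below computes the first few samples of such a tone; the carrier and modulator frequencies and the modulation index are arbitrary example values.

#include <math.h>
#include <stdio.h>

/* Minimal two-operator FM sketch: a modulator sine wave varies the phase
   of a carrier sine wave. A larger modulation index gives a brighter tone. */
int main(void) {
    const double SR = 44100.0;  /* sample rate */
    const double fc = 440.0;    /* carrier frequency (Hz) */
    const double fm = 220.0;    /* modulator frequency (Hz) */
    const double index = 3.0;   /* modulation index */
    const double PI = 3.14159265358979323846;

    for (int n = 0; n < 10; n++) {
        double t = n / SR;
        double sample = sin(2.0 * PI * fc * t + index * sin(2.0 * PI * fm * t));
        printf("%d\t%f\n", n, sample);
    }
    return 0;
}

Subtractive synthesis, by contrast, starts from a harmonically rich waveform and removes energy with filters.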
Drum machines
A drum machine is an electronic musical instrument designed to imitate the sound of drums, cymbals, other percussion instruments, and often basslines. Drum machines either play back prerecorded samples of drums and cymbals or synthesized re-creations of drum/cymbal sounds in a rhythm and tempo that is programmed by a musician. Drum machines are most commonly associated with electronic dance music genres such as house music, but are also used in many other genres. They are also used when session drummers are not available or if the production cannot afford the cost of a professional drummer. In the 2010s, most modern drum machines are sequencers with a sample playback (rompler) or synthesizer component that specializes in the reproduction of drum timbres. Though features vary from model to model, many modern drum machines can also produce unique sounds, and allow the user to compose unique drum beats and patterns.
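The programming model of a typical drum machine is a step sequencer: a grid in which each row is a drum voice and each column a subdivision of the bar. The C sketch below is a simplified illustration rather than the design of any particular machine; a real drum machine would trigger sample playback or a synthesis voice instead of printing text.

#include <stdio.h>

#define STEPS 16

/* Scan a 16-step pattern at a tempo-derived rate and report which
   voices fire on each step. */
int main(void) {
    const char *names[] = { "kick", "snare", "hat" };
    /* 1 = trigger on that sixteenth-note step (an arbitrary example pattern) */
    const int pattern[3][STEPS] = {
        {1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0},   /* kick  */
        {0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0},   /* snare */
        {1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0},   /* hat   */
    };
    double bpm = 120.0;
    double seconds_per_step = 60.0 / bpm / 4.0;  /* sixteenth notes */

    for (int step = 0; step < STEPS; step++) {
        printf("%5.2fs:", step * seconds_per_step);
        for (int voice = 0; voice < 3; voice++)
            if (pattern[voice][step]) printf(" %s", names[voice]);
        printf("\n");
    }
    return 0;
}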
Electro-mechanical drum machines were first developed in 1949, with the invention of the Chamberlin Rhythmate. Transistorized electronic drum machines later appeared in the 1960s. The Ace Tone Rhythm Ace, created by Ikutaro Kakehashi, began appearing in popular music from the late 1960s, followed by drum machines from Korg and from Kakehashi's later company, Roland Corporation, which also appeared in popular music from the early 1970s. Sly and the Family Stone's 1971 album There's a Riot Goin' On helped to popularize the sound of early drum machines, along with Timmy Thomas' 1972 R&B hit "Why Can't We Live Together" and George McCrae's 1974 disco hit "Rock Your Baby", which used early Roland rhythm machines.
Early drum machines sounded drastically different than the drum machines that gained their peak popularity in the 1980s and defined an entire decade of pop music. The most iconic drum machine was the Roland TR-808, widely used in hip hop and dance music. Other classic drum machines include the Alesis HR-16, Korg Mini Pops 120, E-MU SP-12, Elektron SPS1 Machinedrum, Roland CR-78, PAiA Programmable Drum Set, LinnDrum, Roland TR-909 and Oberheim DMX.
Sampling technology
Digital sampling technology, introduced in the 1980s, has become a staple of music production in the 2000s. Devices that use sampling record a sound digitally (often a musical instrument, such as a piano or flute being played) and replay it when a key or pad on a controller device (e.g., an electronic keyboard or electronic drum pad) is pressed or triggered. Samplers can alter the sound using various audio effects and audio processing. Sampling has its roots in France with the sound experiments carried out by musique concrète practitioners.
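The core of sample playback is reading a recorded buffer back at a chosen rate: reading faster raises the pitch and shortens the sound, reading slower does the opposite. The C sketch below is purely illustrative, with a synthetic sine wave standing in for a recording; it steps through the buffer at a non-integer rate and linearly interpolates between neighbouring samples.

#include <math.h>
#include <stdio.h>

/* Variable-rate sample playback with linear interpolation. */
int main(void) {
    const double PI = 3.14159265358979323846;
    float recording[1000];
    for (int i = 0; i < 1000; i++)            /* pretend this was sampled */
        recording[i] = (float)sin(2.0 * PI * i / 100.0);

    double rate = 1.5;      /* 1.0 = original pitch, 2.0 = one octave up */
    double pos = 0.0;

    for (int n = 0; n < 8; n++) {
        int i = (int)pos;
        double frac = pos - i;
        double out = recording[i] * (1.0 - frac) + recording[i + 1] * frac;
        printf("%d\t%f\n", n, out);
        pos += rate;
    }
    return 0;
}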
In the 1980s, when the technology was still in its infancy, digital samplers cost tens of thousands of dollars and were used only by top recording studios and musicians, putting them out of the price range of most musicians. Early samplers include the 12-bit Toshiba LMD-649 and the 8-bit Emulator I, both released in 1981. The latter's successor, the Emulator II (released in 1984), listed for $8,000. High-priced samplers continued to be released, such as the Kurzweil K2000 and K2500.
The first affordable sampler, the AKAI S612, became available in the mid-1980s and retailed for US$895. Other companies soon released affordable samplers, including the Mirage Sampler, Oberheim DPX-1, and more by Korg, Casio, Yamaha, and Roland. Some important hardware samplers include the Akai Z4/Z8, Ensoniq ASR-10, Roland V-Synth, Casio FZ-1, Kurzweil K250, Akai MPC60, Ensoniq Mirage, Akai S1000, E-mu Emulator, and Fairlight CMI.
One of the biggest uses of sampling technology was by hip-hop music DJs and performers in the 1980s. Before affordable sampling technology was readily available, DJs would use a technique pioneered by Grandmaster Flash to manually repeat certain parts of a song by juggling between two separate turntables, which can be considered an early precursor of sampling. In turn, this turntablism technique originates from Jamaican dub music of the 1960s and was introduced to American hip hop in the 1970s.
In the 2000s, most professional recording studios use digital technologies. In recent years, many samplers have included only digital technology. This new generation of digital samplers is capable of reproducing and manipulating sounds. Digital sampling plays an integral part in some genres of music, such as hip-hop and trap. Advanced sample libraries have made possible complete performances of orchestral compositions that sound similar to a live performance. Modern sound libraries allow musicians to use the sounds of almost any instrument in their productions.
MIDI
MIDI has been the musical instrument industry's standard interface since the 1980s. It dates back to June 1981, when Roland Corporation founder Ikutaro Kakehashi proposed the concept of standardization between different manufacturers' instruments, as well as computers, to Oberheim Electronics founder Tom Oberheim and Sequential Circuits president Dave Smith. In October 1981, Kakehashi, Oberheim and Smith discussed the concept with representatives from Yamaha, Korg and Kawai. In 1983, the MIDI standard was unveiled by Kakehashi and Smith.
Some universally accepted varieties of MIDI software applications include music instruction software, MIDI sequencing software, music notation software, hard disk recording/editing software, patch editor/sound library software, computer-assisted composition software, and virtual instruments. Current developments in computer hardware and specialized software continue to expand MIDI applications.
Computers in music technology
Following the widespread adoption of MIDI, computer-based MIDI editors and sequencers were developed. MIDI-to-CV/Gate converters were then used to enable analogue synthesizers to be controlled by a MIDI sequencer.
Falling personal computer prices drew many musicians away from the more expensive workstations. Advances in processing speed and memory capacity have allowed software developers to write new, more powerful programs for sequencing, recording, notating, and mastering music.
Digital audio workstation software, such as Pro Tools, Logic, and many others, has gained popularity among the vast array of contemporary music technology in recent years. Such programs allow the user to record acoustic sounds with a microphone or software instrument, which may then be layered and organized along a timeline and edited on a flat-panel display of a computer. Recorded segments can be copied and duplicated ad infinitum, without any loss of fidelity or added noise (a major contrast with analog recording, in which every copy leads to a loss of fidelity and added noise). Digital music can be edited and processed using a multitude of audio effects. Contemporary classical music sometimes uses computer-generated sounds, either pre-recorded or generated and manipulated live, in conjunction with or juxtaposed against acoustic instruments like the cello or violin. Music is scored with commercially available notation software.
In addition to digital audio workstations and music notation software, which facilitate the creation of fixed media (material that does not change each time it is performed), software facilitating interactive or generative music continues to emerge. Composition based on conditions or rules (algorithmic composition) has given rise to software that can automatically generate music from input conditions or rules, so the resulting music evolves each time the conditions change. Examples of this technology include software designed for writing video game music, where the music evolves as a player advances through a level or when certain characters appear, and music generated by artificial intelligence trained to convert biometrics such as EEG or ECG readings into music. Because this music is based on user interaction, it will be different each time it is heard. Other examples of generative music technology include the use of sensors connected to computers and artificial intelligence to generate music based on captured data, such as environmental factors, the movements of dancers, or physical inputs from a digital device such as a mouse or game controller. Software applications offering capabilities for generative and interactive music include SuperCollider, Max/MSP/Jitter, and Processing. Interactive music is made possible through physical computing, where data from the physical world affects a computer's output and vice versa.
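A minimal example of rule-based generation is a random walk over a scale, in which an external input, here just a seed standing in for a sensor reading or a game state, changes the resulting melody. The C sketch below is a toy illustration of the idea rather than the approach of any particular product.

#include <stdio.h>
#include <stdlib.h>

/* Toy algorithmic composition: a random walk over a pentatonic scale.
   The seed stands in for whatever input condition drives the music. */
int main(int argc, char **argv) {
    const int scale[] = {60, 62, 64, 67, 69};   /* C major pentatonic, as MIDI notes */
    const int scale_len = 5;
    unsigned seed = (argc > 1) ? (unsigned)atoi(argv[1]) : 42;
    srand(seed);

    int degree = 0;
    for (int i = 0; i < 16; i++) {
        degree += (rand() % 3) - 1;             /* step down, stay, or step up */
        if (degree < 0) degree = 0;
        if (degree >= scale_len) degree = scale_len - 1;
        printf("%d ", scale[degree]);           /* each value would become a MIDI Note On */
    }
    printf("\n");
    return 0;
}

Running the sketch with different seeds produces different melodies from the same rules, which is the essential property of generative music.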
Vocal synthesis
In the 2010s, vocal synthesis technology has taken advantage of recent advances in artificial intelligence, such as deep learning and machine learning, to better represent the nuances of the human voice. New high-fidelity sample libraries combined with digital audio workstations facilitate editing in fine detail, such as shifting of formants, adjustment of vibrato, and adjustments to vowels and consonants. Sample libraries for various languages and accents are available. With today's advancements in vocal synthesis, artists sometimes use sample libraries in lieu of backing singers.
Timeline
1917 : Leon Theremin invented the prototype of the Theremin
1944 : Halim El-Dabh produces earliest electroacoustic tape music
1952 : Harry F. Olson and Herbert Belar invent the RCA Synthesizer
1952 : Osmand Kendal develops the Composer-Tron for the Marconi Wireless Company
1956 : Raymond Scott develops the Clavivox
1958 : Evgeny Murzin along with several colleagues create the ANS synthesizer
1959 : Wurlitzer manufactures The Sideman, the first commercial electro-mechanical drum machine
1963 : Keio Electronics (later Korg) produces the DA-20
1963 : The Mellotron starts to be manufactured in London
1964 : Ikutaro Kakehashi debuts Ace Tone R-1 Rhythm Ace, the first electronic drum
1964 : The Moog Synthesizer is released
1965 : Nippon Columbia patents an early electronic drum machine
1966 : Korg releases Donca-Matic DE-20, an early electronic drum machine
1967 : Ace Tone releases FR-1 Rhythm Ace, the first drum machine to enter popular music
1967 : First PCM recorder developed by NHK
1968 : King Tubby pioneers dub music, an early form of popular electronic music
1969 : Matsushita engineer Shuichi Obata invents first direct-drive turntable, Technics SP-10
1970 : ARP 2600 is manufactured
1973 : Yamaha releases the Yamaha GX-1, the first polyphonic synthesizer
1974 : Yamaha builds the first digital synthesizer
1977 : Roland releases MC-8, an early microprocessor-driven CV/Gate digital sequencer
1978 : Roland releases CR-78, the first microprocessor-driven drum machine
1979 : Casio releases VL-1, the first commercial digital synthesizer
1980 : Roland releases TR-808, the most widely used drum machine in popular music
1980 : Roland introduces DCB protocol and DIN interface with TR-808
1980 : Yamaha releases GS-1, the first FM digital synthesizer
1980 : Kazuo Morioka creates Firstman SQ-01, the first bass synth with a sequencer
1981 : Roland releases TB-303, a bass synthesizer that lays foundations for acid house music
1981 : Toshiba's LMD-649, the first PCM digital sampler, introduced with Yellow Magic Orchestra's Technodelic
1982 : Sony and Philips introduce compact disc
1982 : First MIDI synthesizers released, Roland Jupiter-6 and Prophet 600
1983 : Introduction of MIDI
1983 : Roland releases MSQ-700, the first MIDI sequencer
1983 : Roland releases TR-909, the first MIDI drum machine
1983 : Roland releases MC-202, the first groovebox
1983 : Yamaha releases DX7, the first commercially successful digital synthesizer
1985 : Akai releases the Akai S612, a digital sampler
1986 : The first digital consoles appear
1987 : Digidesign markets Sound Tools
1988 : Akai introduces the Music Production Controller (MPC) series of digital samplers
1994 : Yamaha unveils the ProMix 01
See also
List of music software
References
External links
Music Technology in Education
Music Technology Resources
Detailed history of electronic instruments and electronic music technology at '120 years of Electronic Music'
Sound recording
Audio electronics
Audio software
Music history
Musical instruments |
2878472 | https://en.wikipedia.org/wiki/SoftRAM | SoftRAM | SoftRAM and SoftRAM95 were system software products which claimed to double the available random-access memory in Microsoft Windows without the need for a hardware upgrade. However, it later emerged that the program did not even attempt to increase available memory. In July 1996, the developer of SoftRAM, Syncronys, settled charges brought by the Federal Trade Commission over "false and misleading" claims about the capability of the software. The product was rated the third "Worst Tech Product of All Time" by PC World in 2006. Roughly 100,000 copies of SoftRAM and more than 600,000 copies of SoftRAM95 were sold.
Because SoftRAM and SoftRAM95 were faulty, the company had to file for bankruptcy, as it could not afford the $10 rebates owed to affected consumers. The main owners of the company were Rainer Poertner (20.9%), Wendell Brown (13.5%) and Mobius Capital Corp., a British Virgin Islands company, which owned 55.3%. The proxy statement also names Daniel G. Taylor, who received his law degree from Osgoode Hall in 1982, as the only person with an indirect beneficial interest in Mobius Capital Corp.
SoftRAM
SoftRAM was designed for use with Windows 3.1. It was launched in March 1995 and sold more than 100,000 copies.
Most out-of-memory errors in Windows 3.x were caused by the first megabyte of memory in a computer, the conventional memory, becoming full. Windows needed to allocate a Program Segment Prefix (PSP) in this area of memory for each program started. Some utilities prevented DLLs from allocating memory here, leaving more space for user programs. This was a standard technique also used by other memory optimization tools. SoftRAM also claimed to increase the amount of virtual memory available by compressing the pages of virtual memory stored in the swap file on the hard disk, which has the added effect of reducing the number of swap file reads and writes. The software also increased the size of the Windows page file, something users could achieve for free, without additional software, by changing system settings.
SoftRAM95
SoftRAM95 was designed for Windows 95 and was released in August 1995. The company sold over 600,000 copies of SoftRAM95 at a list price of USD $79.95, GBP £60 or 170 DM.
When Windows 95 was launched, it was widely reported that software for the operating system would be "memory hungry", requiring at least 4 megabytes of memory and preferably 8. Syncronys positioned SoftRAM as a cheaper alternative to buying more memory for those who would otherwise be unable to run Windows 95.
FTC investigation
In December 1995, the German computing journal c't disassembled the program and determined that it did not even attempt to do what was claimed. In fact, the data passed through the VxD completely unaltered, so no compression whatsoever could have taken place. The actual drivers were slightly modified versions of code examples taken from Microsoft's "Windows Development Kit". Still, the program would try to pretend that it increased system resources, by silently increasing the size of the swap file on Windows 3.1 and by giving false information on the current state of the system. Even worse, the program was compiled with the debug flag on and so ran slower than the original driver from Microsoft. A further test by PC Magazine revealed that SoftRAM took the same amount of time to move through systems that contained varying amounts of RAM, leading the magazine's technical editor to call SoftRAM completely "devoid of value". Another study by Dr. Dobb's Journal came to the same conclusions.
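c't's finding that data passed through the driver unaltered means the "compression" routine behaved like the identity function. The C sketch below is purely illustrative and is not SoftRAM's actual code; it simply shows why a pass-through routine of this kind can never free any memory.

#include <stdio.h>
#include <string.h>

/* Illustrative only: a "compression" routine that, like the behaviour
   c't described, returns its input unchanged, so the "compressed" page
   is exactly as large as the original and nothing is saved. */
static size_t fake_compress(const unsigned char *in, size_t len, unsigned char *out) {
    memcpy(out, in, len);   /* the page passes through untouched */
    return len;             /* "compressed" size equals the original size */
}

int main(void) {
    unsigned char page[4096] = {0}, packed[4096];
    printf("bytes saved: %zu\n", sizeof page - fake_compress(page, sizeof page, packed));
    return 0;
}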
The Federal Trade Commission began an investigation in late 1995, ultimately concluding that Syncronys' claims about SoftRAM were "false and misleading". They also concluded that "SoftRAM95 does not increase RAM in a computer using Windows 95; nor does the product enhance the speed, capacity, or other performance measures of a computer using Windows 95". The investigation prompted the company to recall both SoftRAM and SoftRAM95 from the market in December 1995. Several individual customers filed suit against the company as well. Syncronys settled with the FTC and the suing customers in 1996. As part of the FTC settlement, it agreed to give US$10 rebates to any customers who requested them. Around that time, the software came to be called "placebo software", a program whose apparent benefits rested on the placebo effect.
Bankruptcy
Syncronys filed for bankruptcy in July 1998 with $4.5 million of debt after releasing a dozen other poorly received tools. Its final release, UpgradeAID 98, claimed to allow users to downgrade from Windows 98 to Windows 95, duplicating an existing feature of Windows 98, for $39.95. A large number of its creditors were customers who had not received their rebates for SoftRAM.
Syncronys replaced its board and leadership and operated under Chapter 7 bankruptcy until 2002. In 2006, the SEC revoked the registration of its securities and placed Syncronys in default for failing to file any financial reports since its 1998 bankruptcy.
References
External links
Utilities for Windows
Computer system optimization software
1995 software
False advertising |
77733 | https://en.wikipedia.org/wiki/Hesione | Hesione | In Greek mythology and later art, the name Hesione (/hɪˈsaɪ.əniː/; Ancient Greek: Ἡσιόνη) refers to various mythological figures, of whom the Trojan princess Hesione is most known.
Mythology
According to the Bibliotheca, the most prominent Hesione was a Trojan princess, daughter of King Laomedon of Troy, sister of Priam and second wife of King Telamon of Salamis. The first notable myth in which Hesione appears is that of Heracles, who saves her from a sea monster. However, her role becomes significant many years later, when she is described as a potential trigger of the Trojan War.
Apollo and Poseidon were angry at King Laomedon because he refused to pay the wage he promised them for building Troy's walls. Apollo sent a plague and Poseidon a sea monster to destroy Troy. Oracles promised deliverance if Laomedon would expose his daughter Hesione to be devoured by the sea monster Cetus (in other versions, the lot happened to fall on her) and he exposed her by fastening her naked to the rocks near the sea. Heracles, Telamon and Oicles happened to arrive on their return from the expedition against the Amazons. Seeing her exposed, Heracles promised to save her on condition that Laomedon would give him the wonderful horses he had received from Zeus as compensation for Zeus' kidnapping of Ganymede. Laomedon agreed, and Heracles slew the monster. In some accounts, after being swallowed by it, he hacked at its innards for three days before it died. He emerged, having lost all his hair. However, Laomedon refused to give him the promised award.
In a later expedition, Heracles attacked Troy, slew Laomedon and all of Laomedon's sons except the youngest, Podarces. Heracles gave Laomedon's daughter Hesione as a prize to Telamon instead of keeping her for himself. He allowed her to take with her any captives that she wished; she chose her brother Podarces. Heracles allowed her to ransom him in exchange for her veil. Therefore, Podarces henceforth became known as Priam, from ancient Greek πρίασθαι priasthai, meaning "to buy". Heracles then bestowed the government of Troy on Priam. However, it is also claimed that Priam simply happened to be absent campaigning in Phrygia during Heracles' attack on Troy.
Hesione was taken home by Telamon, married him and bore him a son, Teucros, half-brother to Telamon's son from his first marriage, Ajax. Alternatively, she became pregnant with Trambelus while still on board the ship and then escaped; it is also possible, though, that the mother of Trambelus was not Hesione, but a certain Theaneira.
Many years later, when Hesione was an old woman, Priam sent Antenor and Anchises to Greece to demand Hesione's return, but they were rejected and driven away. Priam then sent Paris and Aeneas to retrieve her, but Paris got sidetracked and instead brought back Helen, queen of Sparta and wife of Menelaus. Priam was ultimately willing to accept the abduction of Helen, due to the Greeks' refusal to return Hesione.
Spurious references
The name Hesione in Dictys Cretensis 4.22 appears to be an error for Plesione of Dictys 1.9 and that in turn an error for Pleione.
References
Bibliography
Schwab, G. (2001). Gods and Heroes of Ancient Greece. New York: Pantheon Books.
Princesses in Greek mythology
Queens in Greek mythology
Trojans
Women in Greek mythology
Characters in Greek mythology
Mythology of Heracles
Human sacrifice |
23816215 | https://en.wikipedia.org/wiki/1985%20USC%20Trojans%20football%20team | 1985 USC Trojans football team | The 1985 USC Trojans football team represented the University of Southern California during the 1985 NCAA Division I-A football season.
Schedule
Personnel
Season summary
at Illinois
Baylor
at Arizona State
Oregon State
Stanford
at Notre Dame
Washington State
at California
at Washington
UCLA
vs. Oregon
Source:
Aloha Bowl (vs. Alabama)
1986 NFL Draft
The following players were drafted into professional football following the season.
References
USC
USC Trojans football seasons
USC Trojans football |
229388 | https://en.wikipedia.org/wiki/Corineus | Corineus | Corineus, in medieval British legend, was a prodigious warrior, a fighter of giants, and the eponymous founder of Cornwall.
According to Geoffrey of Monmouth's History of the Kings of Britain (1136), he led the descendants of the Trojans who fled with Antenor after the Trojan War and settled on the coasts of the Tyrrhenian Sea. After Brutus, a descendant of the Trojan prince Aeneas, had been exiled from Italy and liberated the enslaved Trojans in Greece, he encountered Corineus and his people, who joined him in his travels. In Gaul, Corineus provoked a war with Goffarius Pictus, king of Aquitania, by hunting in his forests without permission, and killed thousands single-handedly with his battle-axe. After defeating Goffarius, the Trojans crossed to the island of Albion, which Brutus renamed Britain after himself. Corineus settled in Cornwall, which was then inhabited by giants. Brutus and his army killed most of them, but their leader, Gogmagog, was kept alive for a wrestling match with Corineus. The fight took place near Plymouth, and Corineus killed him by throwing him over a cliff.
Corineus was the first of the legendary rulers of Cornwall. After Brutus died the rest of Britain was divided between his three sons, Locrinus (England), Kamber (Wales) and Albanactus (Scotland). Locrinus agreed to marry Corineus's daughter Gwendolen, but fell in love instead with Estrildis, a captured German princess. Corineus threatened war in response to this affront, and to pacify him Locrinus married Gwendolen, but kept Estrildis as his secret mistress. After Corineus died Locrinus divorced Gwendolen and married Estrildis, and Gwendolen responded by raising an army in Cornwall and making war against her ex-husband. Locrinus was killed in battle, and Gwendolen threw Estrildis and her daughter, Habren, into the River Severn.
The tale is preserved in the works of later writers, including Michael Drayton and John Milton.
As to Corineus's stature, he is represented as being the largest of Brutus's crew in the Middle English prose Brut. Raphael Holinshed comments that Corineus was not a giant, but cites a source, the Architrenius, that describes Corineus as a man 12 cubits (18 feet) tall.
See also
Corinius
References
British traditional history
Characters in works by Geoffrey of Monmouth
Cornish folklore
Monarchs of Cornwall
Medieval legends
Gog and Magog |
18938636 | https://en.wikipedia.org/wiki/Decompiler | Decompiler | A decompiler is a computer program that translates an executable file into a high-level source file which can be recompiled successfully. It is therefore the opposite of a compiler, which translates a source file into an executable. Decompilers are usually unable to perfectly reconstruct the original source code, and thus frequently produce obfuscated code. Nonetheless, decompilers remain an important tool in the reverse engineering of computer software.
Introduction
The term decompiler is most commonly applied to a program which translates executable programs (the output from a compiler) into source code in a (relatively) high level language which, when compiled, will produce an executable whose behavior is the same as the original executable program. By comparison, a disassembler translates an executable program into assembly language (and an assembler could be used for assembling it back into an executable program).
Decompilation is the act of using a decompiler, although the term can also refer to the output of a decompiler. It can be used for the recovery of lost source code, and is also useful in some cases for computer security, interoperability and error correction. The success of decompilation depends on the amount of information present in the code being decompiled and the sophistication of the analysis performed on it. The bytecode formats used by many virtual machines (such as the Java Virtual Machine or the .NET Framework Common Language Runtime) often include extensive metadata and high-level features that make decompilation quite feasible. The presence of debug data, i.e. debug symbols, may make it possible to reproduce the original names of variables and structures and even the line numbers. Machine language without such metadata or debug data is much harder to decompile.
Some compilers and post-compilation tools produce obfuscated code (that is, they attempt to produce output that is very difficult to decompile, or that decompiles to confusing output). This is done to make it more difficult to reverse engineer the executable.
While decompilers are normally used to (re-)create source code from binary executables, there are also decompilers to turn specific binary data files into human-readable and editable sources.
Design
Decompilers can be thought of as composed of a series of phases each of which contributes specific aspects of the overall decompilation process.
Loader
The first decompilation phase loads and parses the input machine code or intermediate language program's binary file format. It should be able to discover basic facts about the input program, such as the architecture (Pentium, PowerPC, etc.) and the entry point. In many cases, it should be able to find the equivalent of the main function of a C program, which is the start of the user-written code. This excludes the runtime initialization code, which should not be decompiled if possible. If available, the symbol tables and debug data are also loaded. The front end may be able to identify the libraries used even if they are linked with the code; this provides library interfaces. If it can determine the compiler or compilers used, it may provide useful information for identifying code idioms.
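As an illustration of the loader phase, the following C sketch reads the header of a 64-bit ELF executable on a Unix-like system and reports the architecture and entry point. It is a minimal example under those assumptions; a real loader also parses program and section headers, symbol tables, and other container formats such as PE.

#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Read the ELF file header and print the machine type and entry point. */
int main(int argc, char **argv) {
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    Elf64_Ehdr hdr;
    if (fread(&hdr, sizeof hdr, 1, f) != 1) { fclose(f); return 1; }
    fclose(f);

    if (memcmp(hdr.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        return 1;
    }
    printf("machine type: %u (x86-64 is %u)\n",
           (unsigned)hdr.e_machine, (unsigned)EM_X86_64);
    printf("entry point: 0x%llx\n", (unsigned long long)hdr.e_entry);
    return 0;
}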
Disassembly
The next logical phase is the disassembly of machine code instructions into a machine independent intermediate representation (IR). For example, the Pentium machine instruction
mov eax, [ebx+0x04]
might be translated to the IR
eax := m[ebx+4];
Idioms
Idiomatic machine code sequences are sequences of code whose combined semantics are not immediately apparent from the instructions' individual semantics. Either as part of the disassembly phase, or as part of later analyses, these idiomatic sequences need to be translated into known equivalent IR. For example, the x86 assembly code:
cdq eax ; edx is set to the sign-extension of eax
xor eax, edx
sub eax, edx
could be translated to
eax := abs(eax);
Some idiomatic sequences are machine independent; some involve only one instruction. For example, xor eax, eax clears the eax register (sets it to zero). This can be implemented with a machine independent simplification rule, such as a = 0.
In general, it is best to delay detection of idiomatic sequences if possible, to later stages that are less affected by instruction ordering. For example, the instruction scheduling phase of a compiler may insert other instructions into an idiomatic sequence, or change the ordering of instructions in the sequence. A pattern matching process in the disassembly phase would probably not recognize the altered pattern. Later phases group instruction expressions into more complex expressions, and modify them into a canonical (standardized) form, making it more likely that even the altered idiom will match a higher level pattern later in the decompilation.
It is particularly important to recognize the compiler idioms for subroutine calls, exception handling, and switch statements. Some languages also have extensive support for strings or long integers.
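Idiom recognition is essentially pattern matching over the IR. The toy C sketch below uses a deliberately simplified IR and matcher, not the design of any particular decompiler, to rewrite the register-clearing xor idiom mentioned above as a constant assignment.

#include <stdio.h>
#include <string.h>

/* Simplified IR: one instruction = an opcode, a destination and a source. */
struct ir { char op[8]; char dst[8]; char src[8]; };

/* Rewrite "xor r, r" as "assign r, 0". */
static void simplify(struct ir *code, int n) {
    for (int i = 0; i < n; i++) {
        if (strcmp(code[i].op, "xor") == 0 &&
            strcmp(code[i].dst, code[i].src) == 0) {
            strcpy(code[i].op, "assign");
            strcpy(code[i].src, "0");
        }
    }
}

int main(void) {
    struct ir code[] = { {"xor", "eax", "eax"}, {"add", "eax", "ebx"} };
    simplify(code, 2);
    for (int i = 0; i < 2; i++)
        printf("%s %s, %s\n", code[i].op, code[i].dst, code[i].src);
    return 0;
}

Multi-instruction idioms such as the absolute-value sequence above require matching several IR statements at once, which is why decompilers prefer to normalize expressions before applying such patterns.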
Program analysis
Various program analyses can be applied to the IR. In particular, expression propagation combines the semantics of several instructions into more complex expressions. For example,
mov eax,[ebx+0x04]
add eax,[ebx+0x08]
sub [ebx+0x0C],eax
could result in the following IR after expression propagation:
m[ebx+12] := m[ebx+12] - (m[ebx+4] + m[ebx+8]);
The resulting expression is more like high level language, and has also eliminated the use of the machine register eax. Later analyses may eliminate the ebx register.
Data flow analysis
The places where register contents are defined and used must be traced using data flow analysis. The same analysis can be applied to locations that are used for temporaries and local data. A different name can then be formed for each such connected set of value definitions and uses. It is possible that the same local variable location was used for more than one variable in different parts of the original program. Even worse, it is possible for the data flow analysis to identify a path whereby a value may flow between two such uses even though it would never actually happen or matter in reality. In bad cases, this may require defining a location as a union of types. The decompiler may allow the user to explicitly break such unnatural dependencies, which will lead to clearer code. This of course means a variable is potentially used without being initialized, and so indicates a problem in the original program.
Type analysis
A good machine code decompiler will perform type analysis. Here, the way registers or memory locations are used results in constraints on the possible type of the location. For example, an and instruction implies that the operand is an integer; programs do not use such an operation on floating point values (except in special library code) or on pointers. An add instruction results in three constraints, since the operands may be both integer, or one integer and one pointer (with integer and pointer results respectively; the third constraint comes from the ordering of the two operands when the types are different).
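The typing rule for add described above can be written as a small function over operand types. The C sketch below is illustrative only; real type analysis propagates such constraints across the whole program rather than deciding each instruction in isolation.

#include <stdio.h>

enum ty { TY_INT, TY_PTR, TY_ERROR };

/* integer + integer -> integer; pointer + integer (either order) -> pointer;
   pointer + pointer has no valid typing. */
static enum ty add_result(enum ty a, enum ty b) {
    if (a == TY_INT && b == TY_INT) return TY_INT;
    if (a == TY_PTR && b == TY_INT) return TY_PTR;
    if (a == TY_INT && b == TY_PTR) return TY_PTR;
    return TY_ERROR;
}

int main(void) {
    const char *names[] = { "int", "ptr", "error" };
    printf("int + ptr -> %s\n", names[add_result(TY_INT, TY_PTR)]);
    printf("ptr + ptr -> %s\n", names[add_result(TY_PTR, TY_PTR)]);
    return 0;
}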
Various high level expressions can be recognized which trigger recognition of structures or arrays. However, it is difficult to distinguish many of the possibilities, because of the freedom that machine code or even some high level languages such as C allow with casts and pointer arithmetic.
The example from the previous section could result in the following high level code:
struct T1 *ebx;
struct T1 {
int v0004;
int v0008;
int v000C;
};
ebx->v000C -= ebx->v0004 + ebx->v0008;
Structuring
The penultimate decompilation phase involves structuring of the IR into higher level constructs such as while loops and if/then/else conditional statements. For example, the machine code
xor eax, eax
l0002:
or ebx, ebx
jge l0003
add eax,[ebx]
mov ebx,[ebx+0x4]
jmp l0002
l0003:
mov [0x10040000],eax
could be translated into:
eax = 0;
while (ebx < 0) {
eax += ebx->v0000;
ebx = ebx->v0004;
}
v10040000 = eax;
Unstructured code is more difficult to translate into structured code than already structured code. Solutions include replicating some code, or adding boolean variables.
Code generation
The final phase is the generation of the high level code in the back end of the decompiler. Just as a compiler may have several back ends for generating machine code for different architectures, a decompiler may have several back ends for generating high level code in different high level languages.
Just before code generation, it may be desirable to allow an interactive editing of the IR, perhaps using some form of graphical user interface. This would allow the user to enter comments, and non-generic variable and function names. However, these are almost as easily entered in a post decompilation edit. The user may want to change structural aspects, such as converting a while loop to a for loop. These are less readily modified with a simple text editor, although source code refactoring tools may assist with this process. The user may need to enter information that failed to be identified during the type analysis phase, e.g. modifying a memory expression to an array or structure expression. Finally, incorrect IR may need to be corrected, or changes made to cause the output code to be more readable.
Legality
The majority of computer programs are covered by copyright laws. Although the precise scope of what is covered by copyright differs from region to region, copyright law generally provides the author (the programmer(s) or employer) with a collection of exclusive rights to the program. These rights include the right to make copies, including copies made into the computer’s RAM (unless creating such a copy is essential for using the program).
Since the decompilation process involves making multiple such copies, it is generally prohibited without the authorization of the copyright holder. However, because decompilation is often a necessary step in achieving software interoperability, copyright laws in both the United States and Europe permit decompilation to a limited extent.
In the United States, the copyright fair use defence has been successfully invoked in decompilation cases. For example, in Sega v. Accolade, the court held that Accolade could lawfully engage in decompilation in order to circumvent the software locking mechanism used by Sega's game consoles. Additionally, the Digital Millennium Copyright Act (PUBLIC LAW 105–304) has proper exemptions for both Security Testing and Evaluation in §1201(i), and Reverse Engineering in §1201(f).
In Europe, the 1991 Software Directive explicitly provides for a right to decompile in order to achieve interoperability. The result of a heated debate between, on the one side, software protectionists, and, on the other, academics as well as independent software developers, Article 6 permits decompilation only if a number of conditions are met:
First, a person or entity must have a licence to use the program to be decompiled.
Second, decompilation must be necessary to achieve interoperability with the target program or other programs; the interoperability information must not already be readily available, for instance through manuals or API documentation. This is an important limitation, and the necessity must be proven by the decompiler. Its purpose is primarily to provide an incentive for developers to document and disclose their products' interoperability information.
Third, the decompilation process must, if possible, be confined to the parts of the target program relevant to interoperability. Since one of the purposes of decompilation is to gain an understanding of the program structure, this third limitation may be difficult to meet. Again, the burden of proof is on the decompiler.
In addition, Article 6 prescribes that the information obtained through decompilation may not be used for other purposes and that it may not be given to others.
Overall, the decompilation right provided by Article 6 codifies what is claimed to be common practice in the software industry. Few European lawsuits are known to have emerged from the decompilation right. This could be interpreted as meaning one of three things:
(1) the decompilation right is not used frequently and the decompilation right may therefore have been unnecessary,
(2) the decompilation right functions well and provides sufficient legal certainty not to give rise to legal disputes, or
(3) illegal decompilation goes largely undetected.
In a report of 2000 regarding implementation of the Software Directive by the European member states, the European Commission seemed to support the second interpretation.
Tools
Decompilers usually target a specific binary format. Some are native instruction sets (e.g. Intel x86, ARM, MIPS), others are bytecode for virtual machines (Dalvik, Java class files, WebAssembly, Ethereum).
Due to information loss during compilation, decompilation is almost never perfect, and not all decompilers perform equally well for a given binary format. There are studies comparing the performance of different decompilers.
See also
Binary recompiler
Linker (computing)
Abstract interpretation
Resource editor
Java decompilers
Mocha decompiler
JD Decompiler
JAD decompiler
Other decompilers
.NET Reflector
JEB Decompiler (Android Dalvik, Intel x86, ARM, MIPS, WebAssembly, Ethereum)
References
External links
Utility software types
Reverse engineering |
41321789 | https://en.wikipedia.org/wiki/Clara.io | Clara.io | Clara.io is web-based freemium 3D computer graphics software developed by Exocortex, a Canadian software company. However, the free, or "Basic", tier of the freemium offering places significant restrictions, such as limits on saving models and importing texture maps, which are not disclosed in the company's own descriptions of its plans. Clara.io was announced in July 2013 and first presented as part of the official SIGGRAPH 2013 program later that month. By November 2013, when the open beta period started, Clara.io had 14,000 registered users. Clara.io claimed to have 26,000 registered users in January 2014, a figure that grew to 85,000 by December 2014.
Features
Polygonal modeling
Constructive solid geometry
Key frame animation
Skeletal animation
Hierarchical scene graph
Texture mapping
Photorealistic rendering (streaming cloud rendering using V-Ray Cloud)
Scene publishing via HTML iframe embedding
FBX, Collada, OBJ, STL and Three.js import/export
Collaborative real-time editing
Revision control (versioning & history)
Scripting, Plugins & REST APIs
3D model library
Unlisted and Private scenes (paid subscriptions only).
Technology
Clara.io is developed using HTML5, JavaScript, WebGL and Three.js. Clara.io does not rely on any browser plugins and thus runs on any platform that has a modern standards compliant browser.
Screenshots
See also
3D modeling
Sketchfab
SketchUp
References
External links
2013 software
3D animation software
3D graphics software
3D publishing
Internet properties established in 2013
Video game development software
Web applications
WebGL |
51630493 | https://en.wikipedia.org/wiki/2016%E2%80%9317%20Little%20Rock%20Trojans%20men%27s%20basketball%20team | 2016–17 Little Rock Trojans men's basketball team | The 2016–17 Little Rock Trojans men's basketball team represented the University of Arkansas at Little Rock during the 2016–17 NCAA Division I men's basketball season. The Trojans, led by first-year head coach Wes Flanigan, played their home games at the Jack Stephens Center in Little Rock, Arkansas as members of the Sun Belt Conference. They finished the season 15–17, 6–12 in Sun Belt play to finish in tenth place. They lost in the first round of the Sun Belt Tournament to Louisiana–Lafayette.
Previous season
The Trojans finished the 2015–16 season 30–5, 17–3 in Sun Belt play to win the Sun Belt regular season championship. They defeated Louisiana–Lafayette and Louisiana–Monroe to win the Sun Belt Tournament. As a result, the Trojans received the conference's automatic bid to the NCAA Tournament as a No. 12 seed. In the First Round, they upset Purdue before losing in the Second Round to Iowa State.
Following the season, first-year head coach Chris Beard left the school to accept the head coaching position at UNLV. On March 31, 2016, the school hired Wes Flanigan as head coach.
Roster
Schedule and results
|-
!colspan=9 style=| Non-conference regular season
|-
!colspan=9 style=| Sun Belt Conference regular season
|-
!colspan=9 style=| Sun Belt Tournament
References
Little Rock
Little Rock Trojans men's basketball seasons
6269934 | https://en.wikipedia.org/wiki/Carrollton%20High%20School%20%28Carrollton%2C%20Georgia%29 | Carrollton High School (Carrollton, Georgia) | Carrollton High School is a public high school in Carrollton, Georgia, United States, part of the Carrollton City School System. The school's mascot is the Trojan.
History
In 1886, a public school was established on College Street on the site of two former private schools, the "Carrollton Masonic Institute" and "Carrollton Seminary". Dr. William Washington Fitts, a local physician, civic leader, and owner of the school property, donated the land in order to establish the new public school system and served as president of its commissioning board. The new school, utilizing the wooden building of the old Masonic Institute, opened its doors in 1887 and served children in the local Carrollton area. The school was reconstructed as a larger two-story brick building ten years later and reopened as the Carrollton Public School, or College Street School. The first floor of this new building was divided into separate girls and boys high schools (jointly known as Carrollton High School) with younger grades attending classes on the second floor. Many years later in 1913, "Maple Street School" was constructed on the namesake street to serve as a feeder school, and children from the nearby "West View School" in Mandeville Mills were allowed to attend in 1922. Over a year earlier, the school district constructed another building on South White Street with a neoclassical design by architect Neel Reid, and the building became the new Carrollton High School in 1921. Both the Maple and College Street schools served as feeders into this new school. The College Street building was later dismantled in 1954 with a new smaller elementary school complex taking its place and name.
Segregation
While white children were allowed to attend the Carrollton Public School and later the Maple Street and Carrollton High School, school racial segregation was still in existence and African American students were denied admittance into these schools.
With the construction of the Maple Street School in 1913, another school for African American children was built on Pearl Street. However, the name of this original school is unknown. In 1932, using funds raised from a bond issue by the city of Carrollton, along with matching funds from the Rosenwald Fund, the Carroll County Training School was established. In 1954, a new building was built for grades 8-12 and was named George Washington Carver High School while the Carroll County Training School, renamed to "Alabama Street Elementary", became a feeder school.
Current location
A new Carrollton High School was built on the corner of Frances Place and Oak Avenue from 1962 to 1963, and students from the Neel Reid building were moved to this new location as it became the junior high school for the district. A Carrollton High student would later petition the local city council to rename the stretch of road in front of the school, and it became "Trojan Drive" in 1966. School integration was later organized in 1969, and students from the now closed Carver High attended Carrollton along with surrounding county schools. The school district underwent major reorganization with integration, and established a single cluster system utilizing the formerly segregated school facilities. A new junior high school was built in 1986 next to the high school while the historic Neel Reid building was sold to the community; now known as the "Tracy Stallings Community Center". The College Street School elementary facility was also sold to the community and is now the Carroll County Administration Building. The current elementary and middle (now upper elementary) schools were opened in 1992 and 2005 respectively next to the junior high and high school establishing the entire system on a unified 130-acre campus.
Academics
Carrollton High consistently ranks among the top 20 schools statewide in graduation rate performance. The school offers multiple Advanced Placement and International Baccalaureate courses which supplement its college-preparatory focus. The school follows a 4x4 block scheduling system and provides a full-service guidance staff offering on-site graduation coaches, career specialists, and academic coaches to students, as well as a Career and Technical Education department featuring several industry-certified programs. A collaboration with the nearby University of West Georgia allowed high-achieving students the opportunity to attend college through the Advanced Academy of Georgia before its dismantlement in 2017 to pave the way for the more general dual enrollment program. Students in the engineering pathway are offered the chance of an internship, the Southwire Engineering Academy, at the locally headquartered Southwire Company during their senior year. Each student is offered language training in Spanish and/or French.
Arts
CHS Trojan Band
Carrollton High School has historically emphasized its music program. The Carrollton High School Trojan Band, one of the oldest band programs in the state, was founded in 1948 by John Dilliard. The program, spanning more than seven decades, has received multiple awards for musical excellence, including a thirty-seven-year record of superior ratings. The band has had only six directors since Dilliard and is currently led by Chris Carr and Zachary Nelson. The Trojan Band currently averages over 200 members and includes the general marching band, a premier wind ensemble, symphonic band, concert band, jazz band, and two winterguard groups. The marching band has also made numerous appearances across the globe in internationally televised parades and performances.
Lions Club International Parade, New York City (1959)
National Cherry Blossom Festival Parade, Washington, D.C. (1970)
Mexico City Parade (1976)
First Place in Cherry Blossom Field Competition, Washington, D.C. (1978)
Tournament of Roses Parade, Pasadena, CA (1981)
First Place in the Apple Blossom Festival, Winchester, VA (1983 and 1993)
Macy’s Thanksgiving Day Parade, New York City (1984)
100th Annual Tournament of Roses Parade, Pasadena, CA (1989)
Orange Bowl Parade, Miami, FL (1990 and 1994)
Honolulu Parade, Honolulu, HI (1997)
London’s New Years Day Parade (2001)
Performance at the 60th Anniversary of D-Day, Normandy, France (June 6, 2004)
Outback Bowl Halftime Performance, Outback Parade, Tampa, Florida (2010)
Holiday Bowl Parade, San Diego, CA (2012)
Magic Kingdom Parade, Disney World, Orlando, FL (2015)
Performance at the U.S.S Intrepid, New York City (2017)
Hollywood Christmas Parade, Hollywood, CA (2018)
CHS Performing Arts
The Carrollton High School Performing Arts Program consists of the Drama Club and Chorus Program. Both groups regularly orchestrate joint musical works and theatrical presentations. The chorus program is also one of the premier high school choruses in the state of Georgia and has won many awards with the Georgia Music Educators Association.
Athletics
Carrollton's athletics program is a focal point of the school system; student athletes compete in the Georgia High School Association's Class 5AAAAAA. Sports teams at the school have records dating back to 1909, with the football program making its first appearance in 1920. The athletic teams received the name of the Trojans in 1938. Carrollton has received numerous "Field of the Year" awards for its baseball field, and commonly hosts the GHSA's state Cross Country meet, as well as a "Last Chance" Invitational. The school is best known for its football and track & field programs: football has won seven state championships and track & field has won twenty-four. Athletic teams have secured over fifty state championship titles in various sports including, but not limited to, soccer, baseball, golf, tennis, swimming, cheerleading, basketball, and wrestling.
Facilities
School replacement
In 2016, Carrollton High School underwent major renovations to replace many existing halls that have stood since the construction of the 1963 school. The new high school, a state-of-the-art facility taking design elements from the old Reid building, was constructed in three phases, and was finalized in 2019.
Grisham Stadium
Grisham Stadium serves as the main home field for many athletic teams in the school district.
Mabry Arts Center
The Mabry Arts Center is a theater designed to showcase the various productions, musicals, and visual art displays created by the student body.
Pope-McGinnis Student Activity Center
The Student Activity Center was built in 2019 to accommodate various athletic needs of the district. The facility houses an auxiliary basketball court, weightlifting room and the only regulation-sized indoor football field in the state of Georgia.
Notable alumni
Reggie Brown - football player
Cooper Criswell - baseball player
Corey Crowder - basketball player
Josh Harris - football player
Jamie Henderson - football player
John Willis Hurst - cardiologist
Jonathan Jones - football player
Darnell Powell - football player
Dontavius Russell - football player
References
External links
Official website
Carrollton City School District
CHS Trojan Band
Official athletic website
Public high schools in Georgia (U.S. state)
Schools in Carroll County, Georgia |
54789920 | https://en.wikipedia.org/wiki/The%20Great%20Battles%20of%20Alexander | The Great Battles of Alexander | The Great Battles of Alexander is a 1997 turn-based computer wargame developed by Erudite Software and published by Interactive Magic. Adapted from the GMT Games physical wargame of the same name, it depicts 10 of Alexander the Great's key conflicts, and simulates the interplay between Ancient Macedonian battle tactics and its rival military doctrines. Gameplay occurs at the tactical level: players direct predetermined armies on discrete battlefields, in a manner that one commentator compared to chess.
Development of Alexander began at Erudite Software in 1994, under the direction of Mark Herman, co-designer of the original board game. Its production cycle was long and troubled: following several delays, the game was dropped in 1996 by publisher Strategic Simulations. Interactive Magic ultimately signed Erudite to publish Alexander, and installed S. Craig Taylor as the game's producer. The team sought to make Alexander accessible despite the complexity of the wargame genre, and focused on polishing its audiovisual presentation and interface, the latter of which was inspired by Panzer General.
Critics praised Alexander's historical accuracy, graphics and audio, but noted its frame rate as a low point; a writer for PC Gamer UK argued that this problem helped to ruin the overall product. The title received a "Game of the Month" award from Jerry Pournelle of Byte. After the release of Alexander in June 1997, Erudite and Interactive Magic created two sequel products: The Great Battles of Hannibal (1997) and The Great Battles of Caesar (1998). These three games formed the Great Battles series, and were released together in the Great Battles: Collector's Edition compilation in late 1998. Their game engine was later reused in Erudite's North vs. South.
Gameplay
The Great Battles of Alexander is a computer wargame, which recreates the historical military exploits of Alexander the Great via turn-based gameplay. The game takes place on a hex map, and simulates combat at the tactical level; the player navigates an army of predetermined units on discrete battlefields, in a manner that one commentator compared to chess. Ten historical engagements—such as the Battle of the Hydaspes and the Siege of Pelium—are included.
Development
Production
The Great Battles of Alexander began development at Erudite Software in 1994, as an adaptation of the titular board wargame designed by Mark Herman and Richard Berg, first published by GMT Games in 1991. The physical Great Battles series had been a commercial success during a period of falling board game sales. Alexander's computer adaptation was first announced in late 1994, under the direction of Mark Herman, and was created with assistance from GMT. Erudite, a business software developer founded in 1990, initially hoped to self-publish the game. However, the company had partnered with publisher Strategic Simulations (SSI) by the time of Alexander's announcement. At that point, the game was set to include play-by-email (PBEM) support, and Herman explained his plan to apply artificial intelligence (AI) routines he had originally created for the United States Department of Defense. A summer 1995 release was planned.
Alexander experienced a long and troubled development cycle; Scott Udell of Computer Games Strategy Plus later called it "a 'lost child' of the computer wargaming world, moving from one publisher to another and then seeming to disappear completely." By August 1995, production delays at Erudite related to Windows 95 development had pushed the game's projected release back to the following year. As development progressed into 1996, Computer Gaming World's Terry Coleman reported that SSI had grown "tired of waiting" for Erudite to complete the game. As a result, the company dropped Alexander in the first half of the year, and the team was left to search for a new publisher. However, Coleman wrote at the time that "two other major wargame publishers" were rumored to be in talks with the developer. That May, Erudite was purchased for $12.8 million by GSE Systems, a developer of simulation programs for energy companies. The following month, the publication rights for Alexander were picked up by Interactive Magic. Udell called this an example of the publisher's "trend wherein they give new life to an orphan wargame product", as it had done with Harpoon Classic 97 and American Civil War: From Sumter to Appomattox.
Interactive Magic and Erudite Software released a game demo for Alexander in January 1997. Later that month, Interactive Magic declared its intent to publish the title alongside two sequel products: The Great Battles of Hannibal and The Great Battles of Caesar. These three games together formed the Great Battles computer wargame series, all produced by the publisher's S. Craig Taylor. At the time, Alexander was slated for release in early 1997; the sequels were given unspecified release dates. In May, Interactive Magic rescheduled Alexander for mid-June. Erudite completed the game on June 12, and it was ultimately released on the 22nd, at a price of roughly $50.
Design
Mark Herman described Alexander as an attempt to simulate the "interplay" between Ancient Macedonian battle tactics and the tactics of that nation's opponents, such as Persia and Greece. Based on his research into ancient war, he determined that Macedonia relied on a combined arms approach, while Persia favored ranged combat and Greece relied on infantry charges. To capture this clash via game mechanics, Herman categorized each unit into a specific tactical system: when units of rival tactical systems collide in shock combat, a bonus is awarded to the unit with the better system. He noted, "Many wargames treat all combat units as a singular entity while only varying speed and strength to show unit distinctions. I believe this approach is fundamentally wrong and removes most of what is important about tactical interactions in combat." Seeking also to capture the effects of leadership in the ancient world, he created the game's phasing turn structure, which allowed leaders to better display their initiative and range of influence in gameplay.
Alexander was adapted from the 1995 Deluxe re-release of the board game, rather than the original 1991 version. In converting a physical game design to computers, Herman hoped to "capture the essence of [the] original intent while using the strength of the new venue [...] to its best advantage." He considered the computer adaptation to be a "simulation", and a more accurate portrayal of Alexander the Great's battles than had been possible in board form. This led him to streamline, automate or eliminate several of the board version's features, including the trumping mechanic, whereby a leader with a high initiative rating could roll dice for a chance to interrupt an enemy leader's momentum. Trumping had been created to cut down on die rolls in the original; with a computer to automate this aspect, he felt that it was no longer necessary. Features from the board game that interrupted game flow, including those that required regular notification prompts, were removed. Herman argued that adapting board rules too literally made for poor computer gameplay, and that "the less times you remind the player he is playing [on] a computer and the more times you keep the interface environment constant and uninterrupted the better".
Erudite and Interactive Magic hoped to make Alexander both accessible to wargame newcomers and appealing to hardcore enthusiasts in the genre. They sought a product "so friendly that you can jump right in and enjoy it", which offered historical accuracy for experienced players and educational value for novices, according to co-designer Gene Billingsley. A heavy focus was placed on polishing the audiovisual presentation, which Bill Stealey of Interactive Magic believed would give the game a wide appeal. Craig Taylor noted the team's choice of a miniature wargaming graphical style, in opposition to wargame visuals akin to "a few figures [pasted] to the top of a counter". Inspiration for the game's audio was drawn from an episode of You Are There, in which Walter Cronkite performs a mock interview with Alexander the Great during the Battle of Gaugamela. Taylor likened the sequence's sound design to "a really intense football game". The team also simplified and streamlined the game's interface; at the time, Billingsley criticized earlier wargames for being needlessly inaccessible to newcomers. He remarked, "We believe strongly in the Panzer General type of interface, because it was so successful in getting the game to the player."
Reception and legacy
The Great Battles of Alexander was named Byte magazine's "Game of the Month" by columnist Jerry Pournelle. Calling it "the best classical era war game I've ever come across", Pournelle praised Alexander as a detailed and accurate portrayal of ancient war, without the time-consuming mathematical calculations required by board wargames.
The reviewer for Computer Games Strategy Plus, Robert Mayer, shared Pournelle's regard for Alexander's intuitive simplicity. "[F]or a boardgame conversion, this is as good as it gets", he argued. Mayer and Computer Gaming World's Jim Cobb offered plaudits to the game's visuals and interface, although they disagreed on the quality of the AI, about which Mayer had reservations. However, they concurred on the overall strength of the product: Cobb called Alexander "simply the best-ever ancients system", and a wargame "otherwise flawless" beyond frame rate problems and minor historical oversights.
PC Gamer US wargame columnist William R. Trotter continued the praise for Alexander's graphics and, along with Mayer, its audio. However, he echoed Cobb's complaints about its poor frame rate, while noting the "steep learning curve" and the bugs and errors within the interface. Although Trotter found these issues "minor in comparison to the overall achievement", James Weston of the magazine's British edition argued that frame rate and AI problems ruined the product. Despite enjoying the interface and campaign, he remarked that Alexander "will disappoint."
Writing for PC PowerPlay in Australia, reviewer March Stepnik compared Alexander favorably to real-time strategy titles such as Command & Conquer and Warcraft II: Tides of Darkness. Like Trotter, who considered the game to be "packed with authenticity", Stepnik singled out Alexander's "authentic feel" and deep, realistic strategy as high points. While he found the graphics mediocre and music unsuitable, he enjoyed the game overall. "Just don't be expecting any cheap and easy thrills — this will require some major investment", Stepnik concluded.
After the release of Alexander, Erudite Software and Interactive Magic launched the sequel, The Great Battles of Hannibal, in November 1997. The two games were followed by The Great Battles of Caesar early the next year. The Great Battles: Collector's Edition, which joined Alexander with its sequels, was released in December 1998. The series' game engine was later reused in North vs. South: The Great American Civil War, developed by Erudite and published by Interactive Magic.
References
External links
Official website (archived)
1997 video games
Computer wargames
Video games based on board games
Video games developed in the United States
Windows games
Windows-only games |
3339342 | https://en.wikipedia.org/wiki/Signcryption | Signcryption | In cryptography, signcryption is a public-key primitive that simultaneously performs the functions of both digital signature and encryption.
Encryption and digital signatures are two fundamental cryptographic tools that can guarantee confidentiality, integrity, and non-repudiation. Until 1997, they were viewed as important but distinct building blocks of various cryptographic systems. In public-key schemes, the traditional method is to digitally sign a message and then encrypt it (signature-then-encryption), which has two problems: the combination is comparatively inefficient and costly, and an arbitrary combination of a secure signature scheme and a secure encryption scheme is not guaranteed to be secure. Signcryption is a comparatively recent cryptographic technique that performs the functions of digital signature and encryption in a single logical step and can effectively decrease the computational cost and communication overhead compared with traditional signature-then-encryption schemes.
Signcryption provides the properties of both digital signatures and encryption schemes in a way that is more efficient than signing and encrypting separately. This means that at least some aspect of its efficiency (for example, the computation time) is better than any hybrid of digital signature and encryption schemes, under a particular model of security. Hybrid encryption can sometimes be employed instead of simple encryption, and reusing a single session key for several encryptions can achieve better overall efficiency across many signature-encryptions than a signcryption scheme; however, session-key reuse causes the system to lose security even under the relatively weak CPA model. This is why a fresh random session key is used for each message in a hybrid encryption scheme, and why, for a given level of security (i.e., a given model, say CPA), a signcryption scheme should be more efficient than any simple signature-hybrid encryption combination.
History
The first signcryption scheme was introduced by Yuliang Zheng in 1997. Zheng also proposed an elliptic-curve-based signcryption scheme that saves 58% in computational cost and 40% in communication cost compared with traditional elliptic-curve-based signature-then-encryption schemes. Many other signcryption schemes have been proposed over the years, each with its own limitations, offering different levels of security and computational cost.
Scheme
A signcryption scheme typically consists of three algorithms: Key Generation (Gen), Signcryption (SC), and Unsigncryption (USC). Gen generates a key pair for a user, SC is generally a probabilistic algorithm, and USC is typically deterministic; a minimal sketch of this interface is given after the list of properties below. Any signcryption scheme should have the following properties:
Correctness: Any signcryption scheme should be verifiably correct.
Efficiency: The computational costs and communication overheads of a signcryption scheme should be smaller than those of the best known signature-then-encryption schemes with the same provided functionalities.
Security: A signcryption scheme should simultaneously fulfill the security attributes of an encryption scheme and those of a digital signature. These attributes mainly include confidentiality, unforgeability, integrity, and non-repudiation. Some signcryption schemes provide further attributes, such as public verifiability and forward secrecy of message confidentiality, while others do not; these additional properties are required by some applications but not by others. The above-mentioned attributes are briefly described below.
Confidentiality: It should be computationally infeasible for an adaptive attacker to gain any partial information on the contents of a signcrypted text, without knowledge of the sender's or designated recipient's private key.
Unforgeability: It should be computationally infeasible for an adaptive attacker to masquerade as an honest sender in creating an authentic signcrypted text that can be accepted by the unsigncryption algorithm.
Non-repudiation: The recipient should have the ability to prove to a third party (e.g. a judge) that the sender has sent the signcrypted text. This ensures that the sender cannot deny his previously signcrypted texts.
Integrity: The recipient should be able to verify that the received message is the original one that was sent by the sender.
Public verifiability: Any third party without any need for the private key of sender or recipient can verify that the signcrypted text is the valid signcryption of its corresponding message.
Forward secrecy of message confidentiality: If the long-term private key of the sender is compromised, no one should be able to extract the plaintext of previously signcrypted texts. In a regular signcryption scheme, when the long-term private key is compromised, all previously issued signatures are no longer trustworthy. Since the threat of key exposure is becoming more acute as cryptographic computations are performed more frequently on poorly protected devices such as mobile phones, forward secrecy is an increasingly important attribute in such systems.
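A minimal sketch of the Gen/SC/USC interface is given below in Python. It is a hypothetical toy, not any published signcryption scheme: it derives a pairwise key by static Diffie–Hellman over deliberately undersized example parameters and then encrypts and MACs the message, so it only illustrates the data flow and the correctness property; it provides neither real confidentiality nor the unforgeability and non-repudiation expected of a genuine construction such as Zheng's.

```python
import hashlib
import hmac
import secrets
from typing import NamedTuple

# Toy group parameters -- far too small and ad hoc for real use; illustration only.
P = 2 ** 127 - 1
G = 3


class KeyPair(NamedTuple):
    private: int
    public: int


class Signcrypted(NamedTuple):
    ciphertext: bytes
    tag: bytes


def gen() -> KeyPair:
    """Gen: produce a key pair for one user."""
    x = secrets.randbelow(P - 3) + 2
    return KeyPair(private=x, public=pow(G, x, P))


def _derive_key(own_private: int, other_public: int) -> bytes:
    shared = pow(other_public, own_private, P)        # static Diffie-Hellman value
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()


def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]


def signcrypt(sender: KeyPair, receiver_public: int, message: bytes) -> Signcrypted:
    """SC: encrypt and authenticate in one pass, using the sender's private key and the receiver's public key."""
    key = _derive_key(sender.private, receiver_public)
    ct = bytes(m ^ k for m, k in zip(message, _keystream(key, len(message))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return Signcrypted(ct, tag)


def unsigncrypt(receiver: KeyPair, sender_public: int, sc: Signcrypted) -> bytes:
    """USC: verify the tag and recover the plaintext, or raise on failure."""
    key = _derive_key(receiver.private, sender_public)
    if not hmac.compare_digest(sc.tag, hmac.new(key, sc.ciphertext, hashlib.sha256).digest()):
        raise ValueError("unsigncryption failed: invalid tag")
    return bytes(c ^ k for c, k in zip(sc.ciphertext, _keystream(key, len(sc.ciphertext))))


if __name__ == "__main__":
    alice, bob = gen(), gen()
    sc = signcrypt(alice, bob.public, b"wire transfer: 100")
    assert unsigncrypt(bob, alice.public, sc) == b"wire transfer: 100"   # correctness check
```

In a real scheme, SC would additionally bind the sender's identity in a publicly checkable way, so that the recipient (or, for public verifiability, a third party) can attribute the message to the sender.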
Applications
Signcryption is seen to have several applications including the following:
Secure and authentic email.
E-commerce and M-commerce applications that often require confidentiality, authenticity, and perhaps non-repudiation.
See also
Authenticated encryption
References
Public-key cryptography |
9385454 | https://en.wikipedia.org/wiki/Spatial%20data%20infrastructure | Spatial data infrastructure | A spatial data infrastructure (SDI) is a data infrastructure implementing a framework of geographic data, metadata, users and tools that are interactively connected in order to use spatial data in an efficient and flexible way. Another definition is "the technology, policies, standards, human resources, and related activities necessary to acquire, process, distribute, use, maintain, and preserve spatial data".
A further definition is given in Kuhn (2005): "An SDI is a coordinated series of agreements on technology standards, institutional arrangements, and policies that enable the discovery and use of geospatial information by users and for purposes other than those it was created for."
General
Some of the main principles are that data and metadata should not be managed centrally, but by the data originator and/or owner, and that tools and services connect via computer networks to the various sources. A GIS is often the platform for deploying an individual node within an SDI. To achieve these objectives, good coordination between all the actors is necessary and the definition of standards is very important.
Due to its nature (size, cost, number of actors involved), an SDI is usually government-related. An example of an existing SDI, since 2002, is the NSDI created by OMB Circular A-16 in the United States. On the European side, since 2007, INSPIRE is a European Commission initiative to build a European SDI beyond national boundaries, and ultimately the United Nations Spatial Data Infrastructure (UNSDI) will do the same for over 30 UN Funds, Programmes, Specialized Agencies and member countries.
Software components
An SDI should enable the discovery and delivery of spatial data from a data repository, via a spatial service provider, to a user. As mentioned earlier, it is often desirable that the data provider be able to update spatial data stored in a repository. Hence, the basic software components of an SDI are:
Software client - to display, query, and analyse spatial data (this could be a browser or a desktop GIS)
Catalogue service - for the discovery, browsing, and querying of metadata or spatial services, spatial datasets and other resources
Spatial data service - allowing the delivery of the data via the Internet
Processing services - such as datum and projection transformations, or the transformation of cadastral survey observations and owner requests into Cadastral documentation
(Spatial) data repository - to store data, e.g., a spatial database
GIS software (client or desktop) - to create and update spatial data
Besides these software components, a range of (international) technical standards are necessary that allow interaction between the different software components. Among those are geospatial standards defined by the Open Geospatial Consortium (e.g., OGC WMS, WFS, GML, etc.) and ISO (e.g., ISO 19115) for the delivery of maps, vector and raster data, but also data format and internet transfer standards by W3C consortium.
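As an illustration of how a software client talks to a spatial data service through these standards, the Python sketch below issues an OGC WMS 1.3.0 GetMap request using the requests library; the endpoint URL, layer name and bounding box are hypothetical placeholders rather than references to any real SDI node.

```python
import requests  # third-party HTTP client, assumed to be installed

# Hypothetical WMS endpoint of an SDI node; replace with a real service URL.
WMS_URL = "https://sdi.example.org/geoserver/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "example:administrative_boundaries",  # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",              # WMS 1.3.0 uses CRS (older 1.1.1 used SRS)
    "BBOX": "35.0,-10.0,60.0,30.0",  # min lat, min lon, max lat, max lon (1.3.0 axis order for EPSG:4326)
    "WIDTH": 1024,
    "HEIGHT": 640,
    "FORMAT": "image/png",
}

# A REQUEST=GetCapabilities call against the same URL would list the layers the service offers.
response = requests.get(WMS_URL, params=params, timeout=30)
response.raise_for_status()

with open("map.png", "wb") as fh:
    fh.write(response.content)
```

A catalogue service would typically be queried first (for example via the OGC Catalogue Service for the Web, CSW) to discover which endpoints, layers and metadata records exist before such a map request is issued.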
National spatial data infrastructures
List by country or administrative zone. It is not complete; it is a sample of NSDIs with a stable official website.
See also
GeoSUR
GEOSS
GMES
INSPIRE
UNSDI
GIS file formats
GIS software
External links
The INSPIRE Directive: a brief description (JRC Audiovisuals)
GSDI 11 World Conference: The Geo-Spatial event of 2009, Rotterdam The Netherlands
International Cartographic Association (ICA), the world body for mapping and GIScience professionals
Global Spatial Data Infrastructure (GSDI) Association
Links to SDI initiatives from the GSDI Association website
The Netherlands Coordination Office of UNSDI (UNSDI-NCO)
The GeoNetwork portal of UNSDI-NCO (with over 17,800 metadata sets)
Laboratory of Geo-Information Science and Remote Sensing
SNIG - Portuguese National System for Geographic Information
SDI-related journals
International Journal of Spatial Data Infrastructure Research
SDI-related books
The SDI Cookbook from the Global Spatial Data Infrastructure (GSDI) Association
Research and Theory in Advancing Spatial Data Infrastructure Concepts
GIS Worlds: Creating Spatial Data Infrastructures
Building European Spatial Data Infrastructures
SDI software
geOrchestra is a free, modular and interoperable Spatial Data Infrastructure solution that includes components such as GeoNetwork, GeoServer and GeoWebCache,
GeoNetwork is a free and open source (FOSS) cataloging application for spatially referenced resources,
GeoNode is a web-based application and platform for developing geospatial information systems (GIS) and for deploying spatial data infrastructures (SDI),
OpenSDI includes Open Source components like GeoServer and GeoNetwork,
easySDI is a complete web-based platform for deploying any geoportal.
Geoportal Server is an open source solution for building SDI models where a central SDI node is populated with content from distributed nodes, as well as SDI models where each node participates equally in a federated mode. It requires a closed-source solution to operate.
References
Geographic data and information regulation
Spatial analysis
IT infrastructure |
23347531 | https://en.wikipedia.org/wiki/Trinity%20College%20Dublin%20American%20Football | Trinity College Dublin American Football | Trinity College Dublin American Football (competing as Trinity College; formerly known as the Trinity Thunderbolts) is the American Football team of Trinity College Dublin.
First established as the Gridiron Society in 1993, a competitive team was formed in 2008. The team's first season was played in the Irish American Football League (IAFL) Development League. Since then they have competed in the Shamrock Bowl Conference (SBC) South.
History
Dublin University Gridiron Society
The club was first established in the early nineties as the Dublin University Gridiron Society. It went on to play flag football under the name Trinity Thunderbolts for one season but soon folded due to lack of numbers and interest.
Trinity Thunderbolts
2008 season
In 2007, the club was reinstated by new club captain Conor O'Shea, once again as the Trinity Thunderbolts. IAFL commissioner Darrin O'Toole was recruited as coach. In their first year of full contact football, the Thunderbolts formed part of the new IAFL Development League (also known as the DV-8 league). Their season began with successive losses against the IAFL's oldest team, the Craigavon Cowboys. Another loss followed against the Dublin Rebels, the eventual league winners. The team then scored what would be its only victory of the season against the Dublin Dragons, in a 51–20 blowout. The Thunderbolts then lost to the Cork Admirals and the Rebels again to leave them at 1–5.
Dublin University American Football Club
2009 season
Following the conclusion of the 2008 season, the team undertook moves to become an official Trinity College club, as required by college rules, renaming itself the Dublin University American Football Club. After the departure of Darrin O'Toole, injured captain Conor O'Shea became head coach. The club went on a recruiting drive to increase its squad size, looking to improve on its 1–5 record.
The moves proved successful, as in the club's first match of the 2009 season, it defeated the Edenderry Soldiers on a scoreline of 56–0, which would prove to be both the highest margin of victory in the league that season, and the first of seven shutout victories for Trinity. Trinity's only loss of the season came in their second game, against the UCD Sentinels. They followed this quickly with two victories against the Sentinels, followed by victories against the Craigavon Cowboys, the Dublin Dragons, and two against the Erris Rams, to leave the team with an unassailable record of seven victories and one loss.
2010 season
In 2010, Darrin O'Toole returned as head coach, and the club looked to continue its success in the DV-8 league by moving into the main IAFL league. Trinity was placed in the IAFL Central division, alongside collegiate rivals the DCU Saints, the West Dublin Rhinos, and the league's most successful team, the Dublin Rebels. Despite a strong showing in early season games, including inaugural victories over the DCU Saints and the Belfast Trojans, the club suffered from poor form during the middle of the season, culminating in a 36–6 loss to the Saints in what would transpire to be their last competitive game before folding the following season.
2011 season
2011 began brightly for the club, as they captured the IAFL College Bowl for the first time in spectacular fashion. Against the UL Vikings in Limerick, the game finished 12–6 after triple overtime (7 quarters) and 3 hours and 45 minutes of play, an IAFL record for match length. The victory was sealed in the third quarter of overtime with a touchdown from Rob McDowell. Linebacker Stephen Carton was named game MVP. This form continued into the full IAFL league, as the club recorded three victories in its opening four games: against UCD, the Dublin Dragons, and the West Dublin Rhinos. However, the season would end poorly: the club was forced to withdraw from the league in early June due to a lack of players (the college's term having finished in May, many students became unavailable after this time). Despite forfeiting three games, running back Rob McDowell would go on to be named the 2011 IAFL league MVP.
2012 season
Trinity hosted a preseason charity game in November against rivals UCD, with Trinity coming out on top 7–0 thanks to a Rob McDowell touchdown. More importantly, the game ensured that children in Temple Street hospital received presents donated by both sets of fans and players. Competing in the newly formed IAFL South division, following the restructuring of the IAFL's divisional system, Trinity was set to embark on its 2012 campaign with early match-ups against the top three seeded teams in the country.
In their season opener, Trinity endured a crushing 55–8 defeat at the hands of the second-ranked UL Vikings. This was immediately followed by a 52–18 loss to the reigning league champions, the Dublin Rebels. Having begun the season 0–2, Trinity sought to redeem their poor start with a win against the number three-seeded Carrickfergus Knights. Trinity sprang to an early 12-point lead and would hold on to beat the Knights 19–16. Trinity went on to dominate the rest of their season, ending 6–2 and setting various offensive records. The club concluded its season with an exhibition game on 2 June against the New England Ironmen, a collection of graduating seniors from a number of NCAA Division III football programs located mainly in the New England region, including Endicott College, UMass Dartmouth, Juniata College, Curry College, and Worcester State University.
2013 season
Trinity competed in the South division of the then newly formed Shamrock Bowl Conference (SBC) in the 2013 season.
Captains
The captain of the club is the team's central figure. He, along with the committee, decides team policy and procedure according to the club's constitution and previous precedents. The committee appoints the team's Head Coach and delegates football-related matters in the club to him. The current captain is Rory O'Dwyer.
Records
2014 season
Winning all eight regular season games, Trinity won the south division of the Shamrock Bowl Conference. En route to their undefeated year, Trinity's offence set league records in nearly every category. As well, Trinity had three players leading in the Shamrock Bowl Conference individual statistical categories in the 2014 season (QB Dan Finnamore, WR Daniel Murphy and K Conor McGinn). On 25 May 2014 they defeated the Belfast Trojans 18–0, bringing an end to their two-year unbeaten streak. After the regular season they matched up with the Dublin Rebels in the semi-finals of the playoffs. Trinity got the best of the Rebels with a dominant 41–8 win. The students then faced the Belfast Trojans once again in Shamrock Bowl XXVIII. In a rain-soaked matchup in Tallaght Stadium, the Trojans eked out a 7–0 win to capture their third Championship in a row.
College Bowl
Regular season
Standings
Shamrock Bowl Playoffs
Trinity received a first-round bye in the playoffs and matched up against the Dublin Rebels in the semi-finals. After a commanding win over the Rebels, Trinity went on to play in Shamrock Bowl XXVIII against the Belfast Trojans. In a rain-soaked Tallaght Stadium, Belfast capitalised on a crucial Trinity turnover for the only score, and their second Shamrock Bowl win in a row.
2015 season
Trinity followed its undefeated campaign with a 7–1 record. They lost the first game in a slow, muddy affair at UL. Following that they got a pair of unconventional wins, including a scrappy 21-point comeback win at the Dublin Rebels. In a truly memorable game, the Rebels became the only IAFL team in history to squander such a large lead; Trinity drove to score the winning touchdown and capped it off with a successful 2-point conversion to WR Alex Gurnee. Their season rolled on much like their previous undefeated season, with a defence that began firing on all cylinders to match their high-powered offence. A game against the Dublin Rhinos involved a jersey mishap in which Trinity wore the Rhinos' jerseys for the duration of their win. Some matches this season also took place at ALSAA due to scheduling issues. Trinity advanced to the playoffs, where they once again matched up with the Dublin Rebels. During the rainy match Trinity's offence couldn't be stopped, and they advanced to Shamrock Bowl XXIX with a commanding 22–0 win over the rival Rebels. In the Dalymount Park rematch of the previous bowl game, Trinity was stopped by the best team in Ireland: the Trojans beat the students 28–14, earning their fourth Championship in a row.
College Bowl
Regular season
Standings
Shamrock Bowl Playoffs
Trinity received a first-round bye in the playoffs and matched up in a rematch of the regular season classic with the Dublin Rebels. In the away match at a neutral venue, Trinity held the high-powered Rebels to a shutout. Advancing to the Shamrock Bowl, Trinity played a rematch of last year's Shamrock Bowl against the Belfast Trojans.
2016 season
On the back of a second Shamrock Bowl loss, Trinity's 2016 season never got off the ground: a paltry 2–6 record was all the students had to show. Coming into the campaign, Trinity lost a core part of the team to the departure abroad of Offensive Coordinator Craig Marron, Offensive Weapon Rob McDowell, OL Kieran Coughlan, QB Dan Finnamore, and WR Alex Gurnee. Beleaguered, Trinity went into the season trying out different offensive combinations, none of which provided the week-in, week-out stability of previous years. The season began in November with an 8–6 loss to UL in the College Championships. Beginning the SBC season, Trinity lost away to UCD 22–0. They rebounded the next week against Craigavon at home. A close home loss to UL followed. Then an ugly loss away to North Kildare, in a game overshadowed by penalties, made for a 1–3 record at the halfway point. The students regained some players from key injuries and recorded a 31–0 shutout win in a home rematch with North Kildare. It would be their last win of a season marred by inconsistent play on both sides of the ball. Trinity nevertheless faced a strong UL team in the first round of the playoffs and lost 7–6 on a missed extra point, ending their 2016 season. Emerging from the 2–6 season was Conor O'Dwyer, who, with a strong season at Tight End, was named to the Irish National Team. Trinity also played an international fixture against the visiting NYPD Finest Football team.
College Bowl
Regular season
Standings
Shamrock Bowl Playoffs
Trinity received a Wild Card spot in the playoffs against UL Vikings. In the away match, RB Ola Bademosi scored the lone touchdown, but a missed extra point led to a Vikings win, and the end of Trinity's season.
Overall records
Note: W = Wins, L = Losses, T = Ties
References
External links
1993 establishments in Ireland
American football teams in County Dublin
American football teams in the Republic of Ireland
Sport at Trinity College Dublin
American football teams established in 2008 |
38279132 | https://en.wikipedia.org/wiki/Convergent%20encryption | Convergent encryption | Convergent encryption, also known as content hash keying, is a cryptosystem that produces identical ciphertext from identical plaintext files. This has applications in cloud computing to remove duplicate files from storage without the provider having access to the encryption keys. The combination of deduplication and convergent encryption was described in a backup system patent filed by Stac Electronics in 1995. This combination has been used by Farsite, Permabit, Freenet, MojoNation, GNUnet, flud, and the Tahoe Least-Authority File Store.
The system gained additional visibility in 2011 when cloud storage provider Bitcasa announced they were using convergent encryption to enable de-duplication of data in their cloud storage service.
Overview
The system computes a cryptographic hash of the plaintext in question.
The system then encrypts the plaintext by using the hash as a key.
Finally, the hash itself is stored, encrypted with a key chosen by the user.
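These steps can be sketched in a few lines of Python. The example below is a minimal illustration that assumes SHA-256 as the hash and AES-256-GCM (from the third-party cryptography package) as the cipher, with the nonce also derived from the content so that the output stays deterministic; real systems differ in their choice of primitives and in how the per-file key is wrapped, and the third step (encrypting the content key under each user's own key) is only indicated in a comment.

```python
import hashlib

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package


def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Return (content_key, ciphertext); identical plaintexts give identical output."""
    content_key = hashlib.sha256(plaintext).digest()                  # step 1: hash of the plaintext
    nonce = hashlib.sha256(b"nonce" + content_key).digest()[:12]      # deterministic nonce derived from the key
    ciphertext = AESGCM(content_key).encrypt(nonce, plaintext, None)  # step 2: encrypt under the hash
    return content_key, ciphertext


def convergent_decrypt(content_key: bytes, ciphertext: bytes) -> bytes:
    nonce = hashlib.sha256(b"nonce" + content_key).digest()[:12]
    return AESGCM(content_key).decrypt(nonce, ciphertext, None)


if __name__ == "__main__":
    key_a, ct_a = convergent_encrypt(b"identical file contents")
    key_b, ct_b = convergent_encrypt(b"identical file contents")
    assert ct_a == ct_b                 # deduplicable: two users produce the same ciphertext
    assert convergent_decrypt(key_a, ct_a) == b"identical file contents"
    # Step 3 (not shown): each user stores content_key encrypted under a personal key.
```

The determinism that makes deduplication possible is also exactly what the attacks described below exploit: anyone who can guess or enumerate the plaintext can reproduce the ciphertext and compare.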
Known attacks
Convergent encryption is open to a "confirmation of a file" attack, in which an attacker can effectively confirm whether a target possesses a certain file by encrypting an unencrypted, or plain-text, version and then simply comparing the output with files possessed by the target. This attack poses a problem for a user storing information that is non-unique, i.e. either publicly available or already held by the adversary, for example banned books or files that cause copyright infringement. An argument could be made that a confirmation-of-a-file attack is rendered less effective by adding a unique piece of data, such as a few random characters, to the plain text before encryption; this causes the uploaded file to be unique and therefore results in a unique encrypted file. However, some implementations of convergent encryption break the plain text into blocks based on file content and convergently encrypt each block independently, which may inadvertently defeat attempts at making the file unique by adding bytes at the beginning or end.
Even more serious than the confirmation attack is the "learn the remaining information" attack described by Drew Perttula in 2008. This type of attack applies to the encryption of files that are only slight variations of a public document. For example, if the defender encrypts a bank form that includes a ten-digit bank account number, an attacker who knows the generic bank form format may extract the defender's bank account number by producing bank forms for all possible account numbers, encrypting them, and then comparing those encryptions with the defender's encrypted file to deduce the account number. Note that this attack can be extended to a large number of targets at once (all spelling variations of a target bank customer in the example above, or even all potential bank customers), and the problem extends to any type of form document: tax returns, financial documents, healthcare forms, employment forms, etc. There is no known method for decreasing the severity of this attack: adding a few random bytes to files as they are stored does not help, since those bytes can likewise be attacked with the "learn the remaining information" approach. The only effective mitigation is to encrypt the contents of files with a non-convergent secret before storing them (negating any benefit of convergent encryption), or simply not to use convergent encryption in the first place.
See also
Salt (cryptography)
Deterministic encryption
References
Cryptography |
47825879 | https://en.wikipedia.org/wiki/Vacuum-tube%20computer | Vacuum-tube computer | A vacuum-tube computer, now termed a first-generation computer, is a computer that uses vacuum tubes for logic circuitry. Although superseded by second-generation, transistorized computers, vacuum-tube computers continued to be built into the 1960s. These computers were mostly one-of-a-kind designs.
Development
The use of cross-coupled vacuum-tube amplifiers to produce a train of pulses was described by Eccles and Jordan in 1918. This circuit became the basis of the flip-flop, a circuit with two states that became the fundamental element of electronic binary digital computers.
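The two-state behaviour of such a circuit can be illustrated with a simple logic-level model. The Python sketch below simulates a pair of cross-coupled NOR gates (a set-reset latch) rather than the actual vacuum-tube amplifier circuit; the function names and the settling loop are illustrative assumptions, not a description of any historical implementation.

```python
def nor(a: int, b: int) -> int:
    """NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1


def settle_latch(s: int, r: int, q: int = 0, q_bar: int = 1) -> tuple[int, int]:
    """Iterate a cross-coupled NOR pair (a set-reset latch) until its outputs stabilise."""
    for _ in range(4):  # the feedback loop settles within a few passes
        q_next = nor(r, q_bar)
        q_bar_next = nor(s, q_next)
        if (q_next, q_bar_next) == (q, q_bar):
            break
        q, q_bar = q_next, q_bar_next
    return q, q_bar


if __name__ == "__main__":
    q, q_bar = settle_latch(s=1, r=0)                      # "set" pulse drives Q high
    print("after set  :", q, q_bar)                        # -> 1 0
    q, q_bar = settle_latch(s=0, r=0, q=q, q_bar=q_bar)    # inputs removed: the state is held
    print("holding    :", q, q_bar)                        # -> 1 0
    q, q_bar = settle_latch(s=0, r=1, q=q, q_bar=q_bar)    # "reset" pulse drives Q low
    print("after reset:", q, q_bar)                        # -> 0 1
```

The two stable output patterns correspond to a stored 0 and 1; the Eccles–Jordan circuit realises the same feedback with two triode amplifier stages instead of logic gates.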
The Atanasoff–Berry computer, a prototype of which was first demonstrated in 1939, is now credited as the first vacuum-tube computer. However, it was not a general-purpose computer, being able to only solve a system of linear equations, and was also not very reliable.
During World War II, special-purpose vacuum-tube digital computers such as Colossus were used to break German and Japanese ciphers. The military intelligence gathered by these systems was essential to the Allied war effort. Each Colossus used between 1,600 and 2,400 vacuum tubes. The existence of the machine was kept secret, and the public was unaware of its application until the 1970s.
Also during the war, electro-mechanical binary computers were being developed by Konrad Zuse. The German military establishment did not prioritize computer development during the war. An experimental electronic computer circuit with around 100 tubes was developed in 1942, but it was destroyed in an air raid.
In the United States, work started on the ENIAC computer late in the Second World War. The machine was completed in 1945. Although one application which motivated its development was the production of firing tables for artillery, one of the first uses of ENIAC was to carry out calculations related to the development of a hydrogen bomb. ENIAC was programmed with plugboards and switches instead of an electronically stored program. A post-war series of lectures disclosing the design of ENIAC, and a report by John von Neumann on a foreseeable successor to ENIAC, First Draft of a Report on the EDVAC, were widely distributed and were influential in the design of post-war vacuum-tube computers.
The Ferranti Mark 1 (1951) is considered the first commercial vacuum tube computer. The first mass-produced computer was the IBM 650 (1953).
Design
Vacuum-tube technology required a great deal of electricity. The ENIAC computer (1946) had over 17,000 tubes and suffered a tube failure (which would take 15 minutes to locate) on average every two days. In operation the ENIAC consumed 150 kilowatts of power, of which 80 kilowatts were used for heating tubes, 45 kilowatts for DC power supplies, 20 kilowatts for ventilation blowers, and 5 kilowatts for punched-card auxiliary equipment.
Because the failure of any one of the thousands of tubes in a computer could result in errors, tube reliability was of high importance. Special quality tubes were built for computer service, with higher standards of materials, inspection and testing than standard receiving tubes.
One effect of digital operation that rarely appeared in analog circuits was cathode poisoning. Vacuum tubes that operated for extended intervals with no plate current would develop a high-resistivity layer on the cathodes, reducing the gain of the tube. Specially selected materials were required for computer tubes to prevent this effect. To avoid mechanical stresses associated with warming the tubes to operating temperature, often the tube heaters had their full operating voltage applied slowly, over a minute or more, to prevent stress-related fractures of the cathode heaters. Heater power could be left on during standby time for the machine, with high-voltage plate supplies switched off. Marginal testing was built into sub-systems of a vacuum-tube computer; by lowering plate or heater voltages and testing for proper operation, components at risk of early failure could be detected. To regulate all the power-supply voltages and prevent surges and dips from the power grid from affecting computer operation, power was derived from a motor-generator set that improved the stability and regulation of power-supply voltages.
Two broad types of logic circuits were used in construction of vacuum-tube computers. The "asynchronous", or direct, DC-coupled type used only resistors to connect between logic gates and within the gates themselves. Logic levels were represented by two widely separated voltages. In the "synchronous", or "dynamic pulse", type of logic, every stage was coupled by pulse networks such as transformers or capacitors. Each logic element had a "clock" pulse applied. Logic states were represented by the presence or absence of pulses during each clock interval. Asynchronous designs potentially could operate faster, but required more circuitry to protect against logic "races", as different logic paths would have different propagation time from input to stable output. Synchronous systems avoided this problem, but needed extra circuitry to distribute a clock signal, which might have several phases for each stage of the machine. Direct-coupled logic stages were somewhat sensitive to drift in component values or small leakage currents, but the binary nature of operation gave circuits considerable margin against malfunction due to drift. An example of a "pulse" (synchronous) computer was the MIT Whirlwind. The IAS computers (ILLIAC and others) used asynchronous, direct-coupled logic stages.
Tube computers primarily used triodes and pentodes as switching and amplifying elements. At least one specially designed gating tube had two control grids with similar characteristics, which allowed it to directly implement a two-input AND gate. Thyratrons were sometimes used, such as for driving I/O devices or to simplify design of latches and holding registers. Often vacuum-tube computers made extensive use of solid-state ("crystal") diodes to perform AND and OR logic functions, and only used vacuum tubes to amplify signals between stages or to construct elements such as flip-flops, counters, and registers. The solid-state diodes reduced the size and power consumption of the overall machine.
Memory technology
Early systems used a variety of memory technologies prior to finally settling on magnetic-core memory. The Atanasoff–Berry computer of 1942 stored numerical values as binary numbers in a revolving mechanical drum, with a special circuit to refresh this "dynamic" memory on every revolution. The war-time ENIAC could store 20 numbers, but the vacuum-tube registers used were too expensive to build to store more than a few numbers. A stored-program computer was out of reach until an economical form of memory could be developed. Maurice Wilkes built EDSAC in 1947, which had a mercury delay-line memory that could store 32 words of 17 bits each. Since the delay-line memory was inherently serially organized, the machine logic was bit-serial as well.
Mercury delay-line memory was used by J. Presper Eckert in the EDVAC and UNIVAC I. Eckert and John Mauchly received a patent for delay-line memory in 1953. Bits in a delay line are stored as sound waves in the medium, which travel at a constant rate. The UNIVAC I (1951) used seven memory units, each containing 18 columns of mercury, storing 120 bits each. This provided a memory of 1000 12-character words with an average access time of 300 microseconds. This memory subsystem formed its own walk-in room.
Williams tubes were the first true random-access memory device. The Williams tube displays a grid of dots on a cathode-ray tube (CRT), creating a small charge of static electricity over each dot. The charge at the location of each of the dots is read by a thin metal sheet just in front of the display. Frederic Calland Williams and Tom Kilburn applied for patents for the Williams tube in 1946. The Williams tube was much faster than the delay line, but suffered from reliability problems. The UNIVAC 1103 used 36 Williams tubes with a capacity of 1024 bits each, giving a total random access memory of 1024 words of 36 bits each. The access time for Williams-tube memory on the IBM 701 was 30 microseconds.
Magnetic drum memory was invented in 1932 by Gustav Tauschek in Austria. A drum consisted of a large rapidly rotating metal cylinder coated with a ferromagnetic recording material. Most drums had one or more rows of fixed read-write heads along the long axis of the drum for each track. The drum controller selected the proper head and waited for the data to appear under it as the drum turned. The IBM 650 had a drum memory of 1000 to 4000 10-digit words with an average access time of 2.5 milliseconds.
Magnetic-core memory was patented by An Wang in 1951. Core uses tiny magnetic ring cores, through which wires are threaded to write and read information. Each core represents one bit of information. The cores can be magnetized in two different ways (clockwise or counterclockwise), and the bit stored in a core is zero or one depending on that core's magnetization direction. The wires allow an individual core to be set to either a one or a zero and for its magnetization to be changed by sending appropriate electric current pulses through selected wires. Core memory offered random access and greater speed, in addition to much higher reliability. It was quickly put to use in computers such as the MIT/IBM Whirlwind, where an initial 1024 16-bit words of memory were installed replacing Williams tubes. Likewise the UNIVAC 1103 was upgraded to the 1103A in 1956, with core memory replacing Williams tubes. The core memory used on the 1103 had an access time of 10 microseconds.
See also
History of computing hardware
List of vacuum-tube computers
7AK7 vacuum tube
Stored-program computer
References
History of computing hardware |
3824630 | https://en.wikipedia.org/wiki/Paul%20Maritz | Paul Maritz | Paul Alistair Maritz (born March 16, 1955) is a computer scientist and software executive. He held positions at large companies including Microsoft and EMC Corporation. He currently serves as chairman of Pivotal Software.
Early life
Paul Maritz was born and raised in Rhodesia (now Zimbabwe). His family later moved to South Africa where he was schooled at Highbury Preparatory School and Hilton College. He received a B.Sc. in Computer Science from the University of Natal, and a B.Sc. (Hons) degree, also in Computer Science, from the University of Cape Town in 1977.
Career
After finishing his graduate studies, Maritz had a programming job with Burroughs Corporation and later became a researcher at the University of St. Andrews in Scotland, before moving to Silicon Valley in 1981 to join Intel. He worked for Intel for five years, including developing early tools to help developers write software for the then-new x86 platform, before joining Microsoft in 1986.
Microsoft
From 1986 to 2000, he worked at Microsoft and served on its executive committee. He became executive vice president of the Platforms Strategy and Developer Group and part of the 5-person executive management team. He was often said to be the third-ranking executive, behind Bill Gates and Steve Ballmer. He was responsible for essentially all of Microsoft's desktop and server software, including such major initiatives as the development of Windows 95, Windows NT, and Internet Explorer.
He was the highest-ranking executive to testify at the antitrust trial of Microsoft in 1999.
While at Microsoft, Maritz was credited with originating the term "eating your own dogfood" also known as dogfooding.
In July 1999, he announced he would have a reduced role at Microsoft, and resigned in September 2000 around the announcement of Windows ME.
According to Steve Ballmer, Maritz was "truly a leader among leaders". Bill Gates stated that "Paul's vision and technological insight has had a major impact not only on Microsoft but on the entire computer industry."
In October 2013, he was reported to again be under consideration to become chief executive of Microsoft, succeeding Ballmer.
Pi Corporation
He then co-founded, and was CEO of, Pi Corporation, a Warburg Pincus-backed company that developed software for Linux, with its development team based in Bangalore, India.
When Pi was acquired by EMC in February 2008, Maritz briefly became president and general manager of EMC Corporation's cloud computing division.
VMware
On July 8, 2008 he was appointed CEO of VMware (a public company majority-owned by EMC), replacing co-founder and CEO Diane Greene. While serving as CEO, company sales and profits tripled by mid-2012. He was succeeded as CEO by Pat Gelsinger on September 1, 2012.
GoPivotal
In April 2013, he was announced as the CEO of GoPivotal, Inc. (Pivotal), a venture funded by General Electric (GE), EMC and VMware which he led until August 2015.
After stepping down as CEO he announced that he would stay on at Pivotal as chairman and mentor other companies in which he has invested. He also planned to work with Mifos, an open-source financial services platform that targets developing countries.
Mifos
Maritz serves as Chairman of the Board of Mifos, an open source financial software platform. For some time he was the only source of financial support for the initiative.
Philanthropy
Maritz was an angel investor in Apture.
He is the chairman of the board of the Grameen Foundation, which provides microfinance support and sponsors third-world development projects.
Maritz is interested in wildlife issues and helps developing countries to use technology to improve life.
Recognition
In 2010, Paul Maritz was named by CRN Magazine the number one Most Influential Executive of 2010.
In 2011, Maritz won the Morgan Stanley Leadership Award for Global Commerce. As well in 2011, the Silicon Valley Business Journal announced Paul Maritz as the Executive of the Year.
References
External links
1955 births
Living people
Afrikaner people
White Rhodesian people
Zimbabwean people of Dutch descent
American people of Afrikaner descent
Zimbabwean emigrants to the United States
Microsoft employees
University of Cape Town alumni
Alumni of Hilton College (South Africa)
Academics of the University of St Andrews
American technology chief executives
American company founders
Computer programmers
American chairpersons of corporations
University of Natal alumni
Burroughs Corporation people
Rhodesian emigrants to South Africa |
8002258 | https://en.wikipedia.org/wiki/Network%20booting | Network booting | Network booting, shortened netboot, is the process of booting a computer from a network rather than a local drive. This method of booting can be used by routers, diskless workstations and centrally managed computers (thin clients) such as public computers at libraries and schools.
Network booting can be used to centralize management of disk storage, which supporters claim can result in reduced capital and maintenance costs. It can also be used in cluster computing, in which nodes may not have local disks.
In the late 1980s and early 1990s, network booting was used to save the expense of a disk drive, because a decently sized hard disk would still cost thousands of dollars, often equaling the price of the CPU.
Hardware support
Contemporary desktop personal computers generally provide an option to boot from the network in their BIOS/UEFI via the Preboot Execution Environment (PXE). Post-1998 PowerPC (G3–G5) Mac systems can also boot from their New World ROM firmware to a network disk via NetBoot. Older personal computers without network boot firmware support can use a floppy disk or flash drive containing software to boot from the network.
Process
The initial software to be run is loaded from a server on the network; for IP networks this is usually done using the Trivial File Transfer Protocol (TFTP). The server from which to load the initial software is usually found by broadcasting a Bootstrap Protocol or Dynamic Host Configuration Protocol (DHCP) request. Typically, this initial software is not a full image of the operating system to be loaded, but a small network boot manager program such as PXELINUX, which can display a boot option menu and then load the full image by invoking the corresponding second-stage bootloader.
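To make the TFTP step concrete, the Python sketch below implements a minimal read-only TFTP client following RFC 1350: it sends a read request (RRQ) and then acknowledges each 512-byte data block until a short block ends the transfer. The server address and boot file name are hypothetical placeholders, and a real PXE client would first perform the DHCP exchange, which is omitted here.

```python
import socket
import struct

RRQ, DATA, ACK, ERROR = 1, 3, 4, 5   # TFTP opcodes from RFC 1350
BLOCK_SIZE = 512


def tftp_read(server: str, filename: str, port: int = 69) -> bytes:
    """Fetch a file with a minimal TFTP read (RRQ/DATA/ACK) exchange."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)

    # Read request: opcode, filename, zero byte, transfer mode ("octet"), zero byte.
    rrq = struct.pack("!H", RRQ) + filename.encode("ascii") + b"\0octet\0"
    sock.sendto(rrq, (server, port))

    received = b""
    expected = 1
    while True:
        packet, addr = sock.recvfrom(4 + BLOCK_SIZE)   # the server answers from an ephemeral port
        opcode, number = struct.unpack("!HH", packet[:4])
        if opcode == ERROR:
            raise RuntimeError(f"TFTP error {number}: {packet[4:-1].decode(errors='replace')}")
        if opcode == DATA and number <= expected:
            sock.sendto(struct.pack("!HH", ACK, number), addr)  # acknowledge (or re-acknowledge) the block
            if number == expected:
                received += packet[4:]
                expected += 1
                if len(packet) - 4 < BLOCK_SIZE:       # a short block marks the end of the file
                    return received


if __name__ == "__main__":
    # Hypothetical addresses and file name; a real PXE client learns these from DHCP.
    boot_image = tftp_read("192.0.2.10", "pxelinux.0")
    print(f"received {len(boot_image)} bytes")
```

In a PXE environment the DHCP response supplies the boot server address and boot file name that are then fetched in exactly this way.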
Installations
Netbooting is also used for unattended operating system installations. In this case, a network-booted helper operating system is used as a platform to execute the script-driven, unattended installation of the intended operating system on the target machine. Implementations of this for Mac OS X and Windows exist as NetInstall and Windows Deployment Services, respectively.
Legacy
Before IP became the primary Layer 3 protocol, Novell's NetWare Core Protocol (NCP) and IBM's Remote Initial Program Load (RIPL) were widely used for network booting. Their client implementations also fit into smaller ROMs than PXE. Technically, network booting can be implemented over any file transfer or resource-sharing protocol; for example, NFS is preferred by BSD variants.
See also
Wake-on-LAN (WoL)
References
External links
PXE specification The Preboot Execution Environment specification v2.1 published by Intel & SystemSoft
Remote Boot Protocol Draft draft of the PXE Client/Server Protocol included in the PXE specification
NetBoot NetBoot 2.0: Boot Server Discovery Protocol (BSDP) |
29783050 | https://en.wikipedia.org/wiki/Visual%20Studio%20Lab%20Management | Visual Studio Lab Management | Visual Studio Lab Management is a software development tool developed by Microsoft for software testers to create and manage virtual environments. Lab Management extends the existing Visual Studio Application Lifecycle Management platform to enable an integrated Hyper-V based test lab.
Since Visual Studio 2012, Lab Management has shipped as part of the product and can be set up once Azure DevOps and SCVMM are integrated.
Virtual Environment
A virtual environment is a collection of virtual machines (VMs). Each Virtual Machine in a virtual environment represents a role required for the application that is to be tested, developed or run.
Lab Management can be used to start all the virtual machines in a virtual environment to run an application, or test an application. Lab Management uses System Center Virtual Machine Manager (SCVMM) to allow access to virtual machines or templates in a library as golden masters. These golden masters are created by using either Hyper-V or SCVMM. SCVMM is used to deploy the virtual machines and templates in the environments on the specified host group.
Using Lab Management for application lifecycle management
Visual Studio Lab Management is integrated with System Center Virtual Machine Manager (SCVMM) to enable management of multiple physical computers that host virtual machines and to manage the storage of virtual machines, virtual machine templates, and other configuration files in SCVMM library servers. It enables users to:
Reproduce the exact conditions of a bug or other development issue
Build, deploy, and test applications automatically in a clean environment
Reduce the time required to create and configure machines for testing an application
Run multiple copies of a test or development at the same time
Create and manage virtual environments without requiring system administrator privileges
References
External links
Visual Studio 2010 Lab management
Configuring and administering lab management
2005 software
2008 software
2010 software
Bug and issue tracking software
Build automation
Microsoft server technology
Lab Management
Proprietary version control systems
Unit testing frameworks |
379518 | https://en.wikipedia.org/wiki/Panoramic%20photography | Panoramic photography | Panoramic photography is a technique of photography, using specialized equipment or software, that captures images with horizontally elongated fields of view. It is sometimes known as wide format photography. The term has also been applied to a photograph that is cropped to a relatively wide aspect ratio, like the familiar letterbox format in wide-screen video.
While there is no formal division between "wide-angle" and "panoramic" photography, "wide-angle" normally refers to a type of lens, but using this lens type does not necessarily make an image a panorama. An image made with an ultra wide-angle fisheye lens covering the normal film frame of 1:1.33 is not automatically considered to be a panorama. An image showing a field of view approximating, or greater than, that of the human eye – about 160° by 75° – may be termed panoramic. This generally means it has an aspect ratio of 2:1 or larger, the image being at least twice as wide as it is high. The resulting images take the form of a wide strip. Some panoramic images have aspect ratios of 4:1 and sometimes 10:1, covering fields of view of up to 360 degrees. Both the aspect ratio and coverage of field are important factors in defining a true panoramic image.
Photo-finishers and manufacturers of Advanced Photo System (APS) cameras use the word "panoramic" to define any print format with a wide aspect ratio, not necessarily photos that encompass a large field of view.
History
The device of the panorama existed in painting, particularly in murals such as those found in Pompeii dating to as early as 20 A.D., as a means of generating an immersive 'panoptic' experience of a vista long before the advent of photography. In the century prior to photography, and from 1787 with the work of Robert Barker, the panorama reached a pinnacle of development in which whole buildings were constructed to house 360° panoramas, some even incorporating lighting effects and moving elements. Indeed, the career of one of the inventors of photography, Daguerre, began in the production of popular panoramas and dioramas.
The desire to create a detailed cityscape without a paintbrush inspired Friedrich von Martens, who made panoramic daguerreotypes using a special panoramic camera of his own construction. The camera could capture a broad view on a single daguerreotype plate, laying out a cityscape before the viewer in complete and vivid detail.
The development of panoramic cameras was a logical extension of the nineteenth-century fad for the panorama. One of the first recorded patents for a panoramic camera was submitted by Joseph Puchberger in Austria in 1843 for a hand-cranked, 150° field of view, 8-inch focal length camera that exposed a relatively large Daguerreotype plate. A more successful and technically superior panoramic camera was assembled the next year, 1844, by Friedrich von Martens in Germany. His camera, the Megaskop, used curved plates and added the crucial feature of set gears, offering a relatively steady panning speed. As a result, the camera properly exposed the photographic plate, avoiding the unsteady speeds that can create unevenness in exposure, called banding. Martens was employed by Lerebours, a photographer and publisher. It is also possible that Martens' camera was perfected before Puchberger patented his. Because of the high cost of materials and the technical difficulty of properly exposing the plates, Daguerreotype panoramas, especially those pieced together from several plates (see below), are rare.
After the advent of wet-plate collodion process, photographers would take anywhere from two to a dozen of the ensuing albumen prints and piece them together to form a panoramic image (see: Segmented). This photographic process was technically easier and far less expensive than Daguerreotypes. While William Stanley Jevons' wet-collodion Panorama of Port Jackson, New South Wales, from a high rock above Shell Cove, North Shore survived undiscovered until 1953 in his scrap-book of 1857, some of the most famous early panoramas were assembled this way by George N. Barnard, a photographer for the Union Army in the American Civil War in the 1860s. His work provided vast overviews of fortifications and terrain, much valued by engineers, generals, and artists alike. (see Photography and photographers of the American Civil War) In 1875, through remarkable effort, Bernard Otto Holtermann and Charles Bayliss coated twenty-three wet-plates measuring 56 by 46 centimetres to record a sweeping view of Sydney Harbour.
Following the invention of flexible film in 1888, panoramic photography was revolutionised. Dozens of cameras were marketed, many with brand names indicative of their era, such as the Pantascopic (1862), Cylindrograph survey camera (1884), Kodak Panoram (1899), Wonder Panoramic (1890), and Cyclo-Pan (1970). More portable and simpler to operate, and with the advantage of holding several panoramic views on one roll, these cameras were enthusiastically deployed around the turn of the century by photographers such as the American adventurer Melvin Vaniman, who popularised the medium in Australia, where it was taken up by both Pictorialist and postcard photographers such as Robert Vere Scott, Richard T. Maurice (1859–1909), H.H. Tilbrook (1884–1937), and Harry Phillips (1873–1944).
Panoramic cameras and methods
Stereo Cyclographe
The Stereo Cyclographe combined two fixed-focus panoramic cameras in one mahogany box. The lenses were eight centimeters apart, with an indicator between them to help the photographer set the camera level. A clock motor transported the nine-centimeter-wide film while turning the shaft that rotated the camera. The camera could make a 9 × 80 cm stereo pair that required a special viewer; these images were mostly used for mapping purposes.
Wonder Panoramic Camera
Made in 1890 in Berlin, Germany, by Rudolf Stirn, the Wonder Panoramic Camera relied on the photographer for its motive power: a string inside the camera, hanging through a hole in the tripod screw, was wound around a pulley inside the wooden box camera. To take a panoramic photo, the photographer swiveled the metal cap away from the lens to start the exposure. The rotation could be set for a full 360-degree view, producing an eighteen-inch-long negative.
Periphote
Built by Lumière Frères of Paris in 1901, the Periphote had a spring-wound clock motor that rotated the camera, while the inner drum held the roll of film and its take-up spool. Attached to the body was a 55 mm Jarret lens and a prism that directed the light through a half-millimeter-wide aperture onto the film.
Short rotation
Short rotation, rotating lens and swing lens cameras have a lens that rotates around the camera lens's rear nodal point and use a curved film plane. As the photograph is taken, the lens pivots around its rear nodal point while a slit exposes a vertical strip of film that is aligned with the axis of the lens. The exposure usually takes a fraction of a second. Typically, these cameras capture a field of view between 110° to 140° and an aspect ratio of 2:1 to 4:1. The images produced occupy between 1.5 and 3 times as much space on the negative as the standard 24 mm x 36 mm 35 mm frame.
Cameras of this type include the Widelux, Noblex, and the Horizon. These have a negative size of approximately 24x58 mm. The Russian "Spaceview FT-2", originally an artillery spotting camera, produced wider negatives, 12 exposures on a 36-exposure 35 mm film.
Short rotation cameras usually offer few shutter speeds and have poor focusing ability. Most models have a fixed focus lens, set to the hyperfocal distance of the maximum aperture of the lens, often at around 10 meters (30 ft). Photographers wishing to photograph closer subjects must use a small aperture to bring the foreground into focus, limiting the camera's use in low-light situations.
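For reference, the hyperfocal distance mentioned above follows from the standard formula H = f²/(N·c) + f. The short sketch below evaluates it for an illustrative 28 mm swing-lens setup; the aperture and circle-of-confusion values are assumptions chosen for illustration, not specifications of any particular camera.

```python
def hyperfocal_distance_mm(focal_length_mm: float, f_number: float, coc_mm: float) -> float:
    """Standard hyperfocal distance: H = f^2 / (N * c) + f, all lengths in millimetres."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# Illustrative values: a 28 mm lens wide open at f/2.8, with a 0.03 mm
# circle of confusion (a common assumption for the 35 mm format).
h_mm = hyperfocal_distance_mm(28.0, 2.8, 0.03)
print(f"Hyperfocal distance: {h_mm / 1000:.1f} m")   # about 9.4 m

# Focusing at H keeps everything from roughly H/2 (about 4.7 m) to infinity
# acceptably sharp, which is consistent with fixed-focus designs set to
# around 10 metres.
```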
Rotating lens cameras produce distortion of straight lines. This looks unusual because the image, which was captured from a sweeping, curved perspective, is being viewed flat. To view the image correctly, the viewer would have to produce a sufficiently large print and curve it identically to the curve of the film plane. This distortion can be reduced by using a swing-lens camera with a standard focal length lens. The FT-2 has a 50 mm lens, while most other 35 mm swing-lens cameras use a wide-angle lens, often 28 mm.
Similar distortion is seen in panoramas shot with digital cameras using in-camera stitching.
Full rotation
Rotating panoramic cameras, also called slit scan or scanning cameras are capable of 360° or greater degree of rotation. A clockwork or motorized mechanism rotates the camera continuously and pulls the film through the camera, so the motion of the film matches that of the image movement across the image plane. Exposure is made through a narrow slit. The central part of the image field produces a very sharp picture that is consistent across the frame.
Digital rotating line cameras image a 360° panorama line by line. Digital cameras in this style are the Panoscan and Eyescan. Analogue cameras include the Cirkut (1905), Leme (1962), Hulcherama (1979), Globuscope (1981), Seitz Roundshot (1988) and Lomography Spinner 360° (2010).
Fixed lens
Fixed lens cameras, also called flatback, wide view or wide field, have fixed lenses and a flat image plane. These are the most common form of panoramic camera and range from inexpensive APS cameras to sophisticated 6x17 cm and 6x24 cm medium format cameras. Panoramic cameras using sheet film are available in formats up to 10 x 24 inches. APS or 35 mm cameras produce cropped images in a panoramic aspect ratio using a small area of film. Specialized 35 mm or medium format fixed-lens panoramic cameras use wide field lenses to cover an extended length as well as the full height of the film to produce images with a greater image width than normal.
Pinhole cameras of a variety of constructions can be used to make panoramic images. A popular design is the 'oatmeal box', a vertical cylindrical container in which the pinhole is made in one side and the film or photographic paper is wrapped around the inside wall opposite, and extending almost right to the edge of, the pinhole. This generates an egg-shaped image with more than 180° view.
Because they expose the film in a single exposure, fixed lens cameras can be used with electronic flash, which would not work consistently with rotational panoramic cameras.
With a flat image plane, 90° is the widest field of view that can be captured in focus and without significant wide-angle distortion or vignetting. Lenses with an imaging angle approaching 120 degrees require a center filter to correct vignetting at the edges of the image. Lenses that capture angles of up to 180°, commonly known as fisheye lenses, exhibit extreme geometrical distortion but typically display less brightness falloff than rectilinear lenses.
Examples of this type of camera are: Taiyokoki Viscawide-16 ST-D (16 mm film), Siciliano Camera Works Pannaroma (35mm, 1987), Hasselblad X-Pan (35 mm, 1998), Linhof 612PC, Horseman SW612, Linhof Technorama 617, Tomiyama Art Panorama 617 and 624, and Fuji G617 and GX617 (Medium format (film)).
The panomorph lens provides a full hemispheric field of view with no blind spot, unlike catadioptric lenses.
Digital photography
Digital stitching of segmented panoramas
With digital photography, the most common method for producing panoramas is to take a series of pictures and stitch them together. There are two main types: the cylindrical panorama used primarily in stills photography and the spherical panorama used for virtual-reality images.
Segmented panoramas, also called stitched panoramas, are made by joining multiple photographs with slightly overlapping fields of view into a single panoramic image, using stitching software. Ideally, in order to stitch images together without parallax error, the camera must be rotated about the center of its lens's entrance pupil. Stitching software can correct some parallax errors, although programs vary in their ability to do so; dedicated panorama software is generally better at this than the stitching built into general photo-manipulation software.
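As a minimal illustration of segmented stitching, the sketch below uses OpenCV's high-level Stitcher API. The image filenames are placeholders, and a real workflow would usually also involve choices about projection and exposure compensation that this example leaves at the library defaults.

```python
import cv2

# Placeholder filenames for a left-to-right sequence of overlapping frames.
frames = [cv2.imread(name) for name in ("pan_01.jpg", "pan_02.jpg", "pan_03.jpg")]
if any(f is None for f in frames):
    raise SystemExit("could not read all input frames")

# The default (panorama) mode assumes the camera rotated about roughly its
# entrance pupil, which is what minimises the parallax errors discussed above.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:        # status 0 means success
    cv2.imwrite("panorama.jpg", panorama)
else:
    # Non-zero status codes indicate problems such as insufficient overlap.
    print(f"stitching failed with status {status}")
```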
In-camera stitching of panoramas
Some digital cameras can do the stitching internally, either as a standard feature or by installing a smartphone app.
Catadioptric cameras
Lens- and mirror-based (catadioptric) cameras consist of lenses and curved mirrors that reflect a 360-degree field of view into the lens' optics. The mirror shape and lens used are specifically chosen and arranged so that the camera maintains a single viewpoint. The single viewpoint means the complete panorama is effectively imaged or viewed from a single point in space. One can simply warp the acquired image into a cylindrical or spherical panorama. Even perspective views of smaller fields of view can be accurately computed.
The biggest advantage of catadioptric systems (panoramic mirror lenses) is that, because mirrors rather than lenses (such as fisheyes) are used to bend the light rays, the image has almost no chromatic aberration or distortion. The image, a reflection of the surroundings in the mirror, takes the form of a doughnut, which software then unwarps into a flat panoramic picture. Such software is normally supplied by the company that produces the system. Because the complete panorama is imaged at once, dynamic scenes can be captured without problems, and panoramic video can be captured as well; this has found applications in robotics and journalism. The mirror lens system uses only part of the digital camera's sensor, so some pixels are not used; a camera with a high pixel count is therefore generally recommended in order to maximize the resolution of the final image.
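A minimal sketch of the doughnut-to-strip unwarping step is shown below, using a plain polar-to-rectangular remap. The image name, mirror centre, and inner/outer radii are hypothetical values; in practice they would come from the lens maker's calibration data or the supplied software.

```python
import cv2
import numpy as np

img = cv2.imread("mirror_capture.jpg")          # hypothetical doughnut-shaped capture
cx, cy = 960.0, 960.0                           # assumed centre of the mirror image
r_inner, r_outer = 150.0, 900.0                 # assumed radii of the usable annulus

out_w = 2048                                    # samples around the full 360 degrees
out_h = int(r_outer - r_inner)                  # one output row per pixel of radius

# For each output pixel (column = angle, row = radius) compute where to
# sample the source doughnut image.
theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
radius = np.linspace(r_outer, r_inner, out_h)   # top of the strip = outer edge of the mirror
theta_grid, radius_grid = np.meshgrid(theta, radius)

map_x = (cx + radius_grid * np.cos(theta_grid)).astype(np.float32)
map_y = (cy + radius_grid * np.sin(theta_grid)).astype(np.float32)

panorama = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("unwrapped_panorama.jpg", panorama)
```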
There are even inexpensive add-on catadioptric lenses for smartphones, such as the GoPano micro and Kogeto Dot.
Artistic uses
Strip panoramas
Ed Ruscha's Every Building on the Sunset Strip (1966) was made by photographing building facades contiguously as seen from the back of a pickup truck traveling a 4 km length of the street. In the ironically 'deadpan' spirit of his work at the time, he published the work in strip form in a foldout book, intended to be viewed from one end or the other to see either side of 'The Strip' in correct orientation.
Preceding Ruscha's work, in 1954, Yoshikazu Suzuki produced an accordion-fold panorama of every building on Ginza Street, Tokyo in the Japanese architecture book Ginza, Kawaii, Ginza Haccho.
Joiners
Joiners (for which the terms panography and panograph have been used) is a photographic technique in which one picture is assembled from several overlapping photographs. This can be done manually with prints or by using digital image editing software and may resemble a wide-angle or panoramic view of a scene, similar in effect to segmented panoramic photography or image stitching. A joiner is distinct because the overlapping edges between adjacent pictures are not removed; the edge becomes part of the picture. 'Joiners' or 'panography' is thus a type of photomontage and a sub-set of collage.
Artist David Hockney is an early and important contributor to this technique. Through his fascination with human vision, his efforts to render a subjective view in his artworks resulted in the manual montaging of 10x15cm high-street-processed prints of (often several entire) 35mm films as a solution. He called the resulting cut-and-paste montages "joiners", and one of his most famous is "Pearblossom Highway", held by the Getty Museum. His group was called the "Hockney joiners", and he still paints and photographs joiners today.
Jan Dibbets' Dutch Mountain series (c.1971) relies on stitching of panoramic sequences to make a mountain of the Netherlands seaside.
Revivalists
In the 1970s and 1980s, a school of art photographers took up panoramic photography, inventing new cameras and using found and updated antique cameras to revive the format. The new panoramists included Kenneth Snelson, David Avison, Art Sinsabaugh, and Jim Alinder.
Digital stitching
Andreas Gursky frequently employs digital stitching in his large-format panoramic imagery.
See also
Anamorphic format
Cinerama
Hemispherical photography
Panorama portraits
Panoramic tripod head
Photo finish
Photo stitching software
Route panorama, a type of "parallel motion" or "linear" or "multi-viewpoint" panorama
Slit-scan photography
VR photography
References
Further reading
External links
A timeline of panoramic cameras 1843–1994
Stanford University CS 178 interactive Flash demo explaining the construction of cylindrical panoramas.
How to build a panoramic camera with intricate technical details and optical specifications for constructing a swing-lens panoramic camera.
A home-made panoramic head bracket for taking panoramic photographs.
IVRPA - The International VR Photography Association
Photography by genre |
12692065 | https://en.wikipedia.org/wiki/64th%20Air%20Division | 64th Air Division | The 64th Air Division is an inactive United States Air Force organization. Its last assignment was with Air Defense Command, being stationed at Stewart Air Force Base, New York. It was inactivated on 1 July 1963.
History
World War II
The organization was established during the early days of World War II as an air defense command and control wing assigned to First Air Force at Mitchel Field, New York.
By February 1943, it was clear that no German aircraft were heading to attack the East Coast, and the organization was realigned to become a command and control organization for Twelfth Air Force, engaged in combat as part of the North African Campaign. "The wing moved to North Africa in February 1943 and supported combat operations with a warning and control system, and, occasionally, augmenting the operations section of the XII Air Support Command in the Tunisian campaign."
"During the Sicilian and Italian campaigns (1943–1944), it administered fighter and fighter-bomber support to ground forces in a wide range of operations that included cover patrols, battle-area patrols, invasion coverage, escort missions, dive bombing missions, and reconnaissance. In Italy, the 64th directed close air support operations against enemy objectives in advance of Allied troops. Its primary targets included enemy gun positions, road junctions, traffic concentrations, assembly areas, bridges, and targets of opportunity."
"In August 1944 during the invasion of southern France, wing personnel, applying techniques developed in the invasion of Sicily and Italy, controlled air operations while aboard ships patrolling the assault beaches. With the landing of troops, a beachhead control unit directed aircraft to hit enemy strong points, ammunition dumps, troop concentrations, road intersections, supply lines, and communications. As Allied forces advanced northward along the Rhone valley, the wing implemented a plan to give more rapid support to the ground troops. Forward control units, equipped with the latest in air ground communications, directed sector air ground support. During the operations in France and Germany (1944–1945), the 64th continued to coordinate the close air-ground support of its fighter aircraft."
After the end of hostilities in May 1945, the wing served in the occupation of Germany as part of the XII Tactical Air Command, United States Air Forces in Europe. In Occupied Germany the wing performed many occupation duties such as destroying captured enemy aircraft, repairing roads, bridges and processing Prisoners of War. It also commanded combat units which were inactivating and sending their aircraft to storage, disposal or return to the United States. It was inactivated in Germany on 5 June 1947.
Cold War
Reactivated as an Air Division under Northeast Air Command (NEAC) at Pepperrell Air Force Base, Newfoundland in December 1952. NEAC had taken over the former Newfoundland Base Command atmospheric forces and ground air and radar stations in Newfoundland, Northeastern Canada and Greenland upon the former command's inactivation. The 64th Air Division was NEAC's command and control echelon of command over these assets.
"Its mission was the administration, training and providing air defense combat ready forces within its designated geographic area of responsibility, exercising command jurisdiction over its assigned units, installations, and facilities. In addition, the division and its subordinate units under its control participated in numerous exercises. NEAC was inactivated in April 1957, and its air defense mission was reassigned to Air Defense Command (ADC).
The 64th continued its operations under ADC at Pepperrell including the operational control of the Distant Early Warning Line (DEW Line) and Air Forces Iceland. In January 1960, it activated the Goose Air Defense Sector (Manual) at Goose Air Force Base. On 26 May 1960, the division headquarters moved from Newfoundland to Stewart Air Force Base, New York, when part of its mission was taken over by the 26th Air Division (SAGE) in a realignment of forces.
At Stewart it assumed the mission of training and providing air defense combat ready forces for the aerospace defense of a 6,000,000 square miles (16,000,000 km2) region of North America, including New Jersey, New York, New England north of Massachusetts, Eastern Canada, and atmospheric forces in Greenland.
The Division was inactivated in July 1963 with the phasedown of ADC at Stewart, its mission being taken over by First Air Force.
Lineage
Established as the 3d Air Defense Wing on 12 December 1942
Activated on 12 December 1942
Redesignated 64th Fighter Wing on 24 July 1943
Inactivated on 5 June 1947
Redesignated 64th Air Division (Defense) on 17 March 1952
Activated on 8 April 1952
Inactivated on 20 December 1952
Organized on 20 December 1952
Discontinued, and inactivated, on 1 July 1963
Assignments
I Fighter Command, 12 December 1942-c. 7 February 1943
Army Service Forces, Port of Embarkation, c. 7 February 1943
XII Fighter Command, 22 February 1943
XII Air Support Command (later XII Tactical Air Command), 9 March 1943 – 5 June 1947 (attached to First Tactical Air Force (Provisional), 27 November 1944 – May 1945)
Northeast Air Command, 8 April 1952
Air Defense Command, 1 April 1957 – 1 July 1963
Stations
Mitchel Field, New York, 12 December 1942 – 23 January 1943
Oran Es Sénia Airport, Algeria, 22 February 1943
Thelepte Airfield, Tunisia, 1 March 1943
Sbeitla, Tunisia, 18 March 1943
Le Sers Airfield, Tunisia, 12 April 1943
Korba Airfield, Tunisia, 18 May 1943
Ponte Olivo Airfield, Sicily, 12 July 1943
Milazzo Airfield, Sicily, 1 September 1943
Frattamaggiore, Italy, 7 October 1943
San Felice Circeo, Italy, 1 June 1944
Rocca di Papa, Italy, 7 June 1944
Orbetello, Italy 19 June 1944
Santa Maria Capua Vetere, Italy 19 July 1944
St Tropez, France, 15 August 1944
Dôle-Tavaux Airport (Y-7), France 19 September 1944
Ludres, France, 3 November 1944
Toul/Ochey Airfield (A-96), France, 15 January 1945
Edenkoben, Germany, 1 April 1945
Schwäbisch Hall, Germany, 29 April 1945
AAF Station Darmstadt/Griesheim, Germany, 7 July 1945
AAF Station Bad Kissingen, Germany, 1 December 1945 – 5 June 1947
Pepperrell Air Force Base, Newfoundland, 20 December 1952
Stewart Air Force Base, New York, 1 July 1960 – 1 July 1963
Components
World War II
Groups
27th Fighter Bomber Group (later 27th Fighter Group): c. 28 May 1943 – c. 22 October 1945; c. 13 August 1946 – 5 June 1947
31st Fighter Group: 1 September 1943 – 31 March 1944
33d Fighter Group: c. 9 March 1943 – 14 February 1944
36th Fighter Group: 15 November 1945 – 15 February 1946
50th Fighter Group: c. 29 September 1944 – 22 June 1945
52d Fighter Group: 9 November 1946 – 5 June 1947
69th Tactical Reconnaissance Group: c. 22 March – 30 June 1945
79th Fighter Group: 18 January – 29 September 1944
86th Fighter-Bomber Group (later 86th Fighter Group): c. 31 July – c. 31 December 1943; 10 March 1945 – c. 15 February 1946; 20 August 1946 – 5 June 1947
324th Fighter Group: 22 August 1943 – c. 5 March 1944; 30 April – 14 August 1945
354th Fighter Group: 4 July 1945 – c. 15 February 1946
355th Fighter Group: c. 15 April – 1 August 1946
358th Fighter Group: c. 30 May – 18 July 1945
363d Reconnaissance Group: 18 May – 20 November 1945
366th Fighter Group: 4 July 1945 – 20 August 1946
370th Fighter Group: 27 June – 17 September 1945
404th Fighter Group: 23 June – 2 August 1945
406th Fighter Group: 5 August 1945 – 20 August 1946
Squadrons
14th Liaison Squadron: 10 July 1946 – 1 May 1947
47th Liaison Squadron: 4 March 1946 – 1 May 1947
111th Reconnaissance Squadron: attached June – September 1943
155th Photographic Reconnaissance Squadron: 1 August – 24 November 1945
415th Night Fighter Squadron: Attached c. 3 September – 5 December 1943, assigned 5 December 1943 – 15 February 1946
416th Night Fighter Squadron: 15 August – 9 November 1946
417th Night Fighter Squadron: 24 March – 17 May 1945; 26 June 1945 – 9 November 1946
Cold War
Force
Air Forces Iceland
Keflavik Airport, Iceland, 1 July 1962 – 1 July 1963
Sector
Goose Air Defense Sector
Goose Air Force Base, Newfoundland, 1 April 1960 – 1 July 1963
Wings
4601st Support Wing, 1 October 1960 – 1 July 1963
Paramus, New Jersey
4602d Support Wing, 1 January 1961 – 1 July 1963
Ottawa, Ontario, Canada
4683d Air Defense Wing, 1 July 1960 – 1 July 1963
Thule Air Base, Greenland
4737th Air Base Wing (see 6604th Air Base Group)
6604th Air Base Wing (see 6604th Air Base Group)
6605th Air Base Wing (see 6602d Air Base Group)
6606th Air Base Wing (see 6603d Air Base Group)
6607th Air Base Wing (see 6612th Air Base Group)
Groups
4684th Air Base Group, 1 July 1960 – 1 July 1963
Sondrestrom Air Base, Greenland
4737th Air Base Group, Newfoundland, 1 May 1958 – 1 September 1960
Pepperrell Air Force Base, Newfoundland
4731st Air Defense Group, 1 April 1957 – 1 July 1960
Ernest Harmon Air Force Base, Newfoundland
4732d Air Defense Group, 1 April 1957 – 1 July 1960
Goose Air Force Base, Labrador
4733d Air Defense Group, 1 April 1957 – 1 May 1958
Frobisher Bay Air Force Base, Northwest Territories
4734th Air Defense Group, 1 April 1957 – 1 May 1958
Thule Air Base, Greenland, 1 April 1957 – 1 May 1958
6602d Air Base Group (later 6605th Air Base Wing), 8 April 1952 – 1 April 1957
Ernest Harmon Air Force Base, Newfoundland
6603d Air Base Group (later 6606th Air Base Wing), 8 April 1952 – 1 April 1957
Goose Air Force Base, Labrador
6604th Air Base Group (later 6604th Air Base Wing, 4737th Air Base Wing), 8 April 1952 – 1 May 1958
6611th Air Base Group, 8 April 1952 – 1 April 1957
Narsarsuaq Air Base, Greenland
6612th Air Base Group (later 6607th Air Base Wing), 8 April 1952 – 1 April 1957
6621st Air Base Group, 8 April 1952 – 1 April 1957
Sondrestrom Air Base, Greenland
6614th Air Transport Group, 8 April 1952 – 1 April 1957
Pepperrell Air Force Base, Newfoundland, 8 April 1952
Squadrons
59th Fighter-Interceptor Squadron
Goose Air Force Base, Labrador, 28 October 1952 – 31 December 1966
61st Fighter-Interceptor Squadron
Ernest Harmon Air Force Base, Newfoundland, 6 August 1953 – 17 October 1957
74th Fighter-Interceptor Squadron
Thule Air Base, Greenland, 20 August 1954 – 25 June 1958
318th Fighter-Interceptor Squadron
Thule Air Base, Greenland, 1 July 1953 – 5 August 1954
327th Fighter-Interceptor Squadron
Thule Air Base, Greenland, 3 July 1958 – 25 March 1960
105th Aircraft Control and Warning Squadron (Fed ANG) (NEAC)
Stephenville Air Station, Newfoundland, 8 April 1952 – 1 January 1953
639th Aircraft Control and Warning Squadron
Lowther AS, Ontario, 15 November 1958 – 1 April 1959
640th Aircraft Control and Warning Squadron, 1 January 1953 – 6 June 1960
Stephenville Air Station, Newfoundland, 8 April 1952
642d Aircraft Control and Warning Squadron
Red Cliff Air Station, Newfoundland, 1 January 1953 – 1 October 1961
920th Aircraft Control and Warning Squadron (NEAC)
Resolution Island Air Station, Northwest Territory, 19 January 1952 – 1 April 1957
921st Aircraft Control and Warning Squadron (NEAC)
Saint Anthony Air Station, Labrador, 1 October 1953 – 1 April 1957
923d Aircraft Control and Warning Squadron (NEAC)
Hopedale Air Station, Labrador, 1 October 1953 – 1 April 1957
924th Aircraft Control and Warning Squadron (NEAC)
Saglek Air Station, Labrador, 1 October 1953 – 1 April 1957
926th Aircraft Control and Warning Squadron (NEAC)
Frobisher Bay Air Base, Northwest Territory, 1 October 1953 – 1 April 1957
931st Aircraft Control and Warning Squadron
Thule Air Station, Greenland, 1 May 1958 – 1 July 1960
See also
List of United States Air Force air divisions
Aerospace Defense Command Fighter Squadrons
List of USAF Aerospace Defense Command General Surveillance Radar Stations
References
Notes
Explanatory notes
Citations
Bibliography
064
Aerospace Defense Command units |
7715782 | https://en.wikipedia.org/wiki/Monica%20S.%20Lam | Monica S. Lam | Monica Sin-Ling Lam is an American computer scientist. She is a professor in the Computer Science Department at Stanford University.
Professional biography
Monica Lam received a B.Sc. from University of British Columbia in 1980 and a Ph.D. in Computer Science from Carnegie Mellon University in 1987.
Lam joined the faculty of Computer Science at Stanford University in 1988. She has contributed to research on a wide range of computer systems topics, including compilers, program analysis, operating systems, security, computer architecture, and high-performance computing. More recently, she has been working on natural language processing and virtual assistants, with an emphasis on privacy protection. She is the faculty director of the Open Virtual Assistant Lab, which organized the first workshop for the World Wide Voice Web. The lab developed the open-source Almond voice assistant, which is sponsored by the National Science Foundation. Almond received Popular Science's Best of What's New award in 2019.
Previously, Lam led the SUIF (Stanford University Intermediate Format) Compiler project, which produced a widely used compiler infrastructure known for its locality optimizations and interprocedural parallelization. Many of the compiler techniques she developed have been adopted by industry. Her other research projects included the architecture and compiler for the CMU Warp machine, a systolic array of VLIW processors, and the Stanford DASH distributed shared memory machine. In 1998, she took a sabbatical leave from Stanford to help start Tensilica Inc., a company that specializes in configurable processor cores.
In another research project, her program analysis group developed a collection of tools for improving software security and reliability. They developed the first scalable context-sensitive inclusion-based pointer analysis and a freely available tool called BDDBDDB, which allows programmers to express context-sensitive analyses simply by writing Datalog queries. Other tools developed include Griffin, a static and dynamic analysis for finding security vulnerabilities such as SQL injection in Web applications; a static and dynamic program query language called QL; a static memory leak detector called Clouseau; a dynamic buffer overrun detector called CRED; and a dynamic error diagnosis tool called DIDUCE.
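To give a flavour of inclusion-based ("Andersen-style") points-to analysis, the sketch below solves set-inclusion constraints for a toy program by fixed-point iteration. It is a deliberately simplified, context-insensitive illustration, not the BDD-based, context-sensitive algorithm used in her group's tools.

```python
from collections import defaultdict

# Toy program, one statement per tuple:
#   ("addr",  p, a)  means  p = &a
#   ("copy",  p, q)  means  p = q
#   ("load",  p, q)  means  p = *q
#   ("store", p, q)  means  *p = q
stmts = [
    ("addr", "p", "a"),
    ("addr", "q", "b"),
    ("copy", "r", "p"),
    ("store", "r", "q"),   # *r = q  =>  a may point to b
    ("load", "s", "p"),    # s = *p  =>  s may point to b
]

pts = defaultdict(set)     # variable/object -> set of abstract objects it may point to

changed = True
while changed:             # iterate to a fixed point over the inclusion constraints
    changed = False
    for kind, x, y in stmts:
        if kind == "addr":
            new = {y}
        elif kind == "copy":
            new = pts[y]
        elif kind == "load":
            new = set().union(*(pts[o] for o in pts[y])) if pts[y] else set()
        else:  # store: everything y may point to flows into everything x points to
            for o in set(pts[x]):
                if not pts[y] <= pts[o]:
                    pts[o] |= pts[y]
                    changed = True
            continue
        if not new <= pts[x]:
            pts[x] |= new
            changed = True

print({v: sorted(s) for v, s in pts.items()})
# e.g. p -> [a], q -> [b], r -> [a], a -> [b], s -> [b]
```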
In the Collective project, she and her research group developed the concept of a livePC: subscribers to a livePC automatically run the latest published PC virtual image with each reboot. This approach allows computers to be managed scalably and securely. In 2005, the group started a company called MokaFive to transfer the technology to industry. She also directed the MobiSocial laboratory at Stanford, as part of the Programmable Open Mobile Internet 2020 initiative.
Lam is also the cofounder of Omlet, MobiSocial's first product, launched in 2014. Omlet is an open, decentralized social networking tool based on an extensible chat platform.
Lam chaired the ACM SIGPLAN Programming Languages Design and Implementation Conference in 2000, served on the Editorial Board of ACM Transactions on Computer Systems and numerous program committees for conferences on languages and compilers (PLDI, POPL), operating systems (SOSP), and computer architecture (ASPLOS, ISCA).
Awards
Lam has received the following awards and honors:
National Academy of Engineering member, 2019
University of British Columbia Computer Science 50th Anniversary Research Award, 2018
Fellow of the ACM, 2007
ACM Programming Language Design and Implementation Best Paper Award in 2004
ACM SIGSOFT Distinguished Paper Award in 2002
ACM Most Influential Programming Language Design and Implementation Paper Award in 2001
NSF Young Investigator award in 1992
Two of her papers were recognized in "20 Years of PLDI--a Selection (1979-1999)"
One of her papers was recognized in the "25 Years of the International Symposia on Computer Architecture", 1988.
Bibliography
Compilers: Principles, Techniques and Tools (2d Ed) (2006) (the "Dragon Book") by Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman ()
A Systolic Array Optimizing Compiler (1989) ()
Monica Lam, Dissertation
References
External links
Tree of Monica Lam's students
DBLP Publications Server
Monica Lam Current CV
Monica Lam interview on MokaFive's background
Living people
American computer scientists
Carnegie Mellon University alumni
Fellows of the Association for Computing Machinery
Programming language researchers
Stanford University School of Engineering faculty
University of British Columbia alumni
American women computer scientists
Year of birth missing (living people)
Natural language processing researchers |
20838574 | https://en.wikipedia.org/wiki/Smolt%20%28Linux%29 | Smolt (Linux) | Smolt was a computer program used to gather hardware information from computers running Linux, and submit them to a central server for statistical purposes, quality assurance and support. It was initiated by Fedora, with the release of Fedora 7, and soon after it was a combined effort of various Linux projects. Information collection was voluntary (opt-in) and anonymous. Smolt did not run automatically. It requested permission before uploading new data to the Smolt server. On October 10, 2012, it was announced that smolt would be discontinued on November 1, 2013. That is now in effect. The Smolt webpage is no longer available.
The project is superseded by Hardware probe.
General
Before Smolt there was no widely accepted system for assembling Linux statistics in one place. Smolt was neither the first nor the only attempt, but it was the first to be accepted by major Linux distributions.
Collecting this kind of data across distributions can:
aid developers in detecting hardware that is poorly supported
focus efforts on popular hardware
provide workaround and fix tips
help users to choose the best distribution for their hardware
convince hardware vendors to support Linux
Use
Smolt was included in:
Fedora
openSUSE, releases from 11.1 to 12.2;
RHEL and CentOS see https://web.archive.org/web/20090109010205/http://download.fedora.redhat.com/pub/epel/ (retired link)
Gentoo see https://web.archive.org/web/20090207100254/http://packages.gentoo.org/package/app-admin/smolt
MythTV see http://smolt.mythtv.org/
Smolt server
The Smolt server stored all collected data.
See also
Linux Counter
References
External links
Smolt wiki
Smolt retirement
openSUSE about Smolt
Linux
Discontinued software
Internet properties disestablished in 2013 |
55482609 | https://en.wikipedia.org/wiki/Government%20Post%20Graduate%20College%20Dargai | Government Post Graduate College Dargai | Government Post Graduate College Dargai Malakand is located in Dargai, Khyber Pakhtunkhwa, Pakistan. The college currently offers programs at Intermediate affiliated with Board of Intermediate and Secondary Education Malakand, Bachelor, Master and 4 years BS programs in various disciplines.
Overview & History
Government Post Graduate College Dargai Malakand is situated in Dargai, Malakand. The college was founded as an intermediate college in July 1977 and attained degree college status in August 1983. A Computer Science course was introduced in 2002.
The college campus has a total area of 36 acres. The latest additions to the college's teaching programs are four-year BS programs in Computer Science, English, Zoology, Botany, Chemistry, Physics, Mathematics, Urdu and Political Science.
Departments and Faculties
The college currently has the following faculties.
Faculty of Social Sciences
Department of Economics
Department of English
Department of Geography
Department of History
Department of Islamiyat
Department of Law
Department of Pakistan Studies
Department of Physical Education
Department of Political Science
Department of Pashto
Department of Urdu
Faculty of Biological Sciences
Department of Zoology
Department of Botany
Faculty of Physical Sciences
Department of Chemistry
Department of Computer Science
Department of Maths
Department of Statistics
Department of Physics
Academic Programs
The college currently offers the following programs.
Intermediate
This institution does not offer any program at the intermediate level.
College has been closed.
Master Level (2 years)
MSc – Mathematics
MSc - Chemistry
MA - English
BS Degrees (4 years)
BS Computer Science
BS English
BS Mathematics
BS Chemistry
BS Political Science
BS Economics
BS Physics
BS Statistics
BS Electronics
BS Zoology
BS Gender Studies
BS Urdu
See also
University of Malakand
Shaheed Benazir Bhutto University, Sheringal
External links
Government Post Graduate College Dargai Official Website
References
Public universities and colleges in Khyber Pakhtunkhwa
Malakand District |
16995601 | https://en.wikipedia.org/wiki/Wallander%20%28British%20TV%20series%29 | Wallander (British TV series) | Wallander is a British television series adapted from the Swedish novelist Henning Mankell's Kurt Wallander novels and starring Kenneth Branagh as the eponymous police inspector. It was the first time the Wallander novels have been adapted into an English-language production. Yellow Bird, a production company formed by Mankell, began negotiations with British companies to produce the adaptations in 2006. In 2007, Branagh met Mankell to discuss playing the role. Contracts were signed and work began on the films, adapted from the novels Sidetracked, Firewall and One Step Behind, in January 2008. Emmy-award-winning director Philip Martin was hired as lead director. Martin worked with cinematographer Anthony Dod Mantle to establish a visual style for the series.
The first three-episode series, produced by Yellow Bird, Left Bank Pictures and TKBC for BBC Scotland, was broadcast on BBC One from November to December 2008. The second series was filmed from July to October 2009 and was broadcast in January 2010. The third series was filmed in the summer of 2011 in Ystad, Scania, Sweden, and Riga, Latvia, and aired in July 2012. The fourth and final series was shot from October 2014 to January 2015 and premiered on German TV, dubbed into German, in December 2015. The final series aired in the original English on BBC One in May 2016. Critics have written positively of the series, which has won a Broadcasting Press Guild Award (Best Actor for Branagh) and six British Academy Television Awards, including Best Drama Series.
Characters
The series is based on Kurt Wallander (Branagh), a detective and police inspector in the small town of Ystad, Sweden. Branagh describes Wallander as "an existentialist who is questioning what life is about and why he does what he does every day, and for whom acts of violence never become normal. There is a level of empathy with the victims of crime that is almost impossible to contain, and one of the prices he pays for that sort of empathy is a personal life that is a kind of wasteland." In the novels, Wallander regularly listens to opera in his apartment and his car. This signature hobby has been dropped for this adaptation; producer Francis Hopkinson believes it would make Wallander too similar to Inspector Morse, whose love of opera is already familiar to British viewers. Branagh did not watch any of the Swedish Wallander films before playing the role, preferring to bring his own interpretation of the character to the screen.
Wallander's team at the Ystad police station is made up of: Anne-Britt Hoglund (Smart), Kalle Svedberg (Beard), and Magnus Martinsson (Hiddleston). Of Wallander and Hoglund, Smart said, "Our relationship is based on this impeccable mutual respect which is all very Scandinavian and, actually, more interesting to play." The team is joined at murder scenes by Nyberg (McCabe), a forensics expert. The team is overseen by Lisa Holgersson (Shimmin), Ystad's chief of police. Away from the police station, Wallander has a tempestuous relationship with his daughter Linda (Spark) and his father Povel (Warner), who Wallander discovers in Sidetracked has recently been diagnosed with Alzheimer's disease. Wallander's father spends his days sitting in an art studio, painting the same landscape repeatedly while in the care of his new wife Gertrude (Hemingway).
Cast
Kenneth Branagh as Kurt Wallander
Sarah Smart as Ann-Britt Hoglund (Series 1–3)
Tom Hiddleston as Magnus Martinsson (Series 1–2)
Richard McCabe as Sven Nyberg
Tom Beard as Kalle Svedberg (Series 1)
Sadie Shimmin as Lisa Holgersson (Series 1–2)
Jeany Spark as Linda Wallander, Kurt's daughter
David Warner as Povel Wallander, Kurt's father (Series 1–2, 4)
Polly Hemingway as Gertrude, Kurt's step-mother (Series 1–2)
Saskia Reeves as Vanja Andersson (Series 2–3)
Rebekah Staton as Kristyna (Series 3)
Mark Hadfield as Stefan Lindeman (Series 3)
Barnaby Kay as Lennart Mattson (Series 3–4)
Production
In 2006, Yellow Bird managing director Morten Fisker opened discussions with British production companies about developing English-language adaptations of the Kurt Wallander novels, to which Yellow Bird holds the distribution rights. The BBC and Channel 4 were believed to be involved in discussions; the BBC had already announced plans to adapt Mankell's The Return of the Dancing Master. Fisker wanted to bring a new detective to British screens to replace Inspector Morse, who had been killed off on-screen in 2000. Actors proposed to play Wallander were Trevor Eve, Neil Pearson, Jason Isaacs, David Morrissey, Clive Owen and Michael Gambon. Negotiations were still under way in 2007, when Kenneth Branagh met Henning Mankell at an Ingmar Bergman film festival and asked to play Wallander. Branagh had started reading the Wallander books "relatively late" but enjoyed them, and read all nine translated novels in a month. Mankell agreed to let Branagh play the role, and Branagh visited Ystad in December to scout for locations and meet Film i Skånes chief executive Ralf Ivarsson.
A series of three 90-minute adaptations was commissioned by BBC Scotland's Anne Mensah and BBC Controller of Fiction Jane Tranter in January 2008. Like Morten Fisker, the BBC wanted a returning series that would have the same audience appeal as Inspector Morse, Prime Suspect and Cracker. Yellow Bird was contracted as a co-producer, working with Left Bank Pictures, a production house formed in 2007 by former ITV Controller of Comedy, Drama and Film Andy Harries. Harries described Wallander as "more than just a detective series" and that it would be visually "very picture postcard". The first series consists of adaptations of Sidetracked, Firewall and One Step Behind. Philip Martin was hired as lead director of the series, and met with Branagh, Harries and Left Bank producer Francis Hopkinson in January. The four discussed how the adaptations would appear on screen, agreeing that the characterisations, atmosphere and ideas would be difficult to portray on screen. Richard Cottan was hired to adapt Mankell's novels, and delivered his first scripts in February. Cottan changed the plots of some of the books in order to fit them into a 90-minute adaptation, though made sure the scripts retained Wallander's "journey". The following month, Martin began discussions with cinematographer Anthony Dod Mantle about what visual style the films would have. They agreed to use the Red One digital camera to shoot on, which has a near-35 mm resolution and is not as expensive as 35 mm; Dod Mantle said that the BBC "has politics" about the cheaper 16 mm and Super 16. Casting of British actors, which was done in London, was completed by April, and the whole crew moved over to Ystad to begin rehearsals. Martin wanted the actors playing police officers to know how to fire a gun, so arranged for them to spend time at a firing range using live ammunition. Wallander’s distinctive mobile phone ringtone was specially composed by Lee Crichlow.
Series 1
A £6 million budget was originally assigned to the first series, which increased to £7.5 million. Half of that came from the BBC, and the rest from pre-sale co-production funding from American WGBH Boston and German ARD Degeto, and a tax deduction for filming in Sweden. ARD Degeto and WGBH are credited as co-producers for their budget contribution. Using scripts adapted by Richard Cottan and Richard McBrien, filming ran for 12 weeks from April to July 2008 in Wallander's hometown of Ystad, Sweden.
Location filming was principally set in Ystad. Interior sets were constructed at Ystad Studios under the supervision of Anders Olin, who also designed the sets of the Swedish Wallander films. The main police station set is 500 square metres, twice the size of Olin's previous sets. For exterior shots of the police station, a combination of the Ystad railway station and swimming pool was used. Mock-ups of Ystads Allehanda, a local newspaper, were produced as working props. Producer Simon Moseley explained that the mock-ups use Swedish words that can be understood by English-speaking audiences. Moseley also explained that some pronunciations of Swedish words are Anglicised (such as the pronunciation of "Ystad" and "Wallander"), as "the authentic local accent is very strange to English ears and we didn't want to stray into 'Allo 'Allo! territory". Like Branagh, Philip Martin did not watch any of the Swedish-language Wallander films so that he could bring a fresh interpretation to the films. Filming was scheduled for 66 days over 12 weeks in Sweden; each film would be shot back-to-back over 22 days. Martin directed the first and third films and Niall MacCormick directed the second. Dod Mantle was keen to conceive a good style for what could become a long-running series.
Filming on Sidetracked commenced on 14 April on location at a townhouse in Södra Änggatan, Ystad. The same week, filming was done at Häckeberga Castle near Genarp. Another castle was going to be used, but the deal fell through. The manager of Häckeberga Castle, which had been turned into a hotel, allowed filming to take place there on the night of 17 April, though guests had to be moved to stables for the night. Scenes set in the rapeseed field were filmed at Charlottenlund Mansion. Location scouts had been impressed with the look of the winter rapeseed. The team from Danish Special Effects had difficulty setting the field on fire. Using the Red One digital camera meant that rushes could be viewed on set, saving time on the already tight schedule. Martin and Dod Mantle believed that the Red captured the Swedish light well, so there was no need to use big lighting rigs. The cheaper filming option meant that the budget could be used on other things.
One Step Behind was filmed in May. The opening scene, featuring a multiple murder and burial in the woods, was filmed on location at the Hagestads nature reserve. A large hole was needed for the shallow grave, so Yellow Bird approached the local authority for permission. The request was granted on the same day as it was lodged, with the stipulation that the hole be filled in after filming. Niall MacCormick arrived in Sweden to film Firewall in June, concluding in the third week of July. Danish Special Effects also worked on body squibs, bullet hits and atmospheric effects. Their post-production work was completed in August. While the crew were in Sweden, editing was done at The Chimney Pot in Stockholm. Post-production was completed by The Farm in London. Martin Phipps composed the soundtrack to the series. A version of "Nostalgia" by Australian singer-songwriter Emily Barker is the opening theme. The three films of series 1 were broadcast on BBC One on 30 November, 7 December, and 14 December 2008 respectively.
Series 2
The production of three new films based on Faceless Killers, The Fifth Woman and The Man Who Smiled was confirmed by the BBC in May 2009 to start in the summer in Ystad. The BBC broadcast the series in January 2010. Richard Cottan wrote Faceless Killers and The Fifth Woman, while Simon Donald wrote The Man Who Smiled. Hettie MacDonald directed Faceless Killers, Andy Wilson handled The Man Who Smiled while Aisling Walsh directed The Fifth Woman. Photographer Igor Martinovic (director of photography on Man on Wire) worked with Macdonald and Wilson while Lukas Strebel, who won an Emmy in 2009 for Little Dorrit, was in charge of photography for The Fifth Woman.
The second series started shooting on 22 June 2009. The film crew consisted of slightly more Britons, as the Swedish-language films were still filming in the area until December 2009. Yellow Bird's Daniel Ahlqvist said, "It is quite special that we are doing two different Wallander productions at the same time. So it has been a little bit tougher to recruit competent personnel here in Skåne. We came to the conclusion that if we cannot get people from Skåne, we might as well bring in folks from the UK rather than Stockholm." The landscape of Skåne was a big part of the second series. Shooting started in the outskirts of Ystad, but a big scene in Ystad city square was planned. Scenes were also planned to be filmed at the summer residence that served as the home of Wallander's father. Faceless Killers was first in the shooting schedule, followed by The Fifth Woman and lastly The Man Who Smiled. As with Series 1, each episode was filmed over approximately 22–23 days, with just 3–5 days set aside for studio recording and the rest for location shooting. On 23 June, the film team spent all day in Simrishamn, a coastal town north east of Ystad. Scenes were shot at the local police station and in the town square. Production Manager Nina Sackmann explained that "the town was perfect for what we needed to convey with this film". On 21 July, the portion of road 1015 passing by the Karlsfält Farmland Estate north of Ystad was closed from 11 p.m. until midnight to accommodate the film crew.
On 18 August, closing scenes of The Fifth Woman, where Kurt Wallander is dragged away at gunpoint, were shot on location at Ystad railway station. On the right side of the railway track, this dramatic scene was being filmed and on the left side, commuters were exiting the train. About 40 metres away, the Swedish language Wallander film Vålnaden (The Ghost) was being filmed at the same time. Earlier in the week, scenes were shot at an old automobile repair and maintenance shop from 1928 in Hammenhög village. Part of the building had served as a flower shop when Mankell wrote The Fifth Woman and, since a murder victim is a flower shop owner, it was convenient to shoot in the now abandoned building.
Filming on The Man Who Smiled began at the beginning of September. Location production on the episode concluded on 2 October. The first couple of weeks featured location work outside of the swimming baths—which doubles as the exterior of the police station. For the last two weeks, production moved to locations around the countryside of Österlen. On Monday evening 14 September, the Ystad city square was closed off to film an important action scene from The Man Who Smiled where Kurt Wallander comes running across the square as a car explodes. The clear blue September sky caused problems with the lighting and they had to wait until the sun started to set.
Kenneth Branagh explained that the challenge for filming series one was to "create" the strange world of Ystad, in part as Henning Mankell saw it, in part as script writer Rick Cottan saw it, and then upon arrival to realise that the town looks different. "To get all these different visions to work together was a bit nervous last year. This year the pressure is to develop the style of this show and develop the characters, for example the other policemen at the station." Branagh claimed that there had been no problems shooting due to weather conditions except the last day of filming: "Henning Mankell often writes about the long Swedish summer rains, but during two years of filming we have not seen any of that. No wonder British tourists like to visit." He also stated that there is a possibility of a third series. "It all depends on how these new episodes are received, but I think I really would like to film more episodes. But we also need to feel that we have something more to offer, more to tell and that the scripts are good." Any filming on a third series would be postponed until 2011, to allow Branagh to work on Thor. Yellow Bird's Daniel Ahlqvist believes that The White Lioness's South African setting makes it difficult to film, and the post-Cold War plot of The Dogs of Riga is no longer relevant, but sees no reason why Before the Frost and some new story ideas, in the same vein as the original Yellow Bird films, could not be developed for the BBC.
Local politicians supported and invested 8,000,000 Swedish kronor (roughly £750,000) in the second Wallander series through Film i Skåne, a regional resource and production centre.
Series 2 features some interesting choices of actors for minor roles. Fredrik Gunnarsson features in Faceless Killers as Valfrid Strom; Gunnarsson appears in 17 episodes of Yellow Bird's Swedish-language TV series as uniformed police officer Svartman. Rune Bergman had a minor role in the Swedish-language adaptation of Faceless Killers and also featured in the TV film Luftslottet. Patrik Karlson featured in the Swedish-language adaptation of The Man Who Smiled as well as the TV film Mastermind. Bergman and Karlson have the distinction of appearing in films starring all three Kurt Wallander actors. Karin Bertling also appears in the English-language Faceless Killers and has previously worked on the Swedish-language TV film Before the Frost.
Series 3
The third series aired in July 2012. Screenwriter Peter Harness wrote the scripts for all three films that made up Series 3. Mankell worked closely with Harness on the scripts. "He is too busy to talk to me all the time. But we have met to discuss the material, so he is involved in what happens", Harness told Ystads Allehanda.
Hiddleston and Shimmin did not return for this series. Actress Rebekah Staton portrayed a new character, Kristina, in all three episodes. Mark Hadfield joined the cast as police officer Stefan Lindeman, one of the main characters in the first season of the Swedish Wallander TV series and the lead character in the Mankell novel The Return of the Dancing Master (a book that had already been filmed in Swedish and German versions). Barnaby Kay plays Lennart Mattson, Chief Holgerson's successor.
On 4 August 2011 it was made official that three new films were in production. Filming of The Dogs of Riga started in Latvia on 1 August at the Hotel Riga and concluded on 20 August; more scenes were shot in Ystad the following week. The film was directed by Esther May Campbell, with cinematography by Lukas Strebel, who had worked on the second Wallander series. The production tried to use as many Latvian actors as possible, but a problem arose because most had a very limited knowledge of English. Latvian actor Artūrs Skrastiņš was the only native actor to land a speaking role in the film, portraying Colonel Putnis. Romanian actor Dragos Bucur portrays Sergei Upitis, an investigative journalist. The film was partially funded by the Riga Film Fund and co-stars Lithuanian actress Ingeborga Dapkūnaitė.
On 10 August, several scenes were shot outside the Latvian Parliament and outside a building on Jēkaba street that was decorated with Swedish flags to stand in for the Swedish embassy in Riga. On 13 August, the city closed down several streets to accommodate the filming. On 16 August scenes were filmed at Riga Central Station. The national police cars used in this production had been equipped with stickers that said Rīgas pilsētas policijas (Riga City Police). These stickers covered up the usual coat of arms that Latvian police cars are decorated with; they were designed specifically for the film and are easily removed. Nothing on Latvian police cars specifies which city they serve.
On 22 August the film team returned to Sweden to film for one week. The shooting started at a football pitch in Kåseberga, which had been converted into a filming area. Producer Hillary Benson explained to the local press that once The Dogs of Riga had wrapped, the film team would be back in mid-October to start filming the other two episodes. The first two series were filmed in the summer; this time around the aim was to film in autumn and winter.
The other two films in the series are Before the Frost, based on the novel of the same name, and An Event in Autumn, based on "Händelse om hösten" (The Grave), a short story from 2004 published only in the Netherlands.
Before the Frost was directed by Charles Martin. Filming started in Ystad on 12 October 2011. The first days of shooting were devoted to stunts and scenes with an animal trainer, as Kenneth Branagh did not arrive until 17 October. Scenes were also shot at the Chemistry Hall at the Macklean School in Skurup Municipality. With the local firefighters on standby, a stunt man poured petrol over himself and then set himself alight; this three-minute film sequence took nine and a half hours to shoot. Filming began on Friday 14 October at 6 pm and wrapped at 3:30 am on Saturday morning. The film crew later came back at the end of October to shoot a scene in headmaster Christin Stigborg's office. From Tuesday 24 October until the end of the week, three streets in central Ystad (Lilla Norregatan, Stora Norregatan and Sladdergatan) had to be closed down for short periods to shoot several scenes.
Parts of the film were shot in the Snogeholm nature conservation area in Sjöbo Municipality. Filming took place for several days along the roads and at a parking area; these were mainly shots of the environment and nature of the conservation area and the Snogeholm lake, according to production manager Martin Ersgård.
An Event in Autumn was the last film. Filming started on 14 November and was directed by Toby Haynes. According to Yellow Bird producer Daniel Ahlqvist, An Event in Autumn is about how "Kurt tries to take charge of his own life by getting a new house but gets interrupted and is more or less forced back to his job".
On 21 and 23 October the crew was filming at an old small farm in the small village of Svarte, around the corner from the house where Wallander's father lived in the previous films. The small farmhouse is Wallander's new home, but the remains of a dead woman are found on the property. Due to time constraints, and unusually for a BBC production, all scenes were filmed with two cameras to provide more material for post-production and editing. The last week of shooting included filming some scenes in Germany.
With the previous two series, the Skåne Regional Council invested 7 and 8 million Swedish kronor respectively through its subsidiary Film i Skåne. For the third series, the Skåne Regional Council only wanted to invest 2 million kronor, but later agreed to support the production by other means, such as letting the BBC and Yellow Bird use Ystad Studios for free, worth about half a million Swedish kronor. The Ystad-Österlen film fund also invested 2 million Swedish kronor.
Series 4
On 8 October 2014, the BBC announced that principal photography of the final three-episode fourth series had started.
The first episode, The White Lioness, was written by James Dormer (Strike Back, Outcast) and directed by Benjamin Caron (Tommy Cooper: Not Like That, Like This, Skins, My Mad Fat Diary). Most of the book takes place in South Africa, and the episode was filmed in Cape Town in January 2015.
The final two installments in the Wallander series, A Lesson in Love and The Troubled Man, were written by Peter Harness, not Ronan Bennett as previously announced, were also directed by Benjamin Caron, and were adapted from the final Wallander novel, The Troubled Man. These two episodes were filmed on location in Skåne, Sweden, and Copenhagen, Denmark.
Returning cast include Jeany Spark as Linda Wallander, Richard McCabe as Nyberg, Barnaby Kay as Lennart Mattson, and Ingeborga Dapkunaite as Baiba Liepa.
Shooting took place at Ystad Studios, simultaneously with the third season of the Swedish-Danish crime drama The Bridge. The budget for the final season was 100 million Swedish kronor. The tax-funded entities Ystad-Österlens filmfond and Film i Skåne put three million Swedish kronor into the production, according to Sveriges Radio.
The new series was shot on several locations surrounding Ystad, including Mossbystrand, Östra Hoby, Vårhallen Beach, Tunbyholm Castle plus Blekinge Province and the Danish island of Zealand. On 30 October, several scenes were shot at the Norreportskolan, a local Ystad middle school. Several of the students participated as extras.
The final three episodes had their world première dubbed into German on the German network ARD, which co-produced them. They aired over three nights, on 25, 26 and 27 December 2015. In Poland, the episodes aired on Ale Kino+ on 11, 18 and 25 March 2016. They made their English-language première on BBC UKTV New Zealand on 11 April. In the US, 80-minute re-edited versions of the episodes aired as "Wallander, The Final Season" on the PBS anthology series Masterpiece Mystery! on 8, 15 and 22 May.
BBC One broadcast the full 89-minute episodes in the UK beginning on 22 May 2016.
Broadcast
A public screening of Sidetracked was given by the British Academy of Film and Television Arts on 10 November 2008, and was followed by a question-and-answer session with Philip Martin and Kenneth Branagh. A gala premiere of Sidetracked was held in Ystad on 23 November, a week before it was broadcast in Britain. Sidetracked's first British broadcast came on BBC One on 30 November, followed by Firewall on 7 December and One Step Behind on 14 December. Episodes were simulcast on BBC HD. BBC Four broadcast programmes and films to complement the series; the schedule included a documentary by John Harvey entitled Who is Kurt Wallander, as well as the Swedish adaptation of the Linda Wallander novel Before the Frost, and Mastermind, an installment of Mankell's Wallander film series starring Krister Henriksson.
The series was sold to 14 countries and territories across the world, including TV4 in Sweden, TV2 in Norway, DR in Denmark, MTV3 in Finland, Arte in France, Canada, Slovenia, Australia, Poland, and Lumiere Benelux and Svensk Film for its pan-Scandinavian feed. BBC Worldwide, the BBC's commercial arm, sold the series to further buyers at the Mipcom television festival in October 2008. In the United States, PBS secured the broadcast rights through the co-production deal struck between its affiliate WGBH Boston and the BBC, and the series aired as part of WGBH's Masterpiece Mystery! in May 2009. In advance of the broadcast, Branagh and WGBH Boston's Rebecca Eaton presented a screening of an episode at The Paley Center for Media on 29 April. In Germany, ARD broadcast the first series episodes on 29 and 30 May and 1 June 2009. TV4 broadcast the first series in Sweden from 11 October 2009.
Episodes
Series 1 (2008)
Series 2 (2010)
Series 3 (2012)
Series 4 (2016)
Reception
Critical response
The series received a positive reception from critics, who praised both Branagh's performance and the character he played; in a preview of the BBC's Autumn season, Mark Wright of The Stage Online wrote that Branagh was "a good fit" for the character and had "high hopes for the success of [the] series". Previewing Sidetracked, The Times's David Chater called Branagh "superb as Kurt Wallander", and the series "one of those superior cop shows in which the character of the detective matters more than the plot". In a feature in The Knowledge, a supplement of The Times, Paul Hoggart called Branagh's performance "understated, ruminative, warm, sensitive and depressed", wrote positively of the design and cinematography, and concluded by writing that "Wallander is that rare treasure: a popular form used for intelligent, thoughtful, classy drama and superbly shot". At the time the series was commissioned, Scottish author Ian Rankin expressed disappointment to The Scotsman that BBC Scotland was producing adaptations of Swedish literature: "My main caveat is that there's so much good, complex and diverse Scottish crime writing going on right now that I'd like to have seen BBC Scotland pick up on that".
Reviewing Sidetracked after it aired, Tom Sutcliffe of The Independent called it "often a visually dazzling experience, the camerawork as attentive to the contours of Branagh's stubbly, despairing face as it was to the Swedish locations in which the action took place or the bruised pastels of a Munch sunset". He praised Branagh's acting but felt the Wallander character was "shallower than the performance, the disaffection and Weltschmerz just another detective gimmick". The Guardian's Kira Cochrane was also complimentary about Branagh, calling him "faultless", but was not impressed with the scenes between Wallander and his father, which she believed slowed the pace of the film, as she did not want to learn Wallander's entire backstory immediately. Like Sutcliffe, Cochrane praised the cinematography and was pleased that the ending "tied up nicely". Andrew Billen of The Times wrote, "This distinctly superior cop show is both spare and suggestive, and brilliantly acted." He took time to adjust to Kenneth Branagh as Wallander, and found the warm blue skies of Sweden unexpected. Billen's and Cochrane's opinions of the child abuse storyline differed; Billen believed that it was "used too often in fiction, but here it meant something", though Cochrane called it a "familiar element". In The Daily Telegraph, James Walton was disappointed with the revelation that the crimes stemmed from sexual abuse, "once quite a daring TV subject, now a rather clichéd short cut to the black recesses of the human heart". Walton, like others, was complimentary of Branagh, and concluded by writing, "The series still probably won't appeal to fans of Heartbeat, but if you fancy an undoubtedly classy antidote to the cosy cop show, you could do a lot worse." The broadcast had an average of 6.2 million viewers and a 23.9% audience share. The episode began with a peak of 6.9 million (25.4%) but dropped to 5.8 million (24.6%) by the end. 57.2% of the audience was from the upmarket ABC1 demographic and 6.1% were in the 16–34 age demographic. The average viewer rating was down 300,000 on the same timeslot in the previous week. Final ratings, incorporating those who watched via DVR, were 6.54 million, making it the eighth-most-watched programme on BBC One that week. An editorial in The Independent complained that the episode's closing credits ran too fast; a hundred names were displayed in 14 seconds. Branagh called the speed of the credits "insulting". The actors' union Equity also complained to BBC director general Mark Thompson.
Firewall was seen by 5.6 million (23% share), 600,000 viewers and one share point down on the previous week. Final ratings boosted it to 5.90 million and the tenth-most-watched broadcast on BBC One that week. In The Guardian, Sam Wollaston wrote, "with the greyness, the cold, the Scandinavian sadness, and a troubled Kenneth Branagh mooching around in the gloom trying to figure out who killed these people so horribly, it's all pretty perfect." Andrew Billen wrote in The Times that Wallander and Ella's relationship not working out is conventional for a television detective drama, though he liked how Wallander's depression "has grown out of the failure of his marriage and the experiences of his career". On the TV Scoop website, John Beresford wrote that the episode "went quickly downhill" from the murder of the taxi driver in the opening minutes: "Pedestrian plots, characters that wander aimlessly about with next to nothing to do or say, and a format that seems better fitted for radio than it is for television. By that I mean the endless shots where there's someone on the left of the screen, someone on the right, and they stand there for hours tal...king...verrrry...slow...ly to each other with absolutely nothing else happening." One Step Behind received overnight ratings of 5.6 million (22.4%). Final ratings were recorded as 5.66 million, making it the week's twelfth-most-watched programme on BBC One. David Chater's Times preview called Branagh "a masterpiece of vulnerability and despair". He wrote of the conclusion: "a climactic scene that has been done dozens of times in thrillers, on this one occasion it felt entirely believable". The Daily Record named it "Best of this week's TV", though it was criticised in The Herald; David Belcher called it "far worse than initially reckoned. Never has there been a less observant, more irritating fictional detective". Belcher hoped that no more adaptations would be made.
In a review entitled "Wåll-and-ör – den äkta Wallander" (the title both pokes fun at Branagh's pronunciation of Wallander and calls this version the real or proper Wallander), Martin Andersson of southern Sweden's main daily newspaper Sydsvenskan was very positive about Branagh's interpretation of Wallander, and thought the BBC series to be of better quality than the current Swedish-language series. He emphasised that not only was Branagh's performance of higher quality than that of the current Swedish Wallander actor, Krister Henriksson, but that the BBC series really understood how to use the nature and environment of the Skåne province to tell the story properly, and added that, as a person from southern Sweden, he recognised all the settings and that they had never looked as beautiful as in this production.
Awards
Branagh won the award for best actor at the 35th Broadcasting Press Guild Television and Radio Awards (2009), his first major television award win in the UK. The series was nominated for Best Drama Series but lost to The Devil's Whore. The series, represented by Sidetracked, won the British Academy Television Award for Best Drama Series; Richard Cottan, Branagh, Philip Martin and Francis Hopkinson were named as the nomination recipients. At the BAFTA Television Craft Awards, the series won four of its five nominations: Martin Phipps for Original Television Music, Anthony Dod Mantle for Photography & Lighting (Fiction/Entertainment), Jacqueline Abrahams for Production Design, and Bosse Persson, Lee Crichlow, Iain Eyre and Paul Hamblin for Sound (Fiction/Entertainment). Ray Leek was also nominated for his opening titles work.
In May 2009, PBS distributed promotional DVDs of One Step Behind to members of the Academy of Television Arts & Sciences for nomination consideration at the 61st Primetime Emmy Awards. The episode was not nominated, but Branagh was nominated for his performance in the Outstanding Lead Actor in a Miniseries or a Movie category and Philip Martin was nominated for Outstanding Directing for a Miniseries, Movie or a Dramatic Special. Branagh was placed on the longlist in the Best Actor category of the 2010 National Television Awards. The series was nominated for The TV Dagger at the 2009 Crime Thriller Awards.
In November 2009, the Royal Television Society presented the series with two awards at the 2009 RTS Craft & Design Awards; Aidan Farrell at post-production house The Farm was presented with the Effects (Picture Enhancement) award, and Martin Phipps and Emily Barker with the Music (Original Title) award for the opening theme. Anthony Dod Mantle was also nominated in the Lighting, Photography & Camera (Photography)—Drama category, and Bosse Persson, Lee Crichlow, Iain Eyre and Paul Hamblin in the Sound (Drama) category. The series was nominated in the Best Drama Series/Serial category at the Broadcast Awards 2010. The International Press Academy nominated the series for the Satellite Award for Best Miniseries and Branagh for the Satellite Award for Best Actor – Miniseries or Television Film. The Hollywood Foreign Press Association nominated Branagh for the Golden Globe Award for Best Miniseries or Television Film for his performance in One Step Behind.
Impact on the Wallander franchise
In a Radio Times interview, Henning Mankell announced that he had a new Wallander book in the works. Several Swedish media outlets speculated that the renewed Wallander interest in the UK and the warm reception of the BBC adaptations had sparked new motivation to write further Wallander novels; Mankell's last book starring the Ystad inspector had originally been published in 1999. The new and final Kurt Wallander book, The Troubled Man, was published in Swedish in August 2009.
The increase in sales of the novels already published in the UK was also attributed to the television series.
Impact on Ystad
The series resulted in new interest among British tourists in visiting Sweden, especially Ystad and the rest of the Skåne province, according to Itta Johnson, marketing strategist with Ystad County. Johnson reports that in the past British people were reluctant to visit Sweden, since they saw the country as cold and expensive, but that questions are now mostly about the light and the nature seen in the BBC series. Statistics Sweden reported that Skåne was the only Swedish region to see an increase in hotel visits during the first quarter of 2009. The largest increase in non-Scandinavian tourists was among Britons, who accounted for 12% of visitors, almost as large a share as visitors from Germany at 13%. In 2009, Ystad saw an 18% increase in tourists from the UK, and local politicians credit the BBC Wallander series with attracting British tourists.
Johnson estimates that 2–3% of the people who watched the first series of Wallander on the BBC decided to visit the region. In 2008 tourism brought 51 million Swedish kronor (c. £4.4 million) into Ystad, and with the influx of British tourists this figure was expected to be higher for 2009.
"A lot of travel organisers from the UK call and want to include Ystad in what they can offer their clients" says Marie Holmström, tourism coordinator with Ystad tourism agency. "This year (2009) we have 30% more hotel bookings from Great Britain, compared to last year. Kenneth Branagh says many good things about this town and we have received many requests from British press". Jolanta Olsson, tourism coordinator with Ystad tourism agency, says they get many requests from visiting Britons concerning shooting locations and where the film crew reside.
In October 2009, Ystad began hosting a film festival with a focus on crime fiction. The festival was kick-started with a marathon of series one and a speech by Yellow Bird producer Daniel Ahlqvist.
Ystad was awarded the 2009 Stora Turismpriset (The Great Tourism Award). "The brand of Ystad as a film and tourism town has been strengthened by consistent and far-sighted film investments," said Pia Jönsson-Rajgård, president of Tourism in Skåne.
Merchandise
Vintage Books published paperbacks of the first three adapted novels in Series One with tie-in covers featuring Branagh on 20 November 2008. The Series One DVD was published by 2 Entertain Video on 26 December 2008. It features all three films, the Who is Kurt Wallander? documentary, and a 55-minute documentary entitled The Wallander Look. Half of The Wallander Look features Branagh and Mankell discussing Wallander. The DVD was released in the United States on 2 June 2009.
Tie-in editions of the novels adapted for Series 2 were published on 31 December 2009. The second series was released on DVD and Blu-ray on 8 February 2010.
No tie-in editions of the two full novels adapted for the third series were released, and the short story "An Event in Autumn" had not yet been published in English at the time.
The third series was released on DVD and Blu-ray on 23 July 2012.
The fourth series was released on DVD in the US on 21 June 2016.
References
Further reading
Macnab, Geoffrey (31 July 2009). "Wallander: Swede dreams are made of this". The Independent (Independent News & Media): pp. 34–35 (Art & Books section).
—A comparison between Wallander and Mankell's Wallander.
Nicholson, Paul (December 2008). "Wallander and the BBC". High Definition (Media Maker Publishing) (34).
—A detailed description of the cinematography and editing technology used on the series.
Silberg, Jon (23 June 2009). "Red One No Mystery For DP Anthony Dod Mantle On 'Wallander'". DV Magazine (NewBay Media).
—An interview with director of photography Anthony Dod Mantle about the series' camera setup.
Tapper, Michael (April 2009). "'More than ABBA and skinny-dipping in mountain lakes': Swedish dystopia, Henning Mankell and the British Wallander series". Film International 7 (2): 60–69. doi:10.1386/fiin.7.2.60.
—An analysis of the political and social representation of Sweden in the novels and Wallander.
External links
Branagh's Wallander website
Wallander at Yellow Bird Pictures (with trailer).
Videos
Independent News & Media plc (December 2009). "Wallander Returns". The Independent Show: Interview with Branagh and preview. Retrieved 4 January 2010.
Jeffries, Stuart (27 December 2009). "Kenneth Branagh talks to Stuart Jeffries about the new series of Wallander". Guardian Web-TV. Retrieved 4 January 2010.
Branagh, Kenneth; et al. Interview with Angellica Bell. Baftaonline YouTube channel. 26 April 2009.
Panel discussion of the novels and adaptations. Newsnight Review. 1 December 2008.
Filming Wallander in Ystad city square. Filming of The Man Who Smiled. Ystad Allehanda Webb-TV. 15 September 2009.
Interviews
Geoghegan, Kev (1 January 2010). "Talking Shop: Kenneth Branagh". BBC News website. Retrieved 1 January 2010.
Press releases
BBC Press Office (10 January 2008). "Wallander–Kenneth Branagh in major new drama adaptation for BBC One". Press release. Retrieved 18 April 2008.
BBC Press Office (4 November 2008). "Killing time". Press release. Retrieved 4 November 2008.
2008 British television series debuts
2016 British television series endings
2000s British drama television series
2010s British drama television series
BBC Scotland television shows
English-language television shows
Television series by Left Bank Pictures
Television shows based on Swedish novels
British detective television series
Wallander
Television shows set in Sweden
Scania in fiction |
391487 | https://en.wikipedia.org/wiki/Job%20Control%20Language | Job Control Language | Job Control Language (JCL) is a name for scripting languages used on IBM mainframe operating systems to instruct the system on how to run a batch job or start a subsystem.
More specifically, the purpose of JCL is to say which programs to run, using which files or devices for input or output, and at times to also indicate under what conditions to skip a step.
Parameters in the JCL can also provide accounting information for tracking the resources used by a job as well as which machine the job should run on.
There are two distinct IBM Job Control languages:
one for the operating system lineage that begins with DOS/360 and whose latest member is z/VSE; and
the other for the lineage from OS/360 to z/OS, the latter now including JES extensions, Job Entry Control Language (JECL).
They share some basic syntax rules and a few basic concepts, but are otherwise very different.
The VM operating system does not have JCL as such; the CP and CMS components each have command languages.
Terminology
Certain words or phrases used in conjunction to JCL are specific to IBM mainframe technology.
Dataset: a "dataset" is a file; it can be temporary or permanent, and located on a disk drive, tape storage, or other device.
Member: a "member" of a partitioned dataset (PDS) is an individual dataset within a PDS. A member can be accessed by specifying the name of the PDS with the member name in parentheses. For example, the system macro GETMAIN in SYS1.MACLIB can be referenced as SYS1.MACLIB(GETMAIN).
Partitioned dataset: a "partitioned dataset" or PDS is a collection of members, or archive, typically used to represent system libraries. As with most such structures, a member, once stored, cannot be updated in place; the member must be deleted and replaced, such as with the IEBUPDTE utility. Partitioned datasets are roughly analogous to ar-based static libraries in Unix-based systems.
USS: Unix system services, a Unix subsystem running as part of MVS, and allowing Unix files, scripts, tasks, and programs to run on a mainframe in a UNIX environment.
Motivation
Originally, mainframe systems were oriented toward batch processing. Many batch jobs require setup, with specific requirements for main storage and dedicated devices such as magnetic tapes, private disk volumes, and printers set up with special forms. JCL was developed as a means of ensuring that all required resources are available before a job is scheduled to run. For example, many systems, such as Linux, allow required datasets to be identified on the command line, and therefore subject to substitution by the shell, or generated by the program at run-time. On these systems the operating system job scheduler has little or no idea of the requirements of the job. In contrast, JCL explicitly specifies all required datasets and devices. The scheduler can pre-allocate the resources prior to releasing the job to run. This helps to avoid "deadlock", where job A holds resource R1 and requests resource R2, while concurrently running job B holds resource R2 and requests R1. In such cases the only solution is for the computer operator to terminate one of the jobs, which then needs to be restarted. With job control, if job A is scheduled to run, job B will not be started until job A completes or releases the required resources.
Features common to DOS and OS JCL
Jobs, steps and procedures
For both DOS and OS the unit of work is the job. A job consists of one or several steps, each of which is a request to run one specific program. For example, before the days of relational databases, a job to produce a printed report for management might consist of the following steps: a user-written program to select the appropriate records and copy them to a temporary file; sort the temporary file into the required order, usually using a general-purpose utility; a user-written program to present the information in a way that is easy for the end-users to read and includes other useful information such as sub-totals; and a user-written program to format selected pages of the end-user information for display on a monitor or terminal.
In both DOS and OS JCL the first "card" must be the JOB card, which:
Identifies the job.
Usually provides information to enable the computer services department to bill the appropriate user department.
Defines how the job as a whole is to be run, e.g. its priority relative to other jobs in the queue. A sketch of typical JOB cards is shown below.
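As a rough illustration (the job name, accounting field and class shown here are hypothetical and installation-dependent), an OS JCL JOB card and a DOS JCL JOB card might look like this:
//NIGHTRUN JOB (D123,DEPT27),'NIGHTLY BATCH',CLASS=A,MSGCLASS=X
// JOB NIGHTRUN
In OS JCL the accounting information and keyword parameters such as CLASS carry the billing and scheduling information; the DOS JOB statement is much simpler, with most such information supplied elsewhere.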
Procedures (commonly called procs) are pre-written JCL for steps or groups of steps, inserted into a job. Both JCLs allow such procedures. Procs are used for repeating steps which are used several times in one job, or in several different jobs. They save programmer time and reduce the risk of errors. To run a procedure one simply includes in the JCL file a single "card" which copies the procedure from a specified file, and inserts it into the jobstream. Also, procs can include parameters to customize the procedure for each use.
Basic syntax
Both DOS and OS JCL have a maximum usable line length of 80 characters, because when DOS/360 and OS/360 were first used the main method of providing new input to a computer system was 80-column punched cards. It later became possible to submit jobs via disk or tape files with longer record lengths, but the operating system's job submission components ignored everything after character 80.
Strictly speaking, both operating system families use only 71 characters per line. Characters 73–80 are usually card sequence numbers, which the system printed on the end-of-job report and which are useful for identifying the locations of any errors reported by the operating system. Character 72 is usually left blank, but it can contain a nonblank character to indicate that the JCL statement is continued onto the next card.
All commands, parameter names and values have to be in capitals, except for USS filenames.
All lines except for in-stream input (see below) have to begin with a slash (/), and all lines which the operating system processes have to begin with two slashes (//), always starting in the first column. However, there are two exceptions: the delimiter statement and the comment statement. A delimiter statement begins with a slash and an asterisk (/*), and a comment statement in OS JCL begins with a pair of slashes and an asterisk (//*), or with an asterisk in DOS JCL.
Many JCL statements are too long to fit within 71 characters, but can be extended onto an indefinite number of continuation cards.
In-stream input
DOS and OS JCL both allow in-stream input, i.e. "cards" which are to be processed by the application program rather than the operating system. Data which is to be kept for a long time will normally be stored on disk, but before the use of interactive terminals became common the only way to create and edit such disk files was by supplying the new data on cards.
DOS and OS JCL have different ways of signaling the start of in-stream input, but both end in-stream input with /* at column 1 of the card following the last in-stream data card. This makes the operating system resume processing JCL in the card following the /* card.
OS JCL: DD statements can be used to describe in-stream data, as well as data sets. A DD statement dealing with in-stream data has an asterisk (*) following the DD identifier, e.g. //SYSIN DD *. JCL statements can be included as part of in-stream data by using a DD DATA statement.
An operand named DLM allowed specifying a delimiter (default is "/*"). Specifying an alternate delimiter allows JCL to be read as data, for example to copy procedures to a library member or to submit a job to the internal reader.
An example, which submits a job to the Internal Reader (INTRDR) and then deletes two files is:
//SUBM EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=Z
//SYSUT2 DD SYSOUT=(A,INTRDR)
//SYSIN DD DUMMY
//SYSUT1 DD DATA,DLM=ZZ
//RUNLATR JOB ACCT,MANIX,CLASS=A,TYPRUN=HOLD
//* ^ a JOB to run later
//CPUHOG EXEC PGM=PICALC1K
//OUTPUT DD DSN=PICALC.1000DGTS,SPACE=(TRK,1),DISP=(,KEEP)
ZZ
//* ^ as specified by DLM=ZZ
//DROPOLDR EXEC PGM=IEFBR14
//DELETE4 DD DSN=PICALC.4DGTS,DISP=(OLD,DELETE)
//DELETE5 DD DSN=PICALC.5DGTS,DISP=(OLD,DELETE)
The program called PICALC1K will wait (TYPRUN=HOLD) until it is released manually.
The program called IEFBR14 will run NOW and upon completion, the two existing files, PICALC.4DGTS and PICALC.5DGTS will be deleted.
DOS JCL: Simply enter the in-stream data after the EXEC card for the program.
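A minimal sketch of DOS in-stream input, assuming a hypothetical program named LISTPROG, might look like this (the /* card ends the data and /& marks the end of the job):
// JOB LISTJOB
// EXEC LISTPROG
FIRST DATA CARD
SECOND DATA CARD
/*
/&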
Complexity
Much of the complexity of OS JCL, in particular, derives from the large number of options for specifying dataset information. While files on Unix-like operating systems are abstracted into arbitrary collections of bytes, with the details handled in large part by the operating system, datasets on OS/360 and its successors expose their file types and sizes, record types and lengths, block sizes, device-specific information like magnetic tape density, and label information. Although there are system defaults for many options, there is still a lot to be specified by the programmer, through a combination of JCL and information coded in the program. The more information coded in the program, the less flexible it is, since information in the program overrides anything in the JCL; thus, most information is usually supplied through JCL.
For example, to copy a file on Unix operating system, the user would enter a command like:
cp oldFile newFile
The following example, using JCL, might be used to copy a file on OS/360:
//IS198CPY JOB (IS198T30500),'COPY JOB',CLASS=L,MSGCLASS=X
//COPY01 EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=OLDFILE,DISP=SHR
//SYSUT2 DD DSN=NEWFILE,
// DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(40,5),RLSE),
// DCB=(LRECL=115,BLKSIZE=1150)
//SYSIN DD DUMMY
A second explanation for the complexity of JCL is the different expectations for running a job from those found in a PC or Unix-like environment.
Low-end System/360 CPUs were less powerful and more expensive than the mid-1980s PCs for which MS-DOS was designed. OS/360 was intended for systems with a minimum memory size of 32 KB and DOS/360 for systems with a minimum of 16 KB. A 360/30 CPU—low-end when System/360 was announced in 1964—processed 1.8K to 34.5K instructions per second. The first IBM PC in 1981 had 16 KB or 64 KB of memory and would process about 330K instructions per second. As a result, JCL had to be easy for the computer to process, and ease of use by programmers was a much lower priority. In this era, programmers were much cheaper than computers.
JCL was designed for batch processing. As such, it has to tell the operating system everything, including what to do depending on the result of a step. For example, DISP=(NEW,CATLG,DELETE) means "if the program runs successfully, create a new file and catalog it; otherwise delete the new file." Programs run on a PC frequently depend on the user to clean up after processing problems.
System/360 machines were designed to be shared by all the users in an organization. So the JOB card tells the operating system how to bill the user's account (IS198T30500), what predefined amount of storage and other resources may be allocated (CLASS=L), and several other things. SYSOUT=* tells the computer to print the program's report on the default printer, which is loaded with ordinary paper, not on some other printer which might be loaded with blank checks. DISP=SHR tells the operating system that other programs can read OLDFILE at the same time.
Later versions of the DOS/360 and OS/360 operating systems retain most features of the original JCL—although some simplification has been made, to avoid forcing customers to rewrite all their JCL files. Many users save as a procedure any set of JCL statements which is likely to be used more than once or twice.
The syntax of OS JCL is similar to the syntax of macros in System/360 assembly language, and would therefore have been familiar to programmers at a time when many programs were coded in assembly language.
DOS JCL
Positional parameters
//TLBL TAPEFIL,'COPYTAPE.JOB',,,,2
//ASSGN SYS005,200
//DLBL DISKFIL,'COPYTAPE.JOB',0,SD
//EXTENT SYS005,VOL01,1,0,800,1600
DOS JCL parameters are positional, which makes them harder to read and write, but easier for the system to parse.
The programmer must remember which item goes in which position in every type of statement.
If some optional parameters are omitted but later ones are included, the omitted parameters must be represented by commas with no spaces, as in the TLBL statement above.
DOS JCL to some extent mitigates the difficulties of positional parameters by using more statements with fewer parameters than OS JCL. In the example the ASSGN, DLBL and EXTENT statements do the same work (specifying where a new disk file should be stored) as a single DD statement in OS JCL.
Device dependence
In the original DOS/360 and in most versions of DOS/VS one had to specify the model number of the device which was to be used for each disk or tape file—even for existing files and for temporary files which would be deleted at the end of the job. This meant that, if a customer upgraded to more modern equipment, many JCL files had to be changed.
Later members of the DOS/360 family reduced the number of situations in which device model numbers were required.
Manual file allocation
DOS/360 originally required the programmer to specify the location and size of all files on DASD. The EXTENT card specifies the volume on which the extent resides, the starting absolute track, and the number of tracks. For z/VSE a file can have up to 256 extents on different volumes.
OS JCL
OS JCL consists of three basic statement types:
JOB statement, which identifies the start of the job, and information about the whole job, such as billing, run priority, and time and space limits.
EXEC statement, which identifies the program or procedure to be executed in this step of the job, and information about the step, including CONDitions for running or skipping a step.
DD (Data Definition) statements, which identify a data file to be used in a step, and detailed info about that file. DD statements can be in any order within the step.
Right from the start, JCL for the OS family (up to and including z/OS) was more flexible and easier to use than DOS JCL.
The following examples use the old style of syntax which was provided right from the launch of System/360 in 1964. The old syntax is still quite common in jobs that have been running for decades with only minor changes.
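As a hedged sketch in this old-style syntax (the program and dataset names are hypothetical), a job containing all three statement types might look like this:
//MYJOB    JOB (ACCT123),'J SMITH',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=MYPROG
//INPUT    DD  DSN=MY.INPUT.DATA,DISP=SHR
//REPORT   DD  SYSOUT=*
//SYSIN    DD  *
CONTROL STATEMENTS READ BY MYPROG
/*
The JOB statement names and classifies the job, the EXEC statement runs the hypothetical program MYPROG, and the DD statements connect its file names INPUT, REPORT and SYSIN to a cataloged dataset, the default print output and in-stream data respectively.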
Rules for Coding JCL Statements
Each JCL statement is divided into five fields (an annotated example follows these rules):
Identifier-Field Name-Field Operation-Field Parameter-Field Comments-Field
The identifier field is immediately followed by the name field with no space between them; the operation, parameter and comments fields must each be separated from the preceding field by at least one space.
Identifier-Field (//): The identifier field indicates to the system that a statement is a JCL statement rather than data. The identifier field consists of the following:
Columns 1 and 2 of all JCL statements, except the delimiter statement, contain //
Columns 1 and 2 of the delimiter statement contain /*
Columns 1, 2, and 3 of a JCL comment statement contain //*
Name-Field: The name field identifies a particular statement so that other statements and the system can refer to it. For JCL statements, it should be coded as follows:
The name must begin in column 3.
The name is 1 through 8 alphanumeric or national ($, #, @) characters.
The first character must be alphabetic.
The name must be followed by at least one blank.
Operation-Field: The operation field specifies the type of statement, or, for the command statement, the command. Operation-Field should be coded as follows:
The operation field consists of the characters in the syntax box for the statement.
The operation follows the name field.
The operation must be preceded and followed by at least one blank.
The operation will be one of JOB, EXEC and DD.
Parameter-Field: The parameter field, also sometimes referred to as the operand field, contains parameters separated by commas. Parameter field should be coded as follows:
The parameter field follows the operation field.
The parameter field must be preceded by at least one blank.
The parameter field contains parameters, which are keywords used in the statement to provide information such as the program or dataset name.
Comments-Field: This contains comments. Comments-Field should be coded as Follows:
The comments field follows the parameter field.
The comments field must be preceded by at least one blank.
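As a hypothetical annotated example (the step name is made up; IEBGENER is a standard IBM copy utility):
//PRINT01  EXEC PGM=IEBGENER,REGION=4M     COPY THE DAILY REPORT
Here // is the identifier field, PRINT01 the name field, EXEC the operation field, PGM=IEBGENER,REGION=4M the parameter field, and the trailing text the comments field.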
Keyword parameters
//NEWFILE DD DSN=MYFILE01,UNIT=DISK,SPACE=(TRK,80,10),
// DCB=(LRECL=100,BLKSIZE=1000),
// DISP=(NEW,CATLG,DELETE)
All of the major parameters of OS JCL statements are identified by keywords and can be presented in any order. A few of these contain two or more sub-parameters, such as SPACE (how much disk space to allocate to a new file) and DCB (detailed specification of a file's layout) in the example above. Sub-parameters are sometimes positional, as in SPACE, but the most complex parameters, such as DCB, have keyword sub-parameters.
Positional parameters must precede keyword parameters. Keyword parameters always assign values to a keyword using the equals sign (=).
Data access (DD statement)
The DD statement is used to reference data. This statement links a program's internal description of a dataset to the data on external devices: disks, tapes, cards, printers, etc. The DD may provide information such as a device type (e.g. '181','2400-5','TAPE'), a volume serial number for tapes or disks, and the description of the data file, called the DCB subparameter after the Data Control Block (DCB) in the program used to identify the file.
Information describing the file can come from three sources: The DD card information, the dataset label information for an existing file stored on tape or disk, and the DCB macro coded in the program. When the file is opened this data is merged, with the DD information taking precedence over the label information, and the DCB information taking precedence over both. The updated description is then written back to the dataset label. This can lead to unintended consequences if incorrect DCB information is provided.
Because of the parameters listed above and specific information for various access methods and devices, the DD statement is the most complex JCL statement. In one IBM reference manual, the description of the DD statement occupies over 130 pages—more than twice as much as the JOB and EXEC statements combined.
The DD statement allows inline data to be injected into the job stream. This is useful for providing control information to utilities such as IDCAMS, SORT, etc. as well as providing input data to programs.
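For example, a hedged sketch of a sort step (the input and output dataset names are hypothetical) that passes a control statement to the sort utility through in-stream data might look like this:
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.INPUT.FILE,DISP=SHR
//SORTOUT  DD DSN=MY.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,1)),DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*
The SORT FIELDS control statement, read from the in-stream SYSIN data, asks the utility to sort the records in ascending order on a ten-character key starting in column 1.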
Device independence
From the very beginning, the JCL for the OS family of operating systems offered a high degree of device independence. Even for new files which were to be kept after the end of the job one could specify the device type in generic terms, e.g., UNIT=DISK, UNIT=TAPE, or UNIT=SYSSQ (tape or disk). Of course, if it mattered one could specify a model number or even a specific device address.
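As a brief, hypothetical sketch, a temporary work file can be requested with a generic unit name and no volume at all, leaving the operating system to choose a suitable device (SYSDA is a commonly used installation-defined name meaning "any direct access storage"):
//WORK01   DD DSN=&&TEMP1,UNIT=SYSDA,SPACE=(CYL,(10,5)),DISP=(NEW,DELETE)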
Procedures
Procedures permit grouping one or more "EXEC PGM=" and DD statements and then invoking them with "EXEC PROC=procname" or simply "EXEC procname".
A facility called a Procedure Library allowed pre-storing procedures.
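For instance (a hedged sketch; the library and procedure names are hypothetical, and the JCLLIB statement is a later addition to OS JCL for naming private procedure libraries):
//MYJOB    JOB (ACCT123),'PROC DEMO',CLASS=A
//         JCLLIB ORDER=(MY.PROC.LIB)
//STEP1    EXEC MYPROC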
PROC & PEND
Procedures can also be included in the job stream by starting them with a // PROC statement and terminating them with a // PEND statement, then invoking them by name in the same way as if they were in a procedure library.
For example:
//SUMPRINT PROC
//PRINT EXEC PGM=IEBGENER
//SYSUT1 DD DSN=CEO.FILES.DAYEND.RPT24A,DISP=SHR
//SYSUT2 DD SYSOUT=A
//SYSIN DD DUMMY
// PEND
// EXEC SUMPRINT
Parameterized procedures
OS JCL procedures were parameterized from the start, making them rather like macros or even simple subroutines and thus increasing their reusability in a wide range of situations.
//MYPROC PROC FNAME=MYFILE01,SPTYPE=TRK,SPINIT=50,SPEXT=10,LR=100,BLK=1000
.....
//NEWFILE DD DSN=&FNAME,UNIT=DISK,SPACE=(&SPTYPE,&SPINIT,&SPEXT),
// DCB=(LRECL=&LR,BLKSIZE=&BLK),DISP=(NEW,CATLG,DELETE)
....
In this example, all the values beginning with ampersands "&" are parameters which will be specified when a job requests that the procedure be used. The PROC statement, in addition to giving the procedure a name, allows the programmer to specify default values for each parameter. So one could use the one procedure in this example to create new files of many different sizes and layouts. For example:
//JOB01 JOB ..........
//STEP01 EXEC MYPROC FNAME=JOESFILE,SPTYPE=CYL,SPINIT=10,SPEXT=2,LR=100,BLK=2000
or
//JOB02 JOB ..........
//STEP01 EXEC MYPROC FNAME=SUESFILE,SPTYPE=TRK,SPINIT=500,SPEXT=100,LR=100,BLK=5000
Referbacks
In multi-step jobs, a later step can use a referback instead of specifying in full a file which has already been specified in an earlier step. For example:
//MYPROC ................
//MYPR01 EXEC PGM=..........
//NEWFILE DD DSN=&MYFILE,UNIT=DISK,SPACE=(TRK,50,10),
// DCB=(LRECL=100,BLKSIZE=1000),DISP=(NEW,CATLG,DELETE)
....
//MYPR02 EXEC PGM=..........
//INPUT01 DD DSN=*.MYPR01.NEWFILE
Here, MYPR02 uses the file identified as NEWFILE in step MYPR01 (DSN means "dataset name" and specifies the name of the file; a DSN could not exceed 44 characters).
In jobs which contain a mixture of job-specific JCL and procedure calls, a job-specific step can refer back to a file which was fully specified in a procedure, for example:
//MYJOB JOB ..........
//STEP01 EXEC MYPROC Using a procedure
//STEP02 EXEC PGM=......... Step which is specific to this job
//INPUT01 DD DSN=*.STEP01.MYPR01.NEWFILE
where DSN=*.STEP01.MYPR01.NEWFILE means "use the file identified as NEWFILE in step MYPR01 of the procedure used by step STEP01 of this job". Using the name of the step which called the procedure rather than the name of the procedure allows a programmer to use the same procedure several times in the same job without confusion about which instance of the procedure is used in the referback.
Comments
JCL files can be long and complex, and the language is not easy to read. OS JCL allows programmers to include two types of explanatory comment:
On the same line as a JCL statement. They can be extended by placing a continuation character (conventionally "X") in column 72, followed by "// " in columns 1–3 of the next line.
Lines which contain only comment, often used to explain major points about the overall structure of the JCL rather than local details. Comment-only lines are also used to divide long, complex JCL files into sections.
//MYJOB JOB ..........
//* Lines containing only comments.
//******** Often used to divide JCL listing into sections ********
//STEP01 EXEC MYPROC Comment 2 on same line as statement
//STEP02 EXEC PGM=......... Comment 3 has been extended and X
// overflows into another line.
//INPUT01 DD DSN=*.STEP01.MYPR01.NEWFILE
Concatenating input files
OS JCL allows programmers to concatenate ("chain") input files so that they appear to the program as one file, for example
//INPUT01 DD DSN=MYFILE01,DISP=SHR
// DD DSN=JOESFILE,DISP=SHR
// DD DSN=SUESFILE,DISP=SHR
The second and third statements have no value in the name field, so OS treats them as concatenations. The files must be of the same basic type (almost always sequential) and must have the same record length; however, the block length need not be the same.
In early versions of the OS (certainly before OS/360 R21.8) the block lengths must be in decreasing order, or the user must inspect each instance and append to the named DD statement the maximum block length found, as in, for example,
//INPUT01 DD DSN=MYFILE01,DISP=SHR,BLKSIZE=800
// DD DSN=JOESFILE,DISP=SHR (BLKSIZE assumed to be equal to or less than 800)
// DD DSN=SUESFILE,DISP=SHR (BLKSIZE assumed to be equal to or less than 800)
In later versions of the OS (certainly after OS/MVS R3.7 with the appropriate "selectable units") the OS itself, during allocation, would inspect each instance in a concatenation and would substitute the maximum block length which was found.
A usual fallback was to simply determine the maximum possible block length on the device, and specify that on the named DD statement, as in, for example,
//INPUT01 DD DSN=MYFILE01,DISP=SHR,BLKSIZE=8000
// DD DSN=JOESFILE,DISP=SHR (BLKSIZE assumed to be equal to or less than 8000)
// DD DSN=SUESFILE,DISP=SHR (BLKSIZE assumed to be equal to or less than 8000)
The purpose of this fallback was to ensure that the access method would allocate an input buffer set which was large enough to accommodate any and all of the specified datasets.
Conditional processing
OS expects programs to set a return code which specifies how successful the program thought it was. The most common conventional values are:
0 = Normal - all OK
4 = Warning - minor errors or problems
8 = Error - significant errors or problems
12 = Severe error - major errors or problems, the results (e.g. files or reports produced) should not be trusted.
16 = Terminal error - very serious problems, do not use the results!
OS JCL refers to the return code as COND ("condition code"), and can use it to decide whether to run subsequent steps. However, unlike most modern programming languages, conditional steps in OS JCL are not executed if the specified condition is true—thus giving rise to the mnemonic, "If it's true, pass on through [without running the code]." To complicate matters further, the condition can only be specified after the step to which it refers. For example:
//MYJOB JOB ...........
//STEP01 EXEC PGM=PROG01
....
//STEP02 EXEC PGM=PROG02,COND=(4,GT,STEP01)
....
//STEP03 EXEC PGM=PROG03,COND=(8,LE)
....
//STEP04 EXEC PGM=PROG04,COND=(ONLY,STEP01)
....
//STEP05 EXEC PGM=PROG05,COND=(EVEN,STEP03)
....
means:
Run STEP01, and collect its return code.
Don't run STEP02 if the number 4 is greater than STEP01's return code.
Don't run STEP03 if the number 8 is less than or equal to any previous return code.
Run STEP04 only if STEP01 abnormally ended.
Run STEP05, even if STEP03 abnormally ended.
This translates to the following pseudocode:
run STEP01
if STEP01's return code is greater than or equal to 4 then
run STEP02
end if
if all previous return codes are less than 8 then
run STEP03
end if
if STEP01 abnormally ended then
run STEP04
end if
if STEP03 abnormally ended then
run STEP05
else
run STEP05
end if
Note that by reading the steps containing COND statements backwards, one can understand them fairly easily. This is an example of logical transposition.
However, IBM later introduced the IF/THEN/ELSE/ENDIF construct in JCL, making coding somewhat easier for programmers while retaining the COND parameter (to avoid forcing changes to existing JCL where it is used).
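As a hedged sketch of the newer form (the step and program names are hypothetical), the test reads in the natural direction:
//STEP01   EXEC PGM=PROG01
//         IF (STEP01.RC GE 4) THEN
//STEP02   EXEC PGM=PROG02
//         ENDIF
This runs STEP02 only when STEP01 ends with a return code of 4 or higher, the same effect as COND=(4,GT,STEP01) in the older notation.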
The COND parameter may also be specified on the JOB statement. If so the system "performs the same return code tests for every step in a job. If a JOB statement return code test is satisfied, the job terminates."
Utilities
Jobs use a number of IBM utility programs to assist in the processing of data. Utilities are most useful in batch processing. The utilities can be grouped into three sets:
Data Set Utilities - Create, print, copy, move and delete data sets.
System Utilities - Maintain and manage catalogs and other system information.
Access Method Services - Process Virtual Storage Access Method (VSAM) and non-VSAM data sets; an IDCAMS sketch follows this list.
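A hedged sketch of a utility step (the cluster name is hypothetical) running the Access Method Services program IDCAMS with in-stream control statements:
//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MY.TEST.KSDS) INDEXED -
          KEYS(8 0) RECORDSIZE(80 80) TRACKS(5 1))
/*
IDCAMS reads its commands from SYSIN and writes messages to SYSPRINT; here it defines a small key-sequenced VSAM dataset.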
Difficulty of use
OS JCL is undeniably complex and has been described as "user hostile". As one instructional book on JCL asked, "Why do even sophisticated programmers hesitate when it comes to Job Control Language?" The book stated that many programmers either copied control cards without really understanding what they did, or "believed the prevalent rumors that JCL was horrible, and only 'die-hard' computer-types ever understood it" and handed the task of figuring out the JCL statements to someone else. Such an attitude could be found in programming language textbooks, which preferred to focus on the language itself and not how programs in it were run. As one Fortran IV textbook said when listing possible error messages from the WATFOR compiler: "Have you been so foolish as to try to write your own 'DD' system control cards? Cease and desist forthwith; run, do not walk, for help."
Nevertheless, some books that went into JCL in detail emphasized that once it was learned to an at least somewhat proficient degree, one gained freedom from installation-wide defaults and much better control over how an IBM system processed your workload. Another book commented on the complexity but said, "take heart. The JCL capability you will gain from [the preceding chapter] is all that most programmers will ever need."
Job Entry Control Language
On IBM mainframe systems Job Entry Control Language or JECL is the set of command language control statements that provide information for the spooling subsystem – JES2 or JES3 on z/OS or VSE/POWER for z/VSE. JECL statements may "specify on which network computer to run the job, when to run the job, and where to send the resulting output."
JECL is distinct from job control language (JCL), which instructs the operating system how to run the job.
There are different versions of JECL for the three environments.
OS/360
An early version of Job Entry Control Language for OS/360 Remote Job Entry (Program Number 360S-RC-536) used the identifier .. in columns 1–2 of the input record and consisted of a single control statement: JED (Job Entry Definition). "Workstation Commands" such as LOGON, LOGOFF, and STATUS also began with .. .
pre-JES JECL
Although the term had not yet been coined, HASP had functionality similar to what would become the JECL of JES, including the /* syntax.
z/OS
For JES2 JECL statements start with /*, for JES3 they start with //*, except for remote /*SIGNON and /*SIGNOFF commands. The commands for the two systems are completely different.
JES2 JECL
The following JES2 JECL statements are used in z/OS 1.2.0.
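As a hedged illustration (the job, output-limit and remote-printer identifiers are hypothetical), a few JES2 control statements placed after the JOB card might look like this:
//MYJOB    JOB (ACCT),'JES2 DEMO',CLASS=A,MSGCLASS=X
/*JOBPARM  LINES=100,ROOM=1234
/*ROUTE    PRINT RMT6
//STEP1    EXEC PGM=IEFBR14
Here /*JOBPARM sets job-level values and /*ROUTE sends the printed output to a remote printer; these statements are interpreted by JES2 rather than by the operating system's JCL processing.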
JES3 JECL
The following JES3 JECL statements are used in z/OS 1.2.0.
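A hedged JES3 counterpart (the system name is hypothetical) uses the //* prefix instead:
//MYJOB    JOB (ACCT),'JES3 DEMO',MSGCLASS=X
//*MAIN    SYSTEM=SY1,CLASS=A
//STEP1    EXEC PGM=IEFBR14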
z/VSE
For VSE JECL statements start with '* $$' (note the single space). The Job Entry Control Language defines the start and end lines of JCL jobs. It advises VSE/POWER how this job is handled. JECL statements define the job name (used by VSE/POWER), the class in which the job is processed, and the disposition of the job (i.e. D, L, K, H).
Example:
* $$ JOB JNM=NAME,DISP=K,CLASS=2
[some JCL statements here]
* $$ EOJ
Other systems
Other mainframe batch systems had some form of job control language, whether called that or not; their syntax was completely different from IBM versions, but they usually provided similar capabilities. Interactive systems include "command languages"—command files (such as PCDOS ".bat" files) can be run non-interactively, but these usually do not provide as robust an environment for running unattended jobs as JCL. On some computer systems the job control language and the interactive command language may be different. For example, TSO on z/OS systems uses CLIST or Rexx as command languages along with JCL for batch work. On other systems these may be the same.
See also
dd (Unix), Unix program inspired by DD
IBM mainframe utility programs
Batch processing
Data set (IBM mainframe)#Generation Data Group
References
Sources
Scripting languages
Job scheduling
IBM mainframe operating systems |
50801508 | https://en.wikipedia.org/wiki/Apple%20File%20System | Apple File System | Apple File System (APFS) is a proprietary file system developed and deployed by Apple Inc. for macOS Sierra (10.12.4) and later, iOS 10.3 and later, tvOS 10.2 and later, watchOS 3.2 and later, and all versions of iPadOS. It aims to fix core problems of HFS+ (also called Mac OS Extended), APFS's predecessor on these operating systems. APFS is optimized for solid-state drive storage and supports encryption, snapshots, and increased data integrity, among other capabilities.
History
Apple File System was announced at Apple's developers conference (WWDC) in June 2016 as a replacement for HFS+, which had been in use since 1998. APFS was released for 64-bit iOS devices on March 27, 2017, with the release of iOS 10.3, and for macOS devices on September 25, 2017, with the release of macOS 10.13.
Apple released a partial specification for APFS in September 2018 which supported read-only access to Apple File Systems on unencrypted, non-Fusion storage devices. The specification for software encryption was documented later.
Design
The file system can be used on devices with relatively small or large amounts of storage. It uses 64-bit inode numbers, and allows for more secure storage. The APFS code, like the HFS+ code, uses the TRIM command, for better space management and performance. It may increase read-write speeds on iOS and macOS, as well as space on iOS devices, due to the way APFS calculates available data.
Partition scheme
APFS uses the GPT partition scheme. Within the GPT scheme are one or more APFS containers (partition type GUID is ). Within each container there are one or more APFS volumes, all of which share the allocated space of the container, and each volume may have APFS volume roles. macOS Catalina (macOS 10.15) introduced the APFS volume group, which are groups of volumes that Finder displays as one volume. APFS firmlinks lie between hard links and soft links and link between volumes.
In macOS Catalina the System volume role (usually named "Macintosh HD") became read-only, and in macOS Big Sur (macOS 11) it became a signed system volume (SSV) of which only volume snapshots are mounted. The Data volume role (usually named "Macintosh HD - Data") is used as an overlay or shadow of the System volume, and both the System and Data volumes are part of the same volume group and shown as one in Finder.
Clones
Clones allow the operating system to make efficient file copies on the same volume without occupying additional storage space. Changes to a cloned file are saved as delta extents, reducing storage space required for document revisions and copies. There is, however, no interface to mark two copies of the same file as clones of the other, or for other types of data deduplication.
Snapshots
APFS volumes support snapshots for creating a point-in-time, read-only instance of the file system.
Encryption
Apple File System natively supports full disk encryption, and file encryption with the following options:
no encryption
single-key encryption
multi-key encryption, where each file is encrypted with a separate key, and metadata is encrypted with a different key.
Increased maximum number of files
APFS supports 64-bit inode numbers, supporting over 9 quintillion files (2⁶³) on a single volume.
Data integrity
Apple File System uses checksums to ensure data integrity for metadata.
Crash protection
Apple File System is designed to avoid metadata corruption caused by system crashes. Instead of overwriting existing metadata records in place, it writes entirely new records, points to the new ones and then releases the old ones, an approach known as redirect-on-write. This avoids corrupted records containing partial old and partial new data caused by a crash that occurs during an update. It also avoids having to write the change twice, as happens with an HFS+ journaled file system, where changes are written first to the journal and then to the catalog file.
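The effect of redirect-on-write can be sketched in a few lines of Haskell (a toy model for illustration only; the names and structure are not APFS's actual API or on-disk format): an update builds a complete new record and only then redirects the pointer to it, so an interrupted update leaves either the old record or the new one, never a mixture.

import Data.IORef (IORef, newIORef, atomicWriteIORef, readIORef)

-- Toy model of redirect-on-write: metadata is reached through a single
-- pointer, and an update never overwrites the record it currently points to.
newtype Record = Record String deriving Show

type MetadataPointer = IORef Record

update :: MetadataPointer -> String -> IO ()
update ptr newContents = do
  let newRecord = Record newContents   -- write the new record in full first
  atomicWriteIORef ptr newRecord       -- then switch the pointer to it
  -- the old record is now unreferenced and its space can be released

main :: IO ()
main = do
  ptr <- newIORef (Record "old metadata")
  update ptr "new metadata"
  readIORef ptr >>= print              -- prints: Record "new metadata"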
Compression
APFS supports transparent compression on individual files using Deflate (Zlib), LZVN (libFastCompression), and LZFSE. All three are Lempel-Ziv-type algorithms. This feature is inherited from HFS+, and is implemented with the same AppleFSCompression / decmpfs system using resource forks or extended attributes. As with HFS+, the transparency is broken for tools that do not use decmpfs-wrapped routines.
Space sharing
APFS adds the ability to have multiple logical drives (referred to as volumes) in the same container where free space is available to all volumes in that container (block device).
Limitations
While APFS includes numerous improvements relative to its predecessor, HFS+, a number of limitations have been noted.
Limited integrity checks for user data
APFS does not provide checksums for user data. It also does not take advantage of byte-addressable non-volatile random-access memory.
Performance on hard disk drives
Enumerating files, and any inode metadata in general, is much slower on APFS when it is located on a hard disk drive. This is because instead of storing metadata at a fixed location like HFS+ does, APFS stores them alongside the actual file data. This fragmentation of metadata means more seeks are performed when listing files, acceptable for SSDs but not HDDs.
Compatibility with Time Machine prior to macOS 11
Unlike HFS+, APFS does not support hard links to directories. Since the version of the Time Machine backup software included in Mac OS X 10.5 (Leopard) through macOS 10.15 (Catalina) relied on hard links to directories, APFS was initially not a supported option for its backup volumes. This limitation was overcome starting in macOS 11 Big Sur, wherein APFS is now the default file system for new Time Machine backups (existing HFS+-formatted backup drives are also still supported). macOS Big Sur's implementation of Time Machine in conjunction with APFS-formatted drives enables "faster, more compact, and more reliable backups" than were possible with HFS+-formatted backup drives.
Security issues
In March 2018, the APFS driver in High Sierra was found to have a bug that causes the disk encryption password to be logged in plaintext.
In January 2021, the APFS driver in iOS < 14.4, macOS < 11.2, watchOS < 7.3, and tvOS < 14.4 was found to have a bug that allowed a local user to read arbitrary files, regardless of their permissions.
Support
macOS
Limited, experimental support for APFS was first introduced in macOS Sierra 10.12.4. Since macOS 10.13 High Sierra, all devices with flash storage are automatically converted to APFS. As of macOS 10.14 Mojave, Fusion Drives and hard disk drives are also upgraded on installation. The primary user interface to upgrade does not present an option to opt out of this conversion, and devices formatted with the High Sierra version of APFS will not be readable in previous versions of macOS. Users can disable APFS conversion by using the installer's startosinstall utility on the command line and passing --converttoapfs NO.
FileVault volumes are not converted to APFS as of macOS Big Sur 11.2.1. Instead, macOS formats external FileVault drives as CoreStorage Logical Volumes formatted with Mac OS Extended (Journaled); such drives can optionally be encrypted.
An experimental version of APFS, with some limitations, is available in macOS Sierra through the command line diskutil utility. Among these limitations, it does not perform Unicode normalization while HFS+ does, leading to problems with languages other than English. Drives formatted with Sierra’s version of APFS may also not be compatible with future versions of macOS or the final version of APFS, and the Sierra version of APFS cannot be used with Time Machine, FileVault volumes, or Fusion Drives.
iOS, tvOS, and watchOS
iOS 10.3, tvOS 10.2, and watchOS 3.2 convert the existing HFSX file system to APFS on compatible devices.
Third-party utilities
Despite the ubiquity of APFS volumes in today's Macs and the format's 2016 introduction, third-party repair utilities continue to have notable limitations in supporting APFS volumes, due to Apple's delayed release of complete documentation. According to Alsoft, the maker of DiskWarrior, Apple's 2018 release of partial APFS format documentation has delayed the creation of a version of DiskWarrior that can safely rebuild APFS disks. Competing products, including MicroMat's TechTool and Prosoft's Drive Genius, are expected to increase APFS support as well.
Paragon Software Group has published a software development kit under the 4-Clause BSD License that supports read-only access of APFS drives. An independent read-only open source implementation by Joachim Metz, libfsapfs, is released under GNU Lesser General Public License v3. It has been packaged into Debian and Ubuntu software repositories. Both are command-line tools that do not expose a normal filesystem driver interface. There is a Filesystem in Userspace (FUSE) driver for Linux called apfs-fuse with read-only access. An "APFS for Linux" project is working to integrate APFS support into the Linux kernel.
See also
Comparison of file systems
References
External links
Apple Developer: Apple File System Guide
Apple Developer: Apple File System Reference
WWDC 2016: Introduction of APFS by Apple software engineers Dominic Giampaolo and Eric Tamura
Detailed Overview of APFS by independent file system developer Adam Leventhal
2017 software
Apple Inc. file systems
Computer file systems
Disk file systems
Flash file systems
IOS
MacOS |
254299 | https://en.wikipedia.org/wiki/Curry%E2%80%93Howard%20correspondence | Curry–Howard correspondence | In programming language theory and proof theory, the Curry–Howard correspondence (also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation) is the direct relationship between computer programs and mathematical proofs.
It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and the logician William Alvin Howard. It is the link between logic and computation that is usually attributed to Curry and Howard, although the idea is related to the operational interpretation of intuitionistic logic given in various formulations by L. E. J. Brouwer, Arend Heyting and Andrey Kolmogorov (see Brouwer–Heyting–Kolmogorov interpretation) and Stephen Kleene (see Realizability). The relationship has been extended to include category theory as the three-way Curry–Howard–Lambek correspondence.
Origin, scope, and consequences
The beginnings of the Curry–Howard correspondence lie in several observations:
In 1934 Curry observes that the types of the combinators could be seen as axiom-schemes for intuitionistic implicational logic.
In 1958 he observes that a certain kind of proof system, referred to as Hilbert-style deduction systems, coincides on some fragment to the typed fragment of a standard model of computation known as combinatory logic.
In 1969 Howard observes that another, more "high-level" proof system, referred to as natural deduction, can be directly interpreted in its intuitionistic version as a typed variant of the model of computation known as lambda calculus.
In other words, the Curry–Howard correspondence is the observation that two families of seemingly unrelated formalisms—namely, the proof systems on one hand, and the models of computation on the other—are in fact the same kind of mathematical objects.
If one abstracts on the peculiarities of either formalism, the following generalization arises: a proof is a program, and the formula it proves is the type for the program. More informally, this can be seen as an analogy that states that the return type of a function (i.e., the type of values returned by a function) is analogous to a logical theorem, subject to hypotheses corresponding to the types of the argument values passed to the function; and that the program to compute that function is analogous to a proof of that theorem. This sets a form of logic programming on a rigorous foundation: proofs can be represented as programs, and especially as lambda terms, or proofs can be run.
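In a typed functional language such as Haskell this reading is immediate; the following sketch is illustrative only (the function names are chosen for exposition and are not standard library names):

modusPonens :: (a -> b) -> a -> b      -- a proof of (α → β) → α → β
modusPonens f x = f x

weaken :: a -> b -> a                  -- a proof of α → (β → α)
weaken x _ = x

-- Any total program of a given type counts as a proof of the corresponding
-- formula; running (reducing) the program corresponds to normalizing the proof.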
The correspondence has been the starting point of a large spectrum of new research after its discovery, leading in particular to a new class of formal systems designed to act both as a proof system and as a typed functional programming language. This includes Martin-Löf's intuitionistic type theory and Coquand's Calculus of Constructions, two calculi in which proofs are regular objects of the discourse and in which one can state properties of proofs the same way as of any program. This field of research is usually referred to as modern type theory.
Such typed lambda calculi derived from the Curry–Howard paradigm led to software like Coq in which proofs seen as programs can be formalized, checked, and run.
A converse direction is to use a program to extract a proof, given its correctness—an area of research closely related to proof-carrying code. This is only feasible if the programming language the program is written in is very richly typed: the development of such type systems has been partly motivated by the wish to make the Curry–Howard correspondence practically relevant.
The Curry–Howard correspondence also raised new questions regarding the computational content of proof concepts that were not covered by the original works of Curry and Howard. In particular, classical logic has been shown to correspond to the ability to manipulate the continuation of programs and the symmetry of sequent calculus to express the duality between the two evaluation strategies known as call-by-name and call-by-value.
Speculatively, the Curry–Howard correspondence might be expected to lead to a substantial unification between mathematical logic and foundational computer science:
Hilbert-style logic and natural deduction are but two kinds of proof systems among a large family of formalisms. Alternative syntaxes include sequent calculus, proof nets, calculus of structures, etc. If one admits the Curry–Howard correspondence as the general principle that any proof system hides a model of computation, a theory of the underlying untyped computational structure of these kinds of proof system should be possible. Then, a natural question is whether something mathematically interesting can be said about these underlying computational calculi.
Conversely, combinatory logic and simply typed lambda calculus are not the only models of computation, either. Girard's linear logic was developed from a fine analysis of the use of resources in some models of lambda calculus; is there a typed version of the Turing machine that would behave as a proof system? Typed assembly languages are one instance of "low-level" models of computation that carry types.
Because of the possibility of writing non-terminating programs, Turing-complete models of computation (such as languages with arbitrary recursive functions) must be interpreted with care, as naive application of the correspondence leads to an inconsistent logic. The best way of dealing with arbitrary computation from a logical point of view is still an actively debated research question, but one popular approach is based on using monads to segregate provably terminating from potentially non-terminating code (an approach that also generalizes to much richer models of computation, and is itself related to modal logic by a natural extension of the Curry–Howard isomorphism). A more radical approach, advocated by total functional programming, is to eliminate unrestricted recursion (and forgo Turing completeness, although still retaining high computational complexity), using more controlled corecursion wherever non-terminating behavior is actually desired.
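A minimal sketch of the monadic approach in Haskell (the type Partial and this particular formulation are illustrative, not a standard library API): possibly non-terminating computations are confined to a delay monad, while total code stays outside it.

data Partial a = Now a | Later (Partial a)

instance Functor Partial where
  fmap f (Now x)   = Now (f x)
  fmap f (Later p) = Later (fmap f p)

instance Applicative Partial where
  pure = Now
  Now f   <*> p = fmap f p
  Later q <*> p = Later (q <*> p)

instance Monad Partial where
  Now x   >>= f = f x
  Later p >>= f = Later (p >>= f)

-- Non-termination is representable as data, but only at type Partial a:
never :: Partial a
never = Later never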
General formulation
In its more general formulation, the Curry–Howard correspondence is a correspondence between formal proof calculi and type systems for models of computation. In particular, it splits into two correspondences. One at the level of formulas and types that is independent of which particular proof system or model of computation is considered, and one at the level of proofs and programs which, this time, is specific to the particular choice of proof system and model of computation considered.
At the level of formulas and types, the correspondence says that implication behaves the same as a function type, conjunction as a "product" type (this may be called a tuple, a struct, a list, or some other term depending on the language), disjunction as a sum type (this type may be called a union), the false formula as the empty type and the true formula as the singleton type (whose sole member is the null object). Quantifiers correspond to dependent function space or products (as appropriate).
In summary, falsity corresponds to the empty type, truth to the singleton type, implication to the function type, conjunction to the product type, disjunction to the sum type, and the quantifiers to dependent products and sums.
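Rendered in Haskell, these correspondences look as follows (an illustrative sketch; the type names Falsity, Truth, Implies, And and Or are chosen for exposition, while Void and absurd come from the standard Data.Void module):

import Data.Void (Void, absurd)

type Falsity     = Void          -- the false formula: a type with no values
type Truth       = ()            -- the true formula: the singleton type
type Implies a b = a -> b        -- implication: the function type
type And a b     = (a, b)        -- conjunction: the product (pair) type
type Or a b      = Either a b    -- disjunction: the sum type

-- Sample proofs (programs):
andCommutes :: And a b -> And b a
andCommutes (x, y) = (y, x)

orIntroLeft :: a -> Or a b
orIntroLeft = Left

exFalso :: Falsity -> a          -- "from falsity, anything follows"
exFalso = absurd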
At the level of proof systems and models of computations, the correspondence mainly shows the identity of structure, first, between some particular formulations of systems known as Hilbert-style deduction system and combinatory logic, and, secondly, between some particular formulations of systems known as natural deduction and lambda calculus.
Between the natural deduction system and the lambda calculus, hypotheses correspond to free variables, implication introduction corresponds to λ-abstraction, implication elimination (modus ponens) corresponds to function application, and normalization of proofs corresponds to reduction of terms.
Corresponding systems
Hilbert-style deduction systems and combinatory logic
The correspondence began as a simple remark in Curry and Feys's 1958 book on combinatory logic: the simplest types for the basic combinators K and S of combinatory logic turned out, surprisingly, to correspond to the respective axiom schemes α → (β → α) and (α → (β → γ)) → ((α → β) → (α → γ)) used in Hilbert-style deduction systems. For this reason, these schemes are now often called axioms K and S. Examples of programs seen as proofs in a Hilbert-style logic are given below.
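The same observation can be replayed in Haskell, where the principal types inferred for the K and S combinators are exactly the two axiom schemes, with type variables in place of α, β and γ (an illustrative sketch; the lowercase names are chosen for exposition):

k :: a -> (b -> a)                               -- axiom K
k x _ = x

s :: (a -> (b -> c)) -> ((a -> b) -> (a -> c))   -- axiom S
s f g x = f x (g x)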
If one restricts to the implicational intuitionistic fragment, a simple way to formalize logic in Hilbert's style is as follows. Let Γ be a finite collection of formulas, considered as hypotheses. Then δ is derivable from Γ, denoted Γ ⊢ δ, in the following cases:
δ is a hypothesis, i.e. it is a formula of Γ,
δ is an instance of an axiom scheme; i.e., under the most common axiom system:
δ has the form α → (β → α), or
δ has the form (α → (β → γ)) → ((α → β) → (α → γ)),
δ follows by deduction, i.e., for some α, both α → δ and α are already derivable from Γ (this is the rule of modus ponens)
This can be formalized using inference rules, which correspond one for one to the typing rules of combinatory logic given below.
Typed combinatory logic can be formulated using a similar syntax: let Γ be a finite collection of variables, annotated with their types. A term T (also annotated with its type) will depend on these variables [Γ ⊢ T:δ] when:
T is one of the variables in Γ,
T is a basic combinator; i.e., under the most common combinator basis:
T is K:α → (β → α) [where α and β denote the types of its arguments], or
T is S:(α → (β → γ)) → ((α → β) → (α → γ)),
T is the composition of two subterms which depend on the variables in Γ.
Curry's remark simply states that these generation rules and the inference rules of the Hilbert-style system above are in one-to-one correspondence. The restriction of the correspondence to intuitionistic logic means that some classical tautologies, such as Peirce's law ((α → β) → α) → α, are excluded from the correspondence.
Seen at a more abstract level, the correspondence matches assumptions with variables, axiom schemes with basic combinators, and modus ponens with application; especially, the deduction theorem specific to Hilbert-style logic matches the process of abstraction elimination of combinatory logic.
Thanks to the correspondence, results from combinatory logic can be transferred to Hilbert-style logic and vice versa. For instance, the notion of reduction of terms in combinatory logic can be transferred to Hilbert-style logic and it provides a way to canonically transform proofs into other proofs of the same statement. One can also transfer the notion of normal terms to a notion of normal proofs, expressing that the hypotheses of the axioms never need to be all detached (since otherwise a simplification can happen).
Conversely, the non-provability in intuitionistic logic of Peirce's law can be transferred back to combinatory logic: there is no term of combinatory logic that is typable with type
((α → β) → α) → α.
Results on the completeness of some sets of combinators or axioms can also be transferred. For instance, the fact that the combinator X constitutes a one-point basis of (extensional) combinatory logic implies that the single axiom scheme
(((α → (β → γ)) → ((α → β) → (α → γ))) → ((δ → (ε → δ)) → ζ)) → ζ,
which is the principal type of X, is an adequate replacement to the combination of the axiom schemes
α → (β → α) and
(α → (β → γ)) → ((α → β) → (α → γ)).
Natural deduction and lambda calculus
After Curry emphasized the syntactic correspondence between Hilbert-style deduction and combinatory logic, Howard made explicit in 1969 a syntactic analogy between the programs of simply typed lambda calculus and the proofs of natural deduction. Below, the left-hand side formalizes intuitionistic implicational natural deduction as a calculus of sequents (the use of sequents is standard in discussions of the Curry–Howard isomorphism as it allows the deduction rules to be stated more cleanly) with implicit weakening and the right-hand side shows the typing rules of lambda calculus. In the left-hand side, Γ, Γ1 and Γ2 denote ordered sequences of formulas while in the right-hand side, they denote sequences of named (i.e., typed) formulas with all names different.
To paraphrase the correspondence, proving Γ ⊢ α means having a program that, given values with the types listed in Γ, manufactures an object of type α. An axiom corresponds to the introduction of a new variable with a new, unconstrained type, the implication introduction rule corresponds to function abstraction and the implication elimination rule corresponds to function application. Observe that the correspondence is not exact if the context Γ is taken to be a set of formulas as, e.g., the λ-terms λx.λy.x and λx.λy.y of type α → α → α would not be distinguished in the correspondence. Examples are given below.
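The last point can be seen in Haskell (the names are illustrative): the two λ-terms are two different programs, and hence two genuinely different proofs, inhabiting the same type.

firstProof :: a -> a -> a      -- corresponds to λx.λy.x
firstProof  = \x _ -> x

secondProof :: a -> a -> a     -- corresponds to λx.λy.y
secondProof = \_ y -> y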
Howard showed that the correspondence extends to other connectives of the logic and other constructions of simply typed lambda calculus. Seen at an abstract level, the correspondence can then be summarized as shown in the following table. Especially, it also shows that the notion of normal forms in lambda calculus matches Prawitz's notion of normal deduction in natural deduction, from which it follows that the algorithms for the type inhabitation problem can be turned into algorithms for deciding intuitionistic provability.
Howard's correspondence naturally extends to other extensions of natural deduction and simply typed lambda calculus. Here is a non-exhaustive list:
Girard–Reynolds System F as a common language for both second-order propositional logic and polymorphic lambda calculus (see the sketch following this list),
higher-order logic and Girard's System Fω
inductive types as algebraic data type
necessity in modal logic and staged computation
possibility in modal logic and monadic types for effects
The calculus corresponds to relevant logic.
The local truth (∇) modality in Grothendieck topology or the equivalent "lax" modality (◯) of Benton, Bierman, and de Paiva (1998) correspond to CL-logic describing "computation types".
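As an illustration of the first item, second-order quantification is enough to define the other connectives; in Haskell with the RankNTypes extension (an illustrative sketch, with names chosen for exposition), conjunction can be encoded as ∀γ. (α → β → γ) → γ:

{-# LANGUAGE RankNTypes #-}

type AndF a b = forall c. (a -> b -> c) -> c

pairF :: a -> b -> AndF a b
pairF x y = \k -> k x y

fstF :: AndF a b -> a
fstF p = p (\x _ -> x)

sndF :: AndF a b -> b
sndF p = p (\_ y -> y)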
Classical logic and control operators
At the time of Curry, and also at the time of Howard, the proofs-as-programs correspondence concerned only intuitionistic logic, i.e. a logic in which, in particular, Peirce's law was not deducible. The extension of the correspondence to Peirce's law and hence to classical logic became clear from the work of Griffin on typing operators that capture the evaluation context of a given program execution so that this evaluation context can later be reinstalled. The basic Curry–Howard-style correspondence for classical logic is given below. Note the correspondence between the double-negation translation used to map classical proofs to intuitionistic logic and the continuation-passing-style translation used to map lambda terms involving control to pure lambda terms. More particularly, call-by-name continuation-passing-style translations relate to Kolmogorov's double negation translation and call-by-value continuation-passing-style translations relate to a kind of double-negation translation due to Kuroda.
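In Haskell, this extension is visible in the type of the control operator callCC from the Control.Monad.Cont module (provided by the mtl library): up to wrapping every conclusion in the continuation monad, its type is Peirce's law ((α → β) → α) → α. The name peirce below is only an illustrative alias.

import Control.Monad.Cont (Cont, callCC)

-- A control operator inhabits the (wrapped) type of Peirce's law.
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC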
A finer Curry–Howard correspondence exists for classical logic if one defines classical logic not by adding an axiom such as Peirce's law, but by allowing several conclusions in sequents. In the case of classical natural deduction, there exists a proofs-as-programs correspondence with the typed programs of Parigot's λμ-calculus.
Sequent calculus
A proofs-as-programs correspondence can also be set up for the formalism known as Gentzen's sequent calculus, but it is not a correspondence with a well-defined pre-existing model of computation, as it was for Hilbert-style deduction and natural deduction.
Sequent calculus is characterized by the presence of left introduction rules, right introduction rules and a cut rule that can be eliminated. The structure of sequent calculus relates to a calculus whose structure is close to that of some abstract machines.
Related proofs-as-programs correspondences
The role of de Bruijn
N. G. de Bruijn used the lambda notation for representing proofs of the theorem checker Automath, and represented propositions as "categories" of their proofs. This was in the late 1960s, the same period in which Howard wrote his manuscript; de Bruijn was likely unaware of Howard's work and stated the correspondence independently (Sørensen & Urzyczyn [1998] 2006, pp. 98–99). Some researchers tend to use the term Curry–Howard–de Bruijn correspondence in place of Curry–Howard correspondence.
BHK interpretation
The BHK interpretation interprets intuitionistic proofs as functions but it does not specify the class of functions relevant for the interpretation. If one takes lambda calculus for this class of function, then the BHK interpretation tells the same as Howard's correspondence between natural deduction and lambda calculus.
Realizability
Kleene's recursive realizability splits proofs of intuitionistic arithmetic into the pair of a recursive function and of a proof of a formula expressing that the recursive function "realizes", i.e. correctly instantiates, the disjunctions and existential quantifiers of the initial formula so that the formula becomes true.
Kreisel's modified realizability applies to intuitionistic higher-order predicate logic and shows that the simply typed lambda term inductively extracted from the proof realizes the initial formula. In the case of propositional logic, it coincides with Howard's statement: the extracted lambda term is the proof itself (seen as an untyped lambda term) and the realizability statement is a paraphrase of the fact that the extracted lambda term has the type that the formula means (seen as a type).
Gödel's dialectica interpretation realizes (an extension of) intuitionistic arithmetic with computable functions. The connection with lambda calculus is unclear, even in the case of natural deduction.
Curry–Howard–Lambek correspondence
Joachim Lambek showed in the early 1970s that the proofs of intuitionistic propositional logic and the combinators of typed combinatory logic share a common equational theory which is that of cartesian closed categories. The expression Curry–Howard–Lambek correspondence is now used by some people to refer to the three-way isomorphism between intuitionistic logic, typed lambda calculus and cartesian closed categories, with objects being interpreted as types or propositions and morphisms as terms or proofs. The correspondence works at the equational level and is not the expression of a syntactic identity of structures as is the case for each of Curry's and Howard's correspondences: i.e. the structure of a well-defined morphism in a cartesian-closed category is not comparable to the structure of a proof of the corresponding judgment in either Hilbert-style logic or natural deduction. To clarify this distinction, the underlying syntactic structure of cartesian closed categories is rephrased below.
Objects (types) are defined by
is an object
if and are objects then and are objects.
Morphisms (terms) are defined by
, , , and are morphisms
if is a morphism, λ is a morphism
if and are morphisms, and are morphisms.
Well-defined morphisms (typed terms) are defined by the following typing rules (in which the usual categorical morphism notation is replaced with sequent calculus notation ).
Identity:
Composition:
Unit type (terminal object):
Cartesian product:
Left and right projection:
Currying:
Application:
Finally, the equations of the category are
(if well-typed)
These equations imply the following -laws:
Now, there exists such that iff is provable in implicational intuitionistic logic.
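The currying structure that makes a category cartesian closed is directly visible in Haskell (an illustrative sketch; the primed names avoid clashing with the Prelude's curry and uncurry):

curry' :: ((a, b) -> c) -> (a -> (b -> c))
curry' f x y = f (x, y)

uncurry' :: (a -> (b -> c)) -> ((a, b) -> c)
uncurry' g (x, y) = g x y

apply :: (a -> b, a) -> b          -- the evaluation morphism
apply (f, x) = f x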
Examples
Thanks to the Curry–Howard correspondence, a typed expression whose type corresponds to a logical formula is analogous to a proof of that formula. Here are examples.
The identity combinator seen as a proof of α → α in Hilbert-style logic
As an example, consider a proof of the theorem α → α. In lambda calculus, this is the type of the identity function I = λx.x and in combinatory logic, the identity function is obtained by applying S = λfgx.fx(gx) twice to K = λxy.x. That is, I = ((S K) K). As a description of a proof, this says that the following steps can be used to prove α → α:
instantiate the second axiom scheme with the formulas α, β → α and α to obtain a proof of (α → ((β → α) → α)) → ((α → (β → α)) → (α → α)),
instantiate the first axiom scheme once with α and β → α to obtain a proof of α → ((β → α) → α),
instantiate the first axiom scheme a second time with α and β to obtain a proof of α → (β → α),
apply modus ponens twice to obtain a proof of α → α
In general, the procedure is that whenever the program contains an application of the form (P Q), these steps should be followed:
First prove theorems corresponding to the types of P and Q.
Since P is being applied to Q, the type of P must have the form α → β and the type of Q must have the form α for some α and β. Therefore, it is possible to detach the conclusion, β, via the modus ponens rule.
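The derivation can be replayed in Haskell (an illustrative sketch): the type inferred for s k k is exactly α → α, the theorem being proved.

k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

identity :: a -> a       -- the type of S K K, i.e. the proved theorem α → α
identity = s k k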
The composition combinator seen as a proof of (β → α) → (γ → β) → γ → α in Hilbert-style logic
As a more complicated example, let's look at the theorem that corresponds to the B function. The type of B is (β → α) → (γ → β) → γ → α. B is equivalent to (S (K S) K). This is our roadmap for the proof of the theorem (β → α) → (γ → β) → γ → α.
The first step is to construct (K S). To make the antecedent of the K axiom look like the S axiom, set equal to , and equal to (to avoid variable collisions):
Since the antecedent here is just S, the consequent can be detached using Modus Ponens:
This is the theorem that corresponds to the type of (K S). Now apply S to this expression. Taking S as follows
,
put , , and , yielding
and then detach the consequent:
This is the formula for the type of (S (K S)). A special case of this theorem has :
This last formula must be applied to K. Specialize K again, this time by replacing with and with :
This is the same as the antecedent of the prior formula so, detaching the consequent:
Switching the names of the variables and gives us
which was what remained to prove.
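Here too the roadmap can be checked by a type inferencer: in Haskell (an illustrative sketch, with s and k defined as in the previous example), the term s (k s) k is assigned the type of B.

k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

bCombinator :: (b -> a) -> (c -> b) -> c -> a    -- the type of B
bCombinator = s (k s) k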
The normal proof of (β → α) → (γ → β) → γ → α in natural deduction seen as a λ-term
The diagram below gives a proof of (β → α) → (γ → β) → γ → α in natural deduction and shows how it can be interpreted as the λ-expression λa.λb.λg. a (b g) of type (β → α) → (γ → β) → γ → α.
a:β → α, b:γ → β, g:γ ⊢ b : γ → β a:β → α, b:γ → β, g:γ ⊢ g : γ
——————————————————————————————————— ————————————————————————————————————————————————————————————————————
a:β → α, b:γ → β, g:γ ⊢ a : β → α a:β → α, b:γ → β, g:γ ⊢ b g : β
————————————————————————————————————————————————————————————————————————
a:β → α, b:γ → β, g:γ ⊢ a (b g) : α
————————————————————————————————————
a:β → α, b:γ → β ⊢ λ g. a (b g) : γ → α
————————————————————————————————————————
a:β → α ⊢ λ b. λ g. a (b g) : (γ → β) → γ → α
————————————————————————————————————
⊢ λ a. λ b. λ g. a (b g) : (β → α) → (γ → β) → γ → α
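The extracted term is an ordinary functional program; in Haskell it reads as follows (an illustrative sketch keeping the variable names of the derivation, which live in a different namespace from the type variables):

proofTerm :: (b -> a) -> (c -> b) -> c -> a   -- i.e. (β → α) → (γ → β) → γ → α
proofTerm = \a b g -> a (b g)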
Other applications
Recently, the isomorphism has been proposed as a way to define search space partition in genetic programming. The method indexes sets of genotypes (the program trees evolved by the GP system) by their Curry–Howard isomorphic proof (referred to as a species).
As noted by INRIA research director Bernard Lang, the Curry–Howard correspondence constitutes an argument against the patentability of software: since algorithms are mathematical proofs, patentability of the former would imply patentability of the latter. A theorem could be private property; a mathematician would have to pay for using it, and to trust the company that sells it but keeps its proof secret and rejects responsibility for any errors.
Generalizations
The correspondences listed here go much farther and deeper. For example, cartesian closed categories are generalized by closed monoidal categories. The internal language of these categories is the linear type system (corresponding to linear logic), which generalizes simply-typed lambda calculus as the internal language of cartesian closed categories. Moreover, these can be shown to correspond to cobordisms, which play a vital role in string theory.
An extended set of equivalences is also explored in homotopy type theory, which became a very active area of research around 2013 and still is. Here, type theory is extended by the univalence axiom ("equivalence is equivalent to equality") which permits homotopy type theory to be used as a foundation for all of mathematics (including set theory and classical logic, providing new ways to discuss the axiom of choice and many other things). That is, the Curry–Howard correspondence that proofs are elements of inhabited types is generalized to the notion of homotopic equivalence of proofs (as paths in space, the identity type or equality type of type theory being interpreted as a path).
References
Seminal references
De Bruijn, Nicolaas (1968), Automath, a language for mathematics, Department of Mathematics, Eindhoven University of Technology, TH-report 68-WSK-05. Reprinted in revised form, with two pages commentary, in: Automation and Reasoning, vol 2, Classical papers on computational logic 1967–1970, Springer Verlag, 1983, pp. 159–200.
Extensions of the correspondence
. (Full version of the paper presented at Logic Colloquium '90, Helsinki. Abstract in JSL 56(3):1139–1140, 1991.)
. (Full version of a paper presented at Logic Colloquium '91, Uppsala. Abstract in JSL 58(2):753–754, 1993.)
. (Full version of a paper presented at 2nd WoLLIC'95, Recife. Abstract in Journal of the Interest Group in Pure and Applied Logics 4(2):330–332, 1996.)
, concerns the adaptation of proofs-as-programs program synthesis to coarse-grain and imperative program development problems, via a method the authors call the Curry–Howard protocol. Includes a discussion of the Curry–Howard correspondence from a Computer Science perspective.
. (Full version of a paper presented at LSFA 2010, Natal, Brazil.)
Philosophical interpretations
. (Early version presented at Logic Colloquium '88, Padova. Abstract in JSL 55:425, 1990.)
. (Early version presented at Fourteenth International Wittgenstein Symposium (Centenary Celebration) held in Kirchberg/Wechsel, August 13–20, 1989.)
Synthetic papers
, the contribution of de Bruijn by himself.
, contains a synthetic introduction to the Curry–Howard correspondence.
, contains a synthetic introduction to the Curry–Howard correspondence.
Books
, reproduces the seminal papers of Curry-Feys and Howard, a paper by de Bruijn and a few other papers.
, notes on proof theory and type theory, that includes a presentation of the Curry–Howard correspondence, with a focus on the formulae-as-types correspondence
, notes on proof theory with a presentation of the Curry–Howard correspondence.
, concerns the adaptation of proofs-as-programs program synthesis to coarse-grain and imperative program development problems, via a method the authors call the Curry–Howard protocol. Includes a discussion of the Curry–Howard correspondence from a Computer Science perspective.
Further reading
— gives a categorical view of "what happens" in the Curry–Howard correspondence.
External links
Howard on Curry-Howard
The Curry–Howard Correspondence in Haskell
The Monad Reader 6: Adventures in Classical-Land: Curry–Howard in Haskell, Pierce's law.
1934 in computing
1958 in computing
1969 in computing
Dependently typed programming
Proof theory
Logic in computer science
Type theory
Philosophy of computer science |
100563 | https://en.wikipedia.org/wiki/System%20on%20a%20chip | System on a chip | A system on a chip (SoC; or ) is an integrated circuit (also known as a "chip") that integrates all or most components of a computer or other electronic system. These components almost always include a central processing unit (CPU), memory, input/output ports and secondary storage, often alongside other components such as radio modems and a graphics processing unit (GPU) – all on a single substrate or microchip. It may contain digital, analog, mixed-signal, and often radio frequency signal processing functions (otherwise it is considered only an application processor).
Higher-performance SoCs are often paired with dedicated and physically separate memory and secondary storage (almost always LPDDR and eUFS or eMMC, respectively) chips, that may be layered on top of the SoC in what's known as a package on package (PoP) configuration, or be placed close to the SoC. Additionally, SoCs may use separate wireless modems.
SoCs are in contrast to the common traditional motherboard-based PC architecture, which separates components based on function and connects them through a central interfacing circuit board. Whereas a motherboard houses and connects detachable or replaceable components, SoCs integrate all of these components into a single integrated circuit. An SoC will typically integrate a CPU, graphics and memory interfaces, hard-disk and USB connectivity, random-access and read-only memories and secondary storage and/or their controllers on a single circuit die, whereas a motherboard would connect these modules as discrete components or expansion cards.
An SoC integrates a microcontroller, microprocessor or perhaps several processor cores with peripherals like a GPU, Wi-Fi and cellular network radio modems, and/or one or more coprocessors. Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with even more advanced peripherals.
More tightly integrated computer system designs improve performance and reduce power consumption as well as semiconductor die area than multi-chip designs with equivalent functionality. This comes at the cost of reduced replaceability of components. By definition, SoC designs are fully or nearly fully integrated across different component modules. For these reasons, there has been a general trend towards tighter integration of components in the computer hardware industry, in part due to the influence of SoCs and lessons learned from the mobile and embedded computing markets. SoCs can be viewed as part of a larger trend towards embedded computing and hardware acceleration.
SoCs are very common in the mobile computing (such as in smartphones and tablet computers) and edge computing markets. They are also commonly used in embedded systems such as WiFi routers and the Internet of Things.
Types
In general, there are three distinguishable types of SoCs:
SoCs built around a microcontroller,
SoCs built around a microprocessor, often found in mobile phones;
Specialized application-specific integrated circuit SoCs designed for specific applications that do not fit into the above two categories
Applications
SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches and netbooks as well as embedded systems and in applications where previously microcontrollers would be used.
Embedded systems
Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability and mean time between failure, and SoCs offer more advanced functionality and computing power than microcontrollers. Applications include AI acceleration, embedded machine vision, data collection, telemetry, vector processing and ambient intelligence. Often embedded SoCs target the internet of things, industrial internet of things and edge computing markets.
Mobile computing
Mobile computing based SoCs always bundle processors, memories, on-chip caches, wireless networking capabilities and often digital camera hardware and firmware. With increasing memory sizes, high-end SoCs often include no memory or flash storage on the chip itself; instead, the memory and flash memory are placed right next to, or on top of (package on package), the SoC. Some examples of mobile computing SoCs include:
Samsung Electronics: list, typically based on ARM
Exynos, used mainly by Samsung's Galaxy series of smartphones
Qualcomm:
Snapdragon (list), used in many LG, Xiaomi, Google Pixel, HTC and Samsung Galaxy smartphones. In 2018, Snapdragon SoCs are being used as the backbone of laptop computers running Windows 10, marketed as "Always Connected PCs".
Personal computers
In 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous Acorn ARM-powered computers, these were four discrete chips. The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers.
SoCs are being applied to mainstream personal computers as of 2018. They are particularly applied to laptops and tablet PCs. Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighter integration of hardware and firmware modules, and LTE and other wireless network communications integrated on chip (integrated network interface controllers).
ARM-based:
Qualcomm Snapdragon
ARM250
ARM7500(FE)
Apple M1
x86-based:
Intel Core CULV
Structure
An SoC consists of hardware functional units, including microprocessors that run software code, as well as a communications subsystem to connect, control, direct and interface between these functional modules.
Functional components
Processor cores
An SoC must have at least one processor core, but typically an SoC has more than one core. Processor cores can be a microcontroller, microprocessor (μP), digital signal processor (DSP) or application-specific instruction set processor (ASIP) core. ASIPs have instruction sets that are customized for an application domain and designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition.
Whether single-core, multi-core or manycore, SoC processor cores typically use RISC instruction set architectures. RISC architectures are advantageous over CISC processors for SoCs because they require less digital logic, and therefore less power and area on board, and in the embedded and mobile computing markets, area and power are often highly constrained. In particular, SoC processor cores often use the ARM architecture because it is a soft processor specified as an IP core and is more power efficient than x86.
Memory
SoCs must have semiconductor memory blocks to perform their computation, as do microcontrollers and other embedded systems. Depending on the application, SoC memory may form a memory hierarchy and cache hierarchy. In the mobile computing market, this is common, but in many low-power embedded microcontrollers, this is not necessary. Memory technologies for SoCs include read-only memory (ROM), random-access memory (RAM), Electrically Erasable Programmable ROM (EEPROM) and flash memory. As in other computer systems, RAM can be subdivided into relatively faster but more expensive static RAM (SRAM) and the slower but cheaper dynamic RAM (DRAM). When an SoC has a cache hierarchy, SRAM will usually be used to implement processor registers and cores' L1 caches whereas DRAM will be used for lower levels of the cache hierarchy including main memory. "Main memory" may be specific to a single processor (which can be multi-core) when the SoC has multiple processors, in which case it is distributed memory and must be sent over the on-chip interconnect to be accessed by a different processor. For further discussion of multi-processing memory issues, see cache coherence and memory latency.
Interfaces
SoCs include external interfaces, typically for communication protocols. These are often based upon industry standards such as USB, FireWire, Ethernet, USART, SPI, HDMI, I²C, etc. These interfaces will differ according to the intended application. Wireless networking protocols such as Wi-Fi, Bluetooth, 6LoWPAN and near-field communication may also be supported.
When needed, SoCs include analog interfaces including analog-to-digital and digital-to-analog converters, often for signal processing. These may be able to interface with different types of sensors or actuators, including smart transducers. They may interface with application-specific modules or shields. Or they may be internal to the SoC, such as if an analog sensor is built in to the SoC and its readings must be converted to digital signals for mathematical processing.
Digital signal processors
Digital signal processor (DSP) cores are often included on SoCs. They perform signal processing operations in SoCs for sensors, actuators, data collection, data analysis and multimedia processing. DSP cores typically feature very long instruction word (VLIW) and single instruction, multiple data (SIMD) instruction set architectures, and are therefore highly amenable to exploiting instruction-level parallelism through parallel processing and superscalar execution. DSP cores most often feature application-specific instructions, and as such are typically application-specific instruction-set processors (ASIP). Such application-specific instructions correspond to dedicated hardware functional units that compute those instructions.
Typical DSP instructions include multiply-accumulate, Fast Fourier transform, fused multiply-add, and convolutions.
Other
As with other computer systems, SoCs require timing sources to generate clock signals, control execution of SoC functions and provide time context to signal processing applications of the SoC, if needed. Popular time sources are crystal oscillators and phase-locked loops.
SoC peripherals include counter-timers, real-time timers and power-on reset generators. SoCs also include voltage regulators and power management circuits.
Intermodule communication
SoCs comprise many execution units. These units must often send data and instructions back and forth. Because of this, all but the most trivial SoCs require communications subsystems. Originally, as with other microcomputer technologies, data bus architectures were used, but recently designs based on sparse intercommunication networks known as networks-on-chip (NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future.
Bus-based communication
Historically, a shared global computer bus typically connected the different components, also called "blocks" of the SoC. A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard.
Direct memory access controllers route data directly between external interfaces and SoC memory, bypassing the CPU or control unit, thereby increasing the data throughput of the SoC. This is similar to some device drivers of peripherals on component-based multi-chip module PC architectures.
Computer buses are limited in scalability, supporting only up to tens of cores (multicore) on a single chip. Wire delay is not scalable due to continued miniaturization, system performance does not scale with the number of cores attached, the SoC's operating frequency must decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supporting manycore systems on chip.
Network on a chip
In the late 2010s, a trend of SoCs implementing communications subsystems in terms of a network-like topology instead of bus-based protocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost. This has led to the emergence of interconnection networks with router-based packet switching known as "networks on chip" (NoCs) to overcome the bottlenecks of bus-based networks.
Networks-on-chip have advantages including destination- and application-specific routing, greater power efficiency and reduced possibility of bus contention. Network-on-chip architectures take inspiration from communication protocols like TCP and the Internet protocol suite for on-chip communication, although they typically have fewer network layers. Optimal network-on-chip network architectures are an ongoing area of much research interest. NoC architectures range from traditional distributed computing network topologies such as torus, hypercube, meshes and tree networks to genetic algorithm scheduling to randomized algorithms such as random walks with branching and randomized time to live (TTL).
Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limited floorplanning choices as the number of cores in SoCs increase, so as three-dimensional integrated circuits (3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs.
Design flow
A system on a chip consists of both the hardware, described in , and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. The design flow for an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations () and constraints.
Most SoCs are developed from pre-qualified hardware component IP core specifications for the hardware elements and execution units, collectively "blocks", described above, together with software device drivers that may control their operation. Of particular importance are the protocol stacks that drive industry-standard interfaces like USB. The hardware blocks are put together using computer-aided design tools, specifically electronic design automation tools; the software modules are integrated using a software integrated development environment.
SoC components are also often designed in high-level programming languages such as C++, MATLAB or SystemC and converted to RTL designs through high-level synthesis (HLS) tools such as C to HDL or flow to HDL. HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high-level language commonly known to computer engineers in a manner independent of time scales, which are typically specified in HDL. Other components can remain software and be compiled and embedded onto soft-core processors included in the SoC as modules in HDL as IP cores.
Once the architecture of the SoC has been defined, any new hardware elements are written in an abstract hardware description language termed register transfer level (RTL) which defines the circuit behavior, or synthesized into RTL from a high level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is called glue logic.
Design verification
Chips are verified for logical correctness before being sent to a semiconductor foundry. This process is called functional verification and it accounts for a significant portion of the time and energy expended in the chip design life cycle, often quoted as 70%. With the growing complexity of chips, hardware verification languages like SystemVerilog, SystemC, e, and OpenVera are being used. Bugs found in the verification stage are reported to the designer.
Traditionally, engineers have employed simulation acceleration, emulation or prototyping on reprogrammable hardware to verify and debug hardware and software for SoC designs prior to the finalization of the design, known as tape-out. Field-programmable gate arrays (FPGAs) are favored for prototyping SoCs because FPGA prototypes are reprogrammable, allow debugging and are more flexible than application-specific integrated circuits (ASICs).
With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million.
FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer.
In parallel, the hardware elements are grouped and passed through a process of logic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as a netlist describing the design as a physical circuit and its interconnections. These netlists are combined with the glue logic connecting the components to produce the schematic description of the SoC as a circuit which can be printed onto a chip. This process is known as place and route and precedes tape-out in the event that the SoCs are produced as application-specific integrated circuits (ASIC).
Optimization goals
SoCs must optimize power use, area on die, communication, positioning for locality between modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization was not necessary, the engineers would use a multi-chip module architecture without accounting for the area utilization, power consumption or performance of the system to the same extent.
Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hard combinatorial optimization problem, and can indeed be NP-hard fairly easily. Therefore, sophisticated optimization algorithms are often required and it may be practical to use approximation algorithms or heuristics in some cases. Additionally, most SoC designs contain multiple variables to optimize simultaneously, so Pareto efficient solutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducing trade-offs in system design.
For broader coverage of trade-offs and requirements analysis, see requirements engineering.
Targets
Power consumption
SoCs are optimized to minimize the electrical power used to perform the SoC's functions. Most SoCs must use low power. SoC systems often require long battery life (as in smartphones), may have to operate for months or years without a power source while maintaining autonomous function, and are often limited in power use because a large number of embedded SoCs are networked together in an area. Additionally, energy costs can be high and conserving energy will reduce the total cost of ownership of the SoC. Finally, waste heat from high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is the integral of power consumed with respect to time, E = ∫ P(t) dt, and the average rate of power consumption is the product of current and voltage, P = IV. Equivalently, by Ohm's law, power is current squared times resistance or voltage squared divided by resistance: P = I²R = V²/R.
SoCs are frequently embedded in portable devices such as smartphones, GPS navigation devices, digital watches (including smartwatches) and netbooks. Customers want long battery lives for mobile computing devices, another reason that power consumption must be minimized in SoCs. Multimedia applications are often executed on these devices, including video games, video streaming and image processing, all of which have grown in computational complexity in recent years with user demands and expectations for higher-quality multimedia. Computation is more demanding as expectations move towards 3D video at high resolution with multiple standards, so SoCs performing multimedia tasks must be computationally capable platforms while using little enough power to run off a standard mobile battery.
Performance per watt
SoCs are optimized to maximize power efficiency in performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such as edge computing, distributed processing and ambient intelligence require a certain level of computational performance, but power is limited in most SoC environments. The ARM architecture has greater performance per watt than x86 in embedded systems, so it is preferred over x86 for most SoC applications requiring an embedded processor.
Waste heat
SoC designs are optimized to minimize waste heat output on the chip. As with other integrated circuits, heat generated due to high power density is the bottleneck to further miniaturization of components. The power densities of high speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erode reliability of the circuit over time. High temperatures and thermal stress negatively impact reliability, causing stress migration, electromigration, wire bonding failures, metastability, decreased mean time between failures and other performance degradation of the SoC over time.
In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of high transistor counts on modern devices, oftentimes a layout of sufficient throughput and high transistor density is physically realizable from fabrication processes but would result in unacceptably high amounts of heat in the circuit's volume.
These thermal effects force SoC and other chip designers to apply conservative design margins, creating less performant devices to mitigate the risk of catastrophic failure. Due to increased transistor densities as length scales get smaller, each process generation produces more heat output than the last. Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneous heat fluxes, which cannot be effectively mitigated by uniform passive cooling.
Throughput
SoCs are optimized to maximize computational and communications throughput.
Latency
SoCs are optimized to minimize latency for some or all of their functions. This can be accomplished by laying out elements with proper proximity and locality to each other to minimize the interconnection delays and maximize the speed at which data is communicated between modules, functional units and memories. In general, optimizing to minimize latency is an NP-complete problem equivalent to the boolean satisfiability problem.
For tasks running on processor cores, latency and throughput can be improved with task scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints.
Methodologies
Systems on chip are modeled with standard hardware verification and validation techniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect to multiple-criteria decision analysis on the above optimization targets.
Task scheduling
Task scheduling is an important activity in any computer system with multiple processes or threads sharing a single processor core. It is important to reduce latency and increase throughput for embedded software running on an SoC's processor cores. Not every important computing activity in a SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involving shared resources.
SoCs often schedule tasks according to network scheduling and randomized scheduling algorithms.
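As a generic illustration of deadline-driven scheduling of shared processor time (deliberately not the network or randomized schedulers mentioned above), the sketch below runs hypothetical tasks in earliest-deadline-first order, with every task released at time zero and no preemption:

```python
# Minimal non-preemptive earliest-deadline-first sketch; task names, runtimes and
# deadlines are invented for illustration.
import heapq

def edf_schedule(tasks):
    """Run tasks in order of nearest deadline and report whether each deadline is met."""
    heap = [(deadline, runtime, name) for name, runtime, deadline in tasks]
    heapq.heapify(heap)
    time, order = 0, []
    while heap:
        deadline, runtime, name = heapq.heappop(heap)
        time += runtime
        order.append((name, time, time <= deadline))
    return order

tasks = [("video_decode", 4, 10), ("sensor_poll", 1, 3), ("net_tx", 2, 8)]
for name, finish, met in edf_schedule(tasks):
    print(f"{name}: finished at t={finish}, deadline met: {met}")
```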
Pipelining
Hardware and software tasks are often pipelined in processor design. Pipelining is an important principle for speedup in computer architecture. Pipelines are frequently used in GPUs (graphics pipeline) and RISC processors (evolutions of the classic RISC pipeline), but are also applied to application-specific tasks such as digital signal processing and multimedia manipulations in the context of SoCs.
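A software analogue gives the flavour of how pipelined stages overlap work on a stream of data; the stages and sample values below are invented and are not a model of any particular hardware pipeline:

```python
# Toy three-stage pipeline built from generators: each stage consumes the previous
# stage's output as soon as it is available.
def source(samples):
    for s in samples:
        yield s

def low_pass(stream, prev=0.0, alpha=0.5):
    for s in stream:
        prev = alpha * s + (1 - alpha) * prev   # simple smoothing stage
        yield prev

def scale(stream, gain=2.0):
    for s in stream:
        yield gain * s

pipeline = scale(low_pass(source([1.0, 0.0, 1.0, 0.0])))
print(list(pipeline))   # [1.0, 0.5, 1.25, 0.625]
```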
Probabilistic modeling
SoCs are often analyzed through probabilistic models and Markov chains. For instance, Little's law allows SoC states and NoC buffers to be modeled as arrival processes and analyzed through Poisson random variables and Poisson processes.
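For example, Little's law, L = lambda * W, relates the average number of items resident in a queue to the arrival rate and the average waiting time; the sketch below applies it to a hypothetical NoC buffer with invented figures:

```python
# Back-of-the-envelope use of Little's law for a network-on-chip buffer.
arrival_rate = 2.0e9     # packets per second entering the buffer (assumption)
avg_wait = 12e-9         # average time a packet spends queued, in seconds (assumption)
avg_occupancy = arrival_rate * avg_wait      # L = lambda * W
print(f"average packets resident in the buffer: {avg_occupancy:.1f}")   # 24.0
```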
Markov chains
SoCs are often modeled with Markov chains, both discrete time and continuous time variants. Markov chain modeling allows asymptotic analysis of the SoC's steady state distribution of power, heat, latency and other factors to allow design decisions to be optimized for the common case.
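A minimal numerical sketch of such steady-state analysis follows, using an invented three-state power-mode chain (active, idle, sleep); the transition probabilities are not taken from any real SoC:

```python
# Stationary distribution of a hypothetical discrete-time Markov chain.
import numpy as np

P = np.array([[0.90, 0.08, 0.02],    # rows: current mode, columns: next mode
              [0.20, 0.70, 0.10],
              [0.05, 0.15, 0.80]])

# The stationary distribution pi satisfies pi P = pi with entries summing to 1,
# i.e. it is the left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print(pi)   # long-run fraction of time spent in each power mode
```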
Fabrication
SoC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology. The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity.
When all known bugs have been rectified and these have been re-verified and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing.
SoCs can be fabricated by several technologies, including:
Full custom ASIC
Standard cell ASIC
Field-programmable gate array (FPGA)
ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower volume designs, but after enough units of production ASICs reduce the total cost of ownership.
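The crossover in total cost of ownership can be sketched with a simple break-even calculation; every cost figure below is invented for illustration:

```python
# ASIC total cost overtakes FPGA once volume exceeds NRE / (unit-cost difference).
fpga_unit_cost = 95.0          # dollars per device (assumption)
asic_unit_cost = 6.0           # dollars per device (assumption)
asic_nre = 2_000_000.0         # one-time mask and engineering cost (assumption)

break_even_units = asic_nre / (fpga_unit_cost - asic_unit_cost)
print(round(break_even_units))   # roughly 22472 units
```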
SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well.
However, like most very-large-scale integration (VLSI) designs, the total cost is higher for one large chip than for the same functionality distributed over several smaller chips, because of lower yields and higher non-recurring engineering costs.
When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler. Another reason SiP may be preferred is waste heat may be too high in a SoC for a given purpose because functional components are too close together, and in an SiP heat will dissipate better from different functional modules since they are physically further apart.
Benchmarks
SoC research and development often compares many options. Benchmarks, such as COSMIC, are developed to help such evaluations.
See also
List of system-on-a-chip suppliers
Post-silicon validation
ARM architecture
Single-board computer
System in package
Network on a chip
Programmable SoC
Application-specific instruction set processor (ASIP)
Platform-based design
Lab on a chip
Organ on a chip in biomedical technology
Multi-chip module
Notes
References
Further reading
External links
SOCC Annual IEEE International SoC Conference
Baya free SoC platform assembly and IP integration tool
Systems on Chip for Embedded Applications, Auburn University seminar in VLSI
Instant SoC SoC for FPGAs defined by C++
Computer engineering
Electronic design
Microtechnology
Hardware acceleration
Computer systems
Application-specific integrated circuits |
66849595 | https://en.wikipedia.org/wiki/2021%20Troy%20Trojans%20baseball%20team | 2021 Troy Trojans baseball team | The 2021 Troy Trojans baseball team represented Troy University during the 2021 NCAA Division I baseball season. The Trojans played their home games at Riddle–Pace Field and were led by sixth-year head coach Mark Smartt. They were members of the Sun Belt Conference.
Preseason
Signing Day Recruits
Sun Belt Conference Coaches Poll
The Sun Belt Conference Coaches Poll was released on February 15, 2021 and the Trojans were picked to finish fourth in the East Division with 44 votes.
Preseason All-Sun Belt Team & Honors
Aaron Funk (LR, Pitcher)
Jordan Jackson (GASO, Pitcher)
Conor Angel (LA, Pitcher)
Wyatt Divis (UTA, Pitcher)
Lance Johnson (TROY, Pitcher)
Caleb Bartolero (TROY, Catcher)
William Sullivan (TROY, 1st Base)
Luke Drumheller (APP, 2nd Base)
Drew Frederic (TROY, Shortstop)
Cooper Weiss (CCU, 3rd Base)
Ethan Wilson (USA, Outfielder)
Parker Chavers (CCU, Outfielder)
Rigsby Mosley (TROY, Outfielder)
Eilan Merejo (GSU, Designated Hitter)
Andrew Beesly (ULM, Utility)
Personnel
Roster
Coaching staff
Schedule and results
Schedule Source:
*Rankings are based on the team's current ranking in the D1Baseball poll.
Postseason
Conference Accolades
Player of the Year: Mason McWhorter – GASO
Pitcher of the Year: Hayden Arnold – LR
Freshman of the Year: Garrett Gainous – TROY
Newcomer of the Year: Drake Osborn – LA
Coach of the Year: Mark Calvi – USA
All Conference First Team
Connor Cooke (LA)
Hayden Arnold (LR)
Carlos Tavera (UTA)
Nick Jones (GASO)
Drake Osborn (LA)
Robbie Young (APP)
Luke Drumheller (APP)
Drew Frederic (TROY)
Ben Klutts (ARST)
Mason McWhorter (GASO)
Logan Cerny (TROY)
Ethan Wilson (USA)
Cameron Jones (GSU)
Ben Fitzgerald (LA)
All Conference Second Team
JoJo Booker (USA)
Tyler Tuthill (APP)
Jeremy Lee (USA)
Aaron Barkley (LR)
BT Riopelle (CCU)
Dylan Paul (UTA)
Travis Washburn (ULM)
Eric Brown (CCU)
Grant Schulz (ULM)
Tyler Duncan (ARST)
Parker Chavers (CCU)
Josh Smith (GSU)
Andrew Miller (UTA)
Noah Ledford (GASO)
References:
Rankings
References
Troy
Troy Trojans baseball seasons
Troy Trojans baseball |
31147994 | https://en.wikipedia.org/wiki/Xymon | Xymon | Xymon, a network monitoring application using free software, operates under the GNU General Public License; its central server runs on Unix and Linux hosts.
History
The application was inspired by the open-source version of Big Brother, a network monitoring application, and maintains backward compatibility with it. Between 2002 and 2004, Henrik Storner wrote an open-source add-on called the bbgen toolkit; in March 2005 a stand-alone version called Hobbit was released. Versions of this were released between 2005 and 2008, but since a prior user of the trademark "Hobbit" existed, the tool was finally renamed Xymon. In January 2012, Quest Software discontinued development of Big Brother.
Functionality
Xymon offers graphical monitoring, showing the status of various network services of each device, as well as a range of application and operating system metrics such as listing the number of mail messages queued after a defined level of downtime. The web-based graphical display uses a red/yellow/green condition icon for each host/test, on top of a colored background indicating the current worst status across all hosts and tests. The user can click on a colored icon to view more specific details and (where available) relevant graphs of metric statistics. Built-in reporting tools include SLA-type reports (availability) and the historical state of services (snapshots). Xymon supports the generation of alarms sent by email, and can also use external tools to send messages via other means (e.g. SMS).
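The "worst status wins" rollup behind that coloured page background can be illustrated with a short sketch; this is illustrative logic only, not Xymon's actual source code, and the host and test names are invented:

```python
# Each (host, test) pair has a colour; the page background takes the most severe one.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def page_background(statuses):
    """statuses: mapping of (host, test) -> colour string."""
    return max(statuses.values(), key=lambda colour: SEVERITY[colour])

statuses = {("web01", "http"): "green",
            ("web01", "disk"): "yellow",
            ("db01", "smtp"): "green"}
print(page_background(statuses))   # "yellow"
```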
Networked hosts and devices are monitored by a Xymon server using network probes supporting a large and extensible range of protocols, including SMTP, HTTP/S and DNS. Hosts that use a supported operating system can also run a Xymon client (also free software), to additionally collect operating system and application monitoring metrics and report them to the Xymon server. Clients are available for Unix and Linux (in formats including source tarball, RPM and Debian package) from the Xymon download site at SourceForge. Windows hosts can use the Big Brother client for Windows, the BBWin client or the WinPSClient written in the Windows PowerShell scripting language.
Plugins extend monitoring to new types of applications and services, and many extension scripts for Big Brother will run unchanged on Xymon.
See also
Big Brother
MRTG
Nagios
References
External links
xymon.com
Network management
Internet Protocol based network software
Free network management software
Multi-agent systems
Network analyzers
Linux security software |
14259066 | https://en.wikipedia.org/wiki/Dual%20EC%20DRBG | Dual EC DRBG | Dual_EC_DRBG (Dual Elliptic Curve Deterministic Random Bit Generator) is an algorithm that was presented as a cryptographically secure pseudorandom number generator (CSPRNG) using methods in elliptic curve cryptography. Despite wide public criticism, including a backdoor, for seven years it was one of the four (now three) CSPRNGs standardized in NIST SP 800-90A as originally published circa June 2006, until it was withdrawn in 2014.
Weakness: a potential backdoor
Weaknesses in the cryptographic security of the algorithm were known and publicly criticised well before the algorithm became part of a formal standard endorsed by the ANSI, ISO, and formerly by the National Institute of Standards and Technology (NIST). One of the weaknesses publicly identified was the potential of the algorithm to harbour a kleptographic backdoor advantageous to those who know about it—the United States government's National Security Agency (NSA)—and no one else. In 2013, The New York Times reported that documents in their possession but never released to the public "appear to confirm" that the backdoor was real, and had been deliberately inserted by the NSA as part of its Bullrun decryption program. In December 2013, a Reuters news article alleged that in 2004, before NIST standardized Dual_EC_DRBG, NSA paid RSA Security $10 million in a secret deal to use Dual_EC_DRBG as the default in the RSA BSAFE cryptography library, which resulted in RSA Security becoming the most important distributor of the insecure algorithm. RSA responded that they "categorically deny" that they had ever knowingly colluded with the NSA to adopt an algorithm that was known to be flawed, saying "we have never kept [our] relationship [with the NSA] a secret".
Sometime before its first known publication in 2004, a possible kleptographic backdoor was discovered with the Dual_EC_DRBG's design, with the design of Dual_EC_DRBG having the unusual property that it was theoretically impossible for anyone but Dual_EC_DRBG's designers (NSA) to confirm the backdoor's existence. Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG. The backdoor would allow NSA to decrypt for example SSL/TLS encryption which used Dual_EC_DRBG as a CSPRNG.
Members of the ANSI standard group, to which Dual_EC_DRBG was first submitted, were aware of the exact mechanism of the potential backdoor and how to disable it, but did not take sufficient steps to unconditionally disable the backdoor or to widely publicize it. The general cryptographic community was initially not aware of the potential backdoor, until Dan Shumow and Niels Ferguson's publication, or of Certicom's Daniel R. L. Brown and Scott Vanstone's 2005 patent application describing the backdoor mechanism.
In September 2013, The New York Times reported that internal NSA memos leaked by Edward Snowden indicated that the NSA had worked during the standardization process to eventually become the sole editor of the Dual_EC_DRBG standard, and concluded that the Dual_EC_DRBG standard did indeed contain a backdoor for the NSA. As response, NIST stated that "NIST would not deliberately weaken a cryptographic standard."
According to the New York Times story, the NSA spends $250 million per year to insert backdoors in software and hardware as part of the Bullrun program. A Presidential advisory committee subsequently set up to examine NSA's conduct recommended among other things that the US government "fully support and not undermine efforts to create encryption standards".
On April 21, 2014, NIST withdrew Dual_EC_DRBG from its draft guidance on random number generators recommending "current users of Dual_EC_DRBG transition to one of the three remaining approved algorithms as quickly as possible."
Timeline of Dual_EC_DRBG
Description
Overview
The algorithm uses a single number as its internal state. Whenever a new random number is requested, this integer is updated. The i-th state is given by
s_i = g_P(s_{i-1}).
The returned random integer is a function of the state. The i-th random number is
r_i = g_Q(s_i).
The function g_P depends on the fixed elliptic curve point P. g_Q is similar except that it uses the point Q. The points P and Q stay constant for a particular implementation of the algorithm.
Details
The algorithm allows for different constants, variable output length and other customization. For simplicity, the one described here will use the constants from curve P-256 (one of the 3 sets of constants available) and have fixed output length. The algorithm operates exclusively over a finite field GF(p) where p is prime. The state, the seed and the random numbers are all elements of this field. The field size is the P-256 prime
p = 2^256 - 2^224 + 2^192 + 2^96 - 1.
An elliptic curve over GF(p) is given by the equation
y^2 = x^3 - 3x + b (mod p),
where the constant b is the fixed curve coefficient specified for P-256.
The points on the curve are the pairs (x, y) satisfying this equation, together with the point at infinity. Two of these points are designated as the fixed points P and Q; their coordinates are listed in the standard.
A function x(.) to extract the x-coordinate is used. It "converts" from elliptic curve points to numbers of the field. Such a number can then be used as a scalar to multiply ("exponentiate") a fixed point, producing a new point, and the process repeats.
Random numbers are also truncated before being output: the most significant bits are discarded, so that with P-256 each round outputs 240 of the 256 bits.
The functions g_P and g_Q are defined by g_P(s) = x(s*P) and g_Q(s) = x(s*Q). These functions multiply the fixed points by a scalar; "raising to a power" in this context means using the special operation defined for points on elliptic curves.
The generator is seeded with an initial state s_0 drawn from the field. The i-th state and random number are then
s_i = x(s_{i-1}*P) and r_i = x(s_i*Q).
The random numbers actually output are the truncated values of r_i.
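The round structure can be made concrete with a deliberately tiny Python sketch. This is NOT the standardized generator: it uses a 17-element toy field, a small textbook curve, and stand-in points P and Q found by brute force, chosen only so that the state update s_i = x(s_{i-1}*P), the output r_i = x(s_i*Q) and the final truncation are easy to follow. The real algorithm uses the NIST P-256 constants; the seed and the number of dropped bits here are likewise arbitrary illustrative choices.

```python
# Toy-scale sketch of the Dual_EC_DRBG round structure -- not the standardized
# generator. Field, curve, points, seed and truncation width are tiny stand-ins.
p = 17                        # toy prime field (assumption)
a, b = 2, 2                   # toy curve y^2 = x^3 + a*x + b over GF(p) (assumption)
TRUNC_MASK = (1 << 3) - 1     # keep the low 3 of 5 bits; P-256 keeps 240 of 256 bits

def ec_add(P1, P2):
    """Affine point addition; None represents the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, pt):
    """Scalar multiplication by double-and-add (the 'exponentiation' in the text)."""
    acc = None
    while k > 0:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def x_of(pt):
    """x-coordinate extraction; the point at infinity maps to 1 as a toy fallback."""
    return pt[0] if pt is not None else 1

def curve_point(start_x):
    """Brute-force search for some point on the toy curve (fine at this size)."""
    for x in range(start_x, p):
        rhs = (x * x * x + a * x + b) % p
        for y in range(p):
            if y * y % p == rhs:
                return (x, y)

P_pt = curve_point(2)         # stand-ins for the fixed points P and Q
Q_pt = curve_point(P_pt[0] + 1)

def dual_ec_round(state):
    """One round: s_i = x(s_{i-1}*P), r_i = x(s_i*Q), output = truncated r_i."""
    new_state = x_of(ec_mul(state, P_pt))
    r = x_of(ec_mul(new_state, Q_pt))
    return new_state, r & TRUNC_MASK

state = 4                     # toy seed (assumption)
for _ in range(3):
    state, out = dual_ec_round(state)
    print(out)                # three truncated outputs
```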
Security
The stated purpose of including the Dual_EC_DRBG in NIST SP 800-90A is that its security is based on computational hardness assumptions from number theory. A mathematical security reduction proof can then prove that as long as the number theoretical problems are hard, the random number generator itself is secure. However, the makers of Dual_EC_DRBG did not publish a security reduction for Dual_EC_DRBG, and it was shown soon after the NIST draft was published that Dual_EC_DRBG was indeed not secure, because it output too many bits per round. The output of too many bits (along with carefully chosen elliptic curve points P and Q) is what makes the NSA backdoor possible, because it enables the attacker to revert the truncation by brute force guessing. The output of too many bits was not corrected in the final published standard, leaving Dual_EC_DRBG both insecure and backdoored.
In many other standards, constants that are meant to be arbitrary are chosen by the nothing up my sleeve number principle, where they are derived from pi or similar mathematical constants in a way that leaves little room for adjustment. However, Dual_EC_DRBG did not specify how the default P and Q constants were chosen, possibly because they were constructed by NSA to be backdoored. Because the standard committee were aware of the potential for a backdoor, a way for an implementer to choose their own secure P and Q was included. But the exact formulation in the standard was written such that use of the alleged backdoored P and Q was required for FIPS 140-2 validation, so the OpenSSL project chose to implement the backdoored P and Q, even though they were aware of the potential backdoor and would have preferred generating their own secure P and Q. The New York Times would later write that NSA had worked during the standardization process to eventually become the sole editor of the standard.
A security proof was later published for Dual_EC_DRBG by Daniel R.L. Brown and Kristian Gjøsteen, showing that the generated elliptic curve points would be indistinguishable from uniformly random elliptic curve points, and that if fewer bits were output in the final output truncation, and if the two elliptic curve points P and Q were independent, then Dual_EC_DRBG is secure. The proof relied on the assumption that three problems were hard: the decisional Diffie–Hellman assumption (which is generally accepted to be hard), and two newer less-known problems which are not generally accepted to be hard: the truncated point problem, and the x-logarithm problem. Dual_EC_DRBG was quite slow compared to many alternative CSPRNGs (which don't have security reductions), but Daniel R.L. Brown argues that the security reduction makes the slow Dual_EC_DRBG a valid alternative (assuming implementors disable the obvious backdoor). Note that Daniel R.L. Brown works for Certicom, the main owner of elliptic curve cryptography patents, so there may be a conflict of interest in promoting an EC CSPRNG.
The alleged NSA backdoor would allow the attacker to determine the internal state of the random number generator from looking at the output from a single round (32 bytes); all future output of the random number generator can then easily be calculated, until the CSPRNG is reseeded with an external source of randomness. This makes for example SSL/TLS vulnerable, since the setup of a TLS connection includes the sending of a randomly generated cryptographic nonce in the clear. NSA's alleged backdoor would depend on their knowledge of the single e such that e*Q = P. Finding e is a hard problem if P and Q are set independently ahead of time, but it is easy if whoever chooses the points constructs one from the other. e is a secret key presumably known only by NSA, and the alleged backdoor is a kleptographic asymmetric hidden backdoor. Matthew Green's blog post The Many Flaws of Dual_EC_DRBG has a simplified explanation of how the alleged NSA backdoor works by employing the discrete-log kleptogram introduced in Crypto 1997.
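Continuing the toy sketch from the Description section (same field, curve and helper functions), the following hypothetical demonstration shows why knowledge of a scalar e with P = e*Q acts as a skeleton key: here the "designer" derives P from Q using a chosen e, and an observer who knows e recovers the generator's internal state from a single truncated output, confirming it against the next one. All numbers are toy values; with the real 256-bit curve only 16 bits are dropped, so the same enumerate-and-check procedure remains practical.

```python
# Toy demonstration of the alleged backdoor mechanism, reusing p, a, b, ec_add,
# ec_mul, x_of, curve_point, TRUNC_MASK and dual_ec_round from the sketch above.
e = 5                                # the designer's secret scalar (assumption)
Q_pt = curve_point(5)                # (5, 1) on the toy curve
P_pt = ec_mul(e, Q_pt)               # P = e*Q: the backdoored relationship
                                     # (dual_ec_round now uses these points)

def lift_x(x):
    """Return some curve point with the given x-coordinate, if one exists."""
    rhs = (x * x * x + a * x + b) % p
    for y in range(p):
        if y * y % p == rhs:
            return (x, y)
    return None

def recover_state(out_i, out_next):
    """Enumerate the dropped top bits of r_i, lift each candidate back to a point R,
    and use e*R = e*(s_i*Q) = s_i*P, whose x-coordinate is the next state."""
    for hi in range(4):                          # 2 dropped bits => 4 guesses
        r_guess = out_i | (hi << 3)
        if r_guess >= p:
            continue
        R = lift_x(r_guess)
        if R is None:
            continue                             # not a valid x-coordinate
        s_guess = x_of(ec_mul(e, R))
        if (x_of(ec_mul(s_guess, Q_pt)) & TRUNC_MASK) == out_next:
            return s_guess                       # consistent with the next output
    return None

# The 'victim' runs two rounds; the observer sees only the two truncated outputs.
state = 2                                        # victim's secret seed (assumption)
state, out1 = dual_ec_round(state)
state, out2 = dual_ec_round(state)
print(recover_state(out1, out2), state)          # both are 10: state recovered
```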
Standardization and implementations
NSA first introduced Dual_EC_DRBG in the ANSI X9.82 DRBG in the early 2000s, including the same parameters which created the alleged backdoor, and Dual_EC_DRBG was published in a draft ANSI standard. Dual_EC_DRBG also exists in the ISO 18031 standard.
According to John Kelsey (who together with Elaine Barker was listed as author of NIST SP 800-90A), the possibility of the backdoor by carefully chosen P and Q was brought up at an ANSI X9F1 Tool Standards and Guidelines Group meeting. When Kelsey asked Don Johnson of Cygnacom about the origin of Q, Johnson answered in a 27 October 2004 email to Kelsey that NSA had prohibited the public discussion of generation of an alternative Q to the NSA-supplied one.
At least two members of the ANSI X9F1 Tool Standards and Guidelines Group which wrote ANSI X9.82, Daniel R. L. Brown and Scott Vanstone from Certicom, were aware of the exact circumstances and mechanism in which a backdoor could occur, since they filed a patent application in January 2005 on exactly how to insert or prevent the backdoor in DUAL_EC_DRBG. The working of the "trap door" mentioned in the patent is identical to the one later confirmed in Dual_EC_DRBG. Writing about the patent in 2014, commentator Matthew Green describes the patent as a "passive aggressive" way of spiting NSA by publicizing the backdoor, while still criticizing everybody on the committee for not actually disabling the backdoor they obviously were aware of. Brown and Vanstone's patent lists two necessary conditions for the backdoor to exist:
1) Chosen Q
2) Small output truncation
According to John Kelsey, the option in the standard to choose a verifiably random Q was added as an option in response to the suspected backdoor, though in such a way that FIPS 140-2 validation could only be attained by using the possibly backdoored Q. Steve Marquess (who helped implement NIST SP 800-90A for OpenSSL) speculated that this requirement to use the potentially backdoored points could be evidence of NIST complicity. It is not clear why the standard did not specify the default Q in the standard as a verifiably generated nothing up my sleeve number, or why the standard did not use greater truncation, which Brown's patent said could be used as the "primary measure for preventing a key escrow attack". The small truncation was unusual compared to previous EC PRGs, which according to Matthew Green had only output 1/2 to 2/3 of the bits in the output function. The low truncation was in 2006 shown by Gjøsteen to make the RNG predictable and therefore unusable as a CSPRNG, even if Q had not been chosen to contain a backdoor. The standard says that implementations "should" use the small max_outlen provided, but gives the option of outputting a multiple of 8 fewer bits. Appendix C of the standard gives a loose argument that outputting fewer bits will make the output less uniformly distributed. Brown's 2006 security proof relies on outlen being much smaller than the default max_outlen value in the standard.
The ANSI X9F1 Tool Standards and Guidelines Group which discussed the backdoor also included three employees from the prominent security company RSA Security. In 2004, RSA Security made an implementation of Dual_EC_DRBG which contained the NSA backdoor the default CSPRNG in their RSA BSAFE as a result of a secret $10 million deal with NSA. In 2013, after the New York Times reported that Dual_EC_DRBG contained a backdoor by the NSA, RSA Security said they had not been aware of any backdoor when they made the deal with NSA, and told their customers to switch CSPRNG. In the 2014 RSA Conference keynote, RSA Security Executive Chairman Art Coviello explained that RSA had seen declining revenue from encryption, and had decided to stop being "drivers" of independent encryption research, but to instead to "put their trust behind" the standards and guidance from standards organizations such as NIST.
A draft of NIST SP 800-90A including the Dual_EC_DRBG was published in December 2005. The final NIST SP 800-90A including Dual_EC_DRBG was published in June 2006. Documents leaked by Snowden have been interpreted as suggesting that the NSA backdoored Dual_EC_DRBG, with those making the allegation citing the NSA's work during the standardization process to eventually become the sole editor of the standard. The early usage of Dual_EC_DRBG by RSA Security (for which NSA was later reported to have secretly paid $10 million) was cited by the NSA as an argument for Dual_EC_DRBG's acceptance into the NIST SP 800-90A standard. RSA Security subsequently cited Dual_EC_DRBG's acceptance into the NIST standard as a reason they used Dual_EC_DRBG.
Daniel R. L. Brown's March 2006 paper on the security reduction of Dual_EC_DRBG mentions the need for more output truncation and a randomly chosen Q, but mostly in passing, and does not mention his conclusions from his patent that these two defects in Dual_EC_DRBG together can be used as a backdoor. Brown writes in the conclusion: "Therefore, the ECRNG should be a serious consideration, and its high efficiency makes it suitable even for constrained environments." Note that others have criticised Dual_EC_DRBG as being extremely slow, with Bruce Schneier concluding "It's too slow for anyone to willingly use it", and Matthew Green saying Dual_EC_DRBG is "Up to a thousand times slower" than the alternatives. The potential for a backdoor in Dual_EC_DRBG was not widely publicised outside of internal standard group meetings. It was only after Dan Shumow and Niels Ferguson's 2007 presentation that the potential for a backdoor became widely known. Shumow and Ferguson had been tasked with implementing Dual_EC_DRBG for Microsoft, and at least Ferguson had discussed the possible backdoor in a 2005 X9 meeting. Bruce Schneier wrote in a 2007 Wired article that the Dual_EC_DRBG's flaws were so obvious that nobody would use Dual_EC_DRBG: "It makes no sense as a trap door: It's public, and rather obvious. It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it." Schneier was apparently unaware that RSA Security had used Dual_EC_DRBG as the default in BSAFE since 2004.
OpenSSL implemented all of NIST SP 800-90A including Dual_EC_DRBG at the request of a client. The OpenSSL developers were aware of the potential backdoor because of Shumow and Ferguson's presentation, and wanted to use the method included in the standard to choose a guaranteed non-backdoored P and Q, but were told that to get FIPS 140-2 validation they would have to use the default P and Q. OpenSSL chose to implement Dual_EC_DRBG despite its dubious reputation, for the sake of completeness, noting that OpenSSL tried to be complete and implements many other insecure algorithms. OpenSSL did not use Dual_EC_DRBG as the default CSPRNG, and it was discovered in 2013 that a bug made the OpenSSL implementation of Dual_EC_DRBG non-functioning, meaning that no one could have been using it.
Bruce Schneier reported in December 2007 that Microsoft added Dual_EC_DRBG support to Windows Vista, though not enabled by default, and Schneier warned against the known potential backdoor. Windows 10 and later will silently replace calls to Dual_EC_DRBG with calls to CTR_DRBG based on AES.
On September 9, 2013, following the Snowden leak, and the New York Times report on the backdoor in Dual_EC_DRBG, the National Institute of Standards and Technology (NIST) ITL announced that in light of community security concerns, it was reissuing SP 800-90A as draft standard, and re-opening SP800-90B/C for public comment. NIST now "strongly recommends" against the use of Dual_EC_DRBG, as specified in the January 2012 version of SP 800-90A. The discovery of a backdoor in a NIST standard has been a major embarrassment for the NIST.
RSA Security had kept Dual_EC_DRBG as the default CSPRNG in BSAFE even after the wider cryptographic community became aware of the potential backdoor in 2007, but there does not seem to have been a general awareness of BSAFE's usage of Dual_EC_DRBG as a user option in the community. Only after widespread concern about the backdoor was there an effort to find software which used Dual_EC_DRBG, of which BSAFE was by far the most prominent found. After the 2013 revelations, RSA security Chief of Technology Sam Curry provided Ars Technica with a rationale for originally choosing the flawed Dual EC DRBG standard as default over the alternative random number generators. The technical accuracy of the statement was widely criticized by cryptographers, including Matthew Green and Matt Blaze. On December 20, 2013, it was reported by Reuters that RSA had accepted a secret payment of $10 million from the NSA to set the Dual_EC_DRBG random number generator as the default in two of its encryption products. On December 22, 2013, RSA posted a statement to its corporate blog "categorically" denying a secret deal with the NSA to insert a "known flawed random number generator" into its BSAFE toolkit
Following the New York Times story asserting that Dual_EC_DRBG contained a backdoor, Brown (who had applied for the backdoor patent and published the security reduction) wrote an email to an IETF mailing list defending the Dual_EC_DRBG standard process.
Software and hardware which contained the possible backdoor
Implementations which used Dual_EC_DRBG would usually have gotten it via a library. At least RSA Security (BSAFE library), OpenSSL, Microsoft, and Cisco have libraries which included Dual_EC_DRBG, but only BSAFE used it by default. According to the Reuters article which revealed the secret $10 million deal between RSA Security and NSA, RSA Security's BSAFE was the most important distributor of the algorithm. There was a flaw in OpenSSL's implementation of Dual_EC_DRBG that made it non-working outside test mode, from which OpenSSL's Steve Marquess concludes that nobody used OpenSSL's Dual_EC_DRBG implementation.
A list of products which have had their CSPRNG-implementation FIPS 140-2 validated is available at the NIST. The validated CSPRNGs are listed in the Description/Notes field. Note that even if Dual_EC_DRBG is listed as validated, it may not have been enabled by default. Many implementations come from a renamed copy of a library implementation.
The BlackBerry software is an example of non-default use. It includes support for Dual_EC_DRBG, but not as default. BlackBerry Ltd has however not issued an advisory to any of its customers who may have used it, because they do not consider the probable backdoor a vulnerability. Jeffrey Carr quotes a letter from Blackberry:
The Dual EC DRBG algorithm is only available to third party developers via the Cryptographic APIs on the [Blackberry] platform. In the case of the Cryptographic API, it is available if a 3rd party developer wished to use the functionality and explicitly designed and developed a system that requested the use of the API.
Bruce Schneier has pointed out that even if not enabled by default, having a backdoored CSPRNG implemented as an option can make it easier for NSA to spy on targets which have a software-controlled command-line switch to select the encryption algorithm, or a "registry" system, like most Microsoft products, such as Windows Vista.
In December 2013 a proof of concept backdoor was published that uses the leaked internal state to predict subsequent random numbers, an attack viable until the next reseed.
In December 2015, Juniper Networks announced that some revisions of their ScreenOS firmware used Dual_EC_DRBG with the suspect P and Q points, creating a backdoor in their firewall. Originally it was supposed to use a Q point chosen by Juniper which may or may not have been generated in provably safe way. Dual_EC_DRBG was then used to seed ANSI X9.17 PRNG. This would have obfuscated the Dual_EC_DRBG output thus killing the backdoor. However, a "bug" in the code exposed the raw output of the Dual_EC_DRBG, hence compromising the security of the system. This backdoor was then backdoored itself by an unknown party which changed the Q point and some test vectors. Allegations that the NSA had persistent backdoor access through Juniper firewalls had already been published in 2013 by Der Spiegel.
The kleptographic backdoor is an example of NSA's NOBUS policy, of having security holes that only they can exploit.
See also
Cryptographically secure pseudorandom number generator
Random number generator attack
Crypto AG – a Swiss company specialising in communications and information security, who are widely believed to have allowed western security agencies (including NSA) to insert backdoors in their cryptography machines
References
External links
NIST SP 800-90A – Recommendation for Random Number Generation Using Deterministic Random Bit Generators
Dual EC DRBG – Collection of Dual_EC_DRBG information, by Daniel J. Bernstein, Tanja Lange, and Ruben Niederhagen.
On the Practical Exploitability of Dual EC in TLS Implementations – Key research paper by Stephen Checkoway et al.
The prevalence of kleptographic attacks on discrete-log based cryptosystems – Adam L. Young, Moti Yung (1997)
United States Patent Application Publication on the Dual_EC_DRBG backdoor, and ways to negate the backdoor.
Comments on Dual-EC-DRBG/NIST SP 800-90, Draft December 2005 Kristian Gjøsteen's March 2006 paper concluding that Dual_EC_DRBG is predictable, and therefore insecure.
A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Generator Daniel R. L. Brown and Kristian Gjøsteen's 2007 security analysis of Dual_EC_DRBG. Though at least Brown was aware of the backdoor (from his 2005 patent), the backdoor is not explicitly mentioned. Use of non-backdoored constants and a greater output bit truncation than Dual_EC_DRBG specifies are assumed.
On the Possibility of a Back Door in the NIST SP800-90 Dual Ec Prng Dan Shumow and Niels Ferguson's presentation, which made the potential backdoor widely known.
The Many Flaws of Dual_EC_DRBG – Matthew Green's simplified explanation of how and why the backdoor works.
A few more notes on NSA random number generators – Matthew Green
Sorry, RSA, I'm just not buying it – Summary and timeline of Dual_EC_DRBG and public knowledge.
[Cfrg] Dual_EC_DRBG ... [was RE: Requesting removal of CFRG co-chair] A December 2013 email by Daniel R. L. Brown defending Dual_EC_DRBG and the standard process.
Broken cryptography algorithms
Kleptography
National Institute of Standards and Technology
National Security Agency
Pseudorandom number generators
Articles with underscores in the title |
13018947 | https://en.wikipedia.org/wiki/Presence%20%28telepresence%29 | Presence (telepresence) | Presence is a theoretical concept describing the extent to which media represent the world (in both physical and social environments). Presence is further described by Matthew Lombard and Theresa Ditton as “an illusion that a mediated experience is not mediated." Today, it often considers the effect that people experience when they interact with a computer-mediated or computer-generated environment. The conceptualization of presence borrows from multiple fields including communication, computer science, psychology, science, engineering, philosophy, and the arts. The concept of presence accounts for a variety of computer applications and Web-based entertainment today that are developed on the fundamentals of the phenomenon, in order to give people the sense of, as Sheridan called it, “being there."
Evolution of 'presence' as a concept
The specialist use of the word “presence” derives from the term “telepresence”, coined by Massachusetts Institute of Technology professor Marvin Minsky in 1980. Minsky's research explained telepresence as the manipulation of objects in the real world through remote access technology. For example, a surgeon may use a computer to control robotic arms to perform minute procedures on a patient in another room. Or a NASA technician may use a computer to control a rover to collect rock samples on Mars. In either case, the operator is granted access to real, though remote, places via televisual tools.
As technologies progressed, the need for an expanded term arose. Sheridan extrapolated Minsky’s original definition. Using the shorter “presence,” Sheridan explained that the term refers to the effect felt when controlling real world objects remotely as well as the effect people feel when they interact with and immerse themselves in virtual reality or virtual environments.
Lombard and Ditton went a step further and enumerated six conceptualizations of presence:
presence can be a sense of social richness, the feeling one gets from social interaction
presence can be a sense of realism, such as computer-generated environments looking, feeling, or otherwise seeming real
presence can be a sense of transportation. This is a more complex concept than the traditional feeling of one being there. Transportation also includes users feeling as though something is “here” with them or feeling as though they are sharing common space with another person together
presence can be a sense of immersion, either through the senses or through the mind
presence can provide users with the sense they are social actors within the medium. No longer passive viewers, users, via presence, gain a sense of interactivity and control
presence can be a sense of the medium as a social actor.
Lombard's work discusses the extent to which 'presence' is felt, and how strong the perception of presence is regarded without the media involved. The article reviews the contextual characteristics that contribute to an individual's feeling of presence. The most important variables in determining presence are those that involve sensory richness or vividness, that is, the number and consistency of sensory outputs. Researchers believe that the greater the number of human senses for which a medium provides stimulation, the greater the capability of the medium to produce a sense of presence. Additional important aspects of a medium are visual display characteristics (image quality, image size, viewing distance, motion and color, dimensionality, camera techniques) as well as aural presentation characteristics, stimuli for other senses (interactivity, obtrusiveness of medium, live versus recorded or constructed experience, number of people), content variables (social realism, use of media conventions, nature of task or activity), and media user variables (willingness to suspend disbelief, knowledge of and prior experience with the medium). Lombard also discusses the effects of presence, including both physiological and psychological consequences of "the perceptual illusion of nonmediation."
Physiological effects of presence may include arousal, or vection and simulation sickness, while psychological effects may include enjoyment, involvement, task performance, skills training, desensitization, persuasion, memory and social judgement, or parasocial interaction and relationships.
Presence has been delineated into subtypes, such as physical-, social-, and self-presence. Lombard's working definition was "a psychological state in which virtual objects are experienced as actual objects in either sensory or nonsensory ways." Later extensions expanded the definition of "virtual objects" to specify that they may be either para-authentic or artificial. Further development of the concept of "psychological state" has led to study of the mental mechanism that permits humans to feel presence when using media or simulation technologies. One approach is to conceptualize presence as a cognitive feeling, that is, to take spatial presence as feedback from unconscious cognitive processes that inform conscious thought.
Case studies
Several studies provide insight into the concept of media influencing behavior.
Cheryl Bracken and Lombard suggested that people, especially children, interact with computers socially. The researchers found, via their study, that children who received positive encouragement from a computer were more confident in their ability, were more motivated, recalled more of a story and recognized more features of a story than those children who received only neutral comments from their computer.
Nan, Anghelcev, Myers, Sar, and Faber found that the inclusion of anthropomorphic agents that relied on artificial intelligence on a Web site had positive effect on people’s attitudes toward the site. The research of Bracken and Lombard and Nan et al. also speak to the concept of presence as transportation. The transportation in this case refers to the computer-generated identity. Users, through their interaction, have a sense that these fabricated personalities are really “there”.
Communication has been a central pillar of presence since the term’s conception. Many applications of the Internet today largely depend on virtual presence since its conception. Rheingold and Turkle offered MUDs, or multi-user dungeons, as early examples of how communication developed a sense of presence on the Web prior to the graphics-heavy existence it has developed today. “MUDs...[are] imaginary worlds in computer databases where people use words and programming languages to improvise melodramas, build worlds and all the objects in them, solve puzzles, invent amusements and tools, compete for prestige and power, gain wisdom, seek revenge, indulge greed and lust and violent impulses." While Rheingold focused on the environmental sense of presence that communication provided, Turkle focused on the individual sense of presence that communication provided. “MUDs are a new kind of virtual parlor game and a new form of community. In addition, text-based MUDs are a new form of collaboratively written literature. MUD players are MUD authors, the creators as well as consumers of media content. In this, participating in a MUD has much in common with script writing, performance art, street theater, improvisational theater – or even commedia dell’arte."
Further blurring the lines of behavioral spheres, Gabriel Weimann wrote that media scholars have found that virtual experiences are very similar to real-life experiences, and people can confuse their own memories and have trouble remembering if those experiences were mediated or not.
Philipp, Vanman, and Storrs demonstrated that unconscious feelings of social presence in a virtual environment can be invoked with relatively impoverished social representations. The researchers found that the mere presence of virtual humans in an immersive environment caused people to be more emotionally expressive than when they were alone in the environment. The research suggests that even relatively impoverished social representations can lead people to behave more socially in an immersive environment.
Presence in popular culture
Sheridan's view of presence earned its first pop culture reference in 1984 with William Gibson’s pre-World Wide Web science fiction novel "Neuromancer", which tells the story of a cyberpunk cowboy of sorts who accesses a virtual world to hack into organizations.
Joshua Meyrowitz's 1986 "No Sense of Place" discusses the impact of electronic media on social behavior. The book discusses how social situations are transformed by media. Media, he claims, can change one's 'sense of place,' by mixing traditionally private versus public behaviors - or back-stage and front-stage behaviors, respectively, as coined by Erving Goffman. Meyrowitz suggests that television alone will transform the practice of front-stage and back-stage behaviors, as television would provide increased information to different groups who may physically not have access to specific communities but through media consumption are able to determine a mental place within the program. He references Marshall McLuhan's concept that 'the medium is the message,' and that media provide individuals with access to information. With new and changing media, Meyrowitz says that the patterns of information and shifting accesses to information change social settings, and help to determine a sense of place and behavior. With the logic that behavior is connected to information flow, Meyrowitz states that front- and back-stage behaviors are blurred and may be impossible to untangle.
See also
Collective consciousness
Social presence
Telepresence
Virtual reality
Surround sound
Hyperpersonal model
Noosphere
Social reality
Pictorial realism
Blended Space
References
Further reading
Bob G. Witmer, Michael J. Singer (1998). Measuring Presence in Virtual Environments: A Presence Questionnaire
G. Riva, J, Waterworth (2003). Presence and the Self: a cognitive neuroscience approach.
W. IJsselsteijn, G. Riva (2003). Being There: The experience of presence in mediated environments.
Telepresence
Virtual reality
Mass media technology
Cognitive science
Philosophy of mind |
1086559 | https://en.wikipedia.org/wiki/Code%20reuse | Code reuse | Code reuse, also called software reuse, is the use of existing software, or software knowledge, to build new software, following the reusability principles.
Code reuse may be achieved in different ways depending on the complexity of the programming language chosen, and ranges from lower-level approaches such as copy-pasting code (e.g. via snippets) and simple functions (procedures or subroutines), to objects or functions organized into modules (e.g. libraries) or custom namespaces and packages, up to frameworks or software suites at higher levels.
Code reuse implies dependencies, which can make code maintainability harder. At least one study found that code reuse reduces technical debt.
Overview
Ad hoc code reuse has been practiced from the earliest days of programming. Programmers have always reused sections of code, templates, functions, and procedures. Software reuse as a recognized area of study in software engineering, however, dates only from 1968 when Douglas McIlroy of Bell Laboratories proposed basing the software industry on reusable components.
Code reuse aims to save time and resources and reduce redundancy by taking advantage of assets that have already been created in some form within the software product development process. The key idea in reuse is that parts of a computer program written at one time can be or should be used in the construction of other programs written at a later time.
Code reuse may imply the creation of a separately maintained version of the reusable assets. While code is the most common resource selected for reuse, other assets generated during the development cycle may offer opportunities for reuse: software components, test suites, designs, documentation, and so on.
The software library is a good example of code reuse. Programmers may decide to create internal abstractions so that certain parts of their program can be reused, or may create custom libraries for their own use. Some characteristics that make software more easily reusable are modularity, loose coupling, high cohesion, information hiding and separation of concerns.
For newly written code to use a piece of existing code, some kind of interface, or means of communication, must be defined. These commonly include a "call" or use of a subroutine, object, class, or prototype. In organizations, such practices are formalized and standardized by domain engineering, also known as software product line engineering.
The general practice of using a prior version of an extant program as a starting point for the next version, is also a form of code reuse.
Some so-called code "reuse" involves simply copying some or all of the code from an existing program into a new one. While organizations can realize time to market benefits for a new product with this approach, they can subsequently be saddled with many of the same code duplication problems caused by cut and paste programming.
Many researchers have worked to make reuse faster, easier, more systematic, and an integral part of the normal process of programming. These are some of the main goals behind the invention of object-oriented programming, which became one of the most common forms of formalized reuse. A somewhat later invention is generic programming.
Another, newer means is to use software "generators", programs which can create new programs of a certain type, based on a set of parameters that users choose. Fields of study about such systems are generative programming and metaprogramming.
Types of reuse
Concerning motivation and driving factors, reuse can be:
Opportunistic – While getting ready to begin a project, the team realizes that there are existing components that they can reuse.
Planned – A team strategically designs components so that they'll be reusable in future projects.
Reuse can be categorized further:
Internal reuse – A team reuses its own components. This may be a business decision, since the team may want to control a component critical to the project.
External reuse – A team may choose to license a third-party component. Licensing a third-party component typically costs the team 1 to 20 percent of what it would cost to develop internally. The team must also consider the time it takes to find, learn and integrate the component.
Concerning form or structure of reuse, code can be:
Referenced – The client code contains a reference to reused code, and thus they have distinct life cycles and can have distinct versions.
Forked – The client code contains a local or private copy of the reused code, and thus they share a single life cycle and a single version.
Fork-reuse is often discouraged because it's a form of code duplication, which requires that every bug is corrected in each copy, and enhancements made to reused code need to be manually merged in every copy or they become out-of-date. However, fork-reuse can have benefits such as isolation, flexibility to change the reused code, easier packaging, deployment and version management.
Systematic
Systematic software reuse is a strategy for increasing productivity and improving the quality of the software industry. Although it is simple in concept, successful software reuse implementation is difficult in practice. A reason put forward for this is the dependence of software reuse on the context in which it is implemented. Some problematic issues that need to be addressed related to systematic software reuse are:
a clear and well-defined product vision is an essential foundation to a software product line (SPL).
an evolutionary implementation strategy would be a more pragmatic strategy for the company.
there exists a need for continuous management support and leadership to ensure success.
an appropriate organisational structure is needed to support SPL engineering.
the change of mindset from a project-centric company to a product-oriented company is essential.
Examples
Software libraries
A very common example of code reuse is the technique of using a software library. Many common operations, such as converting information among different well-known formats, accessing external storage, interfacing with external programs, or manipulating information (numbers, words, names, locations, dates, etc.) in common ways, are needed by many different programs. Authors of new programs can use the code in a software library to perform these tasks, instead of "re-inventing the wheel", by writing fully new code directly in a program to perform an operation. Library implementations often have the benefit of being well-tested and covering unusual or arcane cases. Disadvantages include the inability to tweak details which may affect performance or the desired output, and the time and cost of acquiring, learning, and configuring the library.
Design patterns
A design pattern is a general solution to a recurring problem. Design patterns are more conceptual than tangible and can be modified to fit the exact need. However, abstract classes and interfaces can be reused to implement certain patterns.
Frameworks
Developers generally reuse large pieces of software via third-party applications and frameworks, though frameworks are usually domain-specific and applicable only to families of applications.
Higher-order function
In functional programming higher-order functions can be used in many cases where design patterns or frameworks were formerly used.
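A small, hypothetical illustration of this point: a higher-order function lets the same traversal code be reused with different behaviours passed in as arguments, where an object-oriented design might instead introduce a Strategy interface and concrete classes. The function and rule names below are invented:

```python
# Reusable traversal; the varying behaviour arrives as a function parameter.
def apply_discount(prices, pricing_rule):
    return [pricing_rule(p) for p in prices]

half_price = lambda p: p * 0.5
loyalty = lambda p: p - 2 if p > 10 else p

print(apply_discount([8, 12, 20], half_price))   # [4.0, 6.0, 10.0]
print(apply_discount([8, 12, 20], loyalty))      # [8, 10, 18]
```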
Retrocomputing
Retrocomputing encompasses reuse of code, simply because retro programs are being run on older computers, or emulators for them.
Computer security
In computer security code-reuse is employed as a software exploit method.
When an attacker is not able to directly input code to modify the control flow of a program, for example in the presence of code injection defenses such as W^X, he or she can redirect the control flow to code sequences existing in memory.
Examples of code-reuse attacks are return-to-libc attack, return-oriented programming, and jump-oriented programming.
Components
A component, in an object-oriented context, represents a set of collaborating classes (or only one class) and its interfaces. The interfaces are responsible for enabling the replacement of components. Reusable components can also be isolated and synchronized between SCM repositories using component source code management technologies (CSCM).
Outside computers
The whole concept of "code reuse" can also encompass engineering applications outside software. For instance, parametric modeling in computer-aided design allows for creating reusable designs. Standardization results in creation of interoperable parts that can be then reused in many contexts.
Criticism
Code reuse results in dependency on the component being reused. Rob Pike opined that "A little copying is better than a little dependency". When he joined Google, the company was putting heavy emphasis on code reuse. He believes that Google's codebase still suffers from results of that former policy in terms of compilation speed and maintainability.
See also
Don't repeat yourself
ICSR
Inheritance
Language binding
Not invented here (antonym)
Polymorphism
Procedural programming
Reinventing the wheel (antonym)
Reusability
Reuse metrics
Single source of truth
Software framework
Virtual inheritance
References
External links
ReNews – an information site about software reuse and domain engineering
Software Reuse Tips Article
Programming principles
Repurposing
Computer libraries |
8480367 | https://en.wikipedia.org/wiki/Paul%20Kan%20Man-Lok | Paul Kan Man-Lok | Dr. Paul Kan Man-lok, CBE, JP, Chevalier de la Légion d'Honneur,Silver Bauhinia Star is a Chinese technology engineer.
He is credited with having developed the world's first Chinese and multi-lingual wireless communication software/system in 1987 and is renowned as the "Father of Multi-lingual Messaging" for pioneering the wireless communication transmission software in various languages. He has made significant contributions to the information technology industry in both Hong Kong and China over the last 39 years and is currently the Chairman of three information technology companies (Champion Technology Holdings, Kantone Holdings and Digital Hong Kong.com) which are listed on the Hong Kong Stock Exchange. The total asset value of the three companies exceeds 6 billion dollars. He is also an Independent Non-executive Director of CLP Holdings Ltd.
Paul Kan is currently Chairman of Hong Kong IT Alliance; Chairman of the Hong Kong Information Technology Industry Council; Chairman of Information and Communications Technology Services Advisory Committee, HKTDC; Chairman of ICT Working Group, the Hong Kong–United Kingdom Business Partnership; and Convener of the Hungarian–Hong Kong Innovative Business Council.
In 1990, Kan set up "A Better Tomorrow", a charity which has since contributed to community programs in education, arts, culture and religion. Its cultural activities increased substantially over the years, and in 2005 Kan set up two separate non-profit organizations registered as charities: the Chinese Cultural Heritage Protection Foundation (http://www.cwchf.org) for heritage activities, and the World Cultural Relics Protection Foundation to save world relics.
Paul Kan was a programmer of the Hong Kong Government in the 1960s, providing software expertise to various government departments including education, transport, security, industry and trade. Over the next 17 years, he worked with various international groups as a systems analyst for computer projects, including the Swire Group for its aviation, trading and shipping systems. He was ultimately promoted to be the General Manager of Asiadata Ltd, a joint venture computing services bureau of HSBC, Jardines, Barclays Bank and Cable & Wireless. In 1987, he founded the Champion Technology Group and soon launched the world's first Chinese message receiver and wireless communication system, a major milestone and a solid foundation for his subsequent and continuing pursuit in the information technology arena. The Champion Technology Group has since positioned itself in the development of wireless communications software and related applications in various environments. Products of the Group are distributed in over 50 countries.
He was named CBE by Queen Elizabeth II in the 2006 New Year's Honours List. The award recognised Mr. Kan's contribution to business and his high standard of management ethics in his business endeavours.
Kan was invested with the honour of Commendatore dell'Ordine della Stella della Solidarieta Italiana (Comm OSSI) by the Italian Prime Minister Hon. Romano Prodi in 2006. The award is granted to recipients in recognition of their contribution to economic and cultural activities for Italy.
He was awarded the Chevalier de la Légion d'Honneur by the French Government in 2007 and received it from H.E. Mrs. Anne-Marie Idrac, Secretary for Foreign Trade of France, in 2009.
In December 2008, he was awarded the Knights Commander Grand Cross of the Royal Order of the Kingdom of Poland. Mr. Kan was appointed a High Member of the Royal Council in 2009.
In the 2009 Honours List of the Hong Kong SAR Government, Kan was awarded the Silver Bauhinia Star. He was also appointed a JP by the Hong Kong SAR Government in 2006.
Mr. Kan was appointed as Honorary Consul of the Republic of Hungary in the Hong Kong SAR and the Macau SAR in January 2011.
References
Living people
1947 births
Hong Kong businesspeople
Commanders of the Order of the British Empire |
460979 | https://en.wikipedia.org/wiki/Nagpur | Nagpur | Nagpur (Marathi pronunciation: [naːɡpuːɾ]) is the third largest city and the winter capital of the Indian state of Maharashtra. It is the 13th largest city in India by population and, according to an Oxford Economics report, is projected to be the fifth fastest growing city in the world from 2019 to 2035, with an average growth of 8.41%. It has been proposed as one of the Smart Cities in Maharashtra and is one of the top ten cities in India in Smart City Project execution.
In the latest rankings of 100 developing smart cities given by the Union Ministry of Urban Development, Nagpur stood first in Maharashtra state and second in India. Known as the "Orange City", Nagpur has officially become the greenest, safest, and most technologically developed city in Maharashtra.
Nagpur is the seat of the annual winter session of the Maharashtra state assembly. It is a major commercial and political center of the Vidarbha region of Maharashtra. In addition, the city derives unique importance from being an important location for the Dalit Buddhist movement and the headquarters for the Hindu nationalist organization RSS. Nagpur is also known for the Deekshabhoomi, which is graded an A-class tourism and pilgrimage site, the largest hollow stupa among all the Buddhist stupas in the world. The regional branch of Bombay High Court is also situated within the city.
According to a survey by ABP News-Ipsos, Nagpur was identified as the best city in India topping in livability, greenery, public transport, and health care indices in 2013. The city was adjudged the 20th cleanest city in India and the top mover in the western zone as per Swachh Sarvekshan 2016. It was awarded as the best city for innovation and best practice in Swachh Sarvekshan 2018. It was also declared as open defecation free in January 2018 under Swachh Bharat Mission. It is also one of the safest cities for women in India. The city also ranked 25th in Ease of Living index 2020 among 111 cities in India. It was ranked the 8th most competitive city in the country by the Institute for Competitiveness for the year 2017.
It is famous for Nagpur oranges and is sometimes known as the Orange City for being a major trade center of oranges cultivated in large part of the region. It is also called the Tiger Capital of India or the Tiger Gateway of India as many tiger reserves are located in and around the city and also hosts the regional office of National Tiger Conservation Authority. The city was founded in 1702 by the Gond King Bakht Buland Shah of Deogarh and later became a part of the Maratha Empire under the royal Bhonsale dynasty. The British East India Company took over Nagpur in the 19th century and made it the capital of the Central Provinces and Berar. After the first re-organisation of states, the city lost its status as the capital. Following the informal Nagpur Pact between political leaders, it was made the second capital of Maharashtra.
History
Etymology
Nagpur is named after the Nag River, which flows through the city. The old Nagpur city (today called 'Mahal') is situated on the north bank of the Nag River. The suffix pur means "city" in many Indian languages.
One of the earlier names of Nagpur was "Fanindrapura". It derives its origin from the Marathi word fana (फण; meaning hood of a cobra). Nagpur's first newspaper was named Fanindramani, which means a jewel that is believed to be suspended over a cobra's hood. It is this jewel that lights up the darkness, hence the name of the newspaper. B. R. Ambedkar claimed that both the city and the river are named after the 'Nags' who were opponents of the Indo-Aryans. During British rule, the name of the city was spelt and pronounced as "Nagpore".
Early and medieval history
Human existence around present-day Nagpur can be traced back 3000 years to the 8th century BCE. Mehir burial sites at the Drugdhamna (near the Mhada colony) indicate that the megalithic culture existed around Nagpur and is still followed. The first reference to the name "Nagpur" is found in a 10th-century copper-plate inscription discovered at Devali in the neighbouring Wardha district. The inscription is a record of grant of a village situated in the Visaya (district) of Nagpura-Nandivardhana during the time of the Rastrakuta king Krsna III in the Saka year 862 (940 CE). Towards the end of the 3rd century, King Vindhyasakti is known to have ruled the Nagpur region. In the 4th century, the Vakataka Dynasty ruled over the Nagpur region and surrounding areas and had good relations with the Gupta Empire. The Vakataka king Prithvisena I moved his capital to Nagardhan (ancient name Nandivardhana), from Nagpur. After the Vakatakas, the region came under the rule of the Hindu kingdoms of the Badami Chalukyas, the Rashtrakutas. The Paramaras or Panwars of Malwa appear to have controlled the Nagpur region in the 11th century. A prashasti inscription of the Paramara king Lakshmadeva (r. c. 1086–1094) has been found at Nagpur. Subsequently, the region came under the Yadavas of Devagiri. In 1296, Allauddin Khilji invaded the Yadava Kingdom after capturing Deogiri, after which the Tughlaq Dynasty came to power in 1317. In the 17th century, the Mughal Empire conquered the region, however during Mughal era, regional administration was carried out by the Gond kingdom of Deogarh in the Chhindwara district of the modern-day state of Madhya Pradesh. In the 18th century, Bhonsles of the Maratha Empire established the Nagpur Kingdom based in the city.
Modern history
The king who actually founded Nagpur was Bakht Buland Shah of Deogarh. An able administrator, he incentivised large-scale immigration of Marathi Hindu cultivators to increase economic activity. After Bhakt Buland Shah, the next Raja of Deogarh was Chand Sultan, who resided principally in the country below the hills, fixing his capital at Nagpur, which he turned into a walled town. On Chand Sultan's death in 1739, Wali Shah, an illegitimate son of Bakht Buland, usurped the throne and Chand Sultan's widow invoked the aid of the Maratha leader Raghoji Bhosale of Berar in the interest of her sons Akbar Shah and Burhan Shah. The usurper was put to death and the rightful heirs placed on the throne. After 1743, a series of Maratha rulers came to power, starting with Raghoji Bhosale, who conquered the territories of Deogarh, Chanda and Chhattisgarh by 1751.
Nagpur was burnt substantially in 1765 and again partially in 1811 by marauding Pindaris. However, the development of the city of Nagpur continued. In 1803 Raghoji II Bhosale joined the Peshwa against the British in the Second Anglo-Maratha War, but the British prevailed. After Raghoji II's death in 1816, his son Parsaji was deposed and murdered by Mudhoji II Bhosale. Despite the fact that he had entered into a treaty with the British in the same year, Mudhoji joined the Peshwa in the Third Anglo-Maratha War in 1817 against the British but suffered a defeat at Sitabuldi in present-day Nagpur. The fierce battle was a turning point as it laid the foundations of the downfall of the Bhosales and paved the way for the British acquisition of Nagpur city. Mudhoji was deposed after a temporary restoration to the throne, after which the British placed Raghoji III Bhosale, the grandchild of Raghoji II, on the throne. During the rule of Raghoji III (which lasted till 1840), the region was administered by a British resident. In 1853, the British took control of Nagpur after Raghoji III died without leaving an heir.
From 1853 to 1861, the Nagpur Province (which consisted of the present Nagpur region, Chhindwara, and Chhattisgarh) became part of the Central Provinces and Berar and came under the administration of a commissioner under the British central government, with Nagpur as its capital. Berar was added in 1903. The advent of the Great Indian Peninsula Railway (GIP) in 1867 spurred its development as a trade centre. Tata group started its first textile mill at Nagpur, formally known as Central India Spinning and Weaving Company Ltd. The company was popularly known as "Empress Mills" as it was inaugurated on 1 January 1877, the day queen Victoria was proclaimed Empress of India.
The non-co-operation movement was launched in the Nagpur session of 1920. The city witnessed a Hindu–Muslim riot in 1923, which had a profound impact on K. B. Hedgewar, who in 1925 founded the Rashtriya Swayamsevak Sangh (RSS), a Hindu nationalist organisation, in Mohitewada Mahal, Nagpur, with the idea of creating a Hindu nation. After the 1927 Nagpur riots, the RSS gained further popularity in Nagpur and the organisation grew nationwide.
After Indian independence
After India gained independence in 1947, Central Provinces and Berar became a province of India. In 1950, the Central Provinces and Berar was reorganised as the Indian state of Madhya Pradesh with Nagpur as its capital. When the Indian states were reorganised along the linguistic lines in 1956, Nagpur and Berar regions were transferred to the state of Bombay, which was split into the states of Maharashtra and Gujarat in 1960. At a formal public ceremony held on 14 October 1956 in Nagpur, B. R. Ambedkar and his supporters converted to Buddhism, which started the Dalit Buddhist movement that is still active. In 1994, the city of Nagpur witnessed its most violent day in modern times: in the Gowari stampede, police fired on Gowari protestors demanding Scheduled Tribe status and caused a mass panic.
In 2002, Nagpur marked 300 years since its founding, and a big celebration was organised to mark the event.
Geography
Topography
Nagpur is located at the exact centre of the Indian subcontinent, close to the geometric center of the quadrilateral connecting the four major metros of India, viz. Chennai, Mumbai, New Delhi and Kolkata. The city has the Zero Mile Stone locating the geographical centre of India, which was used by the British to measure all distances within the Indian subcontinent.
The city lies on the Deccan plateau of the Indian subcontinent and has a mean altitude of 310.5 meters above sea level. The underlying rock strata are covered with alluvial deposits resulting from the flood plain of the Kanhan River. In some places, these give rise to granular sandy soil. In low-lying areas, which are poorly drained, the soil is alluvial clay with poor permeability characteristics. In the eastern part of the city, crystalline metamorphic rocks such as gneiss, schist and granites are found, while in the northern part yellowish sandstones and clays of the lower Gondwana formations are found.
Nagpur city is dotted with natural and artificial lakes. The largest lake is Ambazari Lake. Other natural lakes include Gorewada Lake and Telankhedi Lake. Sonegaon and Gandhisagar Lakes are artificial, created by the city's historical rulers. The Nag River, Pilli Nadi and local nallas form the natural drainage pattern for the city. Nagpur is known for its greenery and was adjudged the cleanest and second greenest city in India, after Chandigarh, in 2010.
Climate
Nagpur has a tropical savannah climate (Aw in the Köppen climate classification) with dry conditions prevailing for most of the year. It receives about 163 mm of rainfall in June, which increases to 294 mm in July and then decreases gradually to 278 mm in August and 160 mm in September. The highest recorded daily rainfall was 304 mm on 14 July 1994. Summers are extremely hot, lasting from March to June, with May being the hottest month. Winter lasts from November to February, during which temperatures drop below 10 °C (50 °F). The highest recorded temperature in the city was 47.9 °C on 29 May 2013, while the lowest was 3.5 °C on 29 December 2018.
Extreme weather
The average number of heat wave days occurring in Nagpur in the summer months of March, April and May is 0.5, 2.4 and 7.2 days, respectively. May is the most uncomfortable and hottest month, with, for example, 20 days of heat waves experienced in 1973, 1988 and 2010. The summer season is characterised by other severe weather activity such as thunderstorms, dust storms, hailstorms and squalls. Generally, hailstorms occur during March and dust storms during March and April; these occur infrequently (1 per 10 days). Squalls occur more frequently, with 0.3 per day in March and April rising to 0.8 per day in May. Due to the heat waves in the city, the Indian government, with the help of the New York-based Natural Resources Defense Council, has run a heat wave program since March 2016.
Administration
Nagpur was the capital of Central Provinces and Berar for 100 years. After the States Reorganisation in 1956, Nagpur and the Vidarbha region became part of the new Maharashtra State. With this, Nagpur lost its capital status, and hence a pact, the Nagpur Pact, was signed between leaders. According to the pact, one session of the state legislature and the state legislative council takes place in Vidhan Bhavan, Nagpur. Usually the winter session takes place in the city, the exceptions being 1966, 1971 and 2018, when the monsoon session was held there instead. Nagpur has a district court and its own bench of the Bombay High Court, which was established on 9 January 1936. The city consists of six Vidhan Sabha constituencies, namely Nagpur West, Nagpur South, Nagpur South West, Nagpur East, Nagpur North and Nagpur Central. These constituencies are part of the Nagpur Lok Sabha constituency.
Local government
The Municipal Council for Nagpur was established in 1864. At that time, the area under the jurisdiction of the Nagpur Municipal Council was 15.5 km2 and the population was 82,000. The duties entrusted to the Nagpur Municipal Council were to maintain cleanliness and arrange for street lights and water supply with government assistance. The Municipal Corporation came into existence in March 1951. Nagpur is administered by the Nagpur Municipal Corporation (NMC), which is a democratically elected civic governing body. The Corporation elects a Mayor who along with a Deputy Mayor heads the organisation. The mayor carries out the activities through various committees such as the Standing Committee, health and sanitation committee, education committee, water works, public works, public health and market committee. Since January 2021, the mayor of Nagpur is Dayashankar Tiwari, The administrative head of the corporation is the Municipal Commissioner, an Indian Administrative Service (IAS) officer appointed by the state government. The Municipal Commissioner along with the Deputy Municipal Commissioners, carry out various activities related to engineering, health and sanitation, taxation and its recovery. Various departments such as public relations, library, health, finance, buildings, slums, roads, street lighting, traffic, establishment, gardens, public works, local audit, legal services, waterworks, education, octroi and fire services manage their specific activities. The activities of NMC are administered by its zonal offices. There are 10 zonal offices in Nagpur – Laxmi Nagar, Dharampeth, Hanuman Nagar, Dhantoli, Nehru Nagar, Gandhi Baugh, Sataranjipura, Lakkadganj, Ashi Nagar and Mangalwari. These zones are divided into 145 wards. Each ward is represented by a corporator, a majority of whom are elected in local elections. NMC has various departments including healthcare, education, and a fire brigade dedicated for each service and project of the city.
Nagpur Improvement Trust (NIT) is a local planning authority which works with the NMC and carries out the development of the civic infrastructure and new urban areas on its behalf. NIT is headed by a chairman, an Indian Administrative Service Officer appointed by the state government. Since the 1990s the urban agglomeration had rapidly expanded beyond the city's municipal boundaries. This growth had presented challenges for the future growth of the city and its fringes in an organised manner. With a view of achieving balanced development within the region, the Nagpur Improvement Trust (NIT) was notified as the Special Planning Authority (SPA) for the Nagpur Metropolitan Area (NMA) and entrusted with the preparation of a Statutory Development Plan as per provisions of the MRTP Act, 1966. The notified NMA comprises areas outside the Nagpur city and includes 721 villages under 9 tehsils of the Nagpur District spreading across an area of 3,567 km2. In 1999, the government of Maharashtra declared that the Nagpur Metropolitan Area shall comprise all of Nagpur city, Nagpur Gramin (rural areas near Nagpur), Hingna, Parseoni, Mauda and Kamptee Taluka and parts of Savner, Kalmeshwar, Umred and Kuhi. The boundaries of the "Metro region" around the municipal corporation limits of the city have been defined as per the notification. In 2002, the government extended the jurisdiction of the Nagpur Improvement Trust (NIT) by 25 to 40 kilometres. This new area was defined under clause 1(2) of NIT Act-1936 as "Nagpur Metropolitan Area". Maharashtra State Cabinet in 2016 had paved the way for NIT to become Nagpur Metropolitan Region Development Authority (NMRDA) NMRDA was notified by the Government of Maharashtra in March 2017. NMRDA has been made on the lines of Mumbai Metropolitan Region Development Authority. NMRDA has been mandated to monitor development in the metropolis comprising 721 villages across nine tehsils in the district. The body is headed by Metropolitan Commissioner, an Indian Administrative Service Officer appointed by the state government as was with the NIT chairman. Although delayed, NIT was to be dissolved and merged with NMC till 15 June 2018 as stated by the state government but has been given a stay order from Nagpur Bench of Bombay High Court in June 2018.
The Maharashtra government had appointed Larsen & Toubro (L&T) as the implementation partner to convert Nagpur into the country's first large scale, integrated, smart city. The state government had also decided to develop the city complete with five hubs, from textile centres to defence sector. Nagpur was selected from Maharashtra among other cities under Government of India's Smart Cities Mission. City was selected in the third round of selection. For the implementation of the projects under Smart Cities Mission a special purpose vehicle was formed which was named Nagpur Smart and Sustainable City Development Corporation Ltd.
Nagpur Police is headed by a Police Commissioner who is of the rank of Additional Director General of Police of Maharashtra Police. Nagpur Police is divided into 5 Zones, each headed by a Deputy Commissioner of Police, while traffic zones are divided into eight zones each headed by an inspector. The state C.I.D Regional Headquarters are situated in Nagpur. and State Reserve Police Force Campus
Utility services
Originally, all the utility services of the city were carried out by NMC departments, but from 2008 onwards privatisation had started for major utility services. The Orange City Water Private Limited (OCW), a joint venture of Veolia Water India Pvt. Ltd and Vishwaraj Infrastructure Ltd., manages the water supply for the city as well as Nagpur Municipal Corporation's water treatment plants at Gorewada, all the elevated service reservoirs, ground service reservoirs, master balancing reservoirs commonly known as water tanks. This joint venture was established in November 2011 and was awarded the contract to execute 24x7 water supply project and operational and maintenance of waterworks for 25 years. Kanak Resources Management Ltd. was awarded the contract for garbage collection in the city as per Nagpur Bin Free Project in 2009 by NMC. It was replaced by AG Enviro Infra Project Pvt Ltd and BVG India in 2019. In electricity supply, which was first managed by MSEB was then replaced by MSEDCL. After some years the distribution franchisee system was introduced to reduce the losses in the divisions and so Spanco Nagpur Discom Ltd.(SNDL) was awarded the distribution franchisee for 15 years to manage three of the four divisions from Nagpur Urban circle namely, Civil Lines, Mahal and Gandhibagh on 23 February 2011 by MSEDCL. The power distribution and maintenance for the fourth division i.e. Congress Nagar division was still managed by MSEDCL. As SNDL mounted losses it intimated MSEDCL to takeover the franchises as it was unable to maintain the franchisee areas under it. MSEDCL thus took over all the Nagpur urban circle areas as in September 2019. Nagpur Fire Brigade has nine fire stations at various locations in the city. India Post which is a governmental postal department has two head post offices and many post offices and sub-post offices at various locations in the city and are part of the logistics services in the city along with various other private operators.
Health care
NMC in collaboration with Central Government, State Government, UNICEF, World Health Organization and Non-governmental organisation conducts and maintains various health schemes in the city. City health line is an initiative started by NMC dedicated to the health of citizens of Nagpur. This includes providing computerised comparative information and action in the field to local citizens. NMC runs three indoor patient hospitals including Indira Gandhi Rugnalaya at LAD square, Panchpaoli Maternity Hospital in Panchpaoli and Isolation Hospital in Immamwada. Besides, the civic body runs three big diagnostic centres at Mahal, Sadar and also at Indira Gandhi Rugnalaya. Apart from these, NMC has 57 outpatient dispensaries (OPDs), including 23 health posts sanctioned under Union Government's schemes, 15 allopathy hospitals, 12 ayurvedic hospitals, three homoeopathy hospitals, three naturopathy hospitals and one Unani hospital. In 2013, ABP News-Ipsos declared Nagpur the country's best city for health care services. The city is home to numerous hospitals, some run by the government and some private and consists of various super-specialty and multi-specialty ones. Recently various cancer speciality hospitals providing treatment till tertiary care for cancer patients have been established in the city making it a natural medical hub for nearby areas and boosting healthcare system in the city. Nagpur is a health hub for Central India and caters to a large geographical area arbitrarily bounded by Delhi in the north, Kolkata in the east, Mumbai-Pune in the west and Hyderabad in the south. People from Madhya Pradesh, Chhattisgarh, Uttar Pradesh, Orissa, Andhra Pradesh and Telangana regularly come to Nagpur for their health needs. Nagpur boasts of super-specialty physicians and surgeons serving its population in both public sector government-run hospitals and well equipped private hospitals catering to all strata of society. AIIMS is the latest feather in the cap of Nagpur health care services which will be located near MIHAN.
According to 2005 National Family Health Survey, Nagpur has a fertility rate of 1.9 which is below the replacement level. The infant mortality rate was 43 per 1,000 live births, and the mortality rate for children under five was 50 per 1,000 live births. About 57% slum and 72% non-slum children have received all the mandatory vaccines which include BCG, measles and full courses of polio and DPT. In Nagpur, 78 percent of poor children are anaemic, including 49 percent who have moderate to severe anaemia. About 45% of children under five years of age and 31% of women are underweight. The poor people from the city mostly cite the reason of the lack of a nearby facility, poor quality of care and excessive waiting time for not visiting any government hospitals for treatment. According to the National Family Health Survey (NFHS-4) of 2015-16 for Nagpur, households having improved drinking water source is 95.3%, households having improved sanitation facility is 77.3% and households having clean fuel for cooking is 87.6%. Health Insurance coverage among households in the city are 19.5%. Female sterilisation is more prominent than male sterilisation in Nagpur. Institutional births in the city is 97%. Children below 5 years who are anaemic are 43.50%, while women and men in the age group of 15 to 49 years who are anaemic are 45.00% and 21.20%, respectively.
Military establishments
Nagpur is an important city for the Indian armed forces. Maintenance Command of Indian Air Force has its current headquarters at Vayusena Nagar in Nagpur. It houses Mi-8 helicopters and the IAF carriers IL-76 and handles the maintenance, repair, and operations of all aircraft, helicopters and other equipment.
The ordnance factory and staff college of Ordnance Factory Ambajhari and the National Academy of Defence Production for Group A officers of ordnance factories are in the western part of the city. Sitabuldi Fort is managed by the Uttar Maharashtra and Gujarat sub-area headquarters of the Indian Army, and citizens are allowed to visit the premises on Republic Day, Maharashtra Day and Independence Day.
Kamptee's raison d'être, the military cantonment, is still operational. Kamptee Cantonment houses the Officers Training Academy for the National Cadet Corps, which is the only one of its kind. It is also the regimental centre of one of the oldest and most respected regiments in the Indian Army, the Brigade of the Guards. The Guards, located at Kamptee, are the only regiment in the Indian Army to have won two PVCs (Param Vir Chakra), the highest gallantry award given to soldiers for wartime operations. There are also other military establishments and a well equipped military hospital to care for the health of armed forces personnel. The Army Postal Service centre has also been operational in the cantonment since 1948, to provide training to personnel of the Department of Posts who volunteer themselves for the Army. Nagpur's National Civil Defence College provides civil defence and disaster management training to pupils from all over India and abroad. The Indian Air Force's IL-76 transport planes, nicknamed "Gajraj", are also based in Nagpur.
Demographics
Population
According to the 2011 census, Nagpur municipality has a population of 2,405,665, comprising 1,225,405 males and 1,180,270 females. The total number of children (ages 0–6) is 247,078, of whom 128,290 are boys and 118,788 are girls; children form 10.27% of the total population of Nagpur. Slums in the city number 179,952, in which 859,487 people reside, around 35.73% of the total population of Nagpur. The municipality has a sex ratio of 963 females per 1,000 males and a child sex ratio of 926 girls per 1,000 boys. 1,984,123 people are literate, of whom 1,036,097 are male and 948,026 are female. The average literacy rate of Nagpur city is 91.92%; 94.44% of men and 89.31% of women are literate.
Religion
Hinduism is the majority religion in Nagpur city, followed by 69.46% of the population. Buddhism is the second most popular religion, with 15.57% following it. In Nagpur city, Islam is followed by 11.95%, Christianity by 1.15%, Jainism by 0.90% and Sikhism by 0.68%. Around 0.10% stated 'Other Religion' and approximately 0.20% stated 'No Particular Religion'.
Economy
Nagpur is an emerging metropolis. Nagpur's nominal GDP was estimated to be around 1,406,860 million in 2019–20, making it the largest economic center in entire central India. Nagpur district has a per-capita GDP of 270,617 as of the 2019–20 financial year, the highest in central India. In 2004, it was ranked the fastest-growing city in India in terms of the number of households with an annual income of ₹10 million or more. Nagpur has been the main centre of commerce in the region of Vidarbha since its early days and is an important trading location. Although Nagpur's economic importance gradually declined relative to Mumbai and Pune after the merging of Vidarbha into Maharashtra, because of a period of neglect by the state government, the city's economy later recovered.
The city is important for the banking sector as it hosts the regional office of Reserve Bank of India, which was opened on 10 September 1956. The Reserve Bank of India has two branches in Nagpur, one of which houses India's entire gold assets. Sitabuldi market in central Nagpur, known as the heart of the city, is the major commercial market area.
Nagpur is home to ice-cream manufacturer Dinshaws, Indian dry food manufacturer Haldiram's, Indian ready-to-cook food manufacturer Actchawa, spice manufacturer Suruchi International and Ayurvedic products company Vicco and Baidyanath.
For centuries, Nagpur has been famous in the country for its orange gardens, hence the name "Orange City". Orange cultivation has been expanding, and it is the biggest marketplace for oranges in the country. The Maharashtra Agro Industrial Development Corporation has a multi-fruit processing division, called the Nagpur Orange Grower's Association (NOGA), which has an installed capacity of 4,950 MT of fruits per annum. Oranges are also exported to various regions in the country as well as to other countries. Nagpur is also famous for cotton and silk, which is woven by its large Koshti community of handloom weavers, who number around 5,000.
Nagpur and the Vidarbha region have a very prominent power sector as compared to the rest of Maharashtra. Koradi Thermal Power Station and Khaparkheda Thermal Power Station are two major thermal power stations located near Nagpur and operated by MSPGCL. NTPC has a super thermal power plant called Mauda Super Thermal Power Station in Mauda around 40 km from Nagpur and Vidarbha Industries Power Limited (a subsidiary of Reliance Power) is situated at Butibori
The Multi-modal International Hub Airport at Nagpur (MIHAN) is an ongoing project for the Dr. Babasaheb Ambedkar International Airport, Nagpur. The government of Maharashtra formed a special purpose entity, Maharashtra Airport Development Company, for the development of MIHAN.
Prominent Information Technology companies such as TCS, Tech Mahindra, HCL, GlobalLogic, Persistent Systems and Hexaware are located at various IT parks in Nagpur . Infosys has commenced its construction work for its Nagpur campus at MIHAN Special Economic Zone. TAL Manufacturing Solutions has its facility in the SEZ for manufacturing structural components for Boeing's 787 Dreamliner aeroplane. Air India has its MRO Facility in the SEZ which was constructed by Boeing. Dassault Reliance Aerospace Limited (DRAL) has its manufacturing facility in MIHAN where it is manufacturing Falcon jets. Pharmaceutical company Lupin also has its facility in the SEZ.
Apart from the MIHAN SEZ, the city has three prominent MIDC areas nearby. The Butibori industrial area is one of the largest in Asia in terms of area. The estate's largest unit is Indo Rama Synthetics, which manufactures synthetic polyester yarn. Other units in Butibori include the power transmission company Gammon India Limited (T & D), Gammon India Ltd. (Infra), KEC, Calderys India, Unitech Power Transmissions Limited, ACC Nihon Castings Ltd and Electrolux. CEAT Tyres has its tyre manufacturing plant in Butibori. The Hingna industrial estate on the western fringes of the city is made up of around 900 small and medium industrial units. The major ones among them are the tractor manufacturing plant of Mahindra and Mahindra, the casting units of NECO Ltd., Candico, Bharat Containers (making aluminium aerosol cans), Pix Transmissions, and Sanvijay Rolling & Engineering Ltd. (SREL). Kalmeshwar MIDC has 164 industrial plots. JSW Steel, KTM Textile, ESAB India Ltd, ZIM Pharma Ltd, Metlok Pvt. Ltd., Unijuels Life Sciences, Chemfield Pharmaceuticals Private Ltd., Minex Injection Product Private Ltd., Minex Metallurgical Co. Ltd. and Porohit Textile are a few of the big names.
Owing to rich natural resources in the region, mining is a major activity. Several government organisations related to the mining industry are based in Nagpur, which includes Western Coalfields Limited (one of the eight fully owned subsidiaries of Coal India Limited), MOIL and Indian Bureau of Mines.
Education
Nagpur is a major education hub in Central India.
There are two types of schools in the city. NMC (Government) run schools and private schools run by trusts. These schools follow the 10+2+3/4 plan (15 years of schooling leading to the first degree), the first ten years constituting school education consisting of four years primary level, three years of upper primary level and three years of high school level with a public examination at the end of tenth class and 12th class constituting the Secondary and Higher Secondary Board Examination, respectively. This is followed by either a general degree course in a chosen field of study or a professional degree course, such as law, engineering and medicine. These schools are governed by either of the following boards: Maharashtra State Board of Secondary and Higher Secondary Education, Central Board for Secondary Education (CBSE), Indian Certificate of Secondary Education (ICSE) and The International Baccalaureate (IB).
Admission to professional graduation colleges in Nagpur is through MHT-CET, JEE (Main), CAT, CLAT, GATE, CMAT, GMAT and NEET.
Nagpur has four state universities: Rashtrasant Tukadoji Maharaj Nagpur University (founded in 1923 as Nagpur University, one of the oldest in the country and having more than 600 affiliated colleges), Maharashtra Animal and Fishery Sciences University, Kavikulaguru Kalidas Sanskrit University and Maharashtra National Law University.
Hislop College, established in 1883, is one of the oldest colleges in Nagpur. It is named after the Scottish missionary Stephen Hislop (1817–1863), who was a noted evangelist, educationist and geologist. Vasantrao Naik Government Institute of Arts and Social Sciences (established in 1885 as Morris College) is another old college in the city. The College of Agriculture, founded in 1906 by the then British Government, is also among the city's oldest and is one of the first five agriculture colleges in the country.
Nagpur has four government medical colleges: Government Medical College, Indira Gandhi Government Medical College, Nagpur, Government Dental College and Government Ayurvedic College, as well as a private MBBS institute, the N. K. P. Salve Institute of Medical Sciences and Research Center. Medical colleges in the city are affiliated to the Maharashtra University of Health Sciences. The All India Institute of Medical Sciences was established in 2018 and has temporarily started its classes at the GMCH campus until its own campus is constructed.
Most engineering colleges in the city are affiliated with Rashtrasant Tukadoji Maharaj Nagpur University. Laxminarayan Institute of Technology (established 1942) is a chemical engineering and technology institute located in Nagpur and managed directly by Rashtrasant Tukadoji Maharaj Nagpur University. Government Polytechnic, Nagpur (established 1914) is one of the oldest polytechnics in India. Visvesvaraya National Institute of Technology, located in the city, is the only NIT in Maharashtra. The Indian Institute of Information Technology was established in 2016 as a PPP, with TCS and Ceinsys (erstwhile ADCC Infocad) as industry partners. Other prominent engineering colleges in the city include G. H. Raisoni College of Engineering Nagpur, Shri Ramdeobaba College of Engineering and Management, Kavikulguru Institute of Technology and Science, KDK College of Engineering, Yeshwantrao Chavan College of Engineering, Government College of Engineering, and Cummins College of Engineering for Women, Nagpur.
Nagpur has two major management institutes: the Indian Institute of Management, established in 2015, and the Institute of Management Technology, a private management college established in 2004. Symbiosis International University has a campus in the city which houses two of its institutes, namely the Symbiosis Institute of Business Management and the Symbiosis Law School. G.S. College of Commerce and Economics, established in 1945, is the first commerce institute in the region to get autonomous status.
Nagpur also has other centrally funded institutes like National Power Training Institute, Central Institute for Cotton Research, Central Institute of Mining and Fuel Research, Central Power Research Institute, National Academy of Direct Taxes, National Civil Defence College, National Research Centre for Citrus, Petroleum and Explosives Safety Organisation, and National Environmental Engineering Research Institute.
Government Chitrakala Mahavidyalaya is also a premier institute in the city. Nagpur also has an IGNOU and YCMOU regional centre.
Culture
Cultural events and literature
The city contains people from other Indian states as well as people belonging to the world's major faiths, and yet is known for staying calm during communal conflicts in India. Nagpur plays host to cultural events throughout the year. Cultural and literary societies in Nagpur include Vidarbha Sahitya Sangh (for the development of Marathi), Vidarbha Rashtrabhasha Prachar Samiti (promoting and spreading Hindi) and Vidarbha Hindi Sahitya Sammelan (for promoting Hindi). The Marathi Sahitya Sammelan, the conference on Marathi literature, has been held twice in Nagpur city. Nagpur has also hosted the annual Orange City Literature Festival since 2019 and the Vidarbha Literary Fest since 2020, featuring local and international authors. Nagpur is the head office of the Aadim Samvidhan Sanrakshan Samiti (working for the rights of scheduled tribes).
The South-Central Zone Cultural Centre also sponsors cultural events in Nagpur city, such as the Orange City Craft Mela and Folk Dance Festival; Vidarbha is noted for its numerous folk dances, such as the human tiger. Newspapers are published from Nagpur in Marathi, English and Hindi. In addition, the Government of Maharashtra organises the week-long Kalidas Festival, a series of music and dance performances by national-level artists. Nagpur Municipal Corporation, in partnership with Maharashtra Tourism Development Corporation, organises the Nagpur Mohotsav at Yeshwant Stadium, in which many distinguished artists participate. The Nagpur Municipal Corporation also organises the Orange City International Film Festival (OCIFF) annually, in association with Saptak, the Pune Film Foundation, Vidarbha Sahitya Sangh, and Rashtrasant Tukadoji Maharaj Nagpur University (RTMNU).
The Nagpur Central Museum (est. 1863) maintains collections mainly relating to the Vidarbha region.
Three brothers Ghulam Ali (Kotwal), Mohammad Saaduddin (Subedar) and Mohammad Saladuddin (Minister and Kotwal) from Jhajjar are remembered as great scholars of Urdu and Persian during the reign of Maharaja Senasaheb Subha Chhatrapati Raghuji Bapusaheb Bhonsle III. They founded 'Jhajjar Bagh' at Hansapuri (Now Mominpura). In this location, they built their residence 'Aina-e Mahal', a well and a Masjid (now Masjid Ahle Hadith). 'Jhajjar Bagh' also known as 'Subedar ka Bada' was located where nowadays Mohammad Ali Road at Mominpura, Jamia Masjid, Mohammad Ali Sarai and Furqania Madrasa are located.
The state government has approved a new safari park of international standards besides Gorewada Lake. In 2013 NMC erected the gigantic Namantar Shahid Smarak in memory of Namantar Andolan martyrs.
The Orange City LGBTQ Pride March is also held annually in Nagpur, along with the Nagpur LGBT Queer Carnival during the pride month.
Religious places and festivals
Deekshabhoomi, the largest hollow stupa (or the largest dome-shaped monument) and an important place of the Buddhist movement, is located in Nagpur. Every year on the day of Vijayadashami, i.e. Dussehra, followers of Ambedkar visit Deekshabhoomi to mark the conversion ceremony of Ambedkar and his followers into Buddhism, which took place in Nagpur on 14 October 1956. It has been given 'A' grade tourist place status by the Maharashtra Government in March 2016. 14 April, which is the birthdate of Ambedkar, is celebrated as Ambedkar Jayanti.
Jainism has a good presence in Nagpur. There are nearly 30 Jain temples. The old ones are Sengan Jain temple Ladpura, Parwarpura Jain temple, Kirana oli Jain temple, and Juna oli Jain temple. In west Nagpur Shri 1008 Shantinath Digamber Bhagwan temple is situated.
The most famous temple in Nagpur is Tekdi Ganesh Mandir, and is said to be one of the Swayambhu ("self-manifested") temples in the city. Sri Poddareshwar Ram Mandir and Shri Mahalaxmi Devi temple of Koradi are important Hindu temples.
Religious events are observed in the city throughout the year. Ram Navami is celebrated in Nagpur with shobha yatra with a procession of floats depicting events from the Ramayana. Processions are also held on important festivals of other religions such as Dhamma Chakra Pravartan Din, Vijayadashami, Eid E Milad, Guru Nanak Jayanti, Mahavir Jayanti, Durga puja, Ganesh Chaturthi and Moharram. Like the rest of India, Nagpurkars celebrate major Hindu festivals like Diwali, Holi and Dussera with enthusiasm. Celebrations lasting for several days are held on Ganesh Chaturthi and Durga Puja festivals in virtually every small locality in the city.
The city also contains a sizeable Muslim population, and famous places of worship for Muslims include the Jama Masjid-Mominpura and the Bohri Jamatkhana-Itwari. The most famous shrine (dargah), that of Tajuddin Muhammad Badruddin, is at Tajabad. The annual Urs is celebrated with great enthusiasm and unity on the 26th of Muharram. Nagpur is also called Tajpur because of the holy shrine of the Sufi saint Baba Tajuddin.
The St. Francis De Sales Cathedral and the All Saints Cathedral church are located in Sadar. There are many south Indian temples in Nagpur, such as Sarveshwara Devalayam, where south Indian festivals such as Sitarama Kalyanam, Radha Kalyanam and the Dhanurmasa celebration with Andal Kalyanam are observed, and the Balaji temple in Seminary Hills, where the Bramhotsavam for Lord Balaji and Lord Kartikeya is celebrated every year. There are two Ayyapa temples, one at Ayyapa Nagar and the other at Harihara Nagar, as well as the Raghvendraswami Mutt, the Murugananda Swami Temple at Mohan Nagar, the Nimishamba Devi temple and the Subramanyiam devastanam at Sitabuldi, among many other such south Indian temples, as there is quite a sizeable population of south Indians in Nagpur.
Marbat Festival is a unique festival for Nagpur and is organised every year a day after the bullock festival of 'Pola'. The tradition of taking out the Marbat processions of 'kali' (black) and 'pivli' (yellow) Marbats (idols), started in 1880 in the eastern part of the city. A number of 'badgyas' (mascots), representing contemporary symbols of evil, comprise another feature of the annual processions. This festival dates back to the 19th century when the Bhonsla dynasty ruled.
There is a Parsi Zoroastrian Agiary (Dar-e-Meher) in Nagpur, where the Parsi New Year is celebrated by the Parsi community in Nagpur.
Arts and crafts
The tradition of painting in Nagpur was patronised by the Royal House of the Bhonsales as well as by common people. Illustrated manuscripts, including of the Bhagavat, Jnaaneshwari, Shakuntala and Geeta, and the folk patachitras related to some festivals are available, besides murals. The community of artists was called (painters), and this community has today turned to sculpture.
Textile was once an important industry in Nagpur. Good quality cotton was produced in abundant quantities thanks to a suitable soil and climate. With the introduction of the railways, cotton sales and goods transport flourished. Besides cotton textiles, silk and wool weaving was also practised in the district. Silk sarees and pagota, patka, dhoti, and borders were woven with the silk thread.
Cuisine
The Vidarbha region has its own distinctive cuisine, known as Varhadi or Saoji cuisine. Saoji or Savji cuisine was originally the cuisine of the Savji community. This traditional food is famous for its spicy taste. The special spices used in the gravy include black pepper, dry coriander, bay leaves, grey cardamom, cinnamon, cloves and ample poppy seeds. Non-vegetarian food, especially chicken and mutton, is commonly eaten in Saoji establishments in Nagpur. There are numerous Savji bhojanalays in Nagpur, which are so popular in Maharashtra that the renowned Indian chef Sanjeev Kapoor once featured Savji mutton on one of his TV shows, and the recipe is listed on his website. Nagpur is also famous for its oranges, whose distinctive qualities have recently begun to attract international attention. Numerous beverages are made out of the oranges. Santra barfi, a sweet made from locally produced oranges, is also a famous dish. Mominpura is a majority-Muslim area of the city and is famous for its Mughal dishes and biryani. The city is also famous for the rare black chickens called Kadaknath chicken, which are cooked in Varhadi style.
Nagpur is also famous for tarri poha, a variety of flattened rice, and has many food joints, each with its own way of preparing and serving it. Samosas are also famous in Nagpur and are available at many restaurants and food spots. Another famous food is Patodi and Kadhi.
Tourism
Tiger reserves
Nagpur is surrounded by many tiger reserves and acts as a gateway to them, hence it is called the Tiger Capital of India. Pench Tiger Reserve is situated around 100 km from the city and can be reached via NH44 on the Nagpur–Jabalpur road. Tadoba National Park is situated south of the city, around 141 km away. Umred Karhandla Wildlife Sanctuary, Bor Wildlife Sanctuary, Navegaon National Park, Melghat Tiger Reserve and Kanha Tiger Reserve are other reserves located within a radius of 200 km from the city. The city has its own reserved forest areas at Seminary Hills and Gorewada.
Zoos, Gardens and Lakes
Maharajbagh Zoo, located in the heart of the city near Sitabuldi, houses a variety of animals. The zoo has been going through funding crunches and lacked a proper plan, for which the Central Zoo Authority derecognised it in November 2018; its recognition has since been extended under directions from the MoEFCC. Gorewada Zoo is an upcoming international zoo project which is being set up beside Gorewada Lake. It is being jointly developed by the Forest Development Corporation of Maharashtra and the Essel Group.
The city contains various natural and man-made lakes. Khindsi Lake, Ambazari Lake and Gorewada Lake are natural lakes, while Futala Lake, Shukrawari Lake, Sakkardara Lake, Zilpi Lake and Sonegaon Lake are man-made. The city also has various gardens, including Ambazari Garden, Telankhedi Garden, Satpuda Botanical Garden, Japanese Garden and the Children's Traffic Park.
Religious places
Nagpur boasts many religious structures that hold importance for differing religious beliefs. Deekshabhoomi and Dragon Palace are important religious places for Buddhists across India and the world. Deekshabhoomi is the place where B. R. Ambedkar with millions of his followers embraced Buddhism in the year 1956. Dragon Palace Temple is situated at Kamptee which is around from the city. It also has a state of the art Vipassana centre which was inaugurated by President of India Ram Nath Kovind on 22 September 2017. Other prominent religious structures include Ramtek Fort Temple at Ramtek which is a temple built inside a fort and is away from Nagpur, Adasa Ganpati Temple located near Savner is one of the eight Ashta Vinayaks in Vidarbha, Baba Tajjuddin Dargah, Shri Shantinath Digambar Jain Mandir at Ramtek, Shree Ganesh Mandir Tekdi, located near Nagpur Railway Station and one of the Swayambhu temple of Lord Ganesha, Sai Baba Mandir at Wardha road, Telankhedi Hanuman Temple, Swaminarayan Temple, Koradi Temple, located at Koradi, Shri Poddareshwar Ram Temple, Balaji Temple, All Saints Cathedral and Gurudwara Guru Nanak Darbar.
Museums
The city also has some museums, namely the Nagpur Central Museum and the Narrow Gauge Rail Museum. Raman Science Centre, a premier science centre of Central India, has of late become a must-see feature on the city's tourist landscape, with many experimental edutainment installations, a planetarium and a unique facility called Science on a Sphere. Amusement parks such as Fun N Food Village, High Land Park, Fun Planet and Dwarka River Farms and Amusement Park are located in the city.
Sports
Nagpur is a big center for cricket in Vidarbha owing to the presence of the Vidarbha Cricket Association. Vidarbha Cricket Association (VCA) is the governing body of cricket activities in the Vidarbha region in Maharashtra. It is affiliated to the Board of Control for Cricket in India. Nagpur is one of the few Indian cities that has more than one international cricket stadium, the older one being the Vidarbha Cricket Association Ground situated in Civil Lines, and the new one, the Vidarbha Cricket Association Stadium, inaugurated in 2008 is situated in Jamtha, Wardha Road on the outskirts of the city.
Vidarbha Cricket Association Stadium has been built on Wardha road with a seating capacity of 45,000 people at a cost of . It is one of the fifteen test cricket venues in the country. Vidarbha Cricket Association Ground has been the venue for the 1987 Reliance World Cup and 1996 Wills World Cup. Vidarbha Cricket Association Stadium has been the venue for the 2011 Cricket World Cup and 2016 ICC World Twenty20. The stadium also hosts certain matches of the Indian Premier League and had been the home city for the now defunct Deccan Chargers in the 2010 season and was also the home city for Kings XI Punjab along with Mohali in the 2016 season. Vidarbha Cricket Association also has a cricket academy at the main centre in Vidarbha Cricket Association Ground and three more centres. It also has its own cricket teams which play in various formats as mandated by BCCI. The Vidarbha cricket team had won the Ranji Trophy and Irani Cup consecutively in 2017-18 and 2018-19 season.
Vidarbha Hockey Association is a body governing field hockey in the Vidarbha Region and is affiliated to Hockey India as an associate member. Vidarbha Hockey Association Stadium is the hockey ground owned and managed by Vidarbha Hockey Association.
The Nagpur District Football Association (NDFA) is the district governing body for football in Nagpur, Maharashtra, and is affiliated with the Western India Football Association, the state sports governing body. The NDFA conducts various matches among schools and clubs and has its own league. The NDFA Elite Division Champions League, another football tournament, was held annually at Nagpur from 2010 to 2014 by the Lokmat Group in Yeshwant Stadium. Indian Friends Football Club (IFFC), Rabbani Club, Rahul Club and Young Muslim Football Club (YMFC) are renowned football clubs in the city. Other clubs include City Police, South East Central Railway, Qidwai Club, SRPF, New Globe and City Club. Nagpur FC has its own football academy at Dhanwate National College, Congress Nagar. Slum Soccer is a social initiative started by Vijay Barse for young runaways and former drug addicts, to rehabilitate them through football.
Badminton tournaments in the city are organised by Nagpur District Badminton Association (NDBA) which is affiliated to Maharashtra Badminton Association which in turn is a member of Badminton Association of India. Nagpur District Table Tennis Association organises table tennis tournaments at district level and is affiliated to Maharashtra Table Tennis Association. The city also has a divisional sports complex which consist of Indoor stadium and other gymnastic facilities.
The city's major indoor arena is Vivekananda Nagar Indoor Sports Complex located near Mankapur. The arena hosts several political events, concerts and sports events like badminton, basketball, lawn tennis.
The city also has various running events, for general public, organised by various institutions.
Media
The Hitavada is the largest selling broadsheet English daily newspaper of Central India. It was founded in 1911 by freedom fighter Gopal Krishna Gokhale in Nagpur. Other English dailies circulated in the city include The Times of India, The Indian Express, The Economic Times and Marathi dailies circulated in city include Lokmat Times and Sakal. Lokmat is the largest circulated Marathi newspaper in Nagpur, and has its administrative office in the city. Tarun Bharat, Deshonatti, Maharashtra Times, Punya Nagari, Lokshahi Varta, Sakal, Nagpur-news.in, Nagpur Today News ,Divya Marathi and Loksatta are other Marathi dailies available. Hindi newspapers such as Yugdharma, Nava Bharat, Dainik Bhaskar and Lokmat Samachar are also circulated. Employment News, which is published weekly, is also circulated in Hindi, English and Urdu.
All India Radio is the oldest radio broadcaster in the city and has its office in the Civil Lines area. Vividh Bharati, the entertainment radio station, and Gyan Vani, the educational radio station, are the FM radio stations of All India Radio and are available in the frequency 100.6 FM & 107.8 FM, respectively. Other private FM broadcasting channels with their frequencies include Radio City at 91.1 FM, Red FM at 93.5 FM, My FM at 94.3 FM, Radio Mirchi at 98.3 FM, Mirchi Love FM at 91.9 FM and Big FM at 92.7 FM.
Television broadcasting in Nagpur began on 15 August 1982 with the launch of Doordarshan, the Government of India's public service broadcaster. It transmits DD National and DD News, which are free-to-air terrestrial television channels, and one regional satellite channel called DD Sahyadri. Private satellite channels started in the 1990s. Lord Buddha TV and Awaaz India TV are free-to-air television channels based in the city and are available through various cable operators and DTH platforms. Satellite TV channels are accessible via cable subscription, direct-broadcast satellite services or internet-based television. Cable TV operators or multi-system operators in the city include UCN Cable Network, GTPL, In Cable, BCN and Diamond Cable Network. All the DTH operators in the country are available in the city, viz. Airtel Digital TV, DD Free Dish, Dish TV, Sun Direct, Reliance Digital TV, D2h and Tata Sky. The city also has its own regional DTH operator, UCN, which serves the Vidarbha region of Maharashtra and is headquartered in the city itself.
Broadband Internet service is available in the city from various Internet service providers. Wi-Fi is available in major educational institutions and certain areas of the city, including government institutions, under the Smart City plan by NSSCDCL. Currently, 3G services in the city are provided by BSNL, Airtel and Vodafone Idea Limited, while 4G services are provided by Airtel, Jio, Vodafone Idea Limited and BSNL.
Transport
Rail
Railways reached Nagpur in 1867, when a portion of the Bombay–Bhusaval–Nagpur line was opened for traffic; a train service from Nagpur to Calcutta started in 1881. Today, a total of 254 trains stop at Nagpur railway station, including passenger, express, mail, Duronto, Rajdhani and Garib Rath trains. Of these, 65 are daily trains and 22 terminate at or originate from Nagpur. Almost 1.6 lakh passengers board or leave trains at Nagpur railway station. The station, one of the oldest and busiest in India, was inaugurated in its present form on 15 January 1925 by the then Governor, Sir Frank. Apart from Nagpur railway station, Ajni and Itwari are the other important stations of the city; further stations include Motibagh, Kalamna and Godhani. The Nagpur–Ajni rail route is the shortest train run on Indian Railways, primarily meant for crew travelling from Nagpur station to the workshop at Ajni.
The city is a divisional headquarters for both the Central Railway and the South East Central Railway zones of Indian Railways. Having two divisional headquarters is a rare distinction Nagpur shares with Lucknow, which hosts divisions of the Northern Railway and North Eastern Railway zones.
Nagpur Metro Rail
The Nagpur Metro project was announced by the state government of Maharashtra, with estimated expenses of INR 4,400 crore and 3,800 crore for its first phase, which consists of two corridors: a north–south corridor and an east–west corridor.
The site inspection began in March 2012 with initiatives from the Nagpur Improvement Trust. The project is executed by an SPV called Maharashtra Metro Rail Corporation Limited (erstwhile Nagpur Metro Rail Corporation Ltd.). In July 2015, the project was approved by the government of Maharashtra. Prime Minister Narendra Modi inaugurated operations on the Nagpur Metro on 7 March 2019 via video conferencing, along with Maharashtra Chief Minister Devendra Fadnavis and Union Cabinet Minister Nitin Gadkari.
Nagpur broad-gauge Metro Rail
The Nagpur broad-gauge Metro is a commuter rail project planned around Nagpur, extending to the adjacent districts of Wardha and Bhandara. The project is estimated to cost INR 418 crore and consists of four routes, each originating from Nagpur and terminating at Narkhed, Ramtek, Wardha and Bhandara respectively.
Road
Nagpur is a major road junction, as two of India's major national highways, Srinagar–Kanyakumari (National Highway 44) and Mumbai–Kolkata (NH 53, Economic Corridor 1 (EC1)), pass through the city. National Highway 47 connects Nagpur to Bamanbore in Gujarat. Nagpur lies at the junction of two Asian Highways, namely AH43 from Agra to Matara, Sri Lanka, and AH46 connecting Kharagpur, India, to Dhule, India. The highway to Mumbai via Aurangabad, a shorter route, was rebuilt as a national highway; it significantly reduces the distance between the two cities compared with the route via NH 6 and NH 3. A new Mumbai–Nagpur Expressway between the two cities has also been proposed. In 2009, NHAI announced the extension of the existing NH 204 to Nagpur via Kolhapur, Sangli, Solapur, Tuljapur, Latur, Nanded, Yavatmal and Wardha, connecting it to NH 7 at Butibori near Nagpur. The entire NH 204 highway has been included in the national highway mega projects for upgrading to four lanes. Another national highway, NH 547 (Savner–Chhindwara–Narsinghpur), connects with NH 47 at Savner near Nagpur, providing optional connectivity with the northern part of India.
Maharashtra State Road Transport Corporation (MSRTC) runs economical transport services for intercity, interstate and intrastate travel. It has two bus stations in Nagpur: Nagpur Bus Sthanak (CBS-1) at Ganeshpeth and MorBhawan (CBS-2) at Jhansi Rani Square, Sitabuldi. It operates 1,600 daily services from CBS-1 over long and short distances within the state and to places in surrounding states, and 750 daily services from CBS-2 over short distances within Vidarbha.
The civic body, through its bus operators (three red and one green), plies 487 buses on which over 1.60 lakh people commute. The city bus operation is named Aapli Bus, and its fleet consists of diesel-, ethanol- and CNG-run buses. City buses cover a total of 5,500 trips on 123 routes. A common mobility card called MAHA-CARD has also been issued, allowing people to commute on both buses and the upcoming metro rail. A green bus project featuring India's first ethanol-powered buses was established in August 2014.
Autorickshaws and private taxi operators under Ola Cabs and Uber also ply in the city.
Air transport
Dr. Babasaheb Ambedkar International Airport is operated by Mihan India Private Limited (MIPL) and owned by Airports Authority of India.
Nagpur's Air Traffic Control (ATC) is the busiest in India; in 2004, more than 300 flights flew over the city every day. In October 2005, Nagpur's Sonegaon Airport was declared an international airport and was renamed Dr. Babasaheb Ambedkar International Airport.
Nagpur is well connected by daily direct flights to Mumbai, Delhi, Hyderabad, Visakhapatnam, Kolkata, Bangalore, Pune, Chennai, Kochi, Indore, Ahmedabad and Raipur, operated by Air India, IndiGo and GoAir. Air Arabia operates flights four times a week between Nagpur and Sharjah, and Qatar Airways operates a daily direct flight to and from Doha.
Nagpur Airport received the Special Achievement Award 2012–2013 from the Airports Authority of India. Nagpur was the first airport in India to commission the INDRA system and also has an ADS-B system. It was also the first airport in the country to receive an ISO 27000 certificate; indeed, Nagpur is not only the first in India but also the first in the world to be certified for its air navigation service provider (ANSP) function. Seven airports in the world hold ISO 27000 certification, but none of the others hold it for ANSP.
Nagpur is currently witnessing an economic boom as the Multi-modal International Cargo Hub and Airport at Nagpur (MIHAN) is under development. MIHAN will be used for handling heavy cargo coming from southeast Asia and the Middle East. The project will include Special Economic Zone (SEZ) for information technology (IT) companies.
The government of India has identified Nagpur Airport as one of the safe airports for diverted flights and emergency landings, and many flights have used the airport during emergencies. All international and domestic airlines have been advised by the government to divert to Nagpur during emergencies. The availability of excellent firefighting equipment, air traffic control equipment and the latest radar, together with the city's good hospitals and hotels, makes the airport a good choice during emergencies.
Nagpur Airport has an annual capacity of 10 lakh passengers, but it handled 19 lakh passengers in 2016–17 and 21 lakh passengers in 2017–18, an increase of 14% year on year. Airport expansion and service improvements are on the cards, and privatisation of the airport has been proposed.
Notable people
Sister cities
Jinan, Shandong, China
See also
MIHAN
Nagpur Metro
Nagpur District
Make In Maharashtra
List of Maratha dynasties and states
List of forts
References
External links
Cities and towns in Nagpur district
Neighbourhoods in Maharashtra
Vidarbha
Cities in Maharashtra
Metropolitan cities in India |
1249771 | https://en.wikipedia.org/wiki/HeliOS | HeliOS | Helios is a discontinued Unix-like operating system for parallel computers. It was developed and published by Perihelion Software. Its primary architecture is the Transputer. Helios' microkernel implements a distributed namespace and messaging protocol, through which services are accessed. A POSIX compatibility library enables the use of Unix application software, and the system provides most of the usual Unix utilities.
Work on Helios began in the autumn of 1986. Its success was limited by the commercial failure of the Transputer, and efforts to move to other architectures met with limited success. Perihelion ceased trading in 1998.
Development
In the early 1980s, Tim King joined MetaComCo from the University of Bath, bringing with him some rights to an operating system called TRIPOS.
MetaComCo secured a contract from Commodore to work on AmigaOS, with the AmigaDOS component being derived from TRIPOS. In 1986, King left MetaComCo to found Perihelion Software, and began development of a parallel operating system, initially targeted at the INMOS Transputer series of processors. Helios extended TRIPOS' use of a light-weight message passing architecture to networked parallel machines.
Helios 1.0 was the first commercial release, in the summer of 1988, followed by version 1.1 in autumn 1989, 1.1a in early 1990, and 1.2 in December 1990, which was followed by the 1.2.1 and 1.2.2 updates. Version 1.3 was a significant upgrade with numerous utility, library, server and driver improvements. The last commercial release was 1.3.1. Later, Tim King and Nick Garnett gave permission to release the sources under the GNU General Public License v3.
Kernel and nucleus
Helios was designed for a network of multiple nodes, connected by multiple high-bandwidth communications links. Nodes can be dedicated processing nodes, or processors with attached I/O devices. Small systems might consist of a host PC or workstation connected to a set of several processing nodes, while larger systems might have hundreds of processing nodes supported by dedicated nodes for storage, graphics, or user terminals.
A Helios network requires at least one I/O Server node that is able to provide a file system server, console server and reset control for the processing nodes. At power on, the Helios nucleus is bootstrapped from the I/O server into the network. Each node is booted using a small first-stage loader that then downloads and initialises the nucleus proper. Once running, a node communicates with its neighbours, booting them in turn, if required.
The Helios nucleus is composed of the kernel, libraries, loader service and the processor manager service.
Kernel
The Helios kernel is effectively a microkernel, providing a minimal abstraction above the hardware with most services implemented as non-privileged server processes. It provides memory allocation, process management, message passing and synchronisation primitives.
Libraries
The Helios nucleus contains three libraries: the system, server and utility libraries. The utility library provides some basic library routines for C programming that are shared by the other libraries. The system library provides the basic kernel interface, converting C function calls into messages sent to and from the kernel. It implements an abstraction that allows communication between processes regardless of their location in the network. The server library provides name space support functions for writing Helios servers, as described below.
Loader and processor manager
The remaining components of the nucleus are the loader and processor manager servers. Once the kernel is loaded, these processes are bootstrapped, and they integrate the newly running node into the Helios network.
Naming and servers
A key feature in Helios is its distributed name system. A Helios network implements a single unified name space, with a virtual root node, optional virtual network structuring nodes, nodes for each processor, and sub-processor name spaces provided by services. Names are similar to those in Unix, using a forward slash as the character separating textual naming elements.
The name space is managed by the network server, which is started by the I/O server once the nucleus is booted on its first attached node. The network server uses a provided network map to allocate processor names and initialise drivers for hardware devices at specific nodes in the network. The kernel includes a name resolver, and manages a local cache of routes to previously resolved names.
Servers are Helios processes that implement the General Server Protocol, typically with the support of the server library. The server protocol is conceptually similar to the Unix VFS API, and more closely to Plan 9's 9P. It requires that servers represent their resources as files, with standardised open/read/write/close-style operations. Similar to facilities such as /proc in Plan 9 and other Unix-like operating systems, resources such as files, I/O devices, users, and processes are all represented as virtual files in the namespace served by their managing process.
Key servers in Helios are the previously mentioned loader, processor manager and network server, together with the session manager, the window server and the file server. Others include the keyboard, mouse, RS232 and Centronics servers (built into the host I/O server), the null server (like Unix's /dev/null), and the logger server (like Unix's syslog).
Programming and utilities
From a user's perspective, Helios is quite similar to Unix. Most of the usual utility programs are provided, some with extensions to reflect the availability of multiple machines.
What is not immediately apparent is that Helios extends the notion of Unix pipes into a language called Component Distribution Language (CDL). In CDL, a typical Unix shell pipeline (a chain of commands connected by the | operator) is called a task force, and is transparently distributed by the Task Force Manager server across the available CPUs. CDL extends traditional Unix syntax with additional operators for bi-directional pipes, sequential and parallel process farm operators, load balancing and resource management.
Helios applications can be written using C, C++, FORTRAN and Modula-2. The POSIX library assists in porting existing Unix software, and provides a familiar environment for programmers. Helios does not support programs written in the occam programming language.
Hardware
Helios was predominantly intended to be used with Transputer systems. It is compatible with products from various manufacturers including INMOS' TRAM systems, the Meiko CS, Parsytec MultiCluster and SuperCluster, and the Telmat T.Node. The Atari Transputer Workstation was perhaps the highest profile Helios hardware, at least outside academia.
Helios can run on T4xx and T8xx, 32-bit Transputers (but not the T2xx 16-bit models) and includes device drivers for various SCSI, Ethernet and graphics hardware from Inmos, Transtech, and others.
In its later versions, Helios was ported to the TI TMS320C40 DSP and to the ARM architecture, the latter used by the Active Book tablet device.
References
Further reading
External links
Ram Meenakshisundaram's Transputer Home Page
Transputer.net Helios Library
Distributed operating systems
Microkernel-based operating systems
Unix variants
1988 software |
16918885 | https://en.wikipedia.org/wiki/Mark%20Pelczarski | Mark Pelczarski | Mark Pelczarski wrote and published some of the earliest digital multimedia computer software. In 1979 while teaching computer science at Northern Illinois University, he self-published Magic Paintbrush, which was one of the first digital paint programs for the Apple II, the first consumer computer that had color graphics capabilities.
Pelczarski was hired as an editor at SoftSide magazine in 1980, but then left to start Penguin Software in 1981 to publish his optimistically-titled Complete Graphics System, which included digital imaging and 3D wireframe rendering.
In the next year he co-wrote and published Special Effects and Graphics Magician with David Lubar, who was then writing for Creative Computing magazine. Special Effects produced digital effects with images and also contained one of the first uses of digital paintbrushes. Graphics Magician featured one of the first uses of vector graphics for image compression, as well as animation routines that made it easy for programmers to add animation to their software. Graphics Magician was licensed by most of the software publishers in the early 1980s for adding graphics and animation to their games and educational software, won numerous awards, and was one of the best selling programs of the time. It was the forerunner of software like Adobe Flash for compressed images and animation.
Pelczarski wrote a monthly column for Softalk magazine about computer graphics programming, and those columns were later collected into a book, Graphically Speaking.
In 1986 Pelczarski wrote and published one of the first digital music performance programs, MIDI Onstage, which allowed control of MIDI devices to accompany live performances. Soon after, he built the digital portion of recording studios for Jimmy Buffett and Dan Fogelberg.
Pelczarski returned to college teaching and taught one of the first online courses in 1996. As part of the development for the course he wrote Dialogue, one of the first web forum applications, which was made available free to dozens of other universities around the world as they entered into online education.
References
General References
Year of birth missing (living people)
Living people
Computer programmers
Northern Illinois University faculty |
4105196 | https://en.wikipedia.org/wiki/NetSys | NetSys | NetSys International (Pty) Ltd. is a South African–based company, specialising in solutions for the weather and aviation industries.
History
The company was registered in 1981 as a private company (Company registration no: 1981/02825/07).
NetSys was founded for the purpose of developing network communications equipment and automatic message switching systems. The first customer was the South African Defence Force which ordered a logistics network comprising some 50 nodes of proprietary networking devices. The experience gained during this deployment was instrumental in entering into the international market. In 1988, NetSys was awarded a contract by the American computer company, Control Data Corporation, to develop a meteorological message-switching system on the back of the NetSys network devices. This development and subsequent investment in aviation related product development enabled NetSys to become a major solutions supplier in the Meteorological and Aviation industries.
Since the work done for Control Data Corporation, NetSys has won a selection of tenders from a number of prestigious organisations. The main product was Weatherman, a message switch tailored for effective management of weather reports. In 1996, NetSys entered the new area of aviation with the commissioning of an automated Met and AIS pilot briefing system in Sweden, under the system name Met/AIS. Isak Lombard, an executive director of NetSys, also joined NetSys in 1996 to become part of the Met/AIS team. The Met/AIS system became FlightMan, which was subsequently acquired by aviation authorities in many other countries.
NetSys International, in a strategic move to ensure that it remained a leader in its chosen fields, sought ISO Accreditation during 1998, achieving this goal on 29 March 1999 when NetSys received its ISO 9001 Accreditation Certificate from DEKRA – ITS Certification Services (Pty) Ltd.
In another strategic move, NetSys decided in 2001 to supply commercial off the shelf (COTS) hardware for its system solutions – Cisco Systems for synchronous connections and Wide Area Network interfaces, 3Com for Local Area Network connections, Digi for asynchronous ports, Hewlett-Packard and Dell for server hardware are given preference when proposing new deals.
In 2003, NetSys decided to consolidate its applications under one umbrella user interface to provide a consistent look and feel for a diverse set of applications and different types of users. The solution was based on a client-server architecture in which much of the existing, proven server software (NSSRV) is re-used but exposed through modern client software (NSWS). This new solution is called the NetSys Solutions approach, where the necessary elements from the server and client software are chosen and utilised to create a best-fit solution for the customer's needs.
NetSys entered a new development direction in 2004 with the creation of its NOTAM management software. This system is currently in operation in Taiwan.
Origin of the name
NetSys was founded as Network Systems. The name was changed in 1988 to a shorter form, Netsys International, and finally in 2004 by the managing director of the time, to NetSys International.
In 1993, Netsys UK was registered to act as the NetSys UK representative.
Meteorological communication centres/Regional telecommunication hubs
NetSys has a number of users that fall into this category. These users are typically responsible for the ingestion and dissemination of weather data. They provide a service to their clients by selectively distributing data based on routing tables. The routing tables are based on WMO headers, or on AFTN send groups and addresses. Such a communication centre normally has a number of data sources and destinations which may support a variety of protocols. The product that is traditionally used is WeatherMan. More recent users use the NSSRV MHS with the modern NSWS Control Centre.
The NetSys software fulfills the needs of a Regional Telecommunications Hub (RTH), as specified by the World Meteorological Organization (WMO). In this category NetSys has its message switching software in South Africa (Pretoria), India (Delhi), Hungary (Budapest), Switzerland (Zurich) and Poland (Warsaw).
Another sub-category of these users is those responsible for message communication, but more under the auspices of the International Civil Aviation Organization (ICAO). These users are typically connected to the AFTN for the reception and transmission of MET data. The CoreMet system of UK NATS at the Heathrow communication centre is an example of such a site. This system is also responsible for data quality and the uplink of all OPMET data disseminated over the SADIS satellite broadcast. In this regard, WeatherMan is used extensively to automatically correct systematic errors, and rejected messages are sent to the NSWS Control Centre for manual correction with the aid of system-provided diagnostics and hints.
Another category of software used by NATS, for example, is the NSWS Met Data Monitor. It is used to monitor the timely arrival of data from different locations. Late arrivals are displayed on a dynamic map, and with this functionality NATS can very quickly identify regional communication breakdowns around the globe. It also broadcasts reports of late arrivals as administrative messages to SADIS users, thus applying some peer pressure for performance improvement.
Data banks
It might be required by international organisations that some countries provide international and local data banks of OPMET or NOTAM. Examples of which are the European OPMET databanks in Brussels and Vienna, and EAD at Eurocontrol. The NetSys systems at BelgoControl are responsible for providing the Brussels databank.
Flight information services/Flight planning centres
Certain civil aviation authorities put a high value on quality, personalised pre-flight briefing. They require a system that will automatically deliver pre-flight bulletins where they are needed: e-mailed, faxed or delivered on paper. Each bulletin is tailor-made from the flight route, derived from the flight plan. A typical requirement is to provide the most essential information possible and not to overload the cockpit with unnecessary paper and irrelevant data. For this purpose, these types of users normally require a narrow-route briefing, where only information that touches or overlaps a flight corridor is included. An example of an excellent flight briefing service is the FPC at Arlanda in Sweden. A suitable product for such a customer is NSWS Flight Briefing, with the necessary NSSRV components added as required, e.g. fax drivers. Legal records of all delivered flight briefings are also kept to provide an exact history, which may be crucial during an accident investigation.
Large airports
Many airports require a larger and integrated system. Normally this involves connecting a number of fringe systems and centralising data collection and distribution, the display of CCTV screens at control towers, processing of runway instrumentation output (e.g. RVR and wind) and much more. Here NetSys uses its NSSRV MHS to centralise the collection of data and distribute it in a controlled manner to all client systems. It allows a centralised archiving function that becomes crucial during accident investigations. NetSys can also integrate many data formats and massage them to fit the client's needs. An example is the special message format requirements for ATIS or VOLMET systems, as implemented in Belgium and India. NetSys developed one of the first interfaces to an AMHS to exchange meteorological data using the X.400 protocol.
Remote airports
There are many airports that do not have their own forecasters or a proper infrastructure linking them with forecasting centres, but that need to supply pilots with weather briefings. This is a potentially larger market, but with relatively low margins and project costs. NSWS WAFS, together with a satellite receiver for SADIS or ISCS, is used in these scenarios. NetSys has many such small sites, many of which have been sponsored by IATA or managed by ICAO. Examples of current sites include Saudi Arabia, Oman, Ecuador, Mongolia, Afghanistan and China. NSWS WAFS displays the WAFS data and allows the user to produce meteorological briefings.
Aviation forecasting
Some civil aviation authorities have a meteorological department that provide local weather forecasting. An example is BelgoControl in Belgium, where the organisation has a number of forecasters that produce local forecasts specifically for aviation customers such as airlines, freight operators and private pilots. NetSys also maintains a climate database for BelgoControl in which SYNOP, METAR and TAF messages for selected stations are decoded up to element level and stored for research and quality control purposes. TAMC in Taiwan is another example of a user that produces its own SIGWX charts with NSWS Forecaster.
Flight information centres/NOTAM Offices
There are many NOTAM offices in the world that still use a paper system to manage NOTAM. The lack of computerisation is due to the wide deviation from NOTAM standards, as well as the high importance of this data. Contrary to Met data, NOTAM data volumes are low but each message is of high importance. Carefully assigned sequence numbers guard the integrity of the database. This market is potentially huge but faces its own challenges. Integration with an AIP is essential should one want to fully utilise FPL routes. NetSys offers its NSWS NOTAM product specifically for NOTAM offices and its NSWS Flight Briefing for customers that require extended flight briefing.
Major customer installations
Including the major sites listed below, NetSys provides customer support in over 17 countries around the world.
SMI in Zurich, Switzerland
LFV in Stockholm, Sweden
NATS at Heathrow in London, United Kingdom
IMD in Delhi, Kolkata, Mumbai and Chennai, India
Belgocontrol in Brussels, Belgium
IM at Lisbon and Islands, Portugal
ANWS in Taipei, Taiwan
SAWS in Pretoria, South Africa
ATMB in Beijing, Shanghai and Guangzhou, China
Environment of South Africa
Information technology companies of South Africa
Companies based in the City of Tshwane
Organisations based in Pretoria |
47040254 | https://en.wikipedia.org/wiki/Scientific%20Research%20Group%20In%20Egypt | Scientific Research Group In Egypt | Scientific Research Group in Egypt (SRGE) is a group of young Egyptian researchers established under the chairman of the group founder Prof. Aboul Ella Hassanien (Professor of Information Technology at the Faculty of Computer and Information, Cairo University). The main target of the group is establishing a research community for sharing common interests. Therefore the research map of the group consists of multidisciplinary research interests including: Networks, Intelligent Environment, bio-informatics, chemoinformatics, and information security.
SRGE Objectives
Encourage young academic researchers to collaborate within a well-established research community.
Applying new computational designing and solutions for a number of new emerged challenges, especially those related to the Egyptian society.
Establishing connections and joint research activities with other international research groups.
Helping the group's PhD and Master's members to produce high-quality academic research, and guiding them to publish their work in prestigious and well-known journals.
SRGE Research Directions
Bioinformatics and Biomedical Engineering
The bioinformatics team focuses on clinical results to design computer-based decision support systems. The team is responsible for designing methods for sampling, analyzing and investigating collected data. Current research areas include medical image processing and the detection of breast cancer, liver fibrosis and tumors in sonar, MRI, fMRI and CT images.
Intelligent Environment and Applications
The intelligent environment (IE) refers to the application of ubiquitous and wireless devices to solve environment-related challenges, such as monitoring water and air pollution levels and smart home applications. Moreover, IE allows smart devices to react automatically to people's voice, gestures or movements, making them suitable for disabled people.
Network and Information Security (NIS)
Information security is one of the challenging issues for any emerging ICT technology. The NIS team is responsible for identifying security challenges and providing suitable solutions. Current research includes image authentication through watermarking and biometric approaches, as well as secure communication and cryptography.
Social networks and Graph Mining
Social networks, along with the increasing importance of the graph as a modeling tool for complex structures, encouraged the SRGE board members to establish this research direction. Currently, a number of PhD and MSc students are carrying out projects to analyse large amounts of structured data through the application of data and graph mining approaches.
Animal Identification
New advances in wireless communication make it possible to apply human biometric identification approaches to the animal identification domain and to overcome some of the limitations of traditional identification methodologies. Animal biometrics include muzzle prints, iris patterns and vascular patterns. The responsible team designs and enhances current computational animal identification methodologies.
References
External links
Official website
Computer science research organizations
Research and development organizations
Information technology in Egypt |
29435262 | https://en.wikipedia.org/wiki/HITAC | HITAC | HITAC (HItachi Transistor Automatic Computer) is the designation for the majority of Hitachi large and midrange computer models spanning several decades. The HITAC 301, released in May 1958, was Hitachi's first fully transistorized computer model. Earlier Hitachi computers made use of semiconductors known as parametrons.
Beginnings
Hitachi Ltd. (hereafter Hitachi) first began research into analog computers in 1951, with digital computers following in 1956. The HIPAC MK-1 was developed as a prototype based on parametron logic circuits; the main transmission lines designed for the Tadami power development utilized these machines as tensiometers. The HIPAC 101 and HIPAC 103 were then commercialized as parametron calculators, and parallel research into transistor computers soon led to commercial products as well.
HIPAC MK-1(1957)
38-bit words, fixed-point arithmetic, 1024 words of drum memory.
HIPAC 101(1960)
42-bit words, fixed-point arithmetic, 2048 words of drum memory. The machine was exhibited at the Automath exhibition in Paris in 1959.
HIPAC 103(1961)
48-bit words, fixed- and floating-point arithmetic, 1024/4096 words of magnetic-core memory (plus 8192 words of drum memory).
Based on the ETL Mark IV transistorized computer, Hitachi's first commercial transistor model was introduced in 1959. The Japan Electronics and Information Technology Industries Association produced this model, which was intended for business use. In the following year, the HITAC 501 was released, and it now serves the Kansai Electric substation in eastern Osaka as a control system. Additionally, the Electric Testing Lab continued to accept orders for the ETL Mark V; using this as a base, researchers at the University of Kyoto cooperated to improve the technology, which resulted in the HITAC 102 (also referred to as the Kyoto-Daigaku Digital Computer 1). Researchers at Japan's Economic Planning Agency introduced an improved version (the HITAC 102B) as a substitute for their punch card system. In 1961, the HITAC 201 was developed as a smaller computer for business use.
HITAC 301(1959)
BCD, 12-digit fixed-point representation, 1960 words of drum memory (high-speed access to 60 words is possible).
HITAC 501(1960)
First computer used as a control system (insufficient details).
HITAC 102(1960)
ETL Mark V prototype
HITAC 201 (1961)
BCD, 11-digit fixed-point representation, 4000 words of drum memory.
Moreover, in 1958 the Railway Technical Research Institute of Japan introduced MARS, a train seat reservation system. Although the first version of MARS did not use a HITAC system, its successor, MARS 101, utilized the HITAC 3030; from that point on, the MARS systems consistently used HITAC mainframes.
Hitachi |
1964509 | https://en.wikipedia.org/wiki/Korea%20Computer%20Center | Korea Computer Center |
The Korea Computer Center (KCC) is the North Korean government information technology research center. It was founded on 24 October 1990. KCC, which administered the .kp country code top-level domain until 2011, employs more than 1,000 people.
KCC operates eight development and production centers, as well as eleven regional information centers. It runs the KCC Information Technology College and its Information Technology Institute. The KCC has branch offices in China, Germany, Syria and the United Arab Emirates. It has an interest in Linux research, and started the development of the Red Star OS distribution localised for North Korea.
KCC is a part of the political establishment and not entirely an IT company per se. Its technological state and general modernity are seen as lagging well behind the rest of the world, even by the general standards of North Korea. For example, the .kp ccTLD was registered in 2007, but KCC did not manage to get a working registry running for three years, despite the support of a European company. KCC has still not implemented a working ccTLD infrastructure, something the North Korean government has had as a goal for several years.
While KCC mainly works on projects within North Korea, it has since 2001 served clients in Europe, China, South Korea, Japan, and the Middle East. It operates Naenara, North Korea's official web portal.
Nosotek is another North Korean IT venture company that develops computer games; two of them were published by News Corporation. Another such company is the Pyongyang Information Center.
In early 2015, the KCC was reorganized, with all functions not related to the development of Red Star OS being transferred to other entities.
Products
"Sam heug" search engine
"Naenara" web browser
"Chosun Jang-Gi", a computer game
Kwangmyong, North Korea's closed national intranet
"Chosun Ryo-Li", a food-related website
"Hana", a Korean language input method editor
"Koryo", English-Korean/Korean-English translation software using an electronic pen
"Nunbora", a Korean language voice recognition software
"Pulgunbyol" (Red Star OS), a Linux distribution
"Cyber Friend", a video conference system
"Cyber Star", an electronic education system
"SilverStar Paduk", a Go computer game
"HMS Player", a media player
Samjiyon tablet
See also
Internet in North Korea
Economy of North Korea
References
External links
Korea Computer Centre
Government agencies of North Korea
Information technology in North Korea
Computer science institutes
Research institutes in North Korea
1990 establishments in North Korea
Information technology research institutes
Computer companies of North Korea |
13532731 | https://en.wikipedia.org/wiki/NDPMon | NDPMon | The Neighbor Discovery Protocol Monitor (NDPMon) is a diagnostic software application used by network administrators for monitoring ICMPv6 packets in Internet Protocol version 6 (IPv6) networks. NDPMon observes the local network for anomalies in the function of nodes using Neighbor Discovery Protocol (NDP) messages, especially during the Stateless Address Autoconfiguration. When an NDP message is flagged, it notifies the administrator by writing to the syslog or by sending an email report. It may also execute a user-defined script. For IPv6, NDPMon is an equivalent of Arpwatch for IPv4, and has similar basic features with added attacks detection.
NDPMon runs on Linux distributions, Mac OS X, FreeBSD, NetBSD and OpenBSD. It uses a configuration file containing the expected and valid behavior for nodes and routers on the link. This includes the router addresses (MAC and IP) and the prefixes, flags and parameters announced.
NDPMon also maintains a list of neighbors on the link and monitors all advertisements and network changes. It permits tracking the usage of cryptographically generated interface identifiers or temporary global addresses when Privacy extensions are enabled.
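The checks NDPMon performs are, at their core, comparisons of observed Neighbor Discovery traffic against the expected behaviour recorded in its configuration. The following Python sketch using the Scapy library is only an illustration of that idea, not NDPMon's actual code (NDPMon itself is written in C on top of libpcap); the interface name and the table of legitimate routers are hypothetical values rather than NDPMon configuration syntax.

# Illustrative sketch only: mimics the "wrong router MAC/IP" style of check
# described in the alerts below. Requires the Scapy packet library and
# sufficient privileges to sniff traffic.
from scapy.all import sniff
from scapy.layers.l2 import Ether
from scapy.layers.inet6 import IPv6, ICMPv6ND_RA

# Expected (MAC, IPv6 source) pairs for legitimate routers -- assumed values.
LEGITIMATE_ROUTERS = {
    ("00:11:22:33:44:55", "fe80::1"),
}

def check_ra(pkt):
    # Only inspect Router Advertisements.
    if not pkt.haslayer(ICMPv6ND_RA):
        return
    src_mac = pkt[Ether].src.lower()
    src_ip = pkt[IPv6].src.lower()
    if (src_mac, src_ip) not in LEGITIMATE_ROUTERS:
        print(f"ALERT: unexpected Router Advertisement from {src_mac} / {src_ip}")

# "eth0" is a placeholder interface name.
sniff(iface="eth0", filter="icmp6", prn=check_ra, store=False)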
NDPMon is free software published under the GNU Lesser General Public License version 2.1.
Alerts and reports
NDPMon generates various reports and alerts, including:
wrong couple MAC/IP: the MAC address is valid, so is the IP address, but not both of them together
wrong router MAC: invalid MAC address
wrong router IP address, invalid IP address
wrong prefix: invalid IPv6 prefix
wrong RA flags: invalid flags in the RA
wrong RA params: wrong parameter in the RA (lifetimes, timers...)
wrong router redirect: the router which emitted the redirect is not valid
router flag in Neighbor Advertisement: a node not declared as a router announced itself as one
Duplicate Address Detection DOS: duplicate address detection denial of service
changed ethernet address: a Global IPv6 address has a new MAC address
flip flop: a node uses two MAC addresses one after the other
reused old Ethernet address: reuse of an old MAC address
Unknown MAC Manufacturer: MAC vendor unknown, might be a forged one
new station: new node on the link
new IPv6 Global Address: new IPv6 Global address for a node
new IPv6 Link Local Address: new IPv6 Link Local address for a node
wrong couple MAC/LLA: wrong couple source Ethernet and source LLA addresses, i.e. Ethernet and Link Local Addresses are found but in different neighbors
Ethernet mismatch: link layer Ethernet address and address in ICMPv6 option do not match
IP Multicast
Ethernet Broadcast
Available plugins
A set of plugins are available for NDPMon:
MAC vendor resolution: compares the vendor part of a MAC address with a known base
Web interface: caches and alerts are converted to HTML files using XSLT for real time display in a Web server
Countermeasures: packets are forged and sent to deprecated rogue RAs or NAs
Syslog filtering: logrotate and logs redirection to /var/log/ndpmon.log
Remote probes (Experimental): distributed monitoring and logging to a central instance using SOAP/TLS
Custom rules (Experimental): lets users define their own rules for raising alerts
See also
Internet Protocol Suite
References
External links
Sourceforge project site
Internet Protocol based network software |
12852616 | https://en.wikipedia.org/wiki/Mary%20Shaw%20%28computer%20scientist%29 | Mary Shaw (computer scientist) | Mary Shaw (born 1943) is an American software engineer, and the Alan J. Perlis Professor of Computer Science in the School of Computer Science at Carnegie Mellon University, known for her work in the field of software architecture.
Biography
Early life
Mary M. Shaw was born in Washington, D.C. in 1943. Her father, Eldon Shaw, was a civil engineer and economist for the Department of Agriculture, and her mother, Mary Shaw, was a homemaker. Shaw attended high school in Bethesda, Maryland, during the Sputnik-era Cold War, when technology was rapidly advancing.
While in high school, Shaw participated for two summers in an after-school program that taught students about computers. The program, run by International Business Machines (IBM), was a chance for students to explore fields outside the normal curriculum. This was Shaw's first introduction to computers.
Studies and career
Shaw obtained her BA from Rice University around 1965, and her PhD in computer science from Carnegie-Mellon University in 1972.
After graduating from Rice University, Shaw started her career in industry, working as a systems programmer at the Research Analysis Corporation. She also continued to do research at Rice University. In 1972 she joined the Carnegie Mellon University faculty, where she was eventually appointed Professor of Computer Science. From 1984 to 1987 she was also Chief Scientist at its Software Engineering Institute, from 1992 to 1999 Associate Dean for Professional Education, and from 2001 to 2006 Co-Director of the Sloan Software Industry Center.
In 2011, Mary Shaw and David Garlan received the Outstanding Research Award from ACM SIGSOFT, the Association of Computing Machinery's Special Interest Group on Software Engineering, for their "significant and lasting software engineering research contributions through the development and promotion of software architecture."
On October 3, 2014, U.S. President Barack Obama awarded Shaw with National Medal of Technology and Innovation. She was named recipient of the award in 2012.
Work
Shaw's main area of research interest is software engineering, including architectural, educational and historical aspects. Shaw authored seminal works in the field of software architecture along with David Garlan.
Software Architecture, 1996
Shaw's most cited work "Software Architecture: Perspectives on an Emerging Discipline," co-authored with David Garlan, examines the concept of "architectures for software systems as well as better ways to support software development." The book aims:
"... to bring together the useful abstractions of systems design and the notations and tools of the software developer, and look at patterns used for system organization... to illustrate the discipling and examine the ways in which architectural design can impact software design. Our selection emphasizes informal descriptions, touching lightly on formal notations and specifications and on tools to support them."
In this work Garlan and Shaw "describe an architecture of a system as a collection of computational components together with a description of the interactions between these components—the connectors." A component is simply described as "the elements that perform computation."
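As a toy illustration of this definition (not an example taken from the book), the components in a simple pipe-and-filter design can be thought of as independent computational steps, while the connector is whatever carries data between them, in this case plain function composition in Python:

# Three components: a data source, a transformation filter and a sink.
def read_source():
    return ["3", "1", "2"]

def sort_filter(items):
    return sorted(items)

def print_sink(items):
    for item in items:
        print(item)

# The "connector" describing how the components interact is, in this toy
# case, simply the composition that pipes one component's output into the
# next component's input.
print_sink(sort_filter(read_source()))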
Reception
In 2011, Shaw and Garlan were awarded the ACM SIGSOFT Outstanding Research Award in honor of their pioneering research in the field of software architecture. William Scherlis, the director of Carnegie Mellon University's (CMU) Institute for Software Research, commented on Shaw and Garlan's contribution:
The term 'software architecture' was first used in the late 1960s, but its significance didn't become clear until almost 20 years later, when David and Mary asserted that architecture could be addressed using systematic approaches. Their work and that of their colleagues here at Carnegie Mellon has since led to engineering methods for architectural modeling, analysis and identification of architecture-level patterns, the use of which has now become standard in the engineering of larger scale software systems.
Selected publications
Mary Shaw and Frank Hole. Computer analysis of chronological seriation, 1967.
Mary Shaw, Alan Perlis and Frederick Sayward (eds.) Software metrics: an analysis and evaluation, 1981.
Mary Shaw (ed). Carnegie-Mellon curriculum for undergraduate computer science, 1985.
Mary Shaw and David Garlan. Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall, 1996.
Mary Shaw, Sufficient Correctness and Homeostasis in Open Resource Coalitions: How Much Can You Trust Your Software System, "" 2000,
Articles, a selection:
Mary Shaw. "Reduction of Compilation Costs Through Language Contraction". In: Communications of the ACM, 17(5):245–250, 1974.
Mary Shaw. "Prospects for an Engineering Discipline of Software". in: IEEE Software, 7(6):15–24, 1990.
Mary Shaw. "Comparing Architectural Design Styles". in: IEEE Software, 12(6):27–41, 1995.
"Mary Shaw Facts." Mary Shaw Facts. Your Dictionary, n.d. Web. 01 Feb. 2017.
"Mary Shaw." Mary Shaw - Engineering and Technology History Wiki. ETHW, n.d. Web. 01 Feb. 2017.
References
External links
Mary Shaw home page
1943 births
Living people
American computer scientists
American engineering writers
Software engineering researchers
Rice University alumni
Rice University faculty
Carnegie Mellon University alumni
Carnegie Mellon University faculty
American women computer scientists |
3034286 | https://en.wikipedia.org/wiki/Electronic%20lock | Electronic lock | An electronic lock (or electric lock) is a locking device which operates by means of electric current. Electric locks are sometimes stand-alone with an electronic control assembly mounted directly to the lock. Electric locks may be connected to an access control system, the advantages of which include: key control, where keys can be added and removed without re-keying the lock cylinder; fine access control, where time and place are factors; and transaction logging, where activity is recorded. Electronic locks can also be remotely monitored and controlled, both to lock and to unlock.
Operation
Electric locks use magnets, solenoids, or motors to actuate the lock by either supplying or removing power. Operating the lock can be as simple as using a switch, for example an apartment intercom door release, or as complex as a biometric based access control system.
There are two basic types of locks: "preventing mechanism" or operation mechanism.
Types
Electromagnetic lock
The most basic type of electronic lock is a magnetic lock (informally called a "mag lock"). A large electro-magnet is mounted on the door frame and a corresponding armature is mounted on the door. When the magnet is powered and the door is closed, the armature is held fast to the magnet. Mag locks are simple to install and are very attack-resistant. One drawback is that improperly installed or maintained mag locks can fall on people, and also that one must unlock the mag lock both to enter and to leave. This has caused fire marshals to impose strict rules on the use of mag locks and access control practice in general. Additionally, NFPA 101 (Standard for Life Safety and Security), as well as the Americans with Disabilities Act (ADA), require "no prior knowledge" and "one simple movement" to allow "free egress". This means that in an emergency, a person must be able to move to a door and immediately exit with one motion (requiring no push buttons, having another person unlock the door, reading a sign, or "special knowledge").
Other problems include a lag time (delay), because the collapsing magnetic field holding the door shut does not release instantaneously. This lag time can cause a user to collide with the still-locked door. Finally, mag locks fail unlocked, in other words, if electrical power is removed they unlock. This could be a problem where security is a primary concern. Additionally, power outages could affect mag locks installed on fire listed doors, which are required to remain latched at all times except when personnel are passing through. Most mag lock designs would not meet current fire codes as the primary means of securing a fire listed door to a frame. Because of this, many commercial doors (this typically does not apply to private residences) are moving over to stand-alone locks, or electric locks installed under a Certified Personnel Program.
The first mechanical recodable card lock was invented in 1976 by Tor Sørnes, who had worked for VingCard since the 1950s. The first card lock order was shipped in 1979 to Westin Peachtree Plaza Hotel, Atlanta, US. This product triggered the evolution of electronic locks for the hospitality industry.
Electronic strikes
Electric strikes (also called electric latch release) replace a standard strike mounted on the door frame and receive the latch and latch bolt. Electric strikes can be simplest to install when they are designed for one-for-one drop-in replacement of a standard strike, but some electric strike designs require that the door frame be heavily modified. Installation of a strike into a fire listed door (for open backed strikes on pairs of doors) or the frame must be done under listing agency authority, if any modifications to the frame are required (mostly for commercial doors and frames). In the US, since there is no current Certified Personnel Program to allow field installation of electric strikes into fire listed door openings, listing agency field evaluations would most likely require the door and frame to be de-listed and replaced.
Electric strikes can allow mechanical free egress: a departing person operates the lockset in the door, not the electric strike in the door frame. Electric strikes can also be either "fail unlocked" (except in Fire Listed Doors, as they must remain latched when power is not present), or the more-secure "fail locked" design. Electric strikes are easier to attack than a mag lock. It is simple to lever the door open at the strike, as often there is an increased gap between the strike and the door latch. Latch guard plates are often used to cover this gap.
Electronic deadbolts and latches
Electric mortise and cylindrical locks are drop-in replacements for door-mounted mechanical locks. An additional hole must be drilled in the door for electric power wires. Also, a power transfer hinge is often used to get the power from the door frame to the door. Electric mortise and cylindrical locks allow mechanical free egress, and can be either fail unlocked or fail locked. In the US, UL-rated doors must retain their rating: in new construction, doors are cored and then rated, but in retrofits the doors must be re-rated.
Electrified exit hardware, sometimes called "panic hardware" or "crash bars", are used in fire exit applications. A person wishing to exit pushes against the bar to open the door, making it the easiest of mechanically-free exit methods. Electrified exit hardware can be either fail unlocked or fail locked. A drawback of electrified exit hardware is their complexity, which requires skill to install and maintenance to assure proper function. Only hardware labeled "Fire Exit Hardware" can be installed on fire listed doors and frames and must meet both panic exit listing standards and fire listing standards.
Motor-operated locks are used throughout Europe. A European motor-operated lock has two modes, day mode where only the latch is electrically operated, and night mode where the more secure deadbolt is electrically operated.
In South Korea, most homes and apartments have installed electronic locks, which are currently replacing the lock systems in older homes. South Korea mainly uses a lock system by Gateman.
Passive electronic lock
The "passive" in passive electronic locks means no power supply. Like electronic deadbolts, it is a drop-in replacement for mechanical locks. But the difference is that passive electronic locks do not require wiring and are easy to install.
The passive electronic lock integrates a miniature electronic single-chip microcomputer. There is no mechanical keyhole; only three metal contacts are retained. When unlocking, the electronic key is inserted into the keyhole of the passive electronic lock, so that the three contacts on the head end of the key touch the three contacts on the passive electronic lock. The key then supplies power to the passive electronic lock and, at the same time, reads the ID number of the passive electronic lock for verification. When the verification is passed, the key powers the coil in the passive electronic lock. The coil generates a magnetic field and drives the magnet in the passive electronic lock to unlock. At this point, the key is turned to drive the mechanical structure in the passive electronic lock and open the lock body. After successful unlocking, the key records the ID number of the passive electronic lock and the time at which it was unlocked. Passive electronic locks can only be unlocked by a key with unlocking authority; unlocking fails if the key has no such authority.
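A highly simplified sketch of this exchange, in Python rather than the embedded firmware such locks actually use, is shown below; the lock ID values and class layout are illustrative assumptions only.

from datetime import datetime

class PassiveLock:
    def __init__(self, lock_id):
        self.lock_id = lock_id      # ID stored in the lock's microcomputer

    def energise_coil(self):
        # In hardware this drives the magnet that frees the mechanism.
        return True

class ElectronicKey:
    def __init__(self, authorised_lock_ids):
        self.authorised = set(authorised_lock_ids)
        self.audit_log = []         # unlock history kept inside the key

    def unlock(self, lock):
        # The key powers the lock and reads its ID number.
        lock_id = lock.lock_id
        # Verification: does this key hold unlocking authority for the lock?
        if lock_id not in self.authorised:
            return False            # unlocking fails without authority
        # Power the coil so the magnet releases the mechanism.
        opened = lock.energise_coil()
        if opened:
            # After a successful unlock, record the lock ID and the time.
            self.audit_log.append((lock_id, datetime.now()))
        return opened

key = ElectronicKey(authorised_lock_ids={"LOCK-0001"})   # hypothetical IDs
print(key.unlock(PassiveLock("LOCK-0001")))              # True
print(key.unlock(PassiveLock("LOCK-0002")))              # False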
Passive electronic locks are currently used in a number of specialized fields, such as power utilities, water utilities, public safety, transportation, data centers, etc.
Programmable lock
The programmable electronic lock system is realized with programmable keys, electronic locks and software. When the identification code of the key matches the identification code of the lock, the key can be operated to unlock it. The internal structure of the lock contains a cylinder with a contact (lock slot) that touches the key, and part of it is an electronic control device that stores and verifies the received identification code and responds (deciding whether to unlock). The key contains a power supply, usually a rechargeable or replaceable battery, used to drive the system; it also includes electronic storage and a control device for storing the identification codes of locks.
The software is used to set and modify the data of each key and lock.
Using this type of key and lock control system does not require users to change their habits. In addition, compared with a purely mechanical system, its advantage is that one key can open multiple locks instead of requiring a bunch of keys as at present. A single key can contain many lock identification codes, and unlock permissions can be set for each individual user.
Authentication methods
A feature of electronic locks is that they can be deactivated or opened by authentication, without the use of a traditional physical key:
Numerical codes, passwords, and passphrases
Perhaps the most common form of electronic lock uses a keypad to enter a numerical code or password for authentication. Some feature an audible response to each press. Combination lengths are usually between four and six digits long.
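A minimal sketch of the comparison such a keypad lock performs is given below; the stored combination is a made-up value, and real products add features (lockout timers, tamper alarms) that are not shown.

STORED_CODE = "4821"   # hypothetical four-digit combination

def try_unlock(entered_digits: str) -> bool:
    # Unlock only when the entered digits match the stored combination.
    return entered_digits == STORED_CODE

print(try_unlock("4821"))  # True  -> actuate the lock
print(try_unlock("1234"))  # False -> stay locked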
Security tokens
Another means of authenticating users is to require them to scan or "swipe" a security token such as a smart card or similar, or to interact a token with the lock. For example, some locks can access stored credentials on a personal digital assistant (PDA) or smartphone, by using infrared, Bluetooth, or NFC data transfer methods.
Biometrics
As biometrics become more and more prominent as a recognized means of positive identification, their use in security systems increases. Some electronic locks take advantage of technologies such as fingerprint scanning, retinal scanning, iris scanning and voice print identification to authenticate users.
RFID
Radio-frequency identification (RFID) is the use of an object (typically referred to as an "RFID tag") applied to or incorporated into a product, animal, or person for the purpose of identification and tracking using radio waves. Some tags can be read from several meters away and beyond the line of sight of the reader. This technology is also used in some modern electronic locks. The technology has existed since before the 1970s but has become much more prevalent in recent years due to its use in applications such as global supply chain management and pet microchipping.
See also
Access badge
Common Access Card (CAC)
Credential
Electric strike
Electromagnetic lock
Keycard
Physical security
References
Locks (security device)
Electronic circuits
Articles containing video clips |
19151448 | https://en.wikipedia.org/wiki/Identity%20correlation | Identity correlation | In information systems, identity correlation is a process that reconciles and validates the proper ownership of disparate user account login IDs (user names) that reside on systems and applications throughout an organization and can permanently link ownership of those user account login IDs to particular individuals by assigning a unique identifier (also called primary or common keys) to all validated account login IDs.
The process of identity correlation validates that individuals only have account login IDs for the appropriate systems and applications a user should have access to according to the organization's business policies, access control policies and various application requirements.
A unique identifier, in the context of identity correlation, is any identifier which is guaranteed to be unique among all identifiers used for a group of individuals and for a specific purpose. There are three main types of unique identifiers, each corresponding to a different generation strategy:
Serial numbers, assigned incrementally
Random numbers, selected from a number space much larger than the maximum (or expected) number of objects to be identified. Although not really unique, some identifiers of this type may be appropriate for identifying objects in many practical applications, and so are referred to as “unique” within this context
Names or codes allocated by choice, but forced to be unique by keeping a central registry such as the EPC Information Services of the EPCglobal Network
For the purposes of identity correlation, a unique identifier is typically a serial number or a random number selected from a number space much larger than the maximum number of individuals who will be identified. A unique identifier, in this context, is typically represented as an additional attribute in the directory associated with each particular data source. However, adding an attribute to each system-specific directory may conflict with application-specific or business requirements, depending on the organization. Under these circumstances, unique identifiers may not be an acceptable addition.
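The first two generation strategies above can be sketched as follows; the helper names and the UID format are illustrative only, not part of any particular product.

```python
import itertools
import secrets

# Serial identifiers: assigned incrementally from a counter.
_serial_counter = itertools.count(1)

def next_serial_id() -> str:
    return f"UID-{next(_serial_counter):08d}"

# Random identifiers: drawn from a space (here 2**64) much larger than the
# number of individuals to be identified, so collisions are extremely unlikely.
def random_id() -> str:
    return f"UID-{secrets.randbelow(2**64):020d}"

print(next_serial_id())   # UID-00000001
print(next_serial_id())   # UID-00000002
print(random_id())        # e.g. UID-01234567890123456789
```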
Basic Requirements of Identity Correlation
Identity Correlation involves several factors:
1. Linking Disparate Account IDs Across Multiple Systems or Applications
Many organizations must find a method to comply with audits that require them to link disparate application user identities with the actual people who are associated with those user identities.
Some individuals may have a fairly common first and/or last name, which makes it difficult to link the right individual to the appropriate account login ID, especially when those account login IDs are not linked to enough specific identity data to remain unique.
A typical construct of the login ID, for example, can be the 1st character of givenname + the next 7 characters of sn, with incremental uniqueness. This would produce login IDs like jsmith12, jsmith13, jsmith14, etc. for users John Smith, James Smith and Jack Smith, respectively.
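A minimal sketch of such a construction rule is shown below. The attribute names (givenname and sn, as in LDAP) come from the example above, while the starting counter value is chosen arbitrarily so that the output matches the sample IDs.

```python
from collections import defaultdict

# Running counter per base login ID, seeded arbitrarily for this example.
_next_suffix: dict[str, int] = defaultdict(lambda: 12)

def make_login_id(givenname: str, sn: str) -> str:
    """1st character of givenname + next 7 characters of sn, made unique
    by appending an incrementing suffix when the base ID is already taken."""
    base = (givenname[:1] + sn[:7]).lower()
    suffix = _next_suffix[base]
    _next_suffix[base] += 1
    return f"{base}{suffix}"

print(make_login_id("John", "Smith"))   # jsmith12
print(make_login_id("James", "Smith"))  # jsmith13
print(make_login_id("Jack", "Smith"))   # jsmith14
```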
Conversely, one individual might undergo a name change, either formally or informally, which can cause the new account login IDs that the individual acquires to look drastically different from the account login IDs that individual acquired prior to the change.
For example, a woman could get married and decide to use her new surname professionally. If her name was originally Mary Jones but she is now Mary Smith, she could call HR and ask them to update her contact information and email address with her new surname. This request would update her Microsoft Exchange login ID to mary.smith to reflect that surname change, but it might not actually update her information or login credentials in any other system she has access to. In this example, she could still be mjones in Active Directory and mj5678 in RACF.
Identity correlation should link the appropriate system account login IDs both to individuals whose identity data is nearly indistinguishable and to login IDs that appear, from a system-by-system standpoint, to be drastically different but should be associated with the same individual.
For more details on this topic, please see: The Second Wave: Linking Identities to Contexts
2. Discovering Intentional and Unintentional Inconsistencies in Identity Data
Inconsistencies in identity data typically develop over time in organizations as applications are added, removed or changed and as individuals attain or retain an ever-changing stream of access rights while moving into and out of the organization.
Application user login IDs do not always have a consistent syntax across different applications or systems, and many user login IDs are not specific enough to correlate them directly back to one particular individual within an organization.
User data inconsistencies can also occur due to simple manual input errors, non-standard nomenclature, or name changes that might not be identically updated across all systems.
The identity correlation process should take these inconsistencies into account to link up identity data that might seem to be unrelated upon initial investigation.
3. Identifying Orphan or Defunct Account Login IDs
Organizations can expand and consolidate from mergers and acquisitions, which increases the complexity of business processes, policies and procedures as a result.
As an outcome of these events, users may move to different parts of the organization, attain a new position within the organization, or leave the organization altogether. At the same time, each new application that is added has the potential to produce a completely new, unique user ID.
Some identities may become redundant, others may be in violation of application-specific or more widespread departmental policies, others could be related to non-human or system account IDs, and still others may simply no longer be applicable for a particular user environment.
Projects that span different parts of the organization or focus on more than one application become difficult to implement because user identities are often not properly organized or recognized as being defunct due to changes in the business process.
An identity correlation process must identify all orphan or defunct account identities that no longer belong as a result of such drastic shifts in an organization's infrastructure.
4. Validating Individuals to their Appropriate Account IDs
Under regulations such as the Sarbanes-Oxley Act and the Gramm-Leach-Bliley Act, organizations are required to ensure the integrity of each user across all systems and to account for all access a user has to the various back-end systems and applications in an organization.
If implemented correctly, identity correlation will expose compliance issues. Auditors frequently ask organizations to account for who has access to what resources. For companies that have not already fully implemented an enterprise identity management solution, identity correlation and validation is required to adequately attest to the true state of an organization's user base.
This validation process typically requires interaction with individuals within an organization who are familiar with the organization's user base from an enterprise-wide perspective, as well as those individuals who are responsible and knowledgeable of each individual system and/or application-specific user base.
In addition, much of the validation process might ultimately involve direct communication with the individual in question to confirm particular identity data that is associated with that specific individual.
5. Assigning a unique primary or common key for every system or application Account ID that is attached to each individual
In response to various compliance pressures, organizations have the option of introducing unique identifiers for their entire user base to validate that each user belongs in each specific system or application in which he or she has login capabilities.
In order to effectuate such a policy, various individuals familiar with the organization's entire user base, as well as each system-specific user-base, must be responsible for validating that certain identities should be linked together and other identities should be disassociated from each other.
Once the validation process is complete, a unique identifier can be assigned to that individual and his or her associated system-specific account login IDs.
Approaches to Linking Disparate Account IDs
As mentioned above, in many organizations, users may sign into different systems and applications using different login IDs. There are many reasons to link these into "enterprise-wide" user profiles.
There are a number of basic strategies to perform this correlation, or "ID mapping":
Assume that account IDs are the same:
In this case, mapping is trivial.
This actually works in many organizations, in cases where a rigorous and standardized process has been used to assign IDs to new users for a long time.
Import mapping data from an existing system:
If an organization has implemented a robust process for mapping IDs to users over a long period, this data is already available and can be imported into any new Identity management system.
Exact matching on attribute values:
Find one identity attribute or a combination of attributes on one system which correlate to one or more attributes on another system.
Connect IDs on the two systems by finding users whose attribute(s) are the same.
Approximate matching on attribute values:
The same as above, but instead of requiring attributes or expressions to match exactly, tolerate some differences.
This allows for misspelled, inconsistently capitalized and otherwise somewhat diverse names and similar identity values.
The risk here is that accounts which should not be connected will accidentally be matched by this process. (A short sketch of both the exact and approximate matching strategies appears after this list.)
Self-service login ID reconciliation:
Invite users to fill in a form and indicate which IDs, on which systems, they own.
Users might lie or make mistakes—so it's important to validate user input, for example by asking users to also provide passwords and to check those passwords.
Users might not recognize system names—so it's important to offer alternatives or ask users for IDs+passwords in general, rather than asking them to specify which system those IDs are for.
Hire a consultant and/or do it manually:
This still leaves open the question of where the data comes from—perhaps by interviewing every user in question?
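The exact and approximate matching strategies can be sketched as follows. This is a minimal illustration that assumes each system exports its accounts as simple dictionaries of attributes; difflib's similarity ratio stands in for whatever fuzzy-matching measure an organization actually uses, and the 0.85 threshold is arbitrary.

```python
from difflib import SequenceMatcher

# Accounts exported from two systems, keyed by login ID, with identity attributes.
hr_accounts = {
    "E1001": {"first": "Mary", "last": "Smith", "email": "mary.smith@example.com"},
    "E1002": {"first": "John", "last": "Smyth", "email": "john.smyth@example.com"},
}
racf_accounts = {
    "mj5678": {"first": "Mary", "last": "Jones"},
    "jsmyth1": {"first": "Jon", "last": "Smyth"},
}

def normalize(value: str) -> str:
    return value.strip().lower()

def exact_match(a: dict, b: dict, attrs=("first", "last")) -> bool:
    """Exact matching: the chosen attributes must be identical after normalization."""
    return all(normalize(a[k]) == normalize(b.get(k, "")) for k in attrs)

def approximate_match(a: dict, b: dict, attrs=("first", "last"), threshold=0.85) -> bool:
    """Approximate matching: tolerate misspellings and case differences,
    at the risk of linking accounts that should stay separate."""
    score = sum(
        SequenceMatcher(None, normalize(a[k]), normalize(b.get(k, ""))).ratio()
        for k in attrs
    ) / len(attrs)
    return score >= threshold

for hr_id, hr in hr_accounts.items():
    for sys_id, acct in racf_accounts.items():
        if exact_match(hr, acct) or approximate_match(hr, acct):
            print(f"candidate link: HR {hr_id} <-> RACF {sys_id}")
```

Note that neither strategy links Mary Smith to her old mjones-style accounts after a name change; that case still requires self-service reconciliation or manual validation, as described above.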
Common Barriers to Performing Identity Correlation
1. Privacy Concerns
Often, any process that requires an in-depth look into identity data raises privacy and disclosure concerns. Part of the identity correlation process implies that each particular data source will need to be compared against an authoritative data source to ensure consistency and validity against relevant corporate policies and access controls.
Any such comparison that involves an exposure of enterprise-wide, authoritative, HR-related identity data will require various non-disclosure agreements either internally or externally, depending on how an organization decides to undergo an identity correlation exercise.
Because authoritative data is frequently highly confidential and restricted, such concerns may prevent an identity correlation activity from being performed thoroughly and sufficiently.
2. Extensive Time and Effort Requirements
Most organizations experience difficulties understanding the inconsistencies and complexities that lie within their identity data across all of their data sources. Typically, the process cannot be completed accurately or sufficiently by manually comparing two lists of identity data, or even by executing simple scripts to find matches between two different data sets. Even if an organization can dedicate full-time staff to such an effort, these methodologies usually do not expose an adequate percentage of defunct identities, validate an adequate percentage of matched identities, or identify system (non-person) account IDs well enough to pass the typical requirements of an identity-related audit.
See also
Sarbanes-Oxley Act (SOX)
Gramm-Leach-Bliley Act (GLBA)
Health Insurance Portability and Accountability Act (HIPAA)
Information Technology Audit (ITA)
Manual efforts to accomplish identity correlation require a great deal of time and people effort, and do not guarantee that the effort will be completed successfully or in a compliant fashion.
Because of this, automated identity correlation solutions have recently entered the marketplace to provide less labor-intensive ways of handling identity correlation exercises.
Typical automated identity correlation solution functionality includes the following characteristics:
Analysis and comparison of identities within multiple data sources
Flexible match criteria definitions and assignments for any combination of data elements between any two data sources
Easy connectivity either directly or indirectly to all permissible sources of data
Out-of-the-box reports and/or summaries of data match results
Ability to manually override matched or unmatched data combinations
Ability to view data results on a fine-grained level
Assignment of unique identifiers to pre-approved or manually validated matched data.
Export abilities to send verified user lists back to source systems and/or provisioning solutions
Ability to customize data mapping techniques to refine data matches
Role-based access controls built into the solution to regulate identity data exposures as data is loaded, analyzed, and validated by various individuals both inside and outside of the organization
Ability to validate identity data against end-users more quickly or efficiently than through manual methodologies
Collection of identity attributes from personal mobile devices through partial identity extraction
Profiling of web surfing and social media behavior through tracking mechanisms
Biometric measurements of the users in question to correlate identities across systems
Centralized identity brokerage systems that administer and access identity attributes across identity silos
Three Methods of Identity Correlation Project Delivery
Identity correlation solutions can be implemented under three distinct delivery models. These delivery methodologies are designed to offer a solution that is flexible enough to correspond to various budget and staffing requirements, as well as meet both short and/or long-term project goals and initiatives.
Software Purchase – This is the classic Software Purchase model where an organization purchases a software license and runs the software within its own hardware infrastructure.
Training is available and recommended
Installation Services are optional
Identity Correlation as a Service (ICAS) – ICAS is a subscription-based service where a client connects to a secure infrastructure to load and run correlation activities. This offering provides full functionality offered by the identity correlation solution without owning and maintaining hardware and related support staff.
Turn-Key Identity Correlation – A Turn-key methodology requires a client to contract with and provide data to a solutions vendor to perform the required identity correlation activities. Once completed, the solutions vendor will return correlated data, identify mismatches, and provide data integrity reports.
Validation activities will still require some direct feedback from individuals within the organization who understand the state of the organizational user base from an enterprise-wide viewpoint, as well as those individuals within the organization who are familiar with each system-specific user base. In addition, some validation activities might require direct feedback from individuals within the user base itself.
A Turn-Key solution can be performed as a single one-time activity or monthly, quarterly, or even as part of an organization's annual validation activities. Additional services are available, such as:
Email Campaigns to help resolve data discrepancies
Consolidated or merged list generation
See Also: Related Topics
Related or associated topics which fall under the category of identity correlation may include:
Compliance Regulations / Audits
Sarbanes-Oxley Act (SOX)
Gramm-Leach-Bliley Act
Health Insurance Portability and Accountability Act
Information Technology Audit
Management of identities
Identity Management
Unique identifier (Common Key)
Identifier
User Name
User ID
Provisioning
Metadirectory
Access control
Access control
Single Sign On (SSO)
Web Access Management
Directory services
Directory service
Lightweight Directory Access Protocol (LDAP)
Metadata
Virtual directory
Other categories
Role-based access control (RBAC)
Federation of user access rights on web applications across otherwise un-trusted networks
References
Information systems |
333676 | https://en.wikipedia.org/wiki/List%20of%20content%20management%20systems | List of content management systems | Content management systems (CMS) are used to organize and facilitate collaborative content creation. Many of them are built on top of separate content management frameworks. The list is limited to notable services.
Open source software
This section lists free and open-source software that can be installed and managed on a web server.
Systems listed on a light purple background are no longer in active development.
Java
Java packages/bundle
Microsoft ASP.NET
Perl
PHP
Python
Ruby on Rails
ColdFusion Markup Language (CFML)
JavaScript
Others
Software as a service (SaaS)
This section lists proprietary software that includes software, hosting, and support with a single vendor. This section includes free services.
Proprietary software
This section lists proprietary software to be installed and managed on a user's own server. This section includes freeware proprietary software.
Systems listed on a light purple background are no longer in active development.
Other content management frameworks
A content management framework (CMF) is a system that facilitates the use of reusable components or customized software for managing Web content. It shares aspects of a Web application framework and a content management system (CMS).
Below is a list of notable systems that claim to be CMFs.
See also
Comparison of web frameworks
Comparison of wiki software
Comparison of photo gallery software
References
External links
Content management systems |
39129 | https://en.wikipedia.org/wiki/OS/390 | OS/390 | OS/390 is an IBM operating system for the System/390 IBM mainframe computers.
Overview
OS/390 was introduced in late 1995 in an effort to simplify the packaging and ordering for the key, entitled elements needed to complete a fully functional MVS operating system package. These elements included, but were not limited to:
Data Facility Storage Management Subsystem Data Facility Product (DFP): provides access methods to enable I/O to, e.g., DASD subsystems, printers and tape; provides utilities and program management
Job Entry Subsystem (JES): provides the ability to submit batch work and manage print
IBM Communications Server: provides VTAM and TCP/IP communications protocols
An additional benefit of the OS/390 packaging concept was to improve reliability, availability and serviceability (RAS) for the operating system, as the number of different combinations of elements that a customer could order and run was drastically reduced. This reduced the overall time required for customers to test and deploy the operating system in their environments, as well as reducing the number of customer-reported problems (PMRs), errors (APARs) and fixes (PTFs) arising from the variances in element levels.
In December 2001 IBM extended OS/390 to include support for 64-bit zSeries processors and added various other improvements, and the result is now named z/OS. IBM ended support for the older OS/390-branded versions in late 2004.
See also
OS/360
MVS
z/OS
z/TPF
z/VM
z/VSE
Linux on IBM Z
References
IBM mainframe operating systems
IBM ESA/390 operating systems
1995 software |
45568470 | https://en.wikipedia.org/wiki/List%20of%20TRS-80%20and%20Tandy-branded%20computers | List of TRS-80 and Tandy-branded computers | Tandy Corporation released several computer product lines starting in 1977, under both TRS-80 and Tandy branding.
TRS-80 was a brand associated with several desktop microcomputer lines sold by Tandy Corporation through their Radio Shack stores. It was first used on the original TRS-80 (later known as the Model I), one of the earliest mass-produced personal computers. However, Tandy later used the TRS-80 name on a number of different computer lines, many of which were technically unrelated to (and incompatible with) the original Model I and its replacements.
In addition to these, Tandy released a number of computers using the Tandy name itself.
Original TRS-80 ("Model I") and its successors
Model I
The original TRS-80 Micro Computer System (later known as the Model I to distinguish it from successors) was launched in 1977 and, alongside the Apple II and Commodore PET, was one of the earliest mass-produced personal computers. The line won popularity with hobbyists, home users, and small businesses.
The Model I included a full-stroke QWERTY keyboard, floating-point BASIC and a monitor, and had a starting price of US$600.
By 1979, the TRS-80 had the largest selection of software in the microcomputer market.
In July 1980 the mostly-compatible TRS-80 Model III was launched, and the original Model I was discontinued.
Model III
In July 1980 Tandy released the Model III, a mostly-compatible replacement for the Model I.
Its improvements over the Model I included built-in lower case, a better keyboard, elimination of the cable spaghetti, 1500-baud cassette interface, and a faster (2.03 MHz) Z-80 processor. With the introduction of the Model III, Model I production was discontinued as it did not comply with new FCC regulations as of January 1, 1981 regarding electromagnetic interference.
The Model III could run about 80% of Model I software, but used an incompatible disk format. It also came with the option of integrated disk drives.
Model 4
The successor to the Model III was the Model 4. Its microprocessor was a faster Z80A 4 MHz CPU. Disk-based Model 4s had 64 kilobytes of RAM standard; an optional bank of additional 64 kilobytes was accessible to applications software using bank switching technology.
The Model 4's new hardware features included a larger display screen with 80 columns by 24 rows, inverse video, and an internal audio speaker. Its keyboard had three function keys and a control key. It used an all-new operating system derived from the advanced Model III LDOS 5, licensed from Logical Systems, now christened TRSDOS Version 6. A more modern version of Microsoft's BASIC interpreter more closely resembled the MS-DOS GW-BASIC, featuring PC-like functionality.
The Model 4 could run the industry-standard CP/M operating system without hardware modification (as was needed for the Model III). This afforded the user access to popular application software such as MicroPro's Wordstar, Ashton-Tate's dBase II, and Sorcim's Supercalc. Furthermore, the Model 4 could be booted with any Model III operating system and emulated the Model III with 100 percent compatibility. Prices started from $999 for the diskless version.
Early versions of the Model 4 mainboard were designed to accept a Zilog Z800 16 bit CPU upgrade board to replace the Z80 8 bit CPU but this option was never released, as Zilog failed to bring the new CPU to market.
Business systems
Tandy 10
Tandy's first design for the business market was a desk-based computer known as the Tandy 10 Business Computer System, which was released in 1978 but quickly discontinued.
TRS-80 Model II and successors
Model II
In October 1979 Tandy began shipping the TRS-80 Model II, which was targeted to the small-business market. It was not an upgrade of the Model I, but an entirely different system with state-of-the-art hardware and numerous features not found in the primitive Model I. The Model II was not compatible with the Model I and never had the same breadth of available software. This was somewhat mitigated by the availability of the CP/M from third parties.
Model 12
The Model II was replaced in 1982 by the TRS-80 Model 12. This was essentially a Model 16B (described below) without the Motorola processor, and could be upgraded to a Model 16B.
Model 16, Model 16B, and Tandy 6000
In February 1982, Tandy released the TRS-80 Model 16, as the follow-on to the Model II; an upgrade kit was available for Model II systems. The Model 16 adds a 6 MHz, 16-bit Motorola 68000 processor and memory card.
The Model 16 sold poorly at first and was reliant on existing Model II software early on. In early 1983, Tandy switched from TRSDOS-16 to Xenix.
The Model 16 evolved into the Model 16B with 256 KB in July 1983, and later the Tandy 6000, gaining an internal hard drive along the way and switching to an 8 MHz 68000.
The 16B was the most popular Unix computer in 1984, with almost 40,000 units sold.
Other systems
Color Computers
Tandy also produced the TRS-80 Color Computer (CoCo), based on the Motorola 6809 processor. This machine was clearly aimed at the home market, where the Model II and above were sold as business machines. It competed directly with the Commodore 64, Apple II, and Atari 8-bit family of computers. OS-9, a multitasking, multi-user operating system was supplied for this machine.
Model 100 line
In addition to the above, Tandy produced the TRS-80 Model 100 series of laptop computers. This series comprised the TRS-80 Model 100, Tandy 102, Tandy 200 and Tandy 600. The Model 100 was designed by the Japanese company Kyocera with software written by Microsoft. (The Model 100 firmware was the last Microsoft product to which Bill Gates was a major code contributor.) It was also marketed as the Micro Executive Workstation (MEWS).
The Model 100 had an internal 300 baud modem, built-in BASIC, and a limited text editor. It was possible to use the Model 100 with most phones in the world with the use of an optional acoustic coupler that fit over a standard telephone handset. The combination of the acoustic coupler, the machine's outstanding battery life (it could be used for days on a set of 4 AA cells), and its simple text editor made the Model 100/102 popular with journalists in the early 1980s. The Model 100 line also had an optional bar code reader, serial/RS-232 floppy drive and a Cassette interface.
Also available as an option to the Model 100 was an external expansion unit supporting video and a 5" disk drive, connected via the 40-pin expansion port in the bottom of the unit.
Tandy 200
The Tandy 200 was introduced in 1984 as a higher-end complement to the Model 100. The Tandy 200 had 24 KB RAM expandable to 72 KB, a flip-up 16 line by 40 column display, and a spreadsheet (Multiplan) included. The Tandy 200 also included DTMF tone-dialing for the internal modem. Although less popular than the Model 100, the Tandy 200 was also particularly popular with journalists in the late 1980s and early 1990s.
Reception
InfoWorld in 1985 disapproved of the computer's high cost of accessories ("and you'll find that the Tandy 200 has more accessories than a Barbie doll"), but called it "a big step up from the Model 100 for someone who needs a note-taker or spreadsheet on the run".
MC-10
The MC-10 was a short-lived and little-known Tandy computer, similar in appearance to the Sinclair ZX81.
It was a small system based on the Motorola 6803 processor and featured 4 KB of RAM. A 16 KB RAM expansion pack that connected on the back of the unit was offered as an option as was a thermal paper printer. A modified version of the MC-10 was sold in France as the Matra Alice.
Programs loaded using a cassette which worked much better than those for the Sinclair. A magazine was published which offered programs for both the CoCo and MC-10 but very few programs were available for purchase. Programs for the MC-10 were not compatible with the CoCo.
Pocket Computers
Both the TRS-80 and Tandy brands were used for a range of "Pocket Computers" sold by Tandy. These were manufactured by Sharp or Casio, depending on the model.
Portable Data Terminal
The TRS-80 PT-210 Portable Data Terminal was released in late 1982 for . It included an acoustic coupler, 300 baud modem, thermal printer, and typewriter-style keyboard.
PC-compatible computers
In the early 1980s, Tandy began producing a line of computers that were at first "MS-DOS compatible" (able to run MS-DOS and certain applications, but not fully compatible with every nuance of the original IBM PC systems) and later mostly, but not 100%, IBM PC compatible. The first of these was the Tandy 2000, a pure MS-DOS compatible machine with no IBM PC ROM BIOS or pretense of PC hardware compatibility. Such machines were common in the early 1980s; the NEC APC is another example. The Tandy 2000 system was similar to the Texas Instruments Professional Computer in that it offered better graphics, a faster processor (80186) and higher-capacity disk drives (80-track, double-sided 800k 5.25-inch drives) than the original IBM PC. However, around the time of its introduction, the industry began moving away from MS-DOS compatible computers and towards fully IBM PC compatible clones; later Tandy offerings moved toward full PC hardware compatibility. This industry shift was mainly spurred by the observation that most MS-DOS software was being written for the IBM PC and relied not only on the services provided by MS-DOS itself but also on others provided by the IBM ROM BIOS and, where the services provided by neither DOS nor the IBM BIOS were adequate, on direct low-level control of the IBM PC hardware (especially the video).
The Tandy 2000 was followed later by the less expensive Tandy 1000, marketed as highly compatible with the IBM PC but actually designed to be an enhanced IBM PCjr-compatible computer. With inopportune timing (for Tandy), IBM discontinued the unsuccessful PCjr shortly before the Tandy 1000 was scheduled for introduction. Despite this unfortunate turn, the Tandy 1000 was well received and was succeeded by dozens of Tandy 1000 models in a very successful and popular line. Each of these models was generally named by adding a two- or three-letter designation after "Tandy 1000", such as Tandy 1000 HD, Tandy 1000 HX, or Tandy 1000 RLX. While the progressive Tandy 1000 models fairly quickly departed from PCjr hardware compatibility, they all retained the enhanced CGA video modes of the IBM PCjr (supported by no other IBM or clone machine), and some later models added an original 640x200 16-color video mode.
As margins decreased in PC clones, in the early 1990s Tandy was unable to compete and stopped manufacturing their own systems, instead selling computers manufactured by a variety of companies, AST Research and Gateway 2000 among them.
The later Tandy 1000 systems and follow-ons were also marketed by DEC, as Tandy and DEC had a joint manufacturing agreement.
References
Microcomputers
Lists of computer hardware |
124547 | https://en.wikipedia.org/wiki/Worthington%2C%20Minnesota | Worthington, Minnesota | Worthington is a city in and the county seat of Nobles County, Minnesota, United States. The population was 13,947 at the 2020 census.
The city's site was first settled in the 1870s as Okabena Station on a line of the Chicago, St. Paul, Minneapolis and Omaha Railway, later the Chicago and North Western Railway (now part of the Union Pacific Railroad) where steam engines would take on water from adjacent Lake Okabena. More people entered, along with one A. P. Miller of Toledo, Ohio, under a firm called the National Colony Organization. Miller named the new city after his wife's maiden name.
History
The first European likely to have visited the Nobles County area of southwestern Minnesota was French explorer Joseph Nicollet. Nicollet mapped the area between the Mississippi and Missouri Rivers in the 1830s. He called the region “Sisseton Country” in honor of the Sisseton band of Dakota Indians then living there. It was a rolling sea of wide open prairie grass that extended as far as the eye could see. One small lake in Sisseton Country was given the name “Lake Okabena” on Nicollet's map, “Okabena” being a Dakota word meaning “nesting place of the herons.”
The town of Worthington was founded by "Yankees" (immigrants from New England and upstate New York who were descended from the English Puritans who settled New England in the 1600s).
In 1871, the St. Paul & Sioux City Railway Company began connecting its two namesake cities with a rail line. The steam engines of that time required a large quantity of water, resulting in water stations being needed every along their routes. One of these stations, at the site of present-day Worthington, was designated “The Okabena Railway Station.”
Meanwhile, in that same year, Professor Ransom Humiston of Cleveland, Ohio, and Dr. A.P. Miller, editor of the Toledo Blade, organized a company to locate a colony of New England settlers who had already settled in Northern Ohio along the tracks of the Sioux City and St. Paul Railway. These people were "Yankee" settlers whose parents had moved from New England to the region of Northeast Ohio known as the Connecticut Western Reserve. They were primarily members of the Congregational Church, though due to the Second Great Awakening, many of them had converted to Methodism and Presbyterianism, and some had become Baptists before coming to what is now Minnesota. This colony—the National Colony—was to be a village of temperance, a place where evangelical Methodists, Presbyterians, Congregationalists, and Baptists could live free of the temptations of alcohol. A town was plotted, and the name was changed from the Okabena Railway Station to Worthington, the maiden name of Dr. Miller's mother-in-law.
On April 29, 1872, regular passenger train service to Worthington started, and on that first train were the first of the National Colony settlers. One early arrival described the scene:
We were among the first members of the colony to arrive at the station of an unfinished railroad… There was a good hotel, well and comfortably furnished, one or two stores neatly furnished and already stocked with goods, [and] several other[s] in process of erection… The streets, scarcely to be defined as such, were full of prairie schooners, containing families waiting until masters could suit themselves with “claims,” the women pursuing their housewifely avocations meanwhile—some having cooking stoves in their wagons, others using gypsy fires to do their culinary work; all seeming happy and hopeful.
Some settlers from New England were drinking men, most of them Civil War veterans from Massachusetts and Maine, who came into conflict with the temperance movement. A curious event took place on Worthington's very first Fourth of July celebration. Hearing that there was a keg of beer in the Worthington House Hotel, Professor Humiston entered the hotel, seized the keg, dragged it outside, and destroyed it with an axe. A witness described what happened next:
Upon seeing this, the young men of the town thought it to be rather an imposition, and collected together, procured the services of the band, and under the direction of a military officer marched to the rear of the hotel, and with a wheelbarrow and shovel took the empty keg that had been broken open, and playing the dead march with flag at half staff marched to the flagpole in front of Humiston's office where they dug a grave and gave the empty keg a burial with all the honors attending a soldier's funeral.
They then, with flag at full mast and with lively air, marched back to the ice house, procured a full keg of beer, returning to the grave, resting the keg thereon. Then a general invitation was given to all who desired to partake, which many did until the keg was emptied... In the evening they reassembled, burning Prof. Humiston in effigy about 10 p.m. Thus ended the glorious Fourth at Worthington, Minn.
—Sibley Gazette July 5, 1872
In spite of tensions between pro- and anti-temperance factions, the town grew rapidly. By the end of summer in 1872, 85 buildings had been constructed where just one year before there had been nothing but a field of prairie grass.
Settlers poured into the region. At first they came almost exclusively from the six New England states due to issues of overpopulation combined with land shortages. Some had come from Upstate New York and had parents and grandparents who had moved to that region from New England during the early 1800s and late 1700s. Due to the large number of New Englanders and New England transplants from upstate New York, Worthington, like much of Minnesota at the time, was very culturally continuous with early New England culture for much of its early history. It was the age of the Homestead Act, when of government land could be claimed for free. All one had to do was live on the land and “improve” it, a vague requirement. In such an atmosphere, settlers without connection to the National Colony also arrived in great number, and few of those were temperance activists. The ensuing winter was severe, and swarms of grasshoppers stripped farmers’ fields bare in the summer of 1873. Still, settlers came. 1874 produced a bumper harvest, followed by another grasshopper invasion in 1875. 1876 and 1877 were both good farming years. Grasshoppers returned for the last time in 1879, and a bright future began for southwestern Minnesota. According to the 1880 census, Nobles County had 4,435 residents, 636 of them in Worthington.
In the early 1900s German immigrants began arriving in Worthington in large numbers, not directly from Germany, but mostly from other places in the midwest, especially Ohio, where their communities had already been established.
Unlike in other parts of the country, the Germans did not face xenophobia in Nobles County, but were welcomed by the Yankee population. This led to many writing back to Ohio, which led to chain migration to the region, greatly increasing the German-American population. The "Yankee" population of Americans of English descent did not come into conflict with the German-American community for much of their early history together, but the two communities were divided on the issue of World War I, the Yankee community divided about and the Germans unanimously opposed to American entry into the war. The Yankee community was generally pro-British, but many also did not want the United States to enter the war. The Germans were sympathetic to Germany and did not want the United States to enter into a war against Germany, but the Germans were not anti-British. Before World War I, many German community leaders in Minnesota and Wisconsin spoke openly and enthusiastically about how much better America was than Germany, due primarily (in their eyes) to the presence of English law and the English political culture the Americans had inherited from the colonial era, which they contrasted with the turmoil and oppression in Germany they had so recently fled. Other immigrant groups followed the Germans, including settlers from Ireland, Norway and Sweden.
From 1939 to 1940 Worthington was home to the Worthington Cardinals, a minor league baseball team. Worthington played as a member of the Class D Western League. The Worthington Cardinals were an affiliate of the St. Louis Cardinals.
On December 12, 2006, the Immigration and Customs Enforcement (ICE) staged a coordinated predawn raid at the Swift & Company meat packing plant in Worthington and five other Swift plants in western states, interviewing workers and hauling hundreds off in buses.
Geography
According to the United States Census Bureau, the city has an area of , of which is land and is water.
Climate
Demographics
The U.S. Bureau of Census now classifies Worthington as a micropolitan area, with a population of 20,508. The area has had a relatively high level of immigration, mostly Hispanics, in the early 21st century. Some sources credit this immigration trend for revitalizing the city's economy, which had been constrained by a shrinking population.
2010 census
As of the census of 2010, there were 12,764 people, 4,458 households, and 2,917 families residing in the city. The population density was . There were 4,699 housing units at an average density of . The racial makeup of the city was 62.2% White, 5.5% African American, 0.7% Native American, 8.6% Asian, 0.1% Pacific Islander, 20.5% from other races, and 2.4% from two or more races. Hispanic or Latino of any race were 35.4% of the population.
There were 4,458 households, of which 34.7% had children under the age of 18 living with them, 48.4% were married couples living together, 10.7% had a female householder with no husband present, 6.3% had a male householder with no wife present, and 34.6% were non-families. 28.4% of all households were made up of individuals, and 13.9% had someone living alone who was 65 years of age or older. The average household size was 2.79 and the average family size was 3.36.
The median age in the city was 33.5 years. 26.8% of residents were under the age of 18; 10.7% were between the ages of 18 and 24; 26.1% were from 25 to 44; 21.3% were from 45 to 64; and 15% were 65 years of age or older. The gender makeup of the city was 51.1% male and 48.9% female.
2000 census
As of the census of 2000, there were 11,283 people, 4,311 households, and 2,828 families residing in the city. The population density was 1,578.9 people per square mile (609.3/km²). There were 4,573 housing units at an average density of 639.9 per square mile (246.9/km²). The racial makeup of the city was 76.81% White, 1.91% African American, 0.49% Native American, 7.06% Asian, 0.13% Pacific Islander, 11.49% from other races, and 2.11% from two or more races. Hispanic or Latino of any race were 19.28% of the population.
There were 4,311 households, out of which 30.5% had children under the age of 18 living with them, 52.4% were married couples living together, 8.9% had a female householder with no husband present, and 34.4% were non-families. 28.9% of all households were made up of individuals, and 15.5% had someone living alone who was 65 years of age or older. The average household size was 2.55 and the average family size was 3.12.
In the city, the population was spread out, with 25.5% under the age of 18, 9.7% from 18 to 24, 27.1% from 25 to 44, 20.1% from 45 to 64, and 17.6% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 98.6 males. For every 100 females age 18 and over, there were 97.6 males.
The median income for a household in the city was $36,250, and the median income for a family was $44,643. Males had a median income of $28,750 versus $20,880 for females. The per capita income for the city was $18,078. About 9.1% of families and 13.3% of the population were below the poverty line, including 18.4% of those under age 18 and 12.3% of those age 65 or over.
Government
Worthington is in Minnesota's 1st congressional district, formerly represented by Republican Jim Hagedorn of Blue Earth. At the state level, Worthington is in Senate District 22, represented by Republican Bill Weber, and in House District 22B, represented by Republican Rod Hamilton.
Local politics
The mayor of Worthington is Mike Kuhle. City council members meet in City Hall on the second and fourth Mondays of every month to discuss objectives and goals for the city. The city is divided into two wards, with one at-large council member. The mayor and council members are elected to four-year terms.
Current Worthington city council members, in addition to Kuhle, include:
Larry Janssen, 1st Ward
Alan Oberloh, 1st Ward
Mike Harmon, 2nd Ward
Amy Ernst, 2nd Ward
Chad Cummings, At Large
Sister city
There is a sister-city relationship between Worthington and Crailsheim, Germany, the first such relationship in history between an American and a German city.
Education
Worthington is served by Independent School District 518. Worthington's school mascot is the Trojan, and its high school athletic teams play in the Big South Conference. ISD 518 is known regionally for its robust music program offerings, with band, string orchestra, choir, and theater ensembles open to all students. Worthington Senior High School's 'Spirit of Worthington' Trojan Marching Band, with over 160 members, is an ensemble that has performed nationally four times. In 2019, the Trojans were a featured band at the Chick-fil-A Peach Bowl in Atlanta, Georgia.
High School: Worthington Senior High School
Middle School: Worthington Middle School
Elementary School: Prairie Elementary School
Worthington's private, parochial schools include:
Worthington Christian School, which serves grades K-8.
St. Mary's Elementary School, which serves grades K-6.
Worthington's local higher education institution is Minnesota West Community and Technical College. Minnesota West's Worthington campus is a two-year college that offers associate degrees in a wide variety of majors, along with diplomas and certificates in areas from practical nursing to accounting, among others.
Worthington and the surrounding area are served by the Nobles County Library, part of the Plum Creek Library System, which is based in the city.
Transportation
Highways
Interstate 90
U.S. Route 59
Minnesota State Highway 60
Minnesota State Highway 266 (decommissioned - designated as Nobles County Road 25)
Nobles County Road 25
Nobles County Road 35
Notable people
Dwayne Andreas, CEO of Archer Daniels Midland and political donor, was born in Worthington
Peter Ludlow, prominent analytic philosopher
Wendell Butcher, football player
George Dayton, banker and real estate developer in Worthington before moving to Minneapolis to start Dayton's Department Store (now part of Macy's); recently restored, 1890 Dayton House is a community historic site and bed and breakfast
Matt Entenza, former minority leader of Minnesota House of Representatives (2002–2006) and 2010 DFL candidate for governor of Minnesota; grew up in Worthington and attended Worthington public schools
Big Tiny Little, pianist and television personality
Stephen Miller, fourth Governor of Minnesota from 1864–1866, settled in Worthington, representing area in Minnesota House of Representatives from 1873 to 1874; presidential elector 1876; buried in Worthington Cemetery
Lee Nystrom, NFL player, was born in Worthington
Tim O'Brien, novelist known for Vietnam War literature, grew up in Worthington and references city in several of his novels, including Lake Okabena in The Things They Carried, published 1990
John Olson, longtime state senator and Worthington native, represented southwestern Minnesota from 1959–1977; chaired Minnesota Senate's General Legislation Committee from 1967 to 1971 and Higher Education Committee 1971-1973
Local events
Worthington hosts many annual events: Windsurfing Regatta & Music Festival (June), International Festival (July), King Turkey Day (September), and Holiday Parade (November).
Media
The Globe serves Worthington, Nobles County, and surrounding areas with a print newspaper, an e-paper and online news. It was purchased by the Forum Communications Company in 1995 and is published on Wednesdays and Fridays.
See also
Impact of the COVID-19 pandemic on the meat industry in the United States
References
External links
City of Worthington
Community Web Site of Worthington, Minnesota
Historic Dayton House
Worthington Daily Globe newspaper site
Worthington Windsurfing Regatta and Unvarnished Music Festival
Cities in Nobles County, Minnesota
Cities in Minnesota
County seats in Minnesota |
2290843 | https://en.wikipedia.org/wiki/Interface%20Message%20Processor | Interface Message Processor | The Interface Message Processor (IMP) was the packet switching node used to interconnect participant networks to the ARPANET from the late 1960s to 1989. It was the first generation of gateways, which are known today as routers. An IMP was a ruggedized Honeywell DDP-516 minicomputer with special-purpose interfaces and software. In later years the IMPs were made from the non-ruggedized Honeywell 316, which could handle two-thirds of the communication traffic at approximately one-half the cost. An IMP required a connection to a host computer via a special bit-serial interface, defined in BBN Report 1822. The IMP software and the ARPA network communications protocol running on the IMPs were discussed in RFC 1, the first of a series of standardization documents published by the Internet Engineering Task Force (IETF).
History
The concept of an "Interface computer" was first proposed in 1966 by Donald Davies for the NPL network in England. The same idea was independently developed in early 1967 at a meeting of principal investigators for the Department of Defense's Advanced Research Projects Agency (ARPA) to discuss interconnecting machines across the country. Larry Roberts, who led the ARPANET implementation, initially proposed a network of host computers. Wes Clark suggested inserting "a small computer between each host computer and the network of transmission lines", i.e. making the IMP a separate computer.
The IMPs were built by the Massachusetts-based company Bolt Beranek and Newman (BBN) in 1969. BBN was contracted to build four IMPs, the first being due at UCLA by Labor Day; the remaining three were to be delivered in one-month intervals thereafter, completing the entire network in a total of twelve months. When Massachusetts Senator Edward Kennedy learned of BBN's accomplishment in signing this million-dollar agreement, he sent a telegram congratulating the company for being contracted to build the "Interfaith Message Processor".
The team working on the IMP called themselves the "IMP Guys":
Team Leader: Frank Heart
Software: Willy Crowther, Dave Walden, Bernie Cosell and Paul Wexelblat
Hardware: Severo Ornstein, Ben Barker
Theory and collaboration with the above on the overall system design: Bob Kahn
Other: Hawley Rising
Added to IMP team later: Marty Thrope (hardware), Jim Geisman, Truett Thach (installation), Bill Bertell (Honeywell)
BBN began programming work in February 1969 on modified Honeywell DDP-516s. The completed code was six thousand words long, and was written in the Honeywell 516 assembly language. The IMP software was produced primarily on a PDP-1, where the IMP code was written and edited, then run on the Honeywell.
BBN designed the IMP simply as "a messenger" that would only "store-and-forward". BBN designed only the host-to-IMP specification, leaving host sites to build individual host-to-host interfaces. The IMP had an error-control mechanism that discarded packets with errors without acknowledging receipt; the source IMP, upon not receiving an acknowledging receipt, would subsequently re-send a duplicate packet. Based on the requirements of ARPA's request for proposal, the IMP used a 24-bit checksum for error correction. BBN chose to make the IMP hardware calculate the checksum, because it was a faster option than using a software calculation. The IMP was initially conceived as being connected to one host computer per site, but at the insistence of researchers and students from the host sites, each IMP was ultimately designed to connect to multiple host computers.
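The error-control behaviour described above (discard corrupted packets silently and let the source re-send until an acknowledgement arrives) can be sketched roughly as follows. The 24-bit sum used here is only a placeholder; the actual checksum algorithm computed by the IMP hardware is not reproduced.

```python
def checksum24(payload: bytes) -> int:
    """Stand-in 24-bit checksum: a plain byte sum truncated to 24 bits.
    This is NOT the real IMP checksum, just an illustration of the idea."""
    return sum(payload) % (1 << 24)

def receive(packet: dict) -> bool:
    """Receiver side: silently discard packets whose checksum does not verify,
    acknowledging only good packets."""
    return checksum24(packet["payload"]) == packet["checksum"]

def send_with_retransmit(payload: bytes, deliver, max_attempts: int = 5) -> bool:
    """Source side: re-send a duplicate packet until an acknowledgement arrives."""
    packet = {"payload": payload, "checksum": checksum24(payload)}
    for _ in range(max_attempts):
        if deliver(packet):          # deliver() returns True when an ack comes back
            return True
    return False

# A lossy channel that corrupts the first attempt, then delivers cleanly.
attempts = []
def lossy_channel(packet):
    attempts.append(packet)
    if len(attempts) == 1:
        corrupted = dict(packet, payload=packet["payload"] + b"\x01")
        return receive(corrupted)    # checksum fails, so no acknowledgement
    return receive(packet)

assert send_with_retransmit(b"hello sri", lossy_channel)
assert len(attempts) == 2            # one corrupted attempt, one successful re-send
```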
The first IMP was delivered to Leonard Kleinrock's group at UCLA on August 30, 1969. It used an SDS Sigma-7 host computer. Douglas Engelbart's group at the Stanford Research Institute (SRI) received the second IMP on October 1, 1969. It was attached to an SDS-940 host. The third IMP was installed in University of California, Santa Barbara on November 1, 1969. The fourth and final IMP was installed in the University of Utah in December 1969. The first communication test between two systems (UCLA and SRI) took place on October 29, 1969, when a login to the SRI machine was attempted, but only the first two letters could be transmitted. The SRI machine crashed upon reception of the 'g' character. A few minutes later, the bug was fixed and the login attempt was successfully completed.
BBN developed a program to test the performance of the communication circuits. According to a report filed by Heart, a preliminary test in late 1969 based on a 27-hour period of activity on the UCSB-SRI line found "approximately one packet per 20,000 in error;" subsequent tests "uncovered a 100% variation in this number - apparently due to many unusually long periods of time (on the order of hours) with no detected errors."
A variant of the IMP existed, called the TIP, which connected terminals as well as computers to the network; it was based on the Honeywell 316, a later version of the 516. Later, some Honeywell-based IMPs were replaced with multiprocessing BBN Pluribus IMPs, but ultimately BBN developed a microprogrammed clone of the Honeywell machine.
IMPs were at the heart of the ARPANET until DARPA decommissioned the ARPANET in 1989. Most IMPs were either taken apart, junked or transferred to MILNET. Some became artifacts in museums; Kleinrock placed IMP Number One on public view at UCLA. The last IMP on the ARPANET was the one at the University of Maryland.
BBN Report 1822
BBN Report 1822 specifies the method for connecting a host computer to an IMP. This connection and protocol is generally referred to as 1822, the report number.
The initial version of the 1822 protocol was developed in 1969: since it predates the OSI model by a decade, 1822 does not map cleanly into the OSI layers. However, it is accurate to say that the 1822 protocol incorporates the physical layer, the data link layer, and the network layer. The interface visible to the host system passes network layer addresses directly to a physical layer device.
To transmit data, the host constructs a message containing the numeric address of another host on the network (similar to an IP address on the Internet) and a data field, and transmits the message across the 1822 interface to the IMP. The IMP routes the message to the destination host using protocols that were eventually adopted by Internet routers. Messages could store a total length of 8159 bits, of which the first 96 were reserved for the header ("leader").
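The sketch below packs a toy 1822-style message. Only the 96-bit leader size and the 8159-bit overall limit come from the description above; placing the destination address in the leader bytes as done here is an assumption made for illustration and is not the layout defined in BBN Report 1822.

```python
MAX_MESSAGE_BITS = 8159          # total message size limit
LEADER_BITS = 96                 # bits reserved for the header ("leader")
MAX_DATA_BITS = MAX_MESSAGE_BITS - LEADER_BITS

def build_message(destination_host: int, data: bytes) -> bytes:
    """Pack a toy 1822-style message: a 96-bit leader followed by the data field.

    Only the sizes above come from the description; encoding the destination
    address as the leader's low bytes is an illustrative assumption, not the
    field layout specified in BBN Report 1822.
    """
    if len(data) * 8 > MAX_DATA_BITS:
        raise ValueError("data field exceeds the 8159-bit message limit")
    leader = destination_host.to_bytes(LEADER_BITS // 8, "big")
    return leader + data

msg = build_message(destination_host=42, data=b"LOGIN")
assert len(msg) * 8 <= MAX_MESSAGE_BITS
```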
While packets transmitted across the Internet are assumed to be unreliable, 1822 messages were guaranteed to be transmitted reliably to the addressed destination. If the message could not be delivered, the IMP sent to the originating host a message indicating that the delivery failed. In practice, however, there were (rare) conditions under which the host could miss a report of a message being lost, or under which the IMP could report a message as lost when it had in fact been received.
Later versions of the 1822 protocol, such as 1822L, are described in RFC 802 and its successors.
See also
TCP/IP model
Fuzzball router
References
Further reading
External links
A Technical History of the ARPANET with photos of IMP
IMP history with photo of developers
Dave Walden's memories of the IMP and ARPANET
Oral history interview with Severo Ornstein, Charles Babbage Institute, University of Minnesota. Ornstein was principal hardware designer of the IMP.
Internet STD 39, also known as BBN Report 1822, "Specification for the Interconnection of a Host and an IMP".
Networking hardware
ARPANET
Network protocols |
340172 | https://en.wikipedia.org/wiki/Mysians | Mysians | Mysians (, ) were the inhabitants of Mysia, a region in northwestern Asia Minor.
Origins according to ancient authors
Their first mention is by Homer, in his list of Trojan allies in the Iliad; according to him, the Mysians fought in the Trojan War on the side of Troy, under the command of Chromis and Ennomus the Augur, and were lion-hearted spearmen who fought with their bare hands.
Herodotus in his Histories wrote that the Mysians were brethren of the Carians and the Lydians, originally Lydian colonists in their country, and as such, they had the right to worship alongside their relative nations in the sanctuary dedicated to the Carian Zeus in Mylasa. He also mentions a movement of Mysians and associated peoples from Asia into Europe still earlier than the Trojan War, wherein the Mysians and Teucrians had crossed the Bosphorus into Europe and, after conquering all of Thrace, pressed forward till they came to the Ionian Sea, while southward they reached as far as the river Peneus. Herodotus adds an account and description of later Mysians who fought in Darius' army.
Strabo in his Geographica informs that, according to his sources, the Mysians in accordance with their religion abstained from eating any living thing, including from their flocks, and that they used as food honey and milk and cheese. Citing the historian Xanthus, he also reports that the name of the people was derived from the Lydian name for the oxya tree.
Mysian language
Little is known about the Mysian language. Strabo noted that their language was, in a way, a mixture of the Lydian and Phrygian languages. As such, the Mysian language could be a language of the Anatolian group. However, a passage in Athenaeus suggests that the Mysian language was akin to the barely attested Paeonian language of Paeonia, north of Macedon.
A short inscription which could be in Mysian and which dates from between the 5th and 3rd centuries BC was found in Üyücek, near Kütahya, and seems to include Indo-European words, but it has not been deciphered.
See also
Mysia
Mysian language
Ctistae
Kapnobatai
References
Mysia
Ancient peoples of Anatolia
Ancient peoples of the Near East
Members of the Delian League |
23964 | https://en.wikipedia.org/wiki/PlayStation%20%28console%29 | PlayStation (console) | The PlayStation (abbreviated as PS, commonly known as the PS1 or by its codename PSX) is a home video game console developed and marketed by Sony Computer Entertainment. It was released on 3 December 1994 in Japan, 9 September 1995 in North America, 29 September 1995 in Europe, and 15 November 1995 in Australia. As a fifth-generation console, the PlayStation primarily contended with the Nintendo 64 and the Sega Saturn.
Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers.
The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Premier PlayStation franchises included Gran Turismo, Crash Bandicoot, Tomb Raider, and Final Fantasy, all of which spawned numerous sequels. PlayStation games continued to sell until Sony ceased production of the PlayStation and its games on 23 March 2006—over eleven years after it had been released, and less than a year before the debut of the PlayStation 3. A total of 7,918 PlayStation games were released, with cumulative sales of 962 million units.
The PlayStation signalled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS One.
History
Background
The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé.
The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the ill-fated MSX home computer format, Sony had wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc title sold, giving the company a large degree of control despite Nintendo's leading position in the video gaming market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application.
The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines.
Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and its new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony.
Inception
Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as it had broken an "unwritten law" that native companies should not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but Sega's board of directors in Tokyo vetoed the idea when American CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted the joint research, but decided to build on what it had started with Nintendo and Sega, developing it into its own console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to its involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed the project. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters.
Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy".
Development
Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was keen to participate in the PlayStation project as a third-party developer since Namco rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995), Ridge Racer being one of the most popular arcade games at the time. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994.
Despite securing the support of various Japanese studios, Sony had no developers of its own by the time it was developing the PlayStation. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool), securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development.
The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005.
Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, the then-representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California and Tokyo housed technical support teams that could work closely with third-party developers if needed. Peter Molyneux, who owned Bullfrog Productions at the time, admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. In contrast to other disc-reading consoles such as the 3DO, the PlayStation could quickly generate and synthesise data from the CD since it was an image-generation system, rather than a data-replay system.
The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful during the early stages of development as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Sony used the free software GNU C compiler, also known as GCC, to guarantee short debugging times as it was already familiar to many programmers. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard.
Launch
Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn. The Japanese launch was a "stunning" success, with long queues outside shops. The console sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. After a while, a grey market emerged for PlayStations shipped from Japan to North America and Europe, with some buyers paying around £700 for a console.
Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At its keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released at a price of $399. Immediately afterwards came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and walked off to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console.
Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games to consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 and 60 games being developed for the Saturn and the Nintendo 64 respectively.
Marketing success and later years
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YOUR WORLD. PLAY IN OURS", with PlayStation button symbols standing in for some of the letters, and "U R NOT E" (with a red "E", to be read as "ready"). Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say, "Bullshit. Let me show you how ready I am." As the console's appeal grew, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well.
Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclub owners such as Ministry of Sound and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing.
In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time.
By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. The PlayStation continued to sell strongly at the turn of the new millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles in that year, including the PlayStation 2. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001, and abandoning the console business entirely.
Hardware
Technical specifications
The main microprocessor is a 32-bit LSI R3000 CPU with a clock rate of 33.86 MHz and a performance of 30 MIPS. The CPU relies heavily on its "cop2" 3D and matrix-math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and MIDI sequencing. The console features 2 MB of main RAM, with an additional 1 MB allocated to video memory. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. Its video output, initially provided by a parallel I/O cable (and later a serial I/O used for the PlayStation Link Cable), displays resolutions from 256×224 to 640×480 pixels.
The PlayStation uses a proprietary video compression unit, MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. Whilst running, the GPU can also generate a total of 4,000 sprites and 180,000 polygons per second, in addition to 360,000 flat-shaded polygons per second.
Models
The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors from the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, as it was removed from the next model. Subsequent models saw a reduction in number of parallel ports, with the final version only retaining one serial port.
Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan, and following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service and with the necessary documentation and software to program PlayStation games and applications through C programming compilers.
PS One
On 7 July 2000, Sony released the PS One (stylised as PS one), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released an LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006.
Controllers
Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square. Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person.
Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad, which is used for instances when simple digital movements are sufficient. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size.
The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks, the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down.
In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles and slightly different shoulder buttons. It also introduces two new buttons mapped to clicking in the analogue sticks and has rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller.
Peripherals
Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display).
Released late into the console's lifespan exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America.
Functionality
In addition to playing games, most PlayStation models are equipped to play audio CDs; the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle.
PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem! was subsequently forced to shut down in November 2001.
Copy protection system
Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with the console's optical drive, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lock-outs). This signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore could not reproduce it when duplicating a disc, since the laser pickup system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading.
As the disc authenticity was only verified during booting, this copy protection system could be circumvented by swapping any genuine disc with the copied disc, while modchips could remove the protection system altogether. Sony untruthfully suggested in advertisements that discs' unique black undersides played a role in copy protection. In reality, the black plastic used was transparent to any infrared laser and did not itself pose an obstacle to duplicators or computer CD drives, although it may have helped customers distinguish between unofficial and genuine copies.
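The boot-time check described above can be summarised with the following sketch in C. It is a hypothetical illustration only: read_pregap_wobble() stands in for the drive hardware decoding the wobble-modulated pregap data and is not a real API, and comparing the decoded value against the console's own region code is an assumption drawn from the regional lock-out remark above rather than a documented detail.

#include <string.h>

/* Hypothetical: decodes the wobble-encoded pregap data into buf,
   returning non-zero on success. Not a real API. */
extern int read_pregap_wobble(char *buf, unsigned len);

/* Sketch of the boot-time authenticity and region check. */
static int disc_may_boot(const char *console_region_code)
{
    char wobble_code[8] = {0};

    /* A burned copy carries no detectable wobble, so nothing is
       recovered and the disc is rejected. */
    if (!read_pregap_wobble(wobble_code, sizeof wobble_code))
        return 0;

    /* The same signal doubles as the regional lock-out: the decoded
       code must match the console's own region. */
    return strcmp(wobble_code, console_region_code) == 0;
}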
Hardware problems
Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system continues to draw a small amount of power, and therefore generate heat, even when turned off.
The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled will become so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models.
Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
Game library
A total of 7,918 PlayStation games have been released worldwide; 4,944 in Japan, 1,639 in Europe, and 1,335 in North America. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units.
The PlayStation featured a diverse game library which grew to appeal to all types of players. The first two games available at launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor for 3D graphics in console gaming. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), Metal Gear Solid (1998), and Tekken, all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made.
At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; its breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan, boosting sales. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2.
Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format.
Reception
The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, where they commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4 out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war". Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel, rivalling those of Sega and Nintendo. Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995.
In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising its stance on 2D and role playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third party developers almost unanimously favouring it over its competitors.
Legacy
SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985 and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony became a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third.
The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the fifth best-selling console of all time, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-largest game library ever produced for a console. Its success was a significant financial boon for Sony, with its video game division contributing 23% of the company's profits.
Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. Hundreds of PlayStation games were rereleased as PS One Classics for purchase and download on the PlayStation Portable, PlayStation 3, and PlayStation Vita. The PlayStation 2 and certain PlayStation 3 models also maintained backward compatibility with original PlayStation discs.
The PlayStation has often ranked among the best video game consoles. In 2018, RetroGamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal towards older audiences to be a crucial factor in propelling the video game industry, as well as its role in the game industry's transition to the CD-ROM format. Keith Stuart from The Guardian likewise named it as the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s".
CD format
The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for its revenue.
Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the per-unit production cost was far lower, allowing Sony to offer games at about 40% lower cost to the user compared to ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular titles to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of titles which had already recouped their development costs.
The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand. As a result, some third-party developers switched to the PlayStation, including Square, whose Final Fantasy VII had been planned for the Nintendo 64, and Enix (later merged with Square to form Square Enix), whose Dragon Quest VII (2000) had likewise been planned for Nintendo's console. Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo itself or second parties such as Rare.
PlayStation Classic
The PlayStation Classic is a dedicated video game console by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original.
As a dedicated console, the PlayStation Classic features 20 pre-installed games, such as Tekken 3 (1998), Final Fantasy VII, Jumping Flash!, and Syphon Filter (1999); the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console.
The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival NES Classic Edition and Super NES Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. It sold poorly as a result.
See also
PlayStation: The Official Magazine (PSM)
Portable Sound Format (PSF)
System 573
Notes
References
Citations
Publications
1990s toys
2000s toys
CD-ROM-based consoles
Discontinued products
Fifth-generation video game consoles
Home video game consoles
Japanese brands
PlayStation (brand)
Products introduced in 1994
Products and services discontinued in 2006
Sony consoles |
1049533 | https://en.wikipedia.org/wiki/Mental%20Ray | Mental Ray | Mental Ray (stylized as mental ray) is a production-quality ray tracing application for 3D rendering. Its Berlin-based developer Mental Images was acquired by Nvidia in 2007 and Mental Ray was discontinued in 2017.
Mental Ray has been used in many feature films, including Hulk, The Matrix Reloaded & Revolutions, Star Wars: Episode II – Attack of the Clones, The Day After Tomorrow and Poseidon.
In November 2017 Nvidia announced that it would no longer offer new Mental Ray subscriptions, although maintenance releases with bug fixes were published throughout 2018 for existing plugin customers.
Features
The primary feature of Mental Ray is the achievement of high performance through parallelism on both multiprocessor machines and across render farms. The software uses acceleration techniques such as scanline for primary visible surface determination and binary space partitioning for secondary rays. It also supports caustics and physically correct simulation of global illumination employing photon maps. Any combination of diffuse, glossy (soft or scattered), and specular reflection and transmission can be simulated.
Mental Ray was designed to be integrated into a third-party application using an API, or to be used as a standalone program using the .mi scene file format for batch-mode rendering. Many programs integrated it, including Autodesk Maya, 3D Studio Max, Cinema 4D, Revit, Softimage|XSI, Side Effects Software's Houdini, SolidWorks and Dassault Systèmes' CATIA. Most of these software front-ends provided their own libraries of custom shaders (described below). However, assuming the required shaders are available to Mental Ray, any .mi file can be rendered, regardless of the software that generated it.
Mental Ray is fully programmable, supporting linked subroutines, called shaders, written in C or C++. This feature can be used to create geometric elements at runtime of the renderer, procedural textures, bump and displacement maps, atmosphere and volume effects, environments, camera lenses, and light sources.
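As an illustration, a user-written shader is an ordinary C function that the renderer links and calls at run time. The sketch below follows the general pattern of the documented shader interface (a result pointer, the render state, and a parameter structure read through the mi_eval_* accessors); the shader name and its parameter are invented for the example and would need to match a corresponding declaration in a .mi file.

#include "shader.h"   /* mental ray shader interface header */

/* Illustrative constant-colour surface shader. The parameter struct must
   match a declaration in a .mi file; the names here are made up. */
struct example_constant {
    miColor colour;
};

DLLEXPORT int example_constant_version(void) { return 1; }

DLLEXPORT miBoolean example_constant(
    miColor *result, miState *state, struct example_constant *paras)
{
    (void)state;  /* render state is unused in this trivial example */

    /* Fetch the (possibly shader-graph-driven) parameter value and
       return it as the surface colour. */
    *result = *mi_eval_color(&paras->colour);
    result->a = 1.0f;
    return miTRUE;
}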
Supported geometric primitives include polygons, subdivision surfaces, and trimmed free-form surfaces such as NURBS, Bézier, and Taylor monomial.
Phenomena consist of one or more shader trees (DAGs). A phenomenon looks like a regular shader to the user, and in fact may be one, but generally it contains a link to a shader DAG, which may include the introduction or modification of geometry, the introduction of lenses and environments, and compile options. The idea of a phenomenon is to package elements and hide complexity.
Since 2010 Mental Ray has also included the iray rendering engine, which added GPU acceleration to the product. In 2013, the ambient occlusion pass was also accelerated by CUDA, and in 2015 the GI Next engine was added which can be used to compute all indirect/global illumination on GPUs.
In 2003, Mental Images received an Academy Award for contributions of mental ray to motion pictures.
See also
Dielectric Shader, able to realistically render the behavior of light rays passing through materials with differing refractive indices.
PhotoWorks (ray tracing software), previously part of SolidWorks which used a version of the Mental Ray rendering engine as its renderer in older versions.
Notes
Further reading
Driemeyer, Thomas: Rendering with mental ray, SpringerWienNewYork,
Driemeyer, Thomas: Programming mental ray, SpringerWienNewYork,
Kopra, Andy: Writing mental ray Shaders: A perceptual introduction, SpringerWienNewYork,
External links
Mental Ray home page
Global illumination software
Rendering systems
3D rendering software for Linux
Proprietary freeware for Linux |
55353373 | https://en.wikipedia.org/wiki/OpenHPC | OpenHPC | OpenHPC is a set of community-driven FOSS tools for Linux-based HPC. OpenHPC does not have specific hardware requirements.
History
A birds-of-a-feather panel discussion titled "Community Supported HPC Repository & Management Framework" convened at the 2015 edition of the International Supercomputing Conference. The panel discussed the common software components necessary to build Linux compute clusters and solicited feedback on community interest in such a project. Following the response, the OpenHPC project was announced at SC 2015 under the auspices of the Linux Foundation.
Releases
Design
OpenHPC provides an integrated and tested collection of software components that, along with a supported standard Linux distribution, can be used to implement a full-featured compute cluster. Components span the entire HPC software ecosystem including provisioning and system administration tools, resource management, I/O services, development tools, numerical libraries, and performance analysis tools. The architecture of OpenHPC is intentionally modular to allow end users to pick and choose from the provided components, as well as to foster a community of open contribution. The project provides recipes for building clusters using CentOS (v8.3) and openSUSE Leap (v15.2) on x86_64 as well as aarch64 architectures.
See also
Cluster manager
Comparison of cluster software
List of cluster management software
References
External links
OpenHPC: A Comprehensive System Software Stack
Next Platform – OpenHPC Pedal Put To The Compute Metal
HPCwire – OpenHPC Pushes to Prove its Openness and Value at SC16
High Performance Computing: 32nd International Conference
OpenHPC Slack channel
Cluster computing
High-availability cluster computing
Job scheduling
Parallel computing |
37990714 | https://en.wikipedia.org/wiki/Muvee%20Reveal | Muvee Reveal | muvee Reveal is a proprietary video editing program for Microsoft Windows created by Singapore-based muvee Technologies. Reveal creates video slideshows from input videos, photos, and music. Muvee Reveal 7 was first released in 2007 and is the modern successor to the award-winning muvee autoProducer title first released by muvee Technologies in 2002. Since 2009, versions of muvee's Reveal movie-making software have used CUDA for faster processing and rendering.
muvee Reveal has been downloaded over a million times, while OEM versions of the software have been installed on over 24 million HP and Dell computers and notebooks, and bundled with Nikon One and Coolpix, Flip, Olympus, Sony Cybershot and Panasonic Lumix cameras, Seagate FreeAgent and Toshiba Canvio external hard drives, and Coby camcorders since 2004.
Features
A partial list of features in muvee Reveal:
Insert photos and video
Add opening Titles, intertitles, and end Credits with free fonts
Place captions and subtitle lower-thirds on photos and videos.
Add any music and mark desired sections
Fit video to music length
Add an editing template Style which adds effects, transitions and pacing information
Voice-over with music-voice-sfx mixer
magicMoments for Video to mark up essential segments of video
magicSpot for Photos to mark start and end points for a Ken-Burns effect on photos
Chapters: create sections where different music and Styles can be used
Output formats
Output formats include:
WMV
MOV
MPEG-1
DV-AVI
AVI
MPEG-2
H.264
MPEG-4
See also
List of video editing software
Comparison of video editing software
MPEG-4 Part 14
References
External links
Video editing software |
8465195 | https://en.wikipedia.org/wiki/CA/EZTEST | CA/EZTEST | CA-EZTEST was a CICS interactive test/debug software package distributed by Computer Associates and originally called EZTEST/CICS. It was produced by Capex Corporation of Phoenix, Arizona, with assistance from Ken Dakin of England.
The product provided source level test and debugging features for computer programs written in COBOL, PL/I and Assembler (BAL) languages to complement their own existing COBOL optimizer product.
Competition
CA-EZTEST initially competed with three rival products:
"Intertest" originally from On-line Software International, based in the United States. In 1991, Computer Associates International, Inc. acquired On-line Software and renamed the product CA-INTERTEST, then stopped selling CA-EZTEST.
OLIVER (CICS interactive test/debug) from Advanced Programming Techniques in the UK.
XPEDITER from Compuware Corporation who in 1994 acquired the OLIVER product.
Early critical role
Between them, these three products provided much-needed third-party system software support for IBM's "flagship" teleprocessing product CICS, which survived for more than 20 years as a strategic product without any memory protection of its own. A single "rogue" application program (frequently through a buffer overflow) could accidentally overwrite data almost anywhere in the address space, causing downtime for the entire teleprocessing system, possibly supporting thousands of remote terminals. This was despite the fact that much of the world's banking and other commerce relied heavily on CICS for secure transaction processing between 1970 and the early 1990s. The difficulty of deciding which application program caused the problem was often insurmountable, and frequently the system would be restarted without spending many hours investigating very large (and initially unformatted) core dumps, which required expert system programming support and knowledge.
Early integrated testing environment
Additionally, the product (and its competitors) provided an integrated testing environment that IBM did not offer for early versions of CICS, a need only partially met by IBM's later embedded testing tool, the "Execution Diagnostic Facility" (EDF), which helped only newer "Command level" programmers and provided no protection.
Supported operating systems
The following operating systems were supported:
IBM MVS
IBM XA
IBM VSE (except XPEDITER)
References
External links
IBM CICS official website
Xpediter — Interactive mainframe analysis and debugging
Xpeditor/CICS users guide for COBOL for OS/390 (Release 2.5 or above) and z/OS, September 2004
CA Inc. — product description for CA-Intertest
Software testing
Debuggers
CA Technologies
IBM mainframe software |
2584996 | https://en.wikipedia.org/wiki/Source-specific%20multicast | Source-specific multicast | Source-specific multicast (SSM) is a method of delivering multicast packets in which the only packets that are delivered to a receiver are those originating from a specific source address requested by the receiver. By so limiting the source, SSM reduces demands on the network and improves security.
SSM requires that the receiver specify the source address, and it explicitly excludes the use of the (*,G) join for all multicast groups defined in RFC 3376; such source-specific group membership is possible only with IPv4's IGMPv3 and IPv6's MLDv2.
Any-source multicast
Source-specific multicast is best understood in contrast to any-source multicast (ASM). In the ASM service model a receiver expresses interest in traffic to a multicast address. The multicast network must
discover all multicast sources sending to that address, and
route data from all sources to all interested receivers.
This behavior is particularly well suited to groupware applications where
all participants in the group want to be aware of all other participants, and
the list of participants is not known in advance.
The source discovery burden on the network can become significant when the number of sources is large.
Operation
In the SSM service model, in addition to the receiver expressing interest in traffic to a multicast address, the receiver expresses interest in receiving traffic from only one specific source sending to that multicast address. This relieves the network of discovering many multicast sources and reduces the amount of multicast routing information that the network must maintain.
SSM requires support in last-hop routers and in the receiver's operating system. SSM support is not required in other network components, including other routers and even the sending host. Interest in multicast traffic from a specific source is conveyed from hosts to routers using IGMPv3, as specified in RFC 4607.
SSM destination addresses must be in the range 232.0.0.0/8 for IPv4. For IPv6, the currently allowed SSM destination addresses are specified by ff3x::/96, where the hexadecimal digit x represents the scope. Note, however, that the allocation may be extended in the future, so receivers and network equipment should treat any ff3x::/32 address as SSM.
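As a rough illustration of how a receiver expresses an (S,G) interest, the Python sketch below joins an IPv4 SSM channel on Linux. The group, source and port values are invented; the numeric fallback 39 for IP_ADD_SOURCE_MEMBERSHIP and the field order of ip_mreq_source are Linux-specific assumptions, so real applications should consult their platform's socket documentation.

import socket
import struct

GROUP = "232.1.1.1"     # hypothetical SSM group inside 232.0.0.0/8
SOURCE = "192.0.2.10"   # hypothetical source the receiver wants traffic from
PORT = 5004             # hypothetical UDP port

# Not every platform exports this constant from the socket module; 39 is the Linux value.
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Linux ip_mreq_source layout: group address, local interface, source address.
mreq = struct.pack("=4s4s4s",
                   socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"),
                   socket.inet_aton(SOURCE))
sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)  # only datagrams from SOURCE sent to GROUP are delivered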
References
External links
JAVA Source-specific multicast support library
Internet broadcasting
Internet Protocol
Network protocols |
5556196 | https://en.wikipedia.org/wiki/Crime%20prevention | Crime prevention | Crime prevention is the attempt to reduce and deter crime and criminals. It is applied specifically to efforts made by governments to reduce crime, enforce the law, and maintain criminal justice.
Studies
Criminologists, commissions, and research bodies such as the World Health Organization, the United Nations, the United States National Research Council, and the UK Audit Commission have analyzed their own and others' research on what lowers rates of interpersonal crime.
They agree that governments must go beyond law enforcement and criminal justice to tackle the risk factors that cause crime because it is more cost effective and leads to greater social benefits than the standard ways of responding to crime. Multiple opinion polls also confirm public support for investment in prevention. Waller uses these materials in Less Law, More Order to propose specific measures to reduce crime as well as a crime bill.
The World Health Organization Guide (2004) complements the World Report on Violence and Health (2002) and the 2003 World Health Assembly Resolution 56-24 for governments to implement nine recommendations, which were:
Create, implement and monitor a national action plan for violence prevention.
Enhance capacity for collecting data on violence.
Define priorities for, and support research on, the causes, consequences, costs and prevention of violence.
Promote primary prevention responses.
Strengthen responses for victims of violence.
Integrate violence prevention into social and educational policies, and thereby promote gender and social equality.
Increase collaboration and exchange of information on violence prevention.
Promote and monitor adherence to international treaties, laws and other mechanisms to protect human rights.
Seek practical, internationally agreed responses to the global drugs and global arms trade.
The commissions agree on the role of municipalities, because they are best able to organize the strategies to tackle the risk factors that cause crime. The European Forum for Urban Safety and the United States Conference of Mayors have stressed that municipalities must target the programs to meet the needs of youth at risk and women who are vulnerable to violence.
To succeed, they need to establish a coalition of key agencies such as schools, job creation, social services, housing and law enforcement around a diagnosis.
Types and suggestions for effective crime prevention
Several factors must come together for a crime to occur:
an individual or group must have the desire or motivation to participate in a banned or prohibited behavior;
at least some of the participants must have the skills and tools needed to commit the crime; and,
an opportunity must be acted upon.
Primary prevention addresses individual and family-level factors correlated with later criminal participation. Individual level factors such as attachment to school and involvement in pro-social activities decrease the probability of criminal involvement.
Family-level factors such as consistent parenting skills similarly reduce individual level risk. Risk factors are additive in nature. The greater the number of risk factors present the greater the risk of criminal involvement. In addition there are initiatives which seek to alter rates of crime at the community or aggregate level.
For example, Larry Sherman from the University of Maryland in Policing Domestic Violence (1993) demonstrated that changing the policy of police response to domestic violence calls altered the probability of subsequent violence. Policing hot spots, areas of known criminal activity, decreases the number of criminal events reported to the police in those areas. Other initiatives include community policing efforts to capture known criminals. Organizations such as America's Most Wanted and Crime Stoppers help catch these criminals.
Secondary prevention uses intervention techniques directed at youth who are at high risk of committing crime, especially youth who drop out of school or get involved in gangs. It targets social programs and law enforcement at neighborhoods where crime rates are high. Much of the crime that happens in neighborhoods with high crime rates is related to social and physical problems. The use of secondary crime prevention in cities such as Birmingham and Bogotá has achieved large reductions in crime and violence. Programs such as general social services, educational institutions and the police, focused on youth who are at risk, have been shown to significantly reduce crime.
Tertiary prevention is used after a crime has occurred in order to prevent successive incidents. Such measures can be seen in the implementation of new security policies following acts of terrorism such as the September 11, 2001 attacks.
Situational crime prevention uses techniques that focus on reducing the opportunity to commit a crime. Some of these techniques include increasing the difficulty of crime, increasing the risk of crime, and reducing the rewards of crime.
Situational crime prevention
Introduction and description
Situational crime prevention (SCP) is a relatively new concept that employs a preventive approach by focusing on methods to reduce the opportunities for crime. It was first outlined in a 1976 report released by the British Home Office. SCP focuses on the criminal setting and is different from most criminology as it begins with an examination of the circumstances that allow particular types of crime. By gaining an understanding of these circumstances, mechanisms are then introduced to change the relevant environments with the aim of reducing the opportunities for particular crimes. Thus, SCP focuses on crime prevention rather than the punishment or detection of criminals and its intention is to make criminal activities less appealing to offenders.
SCP focuses on opportunity-reducing processes that:
Are aimed at particular forms of crime;
Entail the management, creation or manipulation of the immediate environment in as organised and permanent a manner as possible; and
Result in crime being more difficult and risky or less rewarding and justifiable.
The theory behind SCP concentrates on the creation of safety mechanisms that assist in protecting people by making criminals feel they may be unable to commit crimes or would be in a situation where they may be caught or detected, which will result in them being unwilling to commit crimes where such mechanisms are in place. The logic behind this is based on the concept of rational choice - that every criminal will assess the situation of a potential crime, weigh up how much they may gain, balance it against how much they may lose and the probability of failing, and then act accordingly.
One example of SCP in practice is automated traffic enforcement. Automated traffic enforcement systems (ATES) use automated cameras on the roads to catch drivers who are speeding and those who run red lights. Such systems enjoy use all over the world. These systems have been installed and are advertised as an attempt to keep illegal driving incidents down. As a potential criminal, someone who is about to speed or run a red light knows that their risk of getting caught is nearly 100% with these systems. This strongly disincentivizes the person from speeding or running red lights in areas in which they know ATES are set up. Though not conclusive, evidence shows that these types of systems work. In a Philadelphia study, some of the city's most dangerous intersections had a reduction of 96% in red light violations after the installation and advertisement of an ATES system.
Applying SCP to information systems (IS)
Situational crime prevention (SCP) in general attempts to move away from the "dispositional" theories of crime commission i.e. the influence of psychosocial factors or genetic makeup of the criminal, and to focus on those environmental and situational factors that can potentially influence criminal conduct. Hence rather than focus on the criminal, SCP focuses on the circumstances that lend themselves to crime commission. Understanding these circumstances leads to the introduction of measures that alter the environmental factors with the aim of reducing opportunities for criminal behavior. Other aspects of SCP include:
targeting specific forms of crime e.g. cybercrime
aiming to increase the effort and decrease potential risks of crime
reducing provocative phenomena
Safeguarding
Another aspect of SCP that is more applicable to the cyber environment is the principle of safeguarding. The introduction of these safeguards is designed to influence the potential offender's view of the risks and benefits of committing the crime. A criminal act is usually performed if the offender decides that there is little or no risk attached to the act. One of the goals of SCP is to implement safeguards to the point where the potential offender views the act unfavourably. For example, drivers approaching a traffic junction where there are speed cameras slow down if there is a nearly 100% chance of being caught trying to run a red light. The use of crime "scripts" has been touted as a method of administering safeguards. Scripts were originally developed in the field of cognitive science and focus on the behavioural processes involved in rational goal-oriented behaviour. Hence scripts have been proposed as a tool for examining criminal behaviour. In particular, the use of what is termed a "universal script" has been advanced for correctly identifying all the stages in the commission process of a crime.
Application to cybercrimes
It has been suggested that cybercriminals be assessed in terms of their criminal attributes, which include skills, knowledge, resources, access and motives (SKRAM). Cybercriminals usually have a high degree of these attributes and this is why SCP may prove more useful than traditional approaches to crime. Clarke proposed a table of twenty-five techniques of situational crime prevention, but the five general headings are:
Increasing the effort to commit the crime
Increasing the risks of committing the crime
Reducing the rewards of committing the crime
Reducing any provocation for committing the crime
Removing any excuses for committing the crime
These techniques can be specifically adapted to cybercrime as follows:
Increasing the effort
Reinforcing targets and restricting access – the use of firewalls, encryption, card/password access to ID databases and banning hacker websites and magazines.
Increasing the risk
Reinforcing authentication procedures – background checks for employees with database access, tracking keystrokes of computer users, use of photo and thumb print for ID documents/credit cards, requiring additional ID for online purchases, use of cameras at ATMs and at point of sale.
Reducing the rewards
Removing targets and disrupting cyberplaces – monitoring Internet sites and incoming spam, harsh penalties for hacking, rapid notification of stolen or lost credit bankcards, avoiding ID numbers on all official documents.
Reducing provocation and excuses
Avoiding disputes and temptations – maintaining positive employee-management relations and increasing awareness of responsible use policy.
Many of these techniques do not require a considerable investment in hi-tech IT skills and knowledge. Rather, it is the effective utilization and training of existing personnel that is key.
It has been suggested that the theory behind situational crime prevention may also be useful in improving information systems (IS) security by decreasing the rewards criminals may expect from a crime. SCP theory aims to affect the motivation of criminals by means of environmental and situational changes and is based on three elements:
Increasing the perceived difficulty of crime;
Increasing the risks; and
Reducing the rewards.
IS professionals and others who wish to fight computer crime could use the same techniques and consequently reduce the frequency of computer crime that targets the information assets of businesses and organisations.
Designing out crime from the environment is a crucial element of SCP and the most efficient way of using computers to fight crime is to predict criminal behaviour, which as a result, makes it difficult for such behaviour to be performed. SCP also has an advantage over other IS measures because it does not focus on crime from the criminal's viewpoint.
Many businesses/organisations are heavily dependent on information and communications technology (ICT), and information is a hugely valuable asset due to the accessible data that it provides, which means IS has become increasingly important. While storing information in computers enables easy access and sharing by users, computer crime is a considerable threat to such information, whether committed by an external hacker or by an ‘insider’ (a trusted member of a business or organisation). After viruses, illicit access to, and theft of, information form the highest percentage of all financial losses associated with computer crime and security incidents. Businesses need to protect themselves against such illegal or unethical activities, which may be committed via electronic or other methods, and IS security technologies are vital in order to protect against amendment, unauthorised disclosure and/or misuse of information. Computer intrusion fraud is a huge business, with hackers being able to find passwords, read and alter files and read email, but such crime could almost be eliminated if hackers could be prevented from accessing a computer system or identified quickly enough.
Despite many years of computer security research, huge amounts of money being spent on secure operations and an increase in training requirements, there are frequent reports of computer penetrations and data thefts at some of the most heavily protected computer systems in the world. Criminal activities in cyberspace are increasing with computers being used for numerous illegal activities, including email surveillance, credit card fraud and software piracy. As the popularity and growth of the Internet continues to increase, many web applications and services are being set up, which are widely used by businesses for their business transactions.
In the case of computer crime, even cautious companies or businesses that aim to create effective and comprehensive security measures may unintentionally produce an environment that provides opportunities for crime because they are using inappropriate controls. Consequently, if the precautions do not provide an adequate level of security, the IS will be at risk.
Application to sexual abuse
Smallbone et al.’s Integrated Theory of Child Sexual Abuse posits that it can be useful to study child sexual abuse as a situationally specific incident, and that on any particular occasion, a variety of different factors can influence whether that incident is likely to occur. One set of factors is situational factors, which form the immediate backdrop to the setting in which the abuse takes place. Situational factors, it is argued, can influence not just whether a person abuses a child, but whether the idea of abusing occurs to them in the first place. The particular opportunities and dynamics of a situation are said to present cues, stressors, temptations and perceived provocations, which trigger motivation. The consideration of situational factors leads to the argument that some offenders may be considered as ‘situational’, marking them out from other types. The ‘situational offender’ is someone who is not primarily attracted to children. Instead, he is stimulated to offend by specific behavioural cues or stressors, often while performing care-giving duties. The authors of the theory argue that modifying the situations experienced by children, through situational crime prevention strategies, could lower the likelihood of abuse, irrespective of the disposition of people who are likely to come into contact with children. The authors concede that there has been little testing of situational interventions, which means there is little evidence to demonstrate their effectiveness.
An evaluation of a programme which worked with mothers in London to reduce situational risk of child sexual abuse in the home illustrated some of the challenges that mothers faced in identifying and reducing situational risk:
Increasing understanding about abuse, how and where it happens.
Accepting the possibility of abuse at home and in the family.
Accurately assessing the risks posed to one's own children.
Lowering known risks by negotiating with family members.
Situational crime prevention and fraud
In computer systems that have been developed to design out crime from the environment, one of the tactics used is risk assessment, where business transactions, clients and situations are monitored for any features that indicate a risk of criminal activity. Credit card fraud has been one of the most complex crimes worldwide in recent times and despite numerous prevention initiatives, it is clear that more needs to be done if the problem is to be solved. Fraud management comprises a whole range of activities, including early warning systems, signs and patterns of different types of fraud, profiles of users and their activities, security of computers and avoiding customer dissatisfaction. There are a number of issues that make the development of fraud management systems an extremely difficult and challenging task, including the huge volume of data involved; the requirement for fast and accurate fraud detection without inconveniencing business operations; the ongoing development of new fraud to evade existing techniques; and the risk of false alarms.
Generally, fraud detection techniques fall into two categories: statistical techniques and artificial intelligence techniques; a minimal sketch of the statistical approach follows the lists below.
Important statistical data analysis techniques to detect fraud include:
Grouping and classification to determine patterns and associations among sets of data.
Matching algorithms to identify irregularities in the transactions of users compared to their previous activity
Data pre-processing techniques for validation, correction of errors and estimating incorrect or missing data.
Important AI techniques for fraud management are:
Data mining – to categorise and group data and automatically identify associations and rules that may be indicative of remarkable patterns, including those connected to fraud.
Expert systems that encode fraud-detection expertise in the form of rules.
Pattern recognition to identify groups or patterns of behaviour either automatically or to match certain inputs.
Machine learning techniques to automatically detect the characteristics of fraud
Neural networks that can learn suspicious patterns and later identify them.
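A minimal Python sketch of the statistical approach mentioned above: a transaction is flagged when its amount deviates strongly from the user's previous activity. The three-standard-deviation threshold and the sample figures are arbitrary assumptions for illustration, not part of any particular fraud-management product.

from statistics import mean, stdev

def is_suspicious(history, amount, z_limit=3.0):
    """Flag an amount that is far outside the user's previous spending pattern."""
    if len(history) < 2:
        return False                      # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu               # any deviation from a constant pattern is unusual
    return abs(amount - mu) / sigma > z_limit

# Example: a 5,000 purchase on an account that normally spends around 40.
print(is_suspicious([25.0, 40.0, 31.0, 55.0, 38.0], 5000.0))  # True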
Applications
Neighborhoods can implement protective strategies to reduce violent crime. The broken windows theory of crime suggests that disorderly neighborhoods can promote crime by showing they have inadequate social control. Some studies have indicated that modifying the built environment can reduce violent crime. This includes deconcentrating high-rise public housing, making zoning changes, restricting the number of liquor licenses available in an area, and keeping vacant lots and buildings maintained and secure.
See also
Crime displacement
Predictive policing
Notes
Bibliography
Audit Commission, Misspent youth: Young people and crime, London: Audit Commission for Local Authorities and NHS in England and Wales, 1996
Her Majesty's Inspectorate of Constabulary, Beating crime, London: Home Office, 1998
Home Office, Reducing offending: An assessment of research evidence on ways of dealing with offending behaviour, edited by Peter Goldblatt and Chris Lewis. London: Home Office, Research and Statistics, 1998
International Centre for Prevention of Crime, Urban Crime Prevention and Youth at Risk: Compendium of promising strategies and programs from around the world, Montreal, 2005
International Centre for Prevention of Crime, Crime Prevention Digest II: Comparative Analysis of Successful Community Safety, Montreal, 1999
International Centre for Prevention of Crime, 100 Crime Prevention Programs to Inspire Action across the World, Montreal, 1999
National Research Council (US), Fairness and Effectiveness in Policing: The Evidence, edited by Wesley Skogan, Washington, DC: The National Academies Press, 2004
National Research Council (U.S.), Juvenile crime, juvenile justice, edited by Joan McCord, et al., Washington, DC: National Academies Press, 2001
National Research Council and Institute of Medicine, Violence in families: assessing prevention and treatment programs, Committee on the Assessment of Family Violence Interventions, Board on Children, Youth, and Families, Edited by Rosemary Chalk and Patricia King. Washington, DC: National Academies Press, 2004
Sherman, Lawrence, David Farrington, Brandon Welsh, Doris MacKenzie, 2002, Evidence Based Crime Prevention, New York: Routledge
Skogan, Wesley and Susan Hartnett, Community Policing: Chicago Style, New York: Oxford University Press, 1997
United Nations, Economic and Social Council, Guidelines for the Prevention of Crime, New York: United Nations, Economic and Social Council, Office for Drug Control and Crime Prevention, 2002
Waller, Irvin, Less Law, More Order: The Truth about Reducing Crime, West Port: Praeger Imprint Series, 2006
Welsh, Brandon, and David Farrington, eds., Preventing Crime: What Works for Children, Offenders, Victims, and Places, New York: Springer, 2006
World Health Organization, World report on road traffic injury prevention: Summary, Geneva, 2004
World Health Organization, Preventing violence: A guide to implementing the recommendations of the World Report on Violence and Health, Geneva: Violence and Injuries Prevention, 2004
World Health Organization, World Report on Violence and Health, Geneva: Violence and Injuries Prevention, 2002
External links
Institute for the Prevention of Crime
International Center for the Prevention of Crime
Crime Prevention and Social Media Community of Practice
Criminology
Law enforcement techniques
Espionage techniques |
2189320 | https://en.wikipedia.org/wiki/Roman%20Dzindzichashvili | Roman Dzindzichashvili | Roman Yakovlevich Dzindzichashvili (; pronounced jin-jee-khash-VEE-lee; born May 5, 1944) is a Soviet-born Israeli-American chess player. He was awarded the title Grandmaster by FIDE in 1977.
Life and career
Dzindzichashvili was born in Tbilisi, Georgian SSR, into a family of Georgian Jews; his older brother is Nodar Djin. Dzindzichashvili won the Junior Championship of the Soviet Union in 1962 and the University Championships in 1966 and 1968. In 1970, he was awarded the title of International Master by FIDE. He left the U.S.S.R. in 1976 for Israel and earned the Grandmaster title in 1977. One of his best career performances was first place at the 53rd Hastings Chess Festival in 1977/1978, scoring 10½ out of 14 points, a full point ahead of former World Champion Tigran Petrosian. In 1979, he settled in the United States, and he won the Lone Pine tournament the next year. He led the U.S. team at the Chess Olympiad in 1984.
He won the U.S. Chess Championship twice, in 1983 and again in 1989, sharing the title with two other players each time. He briefly took up residence in Washington Square Park in New York City, and hustled chess during the 1980s, making a living playing blitz for stakes, as is popular there. He had a cameo in the 1993 film Searching for Bobby Fischer. He also had a brief appearance in "Men Who Would Be Kings", a documentary about chess in Washington Square Park set in the 1980s.
Dzindzichashvili is a well-known theoretician and a chess coach. Among his students are 5-time US Champion Gata Kamsky and Eugene Perelshteyn. He is the author and star of multiple chess instructional DVDs entitled "Roman's Lab". He currently produces instructional videos for Chess.com. Topics include openings, middlegames, endgames, famous players, and interesting games.
He is one of the founders of Chess.net internet chess server project, started in 1993. He played third board for the "GGGg" team that won the Amateur Team East tournament in February 2008.
Dzindzichashvili vs. computer programs
Dzindzichashvili played a series of rapid games against the computer program Fritz in 1991 and 1993. In the following game he checkmated the program in only 28 moves.
Dzindzichashvili vs. Fritz, 1991 1.d4 e6 2.e4 d5 3.e5 c5 4.Nf3 Nc6 5.Bd3 cxd4 6.0-0 Bc5 7.Re1 Nge7 8.Nbd2 0-0 9.Bxh7+ Kxh7 10.Ng5+ Kg6 11.Qg4 Nxe5 12.Rxe5 f5 13.Qg3 Rf7 14.Ndf3 Qh8 15.Nh4+ Qxh4 16.Qxh4 Rf8 17.Qh7+ Kf6 18.Nf3 Ng6 19.Bg5+ Kf7 20.Qh5 Rh8 21.Rxf5+ Kg8 22.Qxg6 exf5 23.Bf6 Rh7 24.Re1 d3 25.Re8+ Bf8 26.Ng5 Rh6 27.Rxf8+ Kxf8 28.Qf7#
In a match held on March 3–7, 2008, Dzindzichashvili played a series of eight games, time control 45'+10", against the computer program Rybka, with Rybka giving odds of pawn and move. The series ended in a 4–4 tie. A return match, at the same odds, was played on July 28, 2008, at the faster than tournament time control (30'+20"). Rybka 3, running on eight CPUs, won by the score 2½–1½ (1 victory and 3 draws).
Game from 1969
In 1969, in the USSR, Dzindzichashvili had a quick win against Grigoryan.
GM Karen H. Grigoryan (white) vs. Roman Dzindzichashvili (black)
ECO code C64, Ruy Lopez opening.
1.e4 e5 2.Nf3 Nc6 3.Bb5 Bc5 4.c3 f5 5.d4 fxe4 6.Ng5 Bb6 7.d5 e3 8.dxc6 bxc6 9.h4 exf2+ 10.Kf1 cxb5 11.Qd5 Nh6 12.Qxa8 c6 13.Ne4 0-0 14.Bg5 b4 (diagram) (prepares 15... Ba6) (0-1)
Further reading
See also
List of Jewish chess players
References
External links
The Dzindzichashvili– Rybka 3 Handicap Match, Chessbase.com
1944 births
Living people
Chess grandmasters
Chess coaches
American chess players
Chess players from Georgia (country)
Israeli chess players
Soviet chess players
Jewish chess players
Jews from Georgia (country)
Sportspeople from Tbilisi
American people of Georgian-Jewish descent |
4175276 | https://en.wikipedia.org/wiki/Time%20Matters | Time Matters | Time Matters is practice management software, produced by PCLaw | Time Matters LLC. It differs from contact management software such as ACT! or GoldMine because in addition to contacts, it manages calendaring, email, documents, research, billing, accounting, and matters or projects. It integrates with a variety of other software products from both LexisNexis and other vendors. Some of these vendors are Quicken, Microsoft, Palm, Mozilla, Corel, and Adobe. Developed originally for law firms, Time Matters competes with Gavel, Amicus, Tabs, and other legal practice management products. It also may be used in conjunction with Document modelling and Document assembly software products like HotDocs and Deal Builder.
Time Matters was developed by DATA.TXT Corporation, originally of Coral Gables, Florida, and later of Cary, North Carolina. Since its inception, DATA.TXT Corporation focused on making Time Matters an all-encompassing professional office software package, providing Calendar, Tickler, Contact, Matter, Document, and Messaging Management for personal computers and networks of all sizes. The company was founded in December 1989 by Robert Butler, who was later joined as co-founder by Kevin Stilwell in 1992, and the entire management and programming staff that began Time Matters' development in 1989 remained on the team until 2004, providing continuity and reliability rarely seen in software developed for specialized markets. Time Matters was purchased by Reed Elsevier in March 2004. LexisNexis developed Time Matters out of offices in Cary, North Carolina, later moving operations to Raleigh, North Carolina, on the North Carolina State University campus.
Time Matters for Windows has been shipping since 1994 (The DOS version of Time Matters started shipping in 1989).
Time Matters was previously available in three editions: Professional, Enterprise, and World. The Enterprise edition used Microsoft SQL Server as its database engine. Time Matters Browser Edition (formerly World Edition) served up Time Matters in Web browsers for remote access to a law firm's data. An international network of Certified Independent Consultants ("CICs") support, train, and customize this product for end-users.
Time Matters Professional, discontinued with the release of Time Matters 10.0 in 2009, was based on the TPS file system developed by Softvelocity. Currently, Time Matters relies on SQL Server for its database.
With the release of Version 10 in October 2009, Time Matters became available only in the Enterprise Edition (but was sold as Time Matters). In May 2010, LexisNexis introduced an Annual Maintenance Plan (AMP) subscription program. AMP subscribers are eligible to download product upgrades and to receive technical support. AMP subscribers also receive free access to online training and are eligible to subscribe to the Time Matters Go mobile app service for Android and iOS. No per-incident technical support options are available.
In 2018, Time Matters introduced Time Matters Go, a mobile application for iOS and Android devices.
Time Matters 16.4 was released on January 30, 2019. This release provided improved integration with Microsoft Exchange Server and improved add-ins for supported versions of Adobe Acrobat and Microsoft Office applications.
In May 2019, LexisNexis entered a joint venture with LEAP Legal Software, providing a migration option from the server-based Time Matters to the cloud-based product offered by LEAP. At the time, LexisNexis reported that they had 15,000 paying customers and 130,000 users across their PCLaw and Time Matters products. A new software company, PCLaw | Time Matters, was born out of the joint venture and continues to develop Time Matters.
See also
LexisNexis
References
External links
Time Matters website
Business software
Legal software
Timekeeping |
49400675 | https://en.wikipedia.org/wiki/Cub%20Linux | Cub Linux | Cub Linux was a computer operating system designed to mimic the desktop appearance and functionality of Chrome OS. It was based on Ubuntu 14.04 LTS "Trusty Tahr". It used Openbox as the window manager and tools taken from LXDE, GNOME and Xfce, as well as a number of other utilities. It was a cloud-centric operating system heavily focused on the Chromium Browser. Cub Linux's tagline was "Cub = Chromium + Ubuntu".
History
Cub Linux was originally called Chromixium OS. The developer, RichJack, initially announced the project on the Ubuntu user forums on September 19, 2014. Since then, the project released the first stable version, Chromixium 1.0, as a 32-bit live ISO on April 26, 2015. This was followed by a service pack to address a number of issues such as screen tearing and slow menu generation. In July 2015, a number of updates were rolled into a new release, version 1.5. This was initially 32-bit only, but was followed by a 64-bit release in November 2015.
At some point towards the end of 2015, Google, who owns the rights to the Chrome OS and Chromium trademarks, requested that RichJack cease use of the Chromixium mark and related websites and social media presences. On January 17, 2016, RichJack announced that Chromixium would be changing name to Cub Linux with immediate effect and that the Chromixium name would be completely dropped by 31 March 2016.
Towards the end of 2016, the Cub Linux website mysteriously disappeared. Their GitHub page is still open. User d4zzy, who was involved in Cub Linux's development, had this to say about its sudden end: "Cub was killed due to private life restrictions - that was all and there was no one at the time to pick it up."
On July 19, 2017, the developer of Feren OS announced that he would be "bringing back Cub" in the form of Phoenix Linux. These plans appear to be on hold due to the success of Feren OS.
Reception
Jesse Smith from DistroWatch Weekly reviewed Chromixium OS 1.0:
See also
SUSE Studio#Notable appliances
List of Linux distributions#openSUSE-based
Ubuntu
Chrome OS
Chromium Browser
References
External links
Cub Linux Forum
Cub Linux at Distrowatch
Cub Linux source code
Original Chromixium website
Chromixium archive and downloads
Cub Linux archive and downloads
Phoenix Linux website
Computer-related introductions in 2016
X86-64 Linux distributions
Ubuntu derivatives
Linux distributions |
44942220 | https://en.wikipedia.org/wiki/Virtual%20memory%20compression | Virtual memory compression | Virtual memory compression (also referred to as RAM compression and memory compression) is a memory management technique that utilizes data compression to reduce the size or number of paging requests to and from the auxiliary storage. In a virtual memory compression system, pages to be paged out of virtual memory are compressed and stored in physical memory, which is usually random-access memory (RAM), or sent in compressed form to auxiliary storage such as a hard disk drive (HDD) or solid-state drive (SSD). In both cases, the virtual memory range whose contents have been compressed is marked inaccessible so that attempts to access compressed pages can trigger page faults and reversal of the process (retrieval from auxiliary storage and decompression). The footprint of the data being paged is reduced by the compression process; in the first instance, the freed RAM is returned to the available physical memory pool, while the compressed portion is kept in RAM. In the second instance, the compressed data is sent to auxiliary storage, but the resulting I/O operation is smaller and therefore takes less time.
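A minimal sketch of this mechanism, written in Python with zlib standing in for whatever compressor a real implementation uses, is shown below; the function names and the dictionary-based page store are illustrative only and do not correspond to any particular operating system's API.

import zlib

PAGE_SIZE = 4096
compressed_store = {}        # page number -> compressed page kept in physical memory

def page_out(page_no, page):
    """Compress a page being evicted and keep it in RAM; return the bytes freed."""
    blob = zlib.compress(page)
    compressed_store[page_no] = blob
    return PAGE_SIZE - len(blob)

def page_in(page_no):
    """Handle a fault on a compressed page: decompress it and hand it back."""
    return zlib.decompress(compressed_store.pop(page_no))

# A highly redundant page compresses well, so most of its 4 KiB is freed.
freed = page_out(7, b"A" * PAGE_SIZE)
assert page_in(7) == b"A" * PAGE_SIZE and freed > 0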
In some implementations, including zswap, zram and Helix Software Company’s Hurricane, the entire process is implemented in software. In other systems, such as IBM's MXT, the compression process occurs in a dedicated processor that handles transfers between a local cache and RAM.
Virtual memory compression is distinct from garbage collection (GC) systems, which remove unused memory blocks and in some cases consolidate used memory regions, reducing fragmentation and improving efficiency. Virtual memory compression is also distinct from context switching systems, such as Connectix's RAM Doubler (though it also did online compression) and Apple OS 7.1, in which inactive processes are suspended and then compressed as a whole.
Benefits
By reducing the I/O activity caused by paging requests, virtual memory compression can produce overall performance improvements. The degree of performance improvement depends on a variety of factors, including the availability of any compression co-processors, spare bandwidth on the CPU, speed of the I/O channel, speed of the physical memory, and the compressibility of the physical memory contents.
On multi-core, multithreaded CPUs, some benchmarks show performance improvements of over 50%.
In some situations, such as in embedded devices, auxiliary storage is limited or non-existent. In these cases, virtual memory compression can allow a virtual memory system to operate, where otherwise virtual memory would have to be disabled. This allows the system to run certain software which would otherwise be unable to operate in an environment with no virtual memory.
Flash memory has certain endurance limitations on the maximum number of erase cycles it can undergo, which can be as low as 100 erase cycles. In systems where Flash Memory is used as the only auxiliary storage system, implementing virtual memory compression can reduce the total quantity of data being written to auxiliary storage, improving system reliability.
Shortcomings
Low compression ratios
One of the primary issues is the degree to which the contents of physical memory can be compressed under real-world loads. Program code and much of the data held in physical memory is often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets. Various studies show typical data compression ratios ranging from 2:1 to 2.5:1 for program data, similar to typically achievable compression ratios with disk compression.
Background I/O
In order for virtual memory compression to provide measurable performance improvements, the throughput of the virtual memory system must be improved when compared to the uncompressed equivalent. Thus, the additional amount of processing introduced by the compression must not increase the overall latency. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be substantial.
Increased thrashing
The physical memory used by a compression system reduces the amount of physical memory available to processes that a system runs, which may result in increased paging activity and reduced overall effectiveness of virtual memory compression. This relationship between the paging activity and available physical memory is roughly exponential, meaning that reducing the amount of physical memory available to system processes results in an exponential increase of paging activity.
In circumstances where the amount of free physical memory is low and paging is fairly prevalent, any performance gains provided by the compression system (compared to paging directly to and from auxiliary storage) may be offset by an increased page fault rate that leads to thrashing and degraded system performance. In the opposite state, where enough physical memory is available and paging activity is low, compression may not impact performance enough to be noticeable. The middle ground between these two circumstances (low RAM with high paging activity, and plenty of RAM with low paging activity) is where virtual memory compression may be most useful. However, the more compressible the program data is, the more pronounced the performance improvements are, as less physical memory is needed to hold the compressed data.
For example, in order to maximize the use of a compressed pages cache, Helix Software Company's Hurricane 2.0 provides a user-configurable compression rejection threshold. By compressing the first 256 to 512 bytes of a 4 KiB page, this virtual memory compression system determines whether the configured compression level threshold can be achieved for a particular page; if achievable, the rest of the page would be compressed and retained in a compressed cache, and otherwise the page would be sent to auxiliary storage through the normal paging system. The default setting for this threshold is an 8:1 compression ratio.
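A small Python sketch of this test-compression idea follows; zlib stands in for the Lempel–Ziv–Stac compressor that Hurricane actually used, and the function is an illustration of the rejection-threshold logic rather than a reconstruction of the product.

import zlib

def passes_threshold(page, sample=512, ratio=8.0):
    """Compress only the first bytes of the page and test the configured ratio."""
    head = page[:sample]
    return len(head) / len(zlib.compress(head)) >= ratio

# A run of identical bytes easily clears 8:1; random-looking data would not.
print(passes_threshold(b"\x00" * 4096))  # True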
Price/performance issues
In hardware implementations the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance differences of the various components tend to vary over time. For example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.
Prioritization
In a typical virtual memory implementation, paging happens on a least recently used basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest priority data. Furthermore, program code is usually read-only, and is therefore never paged-out. Instead code is simply discarded, and re-loaded from the program’s auxiliary storage file if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.
Compression using quantization
Accelerator designers exploit quantization to reduce the bitwidth
of values and reduce the cost of data movement. However,
any value that does not fit in the reduced bitwidth results in
an overflow (we refer to these values as outliers). Therefore
accelerators use quantization for applications that are tolerant
to overflows. In most applications the rate of
outliers is low and values are often within a narrow range
providing the opportunity to exploit quantization in generalpurpose processors. However, a software implementation of
quantization in general-purpose processors has three problems.
First, the programmer has to manually implement conversions
and the additional instructions that quantize and dequantize
values, imposing a programmer's effort and performance overhead. Second, to cover outliers, the bitwidth of the quantized
values often become greater than or equal to the original
values. Third, the programmer has to use standard bitwidth;
otherwise, extracting non-standard bitwidth (i.e., 1–7, 9–15, and
17–31) for representing narrow integers exacerbates the overhead
of software-based quantization. A hardware support in the memory hierarchy of
general-purpose processors for quantization can solve these problems. Tha hardware support allows representing
values by few and flexible numbers of bits and storing outliers
in their original format in a separate space, preventing any
overflow. It minimizes metadata and the overhead of locating
quantized values using a software-hardware interaction that
transfers quantization parameters and data layout to hardware.
As a result, transparent hardware-based quantization has three advantages over cache
compression techniques: (i) less metadata, (ii) higher compression
ratio for floating-point values and cache blocks with multiple data
types, and (iii) lower overhead for locating the compressed blocks.
History
Virtual memory compression has gone in and out of favor as a technology. The price of RAM and external storage has plummeted while their speed has increased, due to Moore's Law and improved RAM interfaces such as DDR3, reducing the need for virtual memory compression; meanwhile, multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, make virtual memory compression more attractive.
Origins
Paul R. Wilson proposed compressed caching of virtual memory pages in 1990, in a paper circulated at the ACM OOPSLA/ECOOP '90 Workshop on Garbage Collection ("Some Issues and Strategies in Heap Management and Memory Hierarchies"), and appearing in ACM SIGPLAN Notices in January 1991.
Helix Software Company pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year. In 1994 and 1995, Helix refined the process using test-compression and secondary memory caches on video cards and other devices. However, Helix did not release a product incorporating virtual memory compression until July 1996 and the release of Hurricane 2.0, which used the Stac Electronics Lempel–Ziv–Stac compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.
In 1995, RAM cost nearly $50 per megabyte, and Microsoft's Windows 95 listed a minimum requirement of 4 MB of RAM. Due to the high RAM requirement, several programs were released which claimed to use compression technology to gain “memory”. Most notorious was the SoftRAM program from Syncronys Softcorp. SoftRAM was revealed to be “placebo software” which did not include any compression technology at all. Other products, including Hurricane and MagnaRAM, included virtual memory compression, but implemented only run-length encoding, with poor results, giving the technology a negative reputation.
In its 8 April 1997 issue, PC Magazine published a comprehensive test of the performance enhancement claims of several software virtual memory compression tools. In its testing PC Magazine found a minimal (5% overall) performance improvement from the use of Hurricane, and none at all from any of the other packages. However the tests were run on Intel Pentium systems which had a single core and were single threaded, and thus compression directly impacted all system activity.
In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT). MXT was a stand-alone chip which acted as a CPU cache between the CPU and memory controller. MXT had an integrated compression engine which compressed all data heading to/from physical memory. Subsequent testing of the technology by Intel showed 5–20% overall system performance improvement, similar to the results obtained by PC Magazine with Hurricane.
Recent developments
In early 2008, a Linux project named zram (originally called compcache) was released; in a 2013 update, it was incorporated into Chrome OS and Android 4.4.
In 2010, IBM released Active Memory Expansion (AME) for AIX 6.1 which implements virtual memory compression.
In 2012, some versions of the POWER7+ chip included AME hardware accelerators using the 842 compression algorithm for data compression support, used on AIX, for virtual memory compression. More recent POWER processors continue to support the feature.
In December 2012, the zswap project was announced; it was merged into the Linux kernel mainline in September 2013.
In June 2013, Apple announced that it would include virtual memory compression in OS X Mavericks, using the Wilson-Kaplan WKdm algorithm.
A 10 August 2015 "Windows Insider Preview" update for Windows 10 (build 10525) added support for RAM compression.
See also
Disk compression
Swap partitions on SSDs
References
virtual memory compression |
1452173 | https://en.wikipedia.org/wiki/RapidIO | RapidIO | The RapidIO architecture is a high-performance packet-switched electrical connection technology. RapidIO supports messaging, read/write and cache coherency semantics. RapidIO fabrics guarantee in-order packet delivery, enabling power- and area-efficient protocol implementation in hardware. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect.
History
RapidIO has its roots in energy-efficient, high-performance computing. The protocol was originally designed by Mercury Computer Systems and Motorola (Freescale) as a replacement for Mercury's RACEway proprietary bus and Freescale's PowerPC bus. The RapidIO Trade Association was formed in February 2000, and included telecommunications and storage OEMs as well as FPGA, processor, and switch companies.
The protocol was designed to meet the following objectives:
Low latency
Guaranteed, in order, packet delivery
Support for messaging and read/write semantics
Could be used in systems with fault tolerance/high availability requirements
Flow control mechanisms to manage short-term (less than 10 microseconds), medium-term (tens of microseconds) and long-term (hundreds of microseconds to milliseconds) congestion
Efficient protocol implementation in hardware
Low system power
Scale from two to thousands of nodes
Releases
The RapidIO specification revision 1.1 (3xN Gen1), released in March 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption.
The RapidIO specification revision 1.2, released in June 2002, defined a serial interconnect based on the XAUI physical layer. Devices based on this specification achieved significant commercial success within wireless baseband, imaging and military compute.
The RapidIO specification revision 1.3 was released in June 2005.
The RapidIO specification revision 2.0 (6xN Gen2), released in March 2008, added more port widths (2×, 8×, and 16×) and increased the maximum lane speed to 6.25 GBd / 5 Gbit/s. Revision 2.1 repeated and expanded the commercial success of the 1.2 specification.
The RapidIO specification revision 2.1 was released in September 2009.
The RapidIO specification revision 2.2 was released in May 2011.
The RapidIO specification revision 3.0 (10xN Gen3), released in October 2013, has the following changes and improvements compared to the 2.x specifications:
Based on industry-standard Ethernet 10GBASE-KR electrical specifications for short (20 cm + connector) and long (1 m + 2 connector) reach applications
Directly leverages the Ethernet 10GBASE-KR DME training scheme for long-reach signal quality optimization
Defines a 64b/67b encoding scheme (similar to the Interlaken standard) to support both copper and optical interconnects and to improve bandwidth efficiency
Dynamic asymmetric links to save power (for example, 4× in one direction, 1× in the other)
Addition of a time synchronization capability similar to IEEE 1588, but much less expensive to implement
Support for 32-bit device IDs, increasing maximum system size and enabling innovative hardware virtualization support
Revised routing table programming model simplifies network management software
Packet exchange protocol optimizations
The RapidIO specification revision 3.1, released in October 2014, was developed through a collaboration between the RapidIO Trade Association and NGSIS. Revision 3.1 has the following enhancements compared to the 3.0 specification:
MECS Time Synchronization protocol for smaller embedded systems. MECS Time Synchronization supports redundant time sources. This protocol is lower cost than the Timestamp Synchronization Protocol introduced in revision 3.0
PRBS test facilities and standard register interface.
Structurally Asymmetric Link behavioral definition and standard register interface. Structurally Asymmetric Links carry much more data in one direction than the other, for applications such as sensors or processing pipelines. Unlike dynamic asymmetric links, Structurally Asymmetric Links allow implementers to remove lanes on boards and in silicon, saving size, weight, and power. Structurally asymmetric links also allow the use of alternative lanes in the case of a hardware failure on a multi-lane port.
Extended error log to capture a series of errors for diagnostic purposes
Space device profiles for endpoints and switches, which define what it means to be a space-compliant RapidIO device.
The RapidIO specification revision 3.2 was released in February 2016.
The RapidIO specification revision 4.0 (25xN Gen4), released in June 2016, has the following changes and improvements compared to the 3.x specifications:
Support 25 Gbaud lane rate and physical layer specification, with associated programming model changes
Allow IDLE3 to be used with any Baud Rate Class, with specified IDLE sequence negotiation
Increased maximum packet size to 284 bytes in anticipation of Cache Coherency specification
Support 16 physical layer priorities
Support “Error Free Transmission” for high throughput isochronous information transfer
The RapidIO specification revision 4.1 was released in July 2017.
Wireless infrastructure
RapidIO fabrics are used in cellular infrastructure 3G, 4G and LTE networks with millions of RapidIO ports shipped into wireless base stations worldwide. RapidIO fabrics were originally designed to support connecting different types of processors from different manufacturers together in a single system. This flexibility has driven the widespread use of RapidIO in wireless infrastructure equipment where there is a need to combine heterogeneous, DSP, FPGA and communication processors together in a tightly coupled system with low latency and high reliability.
Data centers
Data center and HPC analytics systems have been deployed using a RapidIO 2D torus mesh fabric, which provides a high-speed, general-purpose interface among the system cartridges for applications that benefit from high-bandwidth, low-latency node-to-node communication. The RapidIO 2D torus unified fabric is routed as a torus ring configuration connecting up to 45 server cartridges, each capable of providing 5 Gbit/s-per-lane connections in each direction to its north, south, east, and west neighbors. This allows the system to address HPC applications where efficient localized traffic is needed.
Also, using an open modular data center and compute platform, a heterogeneous HPC system has showcased RapidIO's low-latency attributes to enable real-time analytics. In March 2015, a top-of-rack switch was announced to drive RapidIO into mainstream data center applications.
Aerospace
The interconnect or "bus" is one of the critical technologies in the design and development of spacecraft avionics systems, dictating their architecture and level of complexity. A host of existing architectures remain in use because of their maturity, and they are sufficient for current architecture needs and requirements. Next-generation missions, however, demand a more capable avionics architecture that is well beyond what existing architectures can provide. A viable option for designing and developing these next-generation architectures is to leverage existing commercial protocols capable of accommodating high levels of data transfer.
In 2012, RapidIO was selected by the Next Generation Spacecraft Interconnect Standard (NGSIS) working group to serve as the foundation for standard communication interconnects to be used in spacecraft. The NGSIS is an umbrella standards effort that includes RapidIO Version 3.1 development, and a box-level hardware standards effort under VITA 78 called SpaceVPX or High Reliability VPX. The NGSIS requirements committee developed extensive requirements criteria with 47 different elements for the NGSIS interconnect. Independent trade study results by NGSIS member companies demonstrated the superiority of RapidIO over other existing commercial protocols, such as InfiniBand, Fibre Channel, and 10G Ethernet. As a result, the group decided that RapidIO offered the best overall interconnect for the needs of next-generation spacecraft.
PHY roadmap
The RapidIO roadmap aligns with Ethernet PHY development. RapidIO specifications for 50 GBd and higher links are under investigation.
Terminology
Link Partner: One end of a RapidIO link.
Endpoint: A device that can originate and/or terminate RapidIO packets.
Processing Element: A device that has at least one RapidIO port.
Switch: A device that can route RapidIO packets.
Protocol overview
The RapidIO protocol is defined in a 3-layered specification:
Physical: Electrical specifications, PCS/PMA, link-level protocol for reliable packet exchange
Transport: Routing, multicast, and programming model
Logical: Logical I/O, messaging, global shared memory (CC-NUMA), flow control, data streaming
System specifications include:
System Initialization
Error Management/Hot Swap
Physical layer
The RapidIO electrical specifications are based on industry-standard Ethernet and Optical Interconnect Forum standards:
XAUI for lane speeds of 1.25, 2.5, and 3.125 GBd (1, 2, and 2.5 Gbit/s)
OIF CEI 6+ Gbit/s for lane speeds of 5.0 and 6.25 GBd (4 and 5 Gbit/s)
10GBASE-KR 802.3-ap (long reach) and 802.3-ba (short reach) for lane speeds of 10.3125 GBd (9.85 Gbit/s)
The RapidIO PCS/PMA layer supports two forms of encoding/framing:
8b/10b for lane speeds up to 6.25 GBd
64b/67b, similar to that used by Interlaken, for lane speeds over 6.25 GBd (a coding-efficiency comparison is sketched below)
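As a rough illustration of why 64b/67b improves bandwidth efficiency over 8b/10b, the following sketch uses plain Python arithmetic; the lane rates are taken from the electrical specifications listed above, and the function name is an illustrative assumption, not part of the RapidIO specification.

```python
# Illustrative comparison of RapidIO PCS coding efficiency (sketch only).
# 8b/10b sends 8 payload bits in every 10 line bits; 64b/67b sends 64 in every 67.

def payload_rate(lane_baud_gbd: float, payload_bits: int, line_bits: int) -> float:
    """Approximate payload bit rate (Gbit/s) for one lane, ignoring protocol overhead."""
    return lane_baud_gbd * payload_bits / line_bits

# 6.25 GBd lane with 8b/10b encoding -> about 5.0 Gbit/s of payload per lane
print(round(payload_rate(6.25, 8, 10), 2))      # 5.0

# 10.3125 GBd lane with 64b/67b encoding -> about 9.85 Gbit/s of payload per lane
print(round(payload_rate(10.3125, 64, 67), 2))  # 9.85

# Coding efficiency: 80% for 8b/10b versus roughly 95.5% for 64b/67b
print(round(8 / 10, 3), round(64 / 67, 3))
```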
Every RapidIO processing element transmits and receives three kinds of information: Packets, control symbols, and an idle sequence.
Packets
Every packet has two values that control the physical layer exchange of that packet. The first is an acknowledge ID (ackID), which is the link-specific, unique, 5-, 6-, or 12-bit value that is used to track packets exchanged on a link. Packets are transmitted with serially increasing ackID values. Because the ackID is specific to a link, the ackID is not covered by CRC, but by protocol. This allows the ackID to change with each link it passes over, while the packet CRC can remain a constant end-to-end integrity check of the packet. When a packet is successfully received, it is acknowledged using the ackID of the packet. A transmitter must retain a packet until it has been successfully acknowledged by the link partner.
The second value is the packet's physical priority. The physical priority is composed of the Virtual Channel (VC) identifier bit, the Priority bits, and the Critical Request Flow (CRF) bit. The VC bit determines if the Priority and CRF bits identify a Virtual Channel from 1 to 8, or are used as the priority within Virtual Channel 0. Virtual Channels are assigned guaranteed minimum bandwidths. Within Virtual Channel 0, packets of higher priority can pass packets of lower priority. Response packets must have a physical priority higher than requests in order to avoid deadlock.
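A minimal sketch of how the VC, Priority, and CRF bits described above could be interpreted. The function name and the way the three bits are combined into a channel number are illustrative assumptions chosen for the example, not the encoding defined by the specification.

```python
# Illustrative decoding of the RapidIO physical-priority fields (sketch only).

def decode_physical_priority(vc: int, prio: int, crf: int) -> str:
    """Interpret the VC, Priority, and CRF bits of a packet."""
    if vc:
        # VC bit set: the Priority and CRF bits together select one of
        # virtual channels 1..8, each with a guaranteed minimum bandwidth.
        channel = ((prio << 1) | crf) + 1
        return f"virtual channel {channel}"
    # VC bit clear: traffic uses VC0, and prio/crf give the priority within VC0,
    # where higher-priority packets may pass lower-priority ones.
    return f"VC0, priority {prio}, CRF {crf}"

print(decode_physical_priority(vc=0, prio=2, crf=1))  # VC0, priority 2, CRF 1
print(decode_physical_priority(vc=1, prio=3, crf=1))  # virtual channel 8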
The physical layer contribution to RapidIO packets is a 2-byte header at the beginning of each packet that includes the ackID and physical priority, and a final 2-byte CRC value to check the integrity of the packet. Packets larger than 80 bytes also have an intermediate CRC after the first 80 bytes. With one exception, a packet's CRC value(s) act as an end-to-end integrity check.
Control symbols
RapidIO control symbols can be sent at any time, including within a packet. This gives RapidIO the lowest possible in-band control path latency, enabling the protocol to achieve high throughput with smaller buffers than other protocols.
Control symbols are used to delimit packets (Start of Packet, End of Packet, Stomp), to acknowledge packets (Packet Acknowledge, Packet Not Acknowledged), reset (Reset Device, Reset Port) and to distribute events within the RapidIO system (Multicast Event Control Symbol). Control symbols are also used for flow control (Retry, Buffer Status, Virtual Output Queue Backpressure) and for error recovery.
The error recovery procedure is very fast. When a receiver detects a transmission error in the received data stream, it causes its associated transmitter to send a Packet Not Accepted control symbol. When the link partner receives the Packet Not Accepted control symbol, it stops transmitting new packets and sends a Link Request/Port Status control symbol. The receiver answers with a Link Response control symbol, which indicates the ackID that should be used for the next packet transmitted. Packet transmission then resumes from that ackID.
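The retransmission bookkeeping implied by the ackID and error-recovery descriptions above can be modeled with the following simplified Python sketch. Control-symbol names are borrowed from the text; the class, its methods, and the data structures are illustrative assumptions, not an implementation of the specification.

```python
# Simplified model of RapidIO link-level retransmission (sketch only).
from collections import OrderedDict

class LinkTransmitter:
    def __init__(self, ackid_bits: int = 5):
        self.modulus = 1 << ackid_bits          # 5-, 6-, or 12-bit ackID space
        self.next_ackid = 0
        self.unacked = OrderedDict()            # packets kept until acknowledged

    def send(self, packet: bytes) -> int:
        """Transmit a packet with the next sequential ackID and retain a copy."""
        ackid = self.next_ackid
        self.unacked[ackid] = packet
        self.next_ackid = (self.next_ackid + 1) % self.modulus
        return ackid

    def packet_accepted(self, ackid: int) -> None:
        """Link partner acknowledged this ackID; the retained copy can be discarded."""
        self.unacked.pop(ackid, None)

    def recover(self, resume_ackid: int):
        """After Packet Not Accepted -> Link Request, the Link Response tells the
        transmitter which ackID to resume from; retransmit from that point on
        (ackID wraparound handling omitted for brevity)."""
        return [pkt for aid, pkt in self.unacked.items() if aid >= resume_ackid]

tx = LinkTransmitter()
first = tx.send(b"pkt0")
second = tx.send(b"pkt1")
tx.packet_accepted(first)
print(tx.recover(resume_ackid=second))   # [b'pkt1']
```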
IDLE sequence
The IDLE sequence is used during link initialization for signal quality optimization. It is also transmitted when the link does not have any control symbols or packets to send.
Transport layer
Every RapidIO endpoint is uniquely identified by a Device Identifier (deviceID). Each RapidIO packet contains two device IDs. The first is the destination ID (destID), which indicates where the packet should be routed. The second is the source ID (srcID), which indicates where the packet originated. When an endpoint receives a RapidIO request packet that requires a response, the response packet is composed by swapping the srcID and destID of the request.
RapidIO switches use the destID of received packets to determine the output port or ports that should forward the packet. Typically, the destID is used to index into an array of control values. The indexing operation is fast and low cost to implement. RapidIO switches support a standard programming model for the routing table, which simplifies system control.
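A sketch of the table-lookup forwarding described above. The table layout, sentinel value, and function name are illustrative assumptions for a hypothetical switch, not the standard programming model itself.

```python
# Illustrative destID-indexed routing lookup for a RapidIO switch (sketch only).

DROP = -1                              # assumed sentinel meaning "no route / discard"
routing_table = [DROP] * 256           # one entry per 8-bit destID in this example
routing_table[0x10] = 3                # packets for destID 0x10 leave on port 3
routing_table[0x11] = 5

def forward(dest_id: int) -> int:
    """Return the output port for a packet, using the destID as a direct array index."""
    port = routing_table[dest_id]
    if port == DROP:
        raise LookupError(f"no route for destID {dest_id:#x}")
    return port

print(forward(0x10))  # 3
```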
The RapidIO transport layer supports any network topology, from simple trees and meshes to n-dimensional hypercubes, multi-dimensional toroids, and more esoteric architectures such as entangled networks.
The RapidIO transport layer enables hardware virtualization (for example, a RapidIO endpoint can support multiple device IDs). Portions of the destination ID of each packet can be used to identify specific pieces of virtual hardware within the endpoint.
Logical layer
The RapidIO logical layer is composed of several specifications, each providing packet formats and protocols for different transaction semantics.
Logical I/O
The logical I/O layer defines packet formats for read, write, write-with-response, and various atomic transactions. Examples of atomic transactions are set, clear, increment, decrement, swap, test-and-swap, and compare-and-swap.
Messaging
The Messaging specification defines Doorbells and Messages. Doorbells communicate a 16-bit event code. Messages transfer up to 4 KiB of data, segmented into as many as 16 packets, each with a maximum payload of 256 bytes. Response packets must be sent for each Doorbell and Message request. The response packet status value indicates done, error, or retry. A status of retry requests the originator of the request to send the packet again. The logical-level retry response allows multiple senders to access a small number of shared reception resources, leading to high throughput with low power.
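The segmentation rule stated above (up to 4 KiB split into at most 16 packets of at most 256 bytes each) can be sketched as follows; the function is illustrative and not part of any RapidIO API.

```python
# Illustrative segmentation of a RapidIO logical-layer Message (sketch only).

MAX_SEGMENT = 256                            # maximum payload per packet, in bytes
MAX_SEGMENTS = 16                            # maximum packets per message
MAX_MESSAGE = MAX_SEGMENT * MAX_SEGMENTS     # 4096 bytes = 4 KiB

def segment_message(data: bytes) -> list:
    """Split a message into up to 16 segments of up to 256 bytes each."""
    if len(data) > MAX_MESSAGE:
        raise ValueError("messages larger than 4 KiB must be split by software")
    return [data[i:i + MAX_SEGMENT] for i in range(0, len(data), MAX_SEGMENT)] or [b""]

segments = segment_message(bytes(1000))
print(len(segments), [len(s) for s in segments])   # 4 segments: 256, 256, 256, 232
```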
Flow control
The Flow Control specification defines packet formats and protocols for simple XON/XOFF flow control operations. Flow control packets can be originated by switches and endpoints. Reception of a XOFF flow control packet halts transmission of a flow or flows until an XON flow control packet is received or a timeout occurs. Flow Control packets can also be used as a generic mechanism for managing system resources.
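A minimal XON/XOFF gate for a single flow, sketched in Python to illustrate the behavior described above (halt on XOFF, resume on XON or after a timeout); the class and its names are assumptions for illustration only.

```python
# Illustrative XON/XOFF gate for a single flow (sketch only).
import time

class FlowGate:
    def __init__(self, timeout_s: float = 0.5):
        self.blocked_since = None      # None means the flow is enabled (XON state)
        self.timeout_s = timeout_s

    def on_xoff(self) -> None:
        self.blocked_since = time.monotonic()

    def on_xon(self) -> None:
        self.blocked_since = None

    def may_transmit(self) -> bool:
        """Transmit unless XOFF is in force and the timeout has not yet expired."""
        if self.blocked_since is None:
            return True
        return time.monotonic() - self.blocked_since > self.timeout_s
```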
CC-NUMA
The Globally Shared Memory specification defines packet formats and protocols for operating a cache coherent shared memory system over a RapidIO network.
Data streaming
The Data Streaming specification supports messaging with different packet formats and semantics than the Messaging specification. Data Streaming packet formats support the transfer of up to 64 KB of data, segmented over multiple packets. Each transfer is associated with a Class of Service and Stream Identifier, enabling thousands of unique flows between endpoints.
The Data Streaming specification also defines Extended Header flow control packet formats and semantics to manage performance within a client-server system. Each client uses extended header flow control packets to inform the server of the amount of work that could be sent to the server. The server responds with extended header flow control packets that use XON/XOFF, rate, or credit based protocols to control how quickly and how much work the client sends to the server.
System initialization
Systems with a known topology can be initialized in a system specific manner without affecting interoperability. The RapidIO system initialization specification supports system initialization when system topology is unknown or dynamic. System initialization algorithms support the presence of redundant hosts, so system initialization need not have a single point of failure.
Each system host recursively enumerates the RapidIO fabric, seizing ownership of devices, allocating device IDs to endpoints and updating switch routing tables. When a conflict for ownership occurs, the system host with the larger deviceID wins. The "losing" host releases ownership of its devices and retreats, waiting for the "winning" host. The winning host completes enumeration, including seizing ownership of the losing host. Once enumeration is complete, the winning host releases ownership of the losing host. The losing host then discovers the system by reading the switch routing tables and registers on each endpoint to learn the system configuration. If the winning host does not complete enumeration in a known time period, the losing host determines that the winning host has failed and completes enumeration.
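The host-conflict rule described above ("the system host with the larger deviceID wins") can be sketched as a simple decision function; the function name and return strings are illustrative assumptions, not part of the specification.

```python
# Illustrative resolution of an enumeration conflict between two hosts (sketch only).

def resolve_ownership_conflict(my_device_id: int, other_host_device_id: int) -> str:
    """Decide whether this host keeps enumerating or retreats and waits."""
    if my_device_id > other_host_device_id:
        # Winning host: keep ownership, continue enumeration, and eventually
        # seize (then release) ownership of the losing host.
        return "continue enumeration"
    # Losing host: release owned devices and wait for the winner to finish.
    # If the winner does not finish within a known time, take over enumeration.
    return "release devices and wait (with timeout)"

print(resolve_ownership_conflict(5, 3))   # continue enumeration
print(resolve_ownership_conflict(3, 5))   # release devices and wait (with timeout)
```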
System enumeration is supported in Linux by the RapidIO subsystem.
Error management
RapidIO supports high availability, fault tolerant system design, including hot swap. The error conditions that require detection, and standard registers to communicate status and error information, are defined. A configurable isolation mechanism is also defined so that when it is not possible to exchange packets on a link, packets can be discarded to avoid congestion and enable diagnosis and recovery activities. In-band (port-write packet) and out-of-band (interrupt) notification mechanisms are defined.
Form factors
The RapidIO specification does not discuss the subjects of form factors and connectors, leaving this to specific application-focussed communities. RapidIO is supported by the following form factors:
Advanced Telecommunications Computing Architecture
Advanced Mezzanine Card
XMC
OpenVPX
VXS
Software
Processor-agnostic RapidIO support is found in the Linux kernel.
Applications
The RapidIO interconnect is used extensively in the following applications:
Wireless base stations
Aerospace and Military single-board computers, as well as radar, acoustic and image processing systems
Video
Storage
Supercomputing
Medical imaging
Industrial control and data path applications
RapidIO is expanding into supercomputing, server, and storage applications.
Competing protocols
PCI Express is targeted at the host-to-peripheral market rather than at embedded systems. Unlike RapidIO, PCIe is not optimized for peer-to-peer multiprocessor networks; it is well suited to host-to-peripheral communication, but it does not scale as well in large multiprocessor peer-to-peer systems because the basic PCIe assumption of a single "root complex" creates fault-tolerance and system-management issues.
Another alternative interconnect technology is Ethernet. Ethernet is a robust approach to linking computers over large geographic areas, where network topology may change unexpectedly, the protocols used are in flux, and link latencies are large. To meet these challenges, systems based on Ethernet require significant amounts of processing power, software and memory throughout the network to implement protocols for flow control, data transfer, and packet routing. RapidIO is optimized for energy efficient, low latency, processor-to-processor communication in fault tolerant embedded systems that span geographic areas of less than one kilometre.
SpaceFibre is a competing technology for space applications.
Time Triggered Ethernet is a competing technology for more complex backplane (VPX) and backbone applications for space (launchers and human-rated integrated avionics).
See also
Bus (computing)
PCI Express
References
External links
Computer buses
Open standards
RapidIO (SRIO)
Computer standards
Computer networking
Local area networks |
51141759 | https://en.wikipedia.org/wiki/X-Agent | X-Agent | X-Agent or XAgent is a spyware and malware program designed to collect and transmit hacked files from machines running Windows, Linux, iOS, or Android, to servers operated by hackers. It employs phishing attacks and the program is designed to "hop" from device to device. In 2016, CrowdStrike identified an Android variant of the malware for the first time, and claimed that the malware targeted members of the Ukrainian military by distributing an infected version of an app to control D-30 Howitzer artillery. The Ukrainian army denied CrowdStrike's report and stated that losses of Howitzer artillery pieces had "nothing to do with the stated cause".
Slovak computer security company ESET obtained the X-Agent source code in 2015 and described its inner workings in a report released in October 2016.
A Washington, DC grand jury indictment (resulting from Robert Mueller's investigation into Russian election interference) charges that agents of the Russian GRU in Moscow "developed, customized and monitored X-Agent malware used to hack the DCCC [Democratic Congressional Campaign Committee] and DNC [Democratic National Committee] networks beginning in or around April 2016" (item 15, at the end of page 4 and the beginning of page 5).
References
Computer viruses
Spyware
IOS software
Android (operating system) |
2470268 | https://en.wikipedia.org/wiki/Zotob | Zotob | "The Zotob worm and several variations of it, known as Rbot.cbq, SDBot.bzh and Zotob.d, infected computers at companies such as ABC, CNN, The Associated Press, The New York Times, and Caterpillar Inc." — Business Week, August 16, 2005.
Zotob is a computer worm that exploits security vulnerabilities in Microsoft operating systems such as Windows 2000, including the MS05-039 plug-and-play vulnerability. The worm is known to spread over the Microsoft-DS service on TCP port 445.
The Zotob worms were reported to have cost affected companies an average of $97,000 and 80 hours of cleanup each.
Rbot variant
Zotob was derived from the Rbot worm. Rbot can force an infected computer to continuously restart. Its outbreak on August 16, 2005 was covered "live" on CNN television, as the network's own computers got infected. Zotob would self-replicate each time the computer rebooted, resulting in each computer having numerous copies of the file by the time it was purged. This is similar to the Blaster (Lovesan) worm.
Sequence of events
August 9, 2005: Security advisory. "On August 9th, Microsoft released critical security advisory MS05-039 which revealed a vulnerability in the Plug-and-Play component of Windows 2000. Code to patch the loophole was also made available."
Virus writing. "In the days since Microsoft's announcement, virus writers have released several variants of both Zotob and RBot, along with updated versions of older worms named SD-Bot and IRC-Bot, designed to take advantage of the newly discovered flaw."
August 13, 2005: Emerged on Saturday. "The worms, called Zotob and Rbot, and variants of them, started emerging Saturday, computer security specialists said, and continued to propagate as corporate networks came to life at the beginning of the week."
August 16, 2005: Took down CNN live. "Around 5 p.m. problems began at CNN facilities in New York and Atlanta before being cleared up about 90 minutes later." "CNN, breaking into regular programming, reported on air that personal computers running Windows 2000 at the cable news network were affected by a worm that caused them to restart repeatedly." "The Internet Storm Center, which tracks the worldwide impact of computer worms, indicated on its Web site that no major Internet attack was underway. 'Likely this is an isolated event, which became newsworthy because CNN got infected. We do not see any new threats at this point,' the site read."
August 17, 2005: CIBC and other banks, companies affected. "CIBC says the Zotob worm caused some isolated outages, but did not affect ATMs, Internet or phone banking. The virus also hit other Canadian businesses but has not caused widespread shutdowns."
August 26, 2005: A suspect is arrested in Morocco. "At the request of the FBI, Moroccan police arrest 18-year-old Farid Essebar, a Moroccan, suspected of being behind the spread of the virus."
September 16, 2006: Sentencing. "The creators of the Zotob Windows worm, Farid Essabar and his friend Achraf Bahloul, were sentenced by a court in Morocco."
Arrest of the coders
On August 26, 2005, Farid Essebar and Atilla Ekici were arrested in Morocco and Turkey, respectively. They are believed to be the men behind the worm's coding.
A signature in the Zotob worm code suggested it was coded by Diabl0, and the IRC server it connects to is the same one used in previous versions of Mytob. Diabl0 is believed to have incorporated code from a Russian hacker nicknamed houseofdabus, whose journal was shut down by authorities just after the arrest of Diabl0. Ekici is believed to have paid Diabl0 (Essebar) to write the code.
"He says it's all about making money, and that he doesn't care if people remove the worm because it's the spyware stuff that he installs that's making him the money,'' Taylor said in a conversation with me."
On August 30, 2005, conflicting reports emerged from different anti-virus firms. Sophos declared that several people had access to the Mytob source code (a variant of the worm), while F-Secure declared that it had found multiple variants of Mytob coded after the arrest of Essebar. Those declarations suggest that Essebar was only part of a larger group of dark-side hackers behind the spread of the malware.
See also
Timeline of notable computer viruses and worms
References
External links and sources
Security vulnerability information
Microsoft Security Bulletin MS05-039 (Microsoft)
Microsoft Security Advisory (899588) (Microsoft)
US Cert Vulnerability Note VU#998653 (US-CERT)
Secunia Advisory SA16372 (Secunia)
CAN-2005-1983 (Common Vulnerabilities and Exposures)
Bugtraq ID 14513 (SecurityFocus)
Worm information
What You Should Know About Zotob (Microsoft)
W32.Zotob Removal Tool (Symantec Security Response)
WORM_ZOTOB.D (Trend Micro)
Zotob.A (F-Secure)
Zotob.C (F-Secure)
WORM_RBOT.CBR (Trend Micro)
Full Timeline (Security Blogger)
Zotob Removal Instructions
News coverage
BBC News Windows 2000 worm hits US firms
BBC News Windows 2000 bug starts virus war
BBC News Two detained for US computer worm
BBC News Money motive drove virus suspects
New York Times Virus Attacks Windows Computers at Companies
CNN Worm strikes down Windows 2000 systems
MSNBC Computer worms strike media outlets
Reuters Computer virus hits U.S media outlets
Slashdot Zotob Worm Hits CNN and Goes Global
Information Week Zotob Proves Patching "Window" Non-Existent
Security Now! PodCast - Episode #1: "As the Worm Turns"
Exploit-based worms
Hacking in the 2000s |
4588443 | https://en.wikipedia.org/wiki/Radical%20Software | Radical Software | Radical Software was an early journal on the use of video as an artistic and political medium, started in 1970 in New York City. At the time, the term radical software referred to the content of information rather than to a computer program.
History
The founders of Radical Software video journal were Phyllis Gershuny (Segura) and Beryl Korot.
The video journal began with a questionnaire sent to a wide variety of interested people. The first issue was a creative editing of the answers to the questionnaire, plus some additional special articles. The most distinctive element of the journal was the style and emphasis of its editing. Its content was a call to pay attention to the way information is disseminated, and to encourage grassroots involvement in creating an information environment independent of broadcast and corporate media. It quickly became important and popular because it fully grasped ideas that many people had already been thinking about, giving its introduction a synchronicity with the ideas of the day.
Its editing was ultimately taken over by its original publisher, Raindance Corporation, a loosely formed group of like-minded videographers: some with a philosophical bent, painters, and an aspiring Hollywood producer. Michael Shamberg, who went on to become a Hollywood producer, co-edited issue #5 of Radical Software along with Dudley Evenson. Dudley's husband Dean Evenson provided articles, technical drawings, and cartoons. Not all members of Raindance were involved with Radical Software. Ira Schneider was later added to the list of founders in recognition of his role in maintaining a mailing list and offering helpful suggestions. Schneider did not edit any of the original issues. He and Korot went on to be the editors after the third issue had begun. There was a split at that time in the editorial direction; the original vision was altered, causing a fissure and Gershuny's untimely departure. Several subsequent issues were farmed out to other groups, and the format and direction shifted yet again. Eleven issues were produced before the organization folded in 1974.
References
External links
Radical Software archive site
Citizen journalism |
2106993 | https://en.wikipedia.org/wiki/Computing%20Research%20Association | Computing Research Association | The Computing Research Association (CRA) is a 501(c)3 non-profit association of North American academic departments of computer science, computer engineering, and related fields; laboratories and centers in industry, government, and academia engaging in basic computing research; and affiliated professional societies. CRA was formed in 1972 and is based in Washington, D.C., United States.
Mission and Activities
CRA's mission is to enhance innovation by joining with industry, government and academia to strengthen research and advanced education in computing. CRA executes this mission by leading the computing research community, informing policymakers and the public, and facilitating the development of strong, diverse talent in the field.
Policy
CRA assists policymakers who seek to understand the issues confronting the federal Networking and Information Technology Research and Development (NITRD) program, a thirteen-agency, $4-billion-a-year federal effort to support computing research. CRA works to educate Members of Congress and provide policymakers with expert testimony in areas associated with computer science research. CRA and its Computing Community Consortium (CCC) sponsored the Leadership in Science Policy Institute, a one-and-a-half-day workshop held in Washington, D.C. CRA also maintains a Government Affairs website and a Computing Research Policy Blog.
Professional Development
CRA works to support computing researchers throughout their careers to help ensure that the need for a continuous supply of talented and well-educated computing researchers and advanced practitioners is met. CRA assists with leadership development within the computing research community, promotes needed changes in advanced education, and encourages participation by members of underrepresented groups. CRA offers Academic Careers Workshops, supports the CRA-W: CRA's Committee on the Status of Women in Computing Research, and runs the DREU: Distributed Research Experiences for Undergraduates Project.
Leadership
CRA supports leadership development in the research community to support researchers in broadening the scope of computing research and increasing its impact on society and works to promote cooperation among various elements of the computing research community. CRA supports the CRA Conference at Snowbird, a biennial conference where leadership in computing research departments gather to network and address common issues in the field. CRA also supports the Computing Leadership Summit.
Information Collection and Dissemination
CRA collects and disseminates to the research and policy-making communities information about the importance and state of computing research and related policy. CRA works to develop relevant information and make it available to the public, policy makers, and the computing research community.
CRA publishes the Taulbee Survey, a key source of information on the enrollment, production, and employment of Ph.D.s in computer science and computer engineering (CS & CE) and in providing salary and demographic data for faculty in CS & CE in North America. Statistics given include gender and ethnicity breakdowns. CRA also provides Computing Research News published ten times annually for computing researchers, and the CRA Bulletin to share news, information about CRA initiatives, and items of interest to the general community.
See also
Association for Computing Machinery
CRA was the main organizer of the first Federated Computing Research Conference in 1993.
CRA-W
Informatics Europe is a similar organization to the CRA for Europe.
References
External links
CRA Members
CRA Board of Directors
CRA History
CRA Events
1972 establishments in the United States
Research organizations in the United States
Computer science research organizations
Organizations based in Washington, D.C.
Organizations established in 1972 |
2820814 | https://en.wikipedia.org/wiki/Jim%20Woodcock | Jim Woodcock | Professor James Charles Paul Woodcock is a British computer scientist.
Woodcock gained his PhD from the University of Liverpool. Until 2001 he was Professor of Software Engineering at the Oxford University Computing Laboratory, where he was also a Fellow of Kellogg College. He then joined the University of Kent and is now based at the University of York, where, since October 2012, he has been head of the Department of Computer Science.
His research interests include: strong software engineering, Grand Challenge in dependable systems evolution, unifying theories of programming, formal specification, refinement, concurrency, state-rich systems, mobile and reconfigurable processes, nanotechnology, Grand Challenge in the railway domain. He has a background in formal methods, especially the Z notation and CSP.
Woodcock worked on applying the Z notation to the IBM CICS project, helping to gain a Queen's Award for Technological Achievement, and Mondex, helping to gain the highest ITSEC classification level.
Prof. Woodcock is editor-in-chief of the Formal Aspects of Computing journal.
Books
Jim Woodcock and Jim Davies, Using Z: Specification, Refinement, and Proof. Prentice-Hall International Series in Computer Science, 1996. .
Jim Woodcock and Martin Loomes, Software Engineering Mathematics: Formal Methods Demystified. Kindle Edition, Taylor & Francis, 2007.
References
External links
Official homepage
Personal homepage
Research profile
1956 births
Living people
Alumni of the University of Liverpool
British computer scientists
Formal methods people
Members of the Department of Computer Science, University of Oxford
Fellows of Kellogg College, Oxford
Academics of the University of Kent
Academics of the University of York
Fellows of the British Computer Society
Fellows of the Royal Academy of Engineering
Computer science writers
British textbook writers
Academic journal editors |
4399237 | https://en.wikipedia.org/wiki/Disk%20II | Disk II | The Disk II Floppy Disk Subsystem, often rendered as Disk ][, is a 5.25-inch floppy disk drive designed by Steve Wozniak at the recommendation of Mike Markkula, and manufactured by Apple Computer, Inc. It went on sale in June 1978 at a retail price of US$495 for pre-order; it was later sold for $595 including the controller card (which can control up to two drives) and cable. The Disk II was designed specifically for use with the Apple II personal computer family to replace the slower cassette tape storage. These floppy drives cannot be used with any Macintosh without an Apple IIe Card, as doing so will damage the drive or the controller.
Apple produced at least six variants of the basic 5.25-inch Disk II concept over the course of the Apple II series' lifetime: the Disk II, the Disk III, the DuoDisk, the Disk IIc, the UniDisk 5.25" and the Apple 5.25 Drive. While all of these drives look different, and use four different connector types, they are all electronically extremely similar. They can all use the same low-level disk format, and are all interchangeable with the use of simple adapters, consisting of no more than two plugs and wires between them. Most DuoDisk drives, the Disk IIc, the UniDisk 5.25" and the AppleDisk 5.25" even use the same 19-pin D-Sub connector, so they are directly interchangeable. The only 5.25-inch drive Apple sold aside from the Disk II family was a 360 KB MFM unit made to allow Mac IIs and SEs to read PC floppy disks.
This is not the case with Apple's 3.5-inch drives, which use several different disk formats and several different interfaces, electronically quite dissimilar even in models using the same connector; they are not generally interchangeable.
History
Disk II
Apple did not originally offer a disk drive for the Apple II, which used data cassette storage like other microcomputers of the time. Apple early investor and executive Mike Markkula asked cofounder Steve Wozniak to design a drive system for the computer after finding that a checkbook-balancing program Markkula had written took too long to load from tape. Wozniak knew nothing about disk controllers, but while at Hewlett-Packard he had designed a simple, five-chip circuit to operate a Shugart Associates drive.
The Apple II's lack of a disk drive was "a glaring weakness" in what was otherwise intended to be a polished, professional product. Speaking later, Osborne 1 designer Lee Felsenstein stated, "The difference between cassette and disk systems was the difference between hobbyist devices and a computer. You couldn't have expected, say, VisiCalc, to run on a cassette system." Recognizing that the II needed a disk drive to be taken seriously, Apple set out to develop a disk drive and a DOS to run it. Wozniak spent the 1977 Christmas holidays adapting his controller design, which reduced the number of chips used by a factor of 10 compared to existing controllers. Still lacking a DOS, and with Wozniak inexperienced in operating system design, Steve Jobs approached Shepardson Microsystems with the project. On April 10, 1978 Apple signed a contract for $13,000 with Shepardson to develop the DOS.
Shortly after the disk drive project began in late 1977, Steve Jobs made several trips to Shugart's offices announcing that he wanted a disk drive that would cost just $100. After Wozniak finished studying IBM disk controller designs, Jobs then demanded that Shugart sell them a stripped disk drive that had no controller board, index sensor, load solenoids, or track zero sensor. Although puzzled by this request, Shugart complied and provided Apple with 25 drive mechanisms that they could use as prototypes in developing a disk system for the Apple II. The prototypes received the designation of SA-390.
Wozniak studied North Star Computers and others' more complex floppy controllers. He believed that his simpler design lacked their features, but realized that they were less sophisticated; for example, his could use soft-sectored disks. Following the Shugart controller's manual, Wozniak attempted to develop an FM-type controller with 10 sector per track storage, but realized that Group Coded Recording could fit 13 sectors per track. Wozniak called the resultant Disk II system "my most incredible experience at Apple and the finest job I did", and credited it and VisiCalc with the Apple II's success.
Fellow engineer Cliff Huston came up with several procedures for resuscitating the faulty drives on the assembly line. When Apple sent an order into Shugart for more SA-390s, a Shugart engineer admitted that the disk drive manufacturer had been scamming Apple and that the SA-390s were actually rejected SA-400s that had failed to pass factory inspection. The idea was that Apple couldn't get the drives to work and would be forced to go back and purchase more expensive SA-400s.
The Disk II was very successful for Apple, being the cheapest floppy disk system ever sold up to that point and immensely profitable for the company, in addition to having nearly 20% more storage space than standard FM drives. For a while, the only direct competitor in the microcomputer industry was the TRS-80 Model I, which had used only standard FM storage for 85k. Both the Atari 8-bit and Commodore 64's disk drives' throughputs were much slower than the Disk II's 15 KB/s, seriously affecting their ability to compete in the business market. However, the advantage of Wozniak's design was somewhat nullified when the cost of double-density MFM controllers dropped only a year after the Disk II's introduction.
The initial Disk II drives (A2M0003) were modifications of the Shugart SA-400, which was the first commercially available -inch diskette drive. Apple purchased only the bare drive mechanisms without the standard SA-400 controller board, replaced it with Wozniak's board design, and then stamped the Apple rainbow logo onto the faceplate. Early production at Apple was handled by two people, and they assembled 30 drives a day. By 1982, Apple switched to Alps drives for cost reasons.
Normal storage capacity per disk side was 113.75 KB with Apple DOS 3.2.1 and earlier (256 bytes per sector, 13 sectors per track, 35 tracks per side), or 140 KB with DOS 3.3 and ProDOS (256 bytes per sector, 16 sectors per track, 35 tracks per side). The 16-sector hardware upgrade introduced in 1980 for use with DOS 3.3 modified only the controller card firmware to use a more efficient GCR code called "6 and 2 encoding". Neither the drive itself nor the physical bit density was changed. This update had the disadvantage of not automatically booting older 13 sector Apple II software.
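The capacities quoted above follow directly from the sector geometry; the short calculation below (plain Python arithmetic, nothing Apple-specific) reproduces them.

```python
# Disk II per-side capacity from the sector geometry quoted above.
BYTES_PER_SECTOR = 256
TRACKS_PER_SIDE = 35

formats = [("DOS 3.2.1 (13-sector)", 13), ("DOS 3.3 / ProDOS (16-sector)", 16)]

for name, sectors_per_track in formats:
    total_bytes = BYTES_PER_SECTOR * sectors_per_track * TRACKS_PER_SIDE
    print(f"{name}: {total_bytes} bytes = {total_bytes / 1024} KB per side")

# DOS 3.2.1 (13-sector): 116480 bytes = 113.75 KB per side
# DOS 3.3 / ProDOS (16-sector): 143360 bytes = 140.0 KB per side
```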
Since the Disk II controller was completely software-operated, the user had total control over the encoding and format so long as it was within the physical limits of the drive mechanism and media. This also allowed software companies to use all sorts of ingenious copy protection schemes.
The Shugart SA-400, from which the Disk II was adapted, was a single-sided, 35-track drive. However, it was common for users to manually flip the disk to utilize the opposite side, after cutting a second notch on the diskette's protective shell to allow write-access. Most commercial software using more than one disk side was shipped on such "flippy" disks as well. Only one side could be accessed at once, but it did essentially double the capacity of each floppy diskette, an important consideration especially in the early years when media was still quite expensive.
In the Disk II, the full-height drive mechanism shipped inside a beige-painted metal case and connected to the controller card via a 20-pin ribbon cable; the controller card was plugged into one of the bus slots on the Apple's mainboard. The connector is very easy to misalign on the controller card, which will short out a certain IC in the drive; if later connected correctly, a drive damaged this way will delete data from any disk inserted into it as soon as it starts spinning, even write-protected disks such as those used to distribute commercial software. This problem resulted in numerous customer complaints and repairs, which led to Apple printing warning messages in their user's manuals to explain how to properly install the connector. They used different connectors that could not be misaligned in later drives. DB-19 adapters for the original Disk II were eventually available for use with Apple's later connector standard.
Up to 14 drives could be attached to one Apple II or Apple IIe computer - two drives per controller card, one card per slot, and there were seven usable slots per computer. While the DOS and ProDOS operating systems worked equally well with the card in any of the normal slots (i.e. all except slot 0 of the Apple II/II+ or the special memory expansion slots of the later models), Apple's printed manuals suggested using slot 6 for the first controller card; most Apple II software expects this slot to be used for the main -inch disk drive and fails otherwise. A Bell & Howell version of the Disk II was also manufactured by Apple in a black painted case, which matched the color of the Bell & Howell version of the Apple II Plus, which Apple was already manufacturing.
Disk III
In 1978, Apple intended to develop its own "FileWare" drive mechanism for use in the new Apple /// and Lisa business computers then being developed. They quickly ran into difficulties with the mechanisms, which precluded them from being incorporated in the Apple ///. That machine thus continued to use the same Shugart design as the Disk II.
The first variation of the Disk II introduced for the Apple ///, called the Disk III (A3M0004), used the identical drive mechanism inside a modified plastic case with a proprietary connector. With some modification, both drives are interchangeable. Though Apple sought to force the purchase of new drives with the Apple ///, many former Apple II users quickly devised a way to adapt their existing and cheaper Disk II drives; however, only one external Disk II was supported in this manner. The Disk III was the first to allow daisy chaining of up to three additional drives to the single 26-pin ribbon cable connector on the Apple ///, for a total of 4 floppy disk drives – the Apple /// was the first Apple to contain a built-in drive mechanism. The Apple III Plus changed its 26-pin connector to a DB-25 connector, which required an adapter for use with the Disk III.
FileWare
In 1983, Apple finally announced a single and dual external drive (UniFile and DuoFile) implementing the 871-kilobyte "FileWare" mechanism used in the original Apple Lisa, as a replacement for the Disk II & /// drives. However, due to the reliability problems of the Apple-built "Twiggy" drive mechanisms, the products never shipped.
DuoDisk
In 1984, shortly after the introduction of the Apple IIe the previous year, Apple offered a combination of two two-thirds-height, 140-kilobyte Disk II drive mechanisms side by side in a single plastic case, called the DuoDisk 5.25 (A9M0108), which could not be daisy-chained. The unit was designed to be stacked on top of the computer, and beneath the monitor. Each unit required its own disk controller card (as each card could still control only two drives) and the number of units was thus limited to the number of available slots; in practice, few uses of the Apple II computer can make good use of more than two 5.25-inch drives, so this limitation mattered little. Originally released with a DB-25 connector to match that of the Apple III Plus, it was the first to adopt Apple's standard DB-19 floppy drive connector.
Disk IIc
The Disk IIc (A2M4050) was a half-height 5.25-inch floppy disk drive introduced by Apple Computer in 1984, styled for use alongside the Apple IIc personal computer, the only Apple II to contain a built-in 5.25-inch disk drive mechanism. The disk port on the original IIc was only designed to control one additional, external 5.25-inch disk drive, and as such, this particular drive omitted a daisy-chain port in back. It was possible to use it on other Apple II models, so long as it came last in the chain of drive devices (due to lacking a daisy-chain port); but since the Disk IIc was sold without a controller card, the Apple IIc computer needing none, it had to be adapted to an existing Disk II controller card in this case. Essentially the same as the full-height Disk II, the Disk IIc was sold by Apple for US$329, and other companies later sold similar drives for less.
3.5-inch drive
In 1984, Apple had opted for the more modern, Sony-designed 3.5-inch floppy disk in late-model Lisas and the new Apple Macintosh. Accordingly, they attempted to introduce a new 3.5-inch 800-kilobyte floppy disk format for the Apple II series as well, to eventually replace the 140-kilobyte Disk II format. However, the external UniDisk 3.5 drive required a ROM upgrade (for existing Apple IIc machines; new ones shipped after this time had it from the factory) or a new kind of disk controller card (the so-called "Liron Card", for the Apple IIe) to be used. The much larger capacity and higher bitrate of the 3.5-inch drives made it impractical to use the software-driven Disk II controller, because the 1-megahertz 6502 CPU in the Apple II line was too slow to be able to read them. Thus, a new and much more advanced (and correspondingly expensive) hardware floppy controller had to be used, and many original Apple IIs could not use the new controller card at all without further upgrades. Also, almost all commercial software for the Apple II series continued to be published on 5.25-inch disks, which had a much larger installed base. For these reasons the 3.5-inch format was not widely accepted by Apple II users. The Apple 3.5 Drive used the same 800-kilobyte format as the UniDisk 3.5", but it did away with the internal controller, which made it cheaper. Unlike all earlier Apple II drives, it was designed to work with the Macintosh too, and among Apple II models, it was compatible only with the Apple IIGS and the Apple IIc+ models, which both had a faster main CPU. On the Apple IIGS, whose improved audiovisual capabilities really demanded a higher-capacity disk format, the 3.5-inch format was accepted by users and became the standard format. Though Apple eventually offered a 1.44-megabyte SuperDrive with matching controller card for the Apple II series as well, the 5.25-inch Disk II format drives continued to be offered alongside the newer 3.5-inch drives and remained the standard on the non-IIGS models until the platform was discontinued in 1993.
Officially, the following 3.5-inch drives could be used on the Apple II:
Apple 3.5" External (A9M0106) - Designed for Apple IIs with the Liron or Superdrive controller or all Macintoshes with an external 19-pin floppy port (Mac 512s must be booted from the internal 400-kilobyte drive with the HD20 INIT, which provides HFS file system support - the Macintosh 128K will not work with this). The drive can be daisy chained, however this feature is not supported on the Macintosh.
Unidisk 3.5" Drive (A2M2053) - Designed for Apple IIs with the Liron or Superdrive controller (not compatible with Macintoshes) Recommended only for 8-bit Apple IIs as the A9M0106 operates faster on the IIGS
Apple FDHD External (G7287) - Supports 720-kilobyte/1.44-megabyte MFM floppy disks in addition to 800-kilobyte GCR. Designed for Apple IIs and Macs with the Superdrive controller, but will also work on machines with the older 800-kilobyte controller (as an 800-kilobyte drive - note that the G7287 is not compatible with the Mac 128/512)
The 400-kilobyte and 800-kilobyte Macintosh external drives (M0130 and M0131) are incompatible with standard Apple II controllers as they do not support the drives' automatic disk eject feature, although they could be used with third-party controllers.
UniDisk 5.25" and Apple 5.25 Drive
Along with the UniDisk 3.5", Apple introduced the UniDisk 5.25 (A9M0104) in a plastic case, which modernized the appearance of the Disk II to better match the Apple IIe. Since the UniDisk 5.25" could fully replace the Disk II in all its uses, the original Disk II was canceled at this point. This was followed in 1986 by a Platinum-gray version which was renamed Apple 5.25 Drive (A9M0107), companion to the Apple 3.5 Drive, and introduced alongside the first Platinum-colored computer, the Apple IIGS. Essentially these were both single half-height Disk II mechanisms inside an individual drive enclosure, just like the Disk IIc had been. All of these drives introduced a daisy chain pass-through port. While the drives are essentially interchangeable among Apple II computers, both with each other and with the earlier drives, only the Apple 5.25 Drive can be used with the Apple IIe Card on a Macintosh LC.
Apple PC 5.25" Drive
There is one 5.25-inch drive made by Apple that is completely incompatible with all the drives named above. In 1987, Apple sought to better compete in the IBM-dominated business market by offering a means of cross-compatibility. Alongside the release of the Macintosh SE and Macintosh II, Apple released the Apple PC 5.25" Drive, which required a separate custom PC 5.25 Floppy Disk Controller Card, different for each Mac model. It is the only 5.25-inch drive manufactured by Apple that can be used by the Macintosh.
This drive was for use with industry-standard double-sided 5.25-inch 360-kilobyte formatted flexible disks. It was similar in appearance to the Disk IIc. Through the use of a special Macintosh Apple File Exchange utility shipped with it, the drive could read files from, and write files to, floppy disks in MS-DOS formats. Software "translators" could convert documents between WordStar and MacWrite formats, among others. The drive is incompatible with all Apple II computers and with the Apple IIe Card for the Macintosh LC; it also does not allow a Macintosh to read from or write to 5.25-inch Apple II-formatted disks.
This drive was made obsolete by the industry-wide adoption of 3.5-inch disks and was replaced by the 3.5-inch Apple FDHD Drive, which could read and write every existing Macintosh, DOS and Windows format, as well as the Apple II ProDOS format.
Disk II cable pinout
This table shows the pinout of the original 1979 Disk II controller and newer 1983 Uni/Duo Disk I/O controller (655-0101).
The circuitry of these two controllers are identical. The Disk II header pin numbering is per the Disk II controller card silkscreen and the circuit schematic given in the DOS 3.3 manual. The Uni/Duo Disk D-19 pinout is taken from the Apple //c Reference Manual, Volume 1.
NOTES:
Active low signals are suffixed with a "*"
Since most signals are shared with both drive 1 and drive 2, the logic in each drive uses the ENABLE* signal to activate appropriately.
Pin 14 for Disk II drive 1 and drive 2 have separate enable signals (14a and 14b)
Pin 17 for Uni/Duo Disk is chained to first drive (drive 1) and second drive (drive 2) is enabled via other logic in the first drive.
The EXTINT* signal is not present on the Disk II controller card. In the Apple //c computer, it is routed to the DSR* signal of the internal 6551 ACIA (UART) chip.
See also
List of Apple drives
References
External links
Apple II History - Chapter 5 (Disk II)
Apple Floppy Disk II
Apple Floppy Drives
Disk II programming example
Disk II Controller hardware article
Apple II Diskette FAQ and Apple II Drive FAQ at comp.sys.apple2 FAQ mirror
Apple II History - Chapter 8 (The Apple IIc)
Apple floppy drive schematics
The untold story behind Apple's $13,000 operating system
Apple II peripherals
Apple II family
Floppy disk drives |
1052290 | https://en.wikipedia.org/wiki/Newton%20OS | Newton OS | Newton OS is a discontinued operating system for the Apple Newton PDAs produced by Apple Computer, Inc. between 1993 and 1997. It was written entirely in C++ and optimized for low power consumption and efficient use of the available memory. Many applications were pre-installed in the ROM of the Newton (making for quick start-up) to save RAM and flash memory storage for user applications.
Features
Newton OS features many interface elements that the Macintosh system software didn't have at the time, such as drawers and the "poof" animation. A similar animation is found in Mac OS X, and parts of the Newton's handwriting recognition system have been implemented as Inkwell in Mac OS X.
Sound responsive — Clicking menus and icons makes a sound; this feature was later introduced in Mac OS 8.
Icons - Similar to the Macintosh Desktop metaphor, Newton OS uses icons to open applications.
Tabbed documents — Similar to tabbed browsing in today's browsers and Apple's At Ease interface, document titles appear in a small tab at the top right-hand corner of the screen.
Screen rotation — In Newton 2.0, the screen can be rotated to be used for drawing or word processing.
File documents — Notes and Drawings can be categorized, e.g. Fun, Business, Personal, etc.
Print documents — Documents on the Newton can be printed.
Send documents — Documents can be sent to another Newton via Infrared technology or sent using the Internet by E-Mail, or faxed.
Menus — Similar to menus seen in Mac OS, but menu titles are instead presented at the bottom of the screen in small rectangles, making them similar to buttons with attached "pop-up" menus.
Many features of the Newton are best appreciated in the context of the history of Pen computing.
Software
Shortly after the Newton PDA's release in 1993, developers were not paying much attention to the new Newton OS API and were still more interested in developing for the Macintosh and Windows platforms. It was not until two years later that developers saw a potential market in creating software for Newton OS. Several programs were made by third-party developers, including software to enhance the disappointing handwriting recognition technology of Newton OS 1.x.
The basic software that came with Newton OS:
Works — A program for drawing and word processing, with typical capabilities such as: rulers, margins, page breaks, formatting, printing, spell checking and find & replace tools.
Notes — Used for checklists, as well as both drawing and writing in the same program, either with a Newton keyboard or a stylus pen.
Dates — Calendar program where you can schedule appointments and other special events.
Names — Program for storing extensive contacts information in a flexible format.
Formulas — Program that offers metric conversions, currency conversions, loan and mortgage calculators, etc.
Calculator — A basic calculator with square root, percentage, MR, M+ and M- functions additional to the basic functions found on a calculator.
Clock — A small floating window type application, known as a desktop accessory on the Macintosh. The Newton clock also includes features for an alarm, minute timer and the date.
Book Reader — Support for displaying electronic books is built in.
Version history
Handwriting recognition
The Newton uses the CalliGrapher word-based handwriting recognition engine developed by ParaGraph International Inc, led by former Soviet scientist Stepan Pachikov.
The earliest versions had weaknesses that resulted in bad publicity and reviews. However, with the release of Newton PDAs based upon version 2.0 of the OS, the handwriting recognition substantially improved, partially being a product of ParaGraph and an Apple-created recognizer pair: Apple's Rosetta and Mondello. Newton's handwriting recognition, particularly the print recognizer, has been considered by many reviewers, testers, and users to be the best in the industry, even 10 years after it was introduced. It was developed by Apple's Advanced Technology Group, and was described in 2012 as "the world's first genuinely usable handwriting recognition system".
The Newton can recognize hand-printed text, cursive, or a mix of the two, and can also accept free-hand "Sketches", "Shapes", and "ink text". Text can also be entered by tapping with the stylus on a small on-screen pop-up QWERTY keyboard. With "Shapes", Newton can recognize that the user was attempting to draw a circle, a line, a polygon, etc., and it cleans them up into "perfect" vector representations (with modifiable control points and defined vertices) of what the user is attempting to draw. "Shapes" and "Sketches" can be scaled or deformed once drawn. "Ink text" captures the user's free-hand writing but allows it to be treated somewhat like recognized text when manipulating for later editing purposes ("ink text" supported word wrap, could be formatted to be bold, italic, etc.). At any time a user can also direct the Newton to recognize selected "ink text" and turn it into recognized text (deferred recognition). A Newton Note document (or the notes attached to each contact in Names and each calendar event) can contain any mix of interleaved text, ink text, Shapes, and Sketches.
NewtonScript
Newton OS runs applications written in C++ as well as in NewtonScript, an interpreted, user-friendly scripting language. These applications are stored in packages.
See also
Apple Newton
eMate 300
Pen computing
iOS
Motorola Marco
Notes
A selection of PDFs of Apple's Newton manuals
Newton FAQ
Pen Computing's First Look at Newton OS 2.0
The Newton Hall of Fame: People behind the Newton
Pen Computing's Why did Apple kill the Newton?
Pen Computing's Newton Notes column archive
A.I. Magazine article by Yaeger on Newton HWR design, algorithms, & quality and associated slides
Info on Newton HWR from Apple's HWR Technical Lead
References
External links
Additional resources & information
NewtonTalk discussion email list
Einstein: a Newton emulator
CalliGrapher handwriting recognition software
Annotated bibliography of references to handwriting recognition and pen computing
Reviews
MacTech's review of MessagePad 2000
MessagePad 2000 review at "The History and Macintosh Society"
Prof. Wittmann's collection of Newton & MessagePad reviews
CNET compares a MessagePad to a Samsung Q1 UMPC
Apple Inc. operating systems
Apple Newton
ARM operating systems
Discontinued operating systems
1993 software |
1247736 | https://en.wikipedia.org/wiki/Troilus | Troilus | Troilus is a legendary character associated with the story of the Trojan War. The first surviving reference to him is in Homer's Iliad, which some scholars theorize was composed by "bards" (aoidoi) and sung in the late 9th or 8th century BC.
In Greek mythology, Troilus is a young Trojan prince, one of the sons of King Priam (or Apollo) and Hecuba. Prophecies link Troilus' fate to that of Troy and so he is ambushed and murdered by Achilles. Sophocles was one of the writers to tell this tale. It was also a popular theme among artists of the time. Ancient writers treated Troilus as the epitome of a dead child mourned by his parents. He was also regarded as a paragon of youthful male beauty.
In Western European medieval and Renaissance versions of the legend, Troilus is the youngest of Priam's five legitimate sons by Hecuba. Despite his youth he is one of the main Trojan war leaders. He dies in battle at Achilles' hands. In a popular addition to the story, originating in the 12th century, Troilus falls in love with Cressida, whose father Calchas has defected to the Greeks. Cressida pledges her love to Troilus but she soon switches her affections to the Greek hero Diomedes when sent to her father in a hostage exchange. Chaucer and Shakespeare are among the authors who wrote works telling the story of Troilus and Cressida. Within the medieval tradition, Troilus was regarded as a paragon of the faithful courtly lover and also of the virtuous pagan knight. Once the custom of courtly love had faded, his fate was regarded less sympathetically.
Little attention was paid to the character during the 18th and 19th centuries. However, Troilus has reappeared in 20th and 21st century retellings of the Trojan War by authors who have chosen elements from both the classical and medieval versions of his story.
The story in the ancient world
For the ancient Greeks, the tale of the Trojan War and the surrounding events appeared in its most definitive form in the Epic Cycle of eight narrative poems from the archaic period in Greece (750 BC – 480 BC). The story of Troilus is one of a number of incidents that helped provide structure to a narrative that extended over several decades and 77 books from the beginning of the Cypria to the end of the Telegony. The character's death early in the war and the prophecies surrounding him demonstrated that all Trojan efforts to defend their home would be in vain. His symbolic significance is evidenced by linguistic analysis of his Greek name "Troilos". It can be interpreted as an elision of the names of Tros and Ilos, the legendary founders of Troy, as a diminutive or pet name "little Tros" or as an elision of Troië (Troy) and lyo (to destroy). These multiple possibilities emphasise the link between the fates of Troilus and of the city where he lived. On another level, Troilus' fate can also be seen as foreshadowing the subsequent deaths of his murderer Achilles, and of his nephew Astyanax and sister Polyxena, who, like Troilus, die at the altar in at least some versions of their stories.
Given this, it is unfortunate that the Cypria, the part of the Epic Cycle covering the stage of the Trojan War in which Troilus dies, does not survive. Indeed, no complete narrative of his story remains from archaic times or the subsequent classical period (479–323 BC). Most of the literary sources from before the Hellenistic age (323–30 BC) that even referred to the character are lost or survive only in fragments or summary. The surviving ancient and medieval sources, whether literary or scholarly, contradict each other, and many do not tally with the form of the myth that scholars now believe to have existed in the archaic and classical periods.
Partially compensating for the missing texts are the physical artifacts that remain from the archaic and classical periods. The story of the circumstances around Troilus' death was a popular theme among pottery painters. (The Beazley Archive website lists 108 items of Attic pottery alone from the 6th to 4th centuries BC containing images of the character.) Troilus also features on other works of art and decorated objects from those times. It is a common practice for those writing about the story of Troilus as it existed in ancient times to use both literary sources and artifacts to build up an understanding of what seems to have been the most standard form of the myth and its variants. The brutality of this standard form of the myth is highlighted by commentators such as Alan Sommerstein, an expert on ancient Greek drama, who describes it as "horrific" and "[p]erhaps the most vicious of all the actions traditionally attributed to Achilles."
The standard myth: the beautiful youth murdered
Troilus is an adolescent boy or ephebe, the son of Hecuba, queen of Troy. As he is so beautiful, Troilus is taken to be the son of the god Apollo. However, Hecuba's husband, King Priam, treats him as his own much-loved child.
A prophecy says that Troy will not fall if Troilus lives into adulthood. So the goddess Athena encourages the Greek warrior Achilles to seek him out early in the Trojan War. The youth is known to take great delight in his horses. Achilles ambushes him and his sister Polyxena when the two have ridden out to fetch water from a well in the Thymbra – an area outside Troy where there is a temple of Apollo.
The Greek is struck by the beauty of both Trojans and is filled with lust. It is the fleeing Troilus whom swift-footed Achilles catches, dragging him by the hair from his horse. The young prince refuses to yield to Achilles' sexual attentions and somehow escapes, taking refuge in the nearby temple. But the warrior follows him in, and beheads him at the altar before help can arrive. The murderer then mutilates the boy's body. The mourning of the Trojans at Troilus' death is great.
This sacrilege leads to Achilles’ own death, when Apollo avenges himself by helping Paris strike Achilles with the arrow that pierces his heel.
Ancient literary sources supporting the standard myth
Homer and the missing texts of the archaic and classical periods
The earliest surviving literary reference to Troilus is in Homer's Iliad, which formed one part of the Epic Cycle. It is believed that Troilus' name was not invented by Homer and that a version of his story was already in existence. Late in the poem, Priam berates his surviving sons, and compares them unfavourably to their dead brothers including Trôïlon hippiocharmên. The interpretation of hippiocharmên is controversial but the root hipp- implies a connection with horses. For the purpose of the version of the myth given above, the word has been taken as meaning "delighting in horses". Sommerstein believes that Homer wishes to imply in this reference that Troilus was killed in battle, but argues that Priam's later description of Achilles as andros paidophonoio ("boy-slaying man") indicates that Homer was aware of the story of Troilus as a murdered child; Sommerstein believes that Homer is playing here on the ambiguity of the root paido- meaning boy in both the sense of a young male and of a son.
Troilus' death was also described in the Cypria, one of the parts of the Epic Cycle that is no longer extant. The poem covered the events preceding the Trojan War and the first part of the war itself up to the events of the Iliad. Although the Cypria does not survive, most of an ancient summary of the contents, thought to be by Eutychius Proclus, remains. Fragment 1 mentions that Achilles killed Troilus, but provides no more detail. However, Sommerstein takes the verb used to describe the killing (phoneuei) as meaning that Achilles murders Troilus.
In Athens, the early tragedians Phrynicus and Sophocles both wrote plays called Troilos and the comic playwright Strattis wrote a parody of the same name. Of the esteemed Nine lyric poets of the archaic and classical periods, Stesichorus may have referred to Troilus' story in his Iliupersis and Ibycus may have written in detail about the character. With the exception of these authors, no other pre-Hellenistic written source is known to have considered Troilus at any length.
Unfortunately, all that remains of these texts are the smallest fragments or summaries and references to them by other authors. What does survive can be in the form of papyrus fragments, plot summaries by later authors or quotations by other authors. In many cases these are just odd words in lexicons or grammar books with an attribution to the original author. Reconstructions of the texts are necessarily speculative and should be viewed with "wary but sympathetic scepticism". In Ibycus' case, all that remains is a parchment fragment containing a mere six or seven words of verse accompanied by a few lines of scholia. Troilus is described in the poem as godlike and is killed outside Troy. From the scholia, he is clearly a boy. The scholia also refer to a sister, someone "watching out" and a murder in the sanctuary of Thymbrian Apollo. While acknowledging that these details may derive from other, later sources, Sommerstein thinks it probable that Ibycus told the full ambush story and is thus the earliest identifiable source for it. Of Phrynicus, a single fragment survives that is thought to refer to Troilus. It speaks of "the light of love glowing on his reddening cheeks".
Of all these fragmentary pre-Hellenistic sources, the most is known of Sophocles' Troilos. Even so, only 54 words have been identified as coming from the play. Fragment 619 refers to Troilus as an andropais, a man-boy. Fragment 621 indicates that Troilus was going to a spring with a companion to fetch water or to water his horses. A scholion to the Iliad states that Sophocles has Troilus ambushed by Achilles while exercising his horses in the Thymbra. Fragment 623 indicates that Achilles mutilated Troilus' corpse by a method known as maschalismos. This involved preventing the ghost of a murder victim from returning to haunt their killer by cutting off the corpse's extremities and stringing them under its armpits. Sophocles is thought to have also referred to the maschalismos of Troilus in a fragment taken to be from an earlier play, Polyxene.
Sommerstein attempts a reconstruction of the plot of the Troilos, in which the title character is incestuously in love with Polyxena and tries to discourage the interest in marrying her shown by both Achilles and Sarpedon, a Trojan ally and son of Zeus. Sommerstein argues that Troilus is accompanied on his fateful journey to his death not by Polyxena but by his tutor, a eunuch Greek slave. Certainly there is a speaking role for a eunuch who reports being castrated by Hecuba, and someone reports the loss of their adolescent master. The incestuous love is deduced by Sommerstein from a fragment of Strattis' parody, assumed to partially quote Sophocles, and from his understanding that the Sophocles play intends to contrast barbarian customs, including incest, with Greek ones. Sommerstein also sees this as supplying what he considers a needed explanation of Achilles' treatment of Troilus' corpse, the latter being assumed to have insulted Achilles in the process of warning him off Polyxena. The Italian professor of English and expert on Troilus, Piero Boitani, on the other hand, considers Troilus' rejection of Achilles' sexual advances towards him as sufficient motive for the mutilation.
Alexandra
The first surviving text with more than the briefest mention of Troilus is Alexandra, a Hellenistic poem dating from no earlier than the 3rd century BC by the tragedian Lycophron (or a namesake of his). The poem consists of the obscure prophetic ravings of Cassandra and includes a brief passage on Troilus.
This passage is explained in the Byzantine writer John Tzetzes' scholia as a reference to Troilus seeking to avoid the unwanted sexual advances of Achilles by taking refuge in his father Apollo's temple. When he refuses to come out, Achilles goes in and kills him on the altar. Lycophron's scholiast also says that Apollo started to plan Achilles' death after the murder. This begins to build up the elements of the version of Troilus' story given above: he is young, much loved and beautiful; he has divine ancestry, is beheaded by his rejected Greek lover and, we know from Homer, had something to do with horses. The reference to Troilus as a "lion whelp" hints at his having the potential to be a great hero, but there is no explicit reference to a prophecy linking the possibility of Troilus reaching adulthood and Troy then surviving.
Other written sources
No other extended passage about Troilus exists from before the Augustan Age by which time other versions of the character's story have emerged. The remaining sources compatible with the standard myth are considered below by theme.
Parentage: The Apollodorus responsible for the Library lists Troilus last of Priam and Hecuba's sons – a detail adopted in the later tradition – but then adds that it is said that the boy was fathered by Apollo. On the other hand, Hyginus includes Troilus in the middle of a list of Priam's sons without further comment. In the early Christian writings the Clementine Homilies, it is suggested that Apollo was Troilus' lover rather than his father.
Youthfulness: Horace emphasises Troilus' youth by calling him inpubes ("unhairy", i.e. pre-pubescent or, figuratively, not old enough to bear arms). Dio Chrysostom derides Achilles in his Trojan discourse, complaining that all that the supposed hero achieved before Homer was the capture of Troilus who was still a boy.
Prophecies: The First Vatican Mythographer reports a prophecy that Troy will not fall if Troilus reaches the age of twenty and gives that as a reason for Achilles' ambush. In Plautus, Troilus' death is given as one of three conditions that must be met before Troy would fall.
Beauty: Ibycus, in seeking to praise his patron, compares him to Troilus, the most beautiful of the Greeks and the Trojans. Dio Chrysostom refers to Troilus as one of many examples of different kinds of beauty. Statius compares a beautiful dead slave missed by his master to Troilus.
Object of pederastic love: Servius, in his scholia to the passage from Virgil discussed below, says that Achilles lures Troilus to him with a gift of doves. Troilus then dies in the Greek's embrace. Robert Graves interprets this as evidence of the vigour of Achilles' love-making but Timothy Gantz considers that the "how or why" of Servius' version of Troilus' death is unclear. Sommerstein favours Graves's interpretation saying that murder was not a part of ancient pederastic relations and that nothing in Servius suggests an intentional killing.
Location of ambush and death: A number of reports have come down of Troilus' death variously mentioning water, exercising horses and the Thymbra, though they do not necessarily build into a coherent whole: the First Vatican Mythographer reports that Troilus was exercising outside Troy when Achilles attacked him; a commentator on Ibycus says that Troilus was slain by Achilles in the Thymbrian precinct outside Troy; Eustathius of Thessalonica's commentary on the Iliad says that Troilus was exercising his horses there; Apollodorus says that Achilles ambushed Troilus inside the temple of Thymbrian Apollo; finally, Statius reports that Troilus was speared to death as he fled around Apollo's walls. Gantz struggles to make sense of what he sees as contradictory material, feeling that Achilles' running down of Troilus' horse makes no sense if Troilus was just fleeing to the nearby temple building. He speculates that the ambush at the well and the sacrifice in the temple could be two different versions of the story or, alternatively, that Achilles takes Troilus to the temple to sacrifice him as an insult to Apollo.
Mourning: Trojan and, especially, Troilus' own family's mourning at his death seems to have epitomised grief at the loss of a child in classical civilization. Horace, Callimachus and Cicero all refer to Troilus in this way.
Ancient art and artifact sources
Ancient Greek art, as found in pottery and other remains, frequently depicts scenes associated with Troilus' death: the ambush, the pursuit, the murder itself and the fight over his body. Depictions of Troilus in other contexts are unusual. One such exception, a red-figure vase painting from Apulia c.340BC, shows Troilus as a child with Priam.
In the ambush, Troilus and Polyxena approach a fountain where Achilles lies in wait. This scene was familiar enough in the ancient world for a parody to exist from c.400BC showing a dumpy Troilus leading a mule to the fountain. In most serious depictions of the scene, Troilus rides a horse, normally with a second next to him. He is usually, but not always, portrayed as a beardless youth. He is often shown naked; otherwise he wears a cloak or tunic. Achilles is always armed and armoured. Occasionally, as on the fresco from the Tomb of the Bulls, either Troilus or Polyxena is absent, indicating how the ambush is linked to each of their stories. In the earliest definitely identified version of this scene (a Corinthian vase of c.580BC), Troilus is bearded and Priam is also present. Both these features are unusual. More common is a bird sitting on the fountain; normally a raven, symbol of Apollo and his prophetic powers and thus a final warning to Troilus of his doom; sometimes a cock, a common love gift suggesting that Achilles attempted to seduce Troilus. In some versions, for example an Attic amphora in the Museum of Fine Arts, Boston, dating from c.530BC, Troilus has a dog running with him. On one Etruscan vase from the 6th century BC, doves are flying from Achilles to Troilus, suggestive of the love gift in Servius. The fountain itself is conventionally decorated with a lion motif.
The earliest identified version of the pursuit or chase is from the third quarter of the 7th century BC. Next chronologically is the best known version, on the François Vase by Kleitias. The number of characters shown on pottery scenes varies with the size and shape of the space available. The François Vase is decorated with several scenes in long narrow strips, which means that the Troilus frieze is heavily populated. In the centre is the fleeing Troilus, riding one horse with the reins of the other in his hand. Below them is the vase that Polyxena (her figure partially missing), who is ahead of him, has dropped. Achilles is largely missing but it is clear that he is armoured. They are running towards Troy, where Antenor gestures towards Priam. Hector and Polites, brothers of Troilus, emerge from the city walls in the hope of saving Troilus. Behind Achilles are a number of deities: Athena, Thetis (Achilles' mother), Hermes, and Apollo (just arriving). Two Trojans are also present, the woman gesturing to draw the attention of a youth filling his vase. As the deities appear only in pictorial versions of the scene, their role is subject to interpretation. Boitani sees Athena as urging Achilles on and Thetis as worried by the arrival of Apollo who, as Troilus' protector, represents a future threat to Achilles. He does not indicate what he thinks Hermes may be talking to Thetis about. The classicist and art historian Professor Thomas H. Carpenter sees Hermes as a neutral observer, Athena and Thetis as urging Achilles on, and the arrival of Apollo as the artist's indication of the god's future role in Achilles' death. As Athena is not traditionally a patron of Achilles, Sommerstein sees her presence in this and other portrayals of Troilus' death as evidence of the early standing of the prophetic link between Troilus' death and the fall of Troy, Athena being driven, above all, by her desire for the city's destruction.
The standard elements in the pursuit scene are Troilus, Achilles, Polyxena, the two horses and the fallen vase. On two tripods, an amphora and a cup, Achilles already has Troilus by the hair. A famous vase in the British Museum, which gave the Troilos Painter the name by which he is now known, shows the two Trojans looking back in fear as the beautiful youth whips his horse on. The water spilling from the shattered vase below Troilus' horse symbolises the blood he is about to shed.
The iconography of the eight legs and hooves of the horses can be used to identify Troilus on pottery where his name does not appear; for example, on a Corinthian vase where Troilus is shooting at his pursuers and on a peaceful scene on a Chalcidian krater where the couples Paris and Helen, Hector and Andromache are labelled, but the youth riding one of a pair of horses is not.
A later Southern Italian interpretation of the story is on vases held respectively at the Boston Museum of Fine Arts and the Hermitage Museum in St Petersburg. On the krater, from c.380-70BC, Troilus can be seen with just one horse, trying to defend himself with a throwing spear; on the hydria, from c.325-320BC, Achilles is pulling down the youth's horse.
The earliest known depictions of the death or murder of Troilus are on shield bands from the turn of the 7th into the 6th century BC found at Olympia. On these, a warrior with a sword is about to stab a naked youth at an altar. On one, Troilus clings to a tree (which Boitani takes for the laurel sacred to Apollo). A krater contemporary with this shows Achilles at the altar holding the naked Troilus upside down while Hector, Aeneas and an otherwise unknown Trojan, Deithynos, arrive in the hope of saving the youth. In some depictions Troilus is begging for mercy. On an amphora, Achilles has the struggling Troilus slung over his shoulder as he goes to the altar. Boitani, in his survey of the story of Troilus through the ages, considers it of significance that two artifacts (a vase and a sarcophagus) from different periods link Troilus' and Priam's deaths by showing them on the two sides of the same item, as if they were the beginning and end of the story of the fall of Troy. Achilles is the father of Neoptolemus, who slays Priam at the altar during the sack of Troy. Thus the war opens with a father killing a son and closes with a son killing a father.
Some pottery shows Achilles, already having killed Troilus, using his victim's severed head as a weapon as Hector and his companions arrive too late to save him; some includes the watching Athena, occasionally with Hermes. In one such picture, Achilles fights Hector over the altar; Troilus' body is slumped and the boy's head is either flying through the air or stuck to the end of Achilles' spear. Athena and Hermes look on. Aeneas and Deithynos are behind Hector.
Sometimes details of the closely similar deaths of Troilus and Astyanax are exchanged, and on some pottery it is unclear which murder is portrayed. The age of the victim is often an indicator of which story is being told, and a relatively small victim might point towards the death of Astyanax, but it is common to show even Troilus as much smaller than his murderer. Other factors that can help identify the scene are the presence of Priam (suggesting Astyanax), that of Athena (suggesting Troilus) and a setting outside the walls of Troy (again suggesting Troilus).
A variant myth: the boy-soldier overwhelmed
A different version of Troilus' death appears on a red-figure cup by Oltos. Troilus is on his knees, still in the process of drawing his sword when Achilles' spear has already stabbed him and Aeneas comes too late to save him. Troilus wears a helmet, but it is pushed up to reveal a beautiful young face. This is the only such depiction of Troilus' death in early figurative art. However, this version of Troilus as a youth defeated in battle appears also in written sources.
Virgil and other Latin sources
This version of the story appears in Virgil's Aeneid, in a passage describing a series of paintings decorating the walls of a temple of Juno. The painting immediately next to the one depicting Troilus shows the death of Rhesus, another character killed because of prophecies linked to the fall of Troy. Other pictures are similarly calamitous.
In a description whose pathos is heightened by the fact that it is seen through a compatriot's eyes, Troilus is infelix puer ("unlucky boy") who has met Achilles in "unequal" combat. Troilus' horses flee while he, still holding their reins, hangs from the chariot, his head and hair trailing behind while the backward-pointing spear scribbles in the dust. (The First Vatican Mythographer elaborates on this story, explaining that Troilus's body is dragged right to the walls of Troy.)
In his commentary on the Aeneid, Servius considers this story as a deliberate departure from the "true" story, bowdlerized to make it more suitable for an epic poem. He interprets it as showing Troilus overpowered in a straight fight. Gantz, however, argues that this might be a variation of the ambush story. For him, Troilus is unarmed because he went out not expecting combat and the backward pointing spear was what Troilus was using as a goad in a manner similar to characters elsewhere in the Aeneid. Sommerstein, on the other hand believes that the spear is Achilles' that has struck Troilus in the back. The youth is alive but mortally wounded as he is being dragged towards Troy.
An issue here is the ambiguity of the word congressus ("met"). It often refers to meeting in conventional combat but can refer to other types of meeting too. A similar ambiguity appears in Seneca and in Ausonius' 19th epitaph, narrated by Troilus himself. The dead prince tells how he has been dragged by his horses after falling in unequal battle with Achilles. A reference in the epitaph comparing Troilus' death to Hector's suggests that Troilus dies later than in the traditional narrative, something that, according to Boitani, also happens in Virgil.
Greek writers in the boy-soldier tradition
Quintus of Smyrna, in a passage whose atmosphere Boitani describes as sad and elegiac, retains what for Boitani are the two important issues of the ancient story, that Troilus is doomed by Fate and that his failure to continue his line symbolises Troy's fall. In this case, there is no doubt that Troilus entered battle knowingly, for in the Posthomerica Troilus's armour is one of the funerary gifts after Achilles' own death. Quintus repeatedly emphasises Troilus's youth: he is beardless, virgin of a bride, childlike, beautiful, the most godlike of all Hecuba's children. Yet he was lured by Fate to war when he knew no fear and was struck down by Achilles' spear just as a flower or corn that has borne no seed is killed by the gardener.
In the Ephemeridos belli Trojani (Journal of the Trojan War), supposedly written by Dictys the Cretan during the Trojan War itself, Troilus is again a defeated warrior, but this time captured with his brother Lycaon. Achilles vindictively orders that their throats be slit in public, because he is angry that Priam has failed to advance talks over a possible marriage to Polyxena. Dictys' narrative is free from gods and prophecy but he preserves Troilus' loss as something to be greatly mourned:
The Trojans raised a cry of grief and, mourning loudly, bewailed the fact that Troilus had met so grievous a death, for they remembered how young he was, who being in the early years of his manhood, was the people's favourite, their darling, not only because of his modesty and honesty, but more especially because of his handsome appearance.
The story in the medieval and Renaissance eras
In the sources considered so far, Troilus' only narrative function is his death. The treatment of the character changes in two ways in the literature of the medieval and renaissance periods. First, he becomes an important and active protagonist in the pursuit of the Trojan War itself. Second, he becomes an active heterosexual lover, rather than the passive victim of Achilles' pederasty. By the time of John Dryden's neo-classical adaptation of Shakespeare's Troilus and Cressida it is the ultimate failure of his love affair that defines the character.
For medieval writers, the two most influential ancient sources on the Trojan War were the purported eye-witness accounts of Dares the Phrygian, and Dictys the Cretan, which both survive in Latin versions. In Western Europe the Trojan side of the war was favoured and therefore Dares was preferred over Dictys. Although Dictys' account positions Troilus' death later in the war than was traditional, it conforms to antiquity's view of him as a minor warrior if one at all. Dares' De excidio Trojae historia (History of the Fall of Troy) introduces the character as a hero who takes part in events beyond the story of his death.
Authors of the 12th and 13th centuries such as Joseph of Exeter and Albert of Stade continued to tell the legend of the Trojan War in Latin in a form that follows Dares' tale with Troilus remaining one of the most important warriors on the Trojan side. However, it was two of their contemporaries, Benoît de Sainte-Maure in his French verse romance and Guido delle Colonne in his Latin prose history, both also admirers of Dares, who were to define the tale of Troy for the remainder of the medieval period. The details of their narrative of the war were copied, for example, in the Laud and Lydgate Troy Books and also in Raoul Lefevre's Recuyell of the Historyes of Troye. Lefevre, through Caxton's 1474 printed translation, was in turn to become the best known retelling of the Troy story in Renaissance England and influenced Shakespeare among others. The story of Troilus as a lover, invented by Benoît and retold by Guido, generated a second line of influence. It was taken up as a tale that could be told in its own right by Boccaccio and then by Chaucer who established a tradition of retelling and elaborating the story in English-language literature, which was to be followed by Henryson and Shakespeare.
The second Hector, wall of Troy
As indicated above, it was through the writings of Dares the Phrygian that the portrayal of Troilus as an important warrior was transmitted to medieval times. However, some authors have argued that the tradition of Troilus as a warrior may be older. The passage from the Iliad described above is read by Boitani as implying that Priam put Troilus on a par with the very best of his warrior sons. The description of him in that passage as hippiocharmên is rendered by some authorities as meaning a warrior charioteer rather than merely someone who delights in horses. The many missing and partial literary sources might include such a hero. Yet only the one ancient vase shows Troilus as a warrior falling in a conventional battle.
Dares
In Dares, Troilus is the youngest of Priam's royal sons, bellicose when peace or truces are suggested and the equal of Hector in bravery: "large and most beautiful... brave and strong for his age, and eager for glory" (Dares, De excidio Trojae Historia, 12). He slaughters many Greeks, wounds Achilles and Menelaus, and routs the Myrmidons more than once before his horse falls and traps him and Achilles takes the opportunity to put an end to his life. Memnon rescues the body, something that does not happen in many later versions of the tale. Troilus' death comes near the end of the war, not at its beginning. He now outlives Hector and succeeds him as the Trojans' great leader in battle. Now it is in reaction to Troilus' death that Hecuba plots Achilles' murder.
As the tradition of Troilus the warrior advances through time, the weaponry and the form of combat change. Already in Dares he is a mounted warrior, not a charioteer or foot warrior, something anachronistic to epic narrative. In later versions he is a knight with armour appropriate to the time of writing who fights against other knights and dukes. His expected conduct, including his romance, conforms to courtly or other values contemporary to the writing.
Description in medieval texts
The medieval texts follow Dares' structuring of the narrative in describing Troilus after his parents and four royal brothers Hector, Paris, Deiphobus and Helenus.
Joseph of Exeter, in his Daretis Phrygii Ilias De bello Troiano (The Iliad of Dares the Phrygian on the Trojan War), describes the character as follows:
The limbs of Troilus expand and fill his space.
In mind a giant, though a boy in years, he yields
to none in daring deeds with strength in all his parts
his greater glory shines throughout his countenance.
Benoît de Sainte-Maure's description in Le Roman de Troie (The Romance of Troy) is too long to quote in full, but influenced the descriptions that follow. Benoît goes into details of character and facial appearance avoided by other writers. He tells that Troilus was "the fairest of the youths of Troy" with:
fair hair, very charming and naturally shining, eyes bright and full of gaiety... He was not insolent or haughty, but light of heart and gay and amorous. Well was he loved, and well did he love...
Guido delle Colonne's Historia destructionis Troiae (History of the Destruction of Troy) says:
The fifth and last was named Troilus, a young man as courageous as possible in war, about whose valour there are many tales which the present history does not omit later on.
The Laud Troy Book:
The youngest doughti Troylus
A doughtier man than he was on
Of hem alle was neuere non,-
Save Ector, that was his brother
There never was goten suche another.
The boy who in the ancient texts was never Achilles' match has now become a young knight, a worthy opponent to the Greeks.
Knight and war leader
In the medieval and renaissance tradition, Troilus is one of those who argue most for war against the Greeks in Priam's council. In several texts, for example the Laud Troy Book, he says that those who disagree with him are better suited to be priests. Guido, and writers who follow him, have Hector, knowing how headstrong his brother can be, counsel Troilus not to be reckless before the first battle.
In the medieval texts, Troilus is a doughty knight throughout the war, taking over, as in Dares, after Hector's death as the main warrior on the Trojan side. Indeed he is named as a second Hector by Chaucer and Lydgate. These two poets follow Boccaccio in reporting that Troilus kills thousands of Greeks. However, the comparison with Hector can be seen as acknowledging Troilus' inferiority to his brother through the very need to mention him.
In Joseph, Troilus is greater than Alexander, Hector, Tydeus, Bellona and even Mars, and kills seven Greeks with one blow of his club. He does not strike at opponents' legs because that would demean his victory. He only fights knights and nobles, and disdains facing the common warriors.
Albert of Stade saw Troilus as so important that he is the title character of his version of the Trojan War. He is "the wall of his homeland, Troy's protection, the rose of the military...."
The list of Greek leaders Troilus wounds expands in the various re-tellings of the war from the two in Dares to also include Agamemnon, Diomedes and Menelaus. Guido, in keeping his promise to tell of all Troilus' valorous deeds, describes many incidents. Troilus is usually victorious but is captured in an early battle by Menestheus before his friends rescue him. This incident reappears in the imitators of Guido, such as Lefevre and the Laud and Lydgate Troy Books.
Death
Within the medieval Trojan tradition, Achilles withdraws from fighting in the war because he is to marry Polyxena. Eventually, so many of his followers are killed that he decides to rejoin the battle leading to Troilus' death and, in turn, to Hecuba, Polyxena and Paris plotting Achilles' murder.
Albert and Joseph follow Dares in having Achilles behead Troilus as he tries to rise after his horse falls. In Guido and authors he influenced, Achilles specifically seeks out Troilus to avenge a previous encounter where Troilus has wounded him. He therefore instructs the Myrmidons to find Troilus, surround him and cut him off from rescue.
In the Laud Troy Book, this is because Achilles almost killed Troilus in the previous fight but the Trojan was rescued. Achilles wants to make sure that this does not happen again. This second combat is fought as a straight duel between the two with Achilles, the greater warrior, winning.
In Guido, Lefevre and Lydgate, Troilus' killer's behaviour is very different, shorn of any honour. Achilles waits until his men have killed Troilus' horse and cut loose his armour. Only then
And when he sawe how Troilus nakid stod,
Of longe fightyng awaped and amaat
And from his folke alone disolat
—Lydgate, Troy Book, iv, 2756-8.
does Achilles attack and behead him.
In an echo of the Iliad, Achilles drags the corpse behind his horse. Thus, the comparison with the Homeric Hector is heightened and, at the same time, aspects of the classical Troilus's fate are echoed.
The lover
The last aspect of the character of Troilus to develop in the tradition has become the one for which he is best known. Chaucer's Troilus and Criseyde and Shakespeare's Troilus and Cressida both focus on Troilus in his role as a lover. This theme is first introduced by Benoît de Sainte-Maure in the Roman de Troie and developed by Guido delle Colonne. Boccaccio's Il Filostrato is the first book to take the love-story as its main theme. Robert Henryson and John Dryden are other authors who dedicate works to it.
The story of Troilus' romance developed within the context of the male-centred conventions of courtly love and thus the focus of sympathy was to be Troilus and not his beloved. As different authors recreated the romance, they would interpret it in ways affected both by the perspectives of their own times and their individual preoccupations. The story as it would later develop through the works of Boccaccio, Chaucer and Shakespeare is summarised below.
The story of Troilus and Cressida
Troilus used to mock the foolishness of other young men's love affairs. But one day he sees Cressida in the temple of Athena and falls in love with her. She is a young widow and daughter of the priest Calchas who has defected to the Greek camp.
Embarrassed at having become exactly the sort of person he used to ridicule, Troilus tries to keep his love secret. However, he pines for Cressida and becomes so withdrawn that his friend Pandarus asks why he is unhappy and eventually persuades Troilus to reveal his love.
Pandarus offers to act as a go-between, even though he is Cressida's relative and should be guarding her honour. Pandarus convinces Cressida to admit that she returns Troilus' love and, with Pandarus's help, the two are able to consummate their feelings for each other.
Their happiness together is brought to an end when Calchas persuades Agamemnon to arrange Cressida's return to him as part of a hostage exchange in which the captive Trojan Antenor is freed. The two lovers are distraught and even think of eloping together but they finally cooperate with the exchange. Despite Cressida's initial intention to remain faithful to Troilus, the Greek warrior Diomedes wins her heart. When Troilus learns of this, he seeks revenge on Diomedes and the Greeks and dies in battle. Just as Cressida betrayed Troilus, Antenor was later to betray Troy.
Benoît and Guido
In the Roman de Troie, the daughter of Calchas whom Troilus loves is called Briseis. Their relationship is first mentioned once the hostage exchange has been agreed:
Whoever had joy or gladness, Troilus suffered affliction and grief. That was for the daughter of Calchas, for he loved her deeply. He had set his whole heart on her; so mightily was he possessed by his love that he thought only of her. She had given herself to him, both her body and her love. Most men knew of that.
In Guido, the woman loved by Troilus and Diomedes is now called Briseida. His version (a history) is more moralistic and less touching, removing the psychological complexity of Benoît's (a romance), and the focus in his retelling of the love triangle is firmly shifted to the betrayal of Troilus by Briseida. Although Briseida and Diomedes are most negatively caricatured by Guido's moralising, even Troilus is subject to criticism as a "fatuous youth" prone, as in the following, to youthful faults.
Troilus, however, after he had learned of his father's intention to go ahead and release Briseida and restore her to the Greeks, was overwhelmed and completely wracked by great grief, and almost entirely consumed by tears, anguished sighs, and laments, because he cherished her with the great fervour of youthful love and had been led by the excessive ardour of love into the intense longing of blazing passion. There was no one of his dear ones who could console him.
Briseis, at least for now, is equally affected by the possibility of separation from her lover. Troilus goes to her room and they spend the night together, trying to comfort each other. Troilus is part of the escort to hand her over the next day. Once she is with the Greeks, Diomedes is immediately struck by her beauty. Although she is not hostile, she cannot accept him as her lover. Meanwhile Calchas tells her to accept for herself that the gods have decreed Troy's fall and that she is safer now she is with the Greeks.
A battle soon takes place and Diomedes unseats Troilus from his horse. The Greek sends it as a gift to Briseis/Briseida with an explanation that it had belonged to her old lover.
In Benoît, Briseis complains at Diomedes' seeking to woo her by humbling Troilus, but in Guido all that remains of her long speech in Benoît is that she "cannot hold him in hatred who loves me with such purity of heart."
Diomedes soon does win her heart. In Benoît, it is through his display of love, and she gives him her glove as a token. Troilus seeks him out in battle and utterly defeats him, but spares his life only so that the Greek can carry back to her a message of Troilus' contempt.
In Guido, Briseida's change of heart comes after Troilus wounds Diomedes seriously. Briseida tends Diomedes and then decides to take him as her lover, because she does not know if she will ever meet Troilus again.
In later medieval tellings of the war, the episode of Troilus and Briseida/Cressida is acknowledged and often given as a reason for Diomedes and Troilus to seek each other out in battle. The love story also becomes one that is told separately.
Boccaccio
The first major work to take the story of Troilus' failed love as its central theme is Giovanni Boccaccio's Il Filostrato. The title means "the one struck down by love". There is an overt purpose to the text. In the proem, Boccaccio himself is Filostrato and addresses his own love who has rejected him.
Boccaccio introduces a number of features of the story that were to be taken up by Chaucer. Most obvious is that Troilus' love is now called Criseida or Cressida. An innovation in the narrative is the introduction of the go-between Pandarus. Troilus is characterised as a young man who expresses his moods strongly, weeping when his love is unsuccessful and generous when it prospers.
Boccaccio fills in the history before the hostage exchange as follows. Troilus mocks the lovelorn glances of other men who put their trust in women before falling victim to love himself when he sees Cressida, here a young widow, in the Palladium, the temple of Athena. Troilus keeps his love secret and is made miserable by it. Pandarus, Troilus' best friend and Cressida's cousin in this version of the story, acts as go-between after persuading Troilus to explain his distress. In accordance with the conventions of courtly love, Troilus' love remains secret from all except Pandarus, until Cassandra eventually divines the reason for Troilus' subsequent distress.
After the hostage exchange is agreed, Troilus suggests elopement, but Cressida argues that he should not abandon Troy and that she should protect her honour. Instead, she promises to meet him within ten days. Troilus spends much of the intervening time on the city walls, sighing in the direction where Cressida has gone. No horses or sleeves, as used by Guido or Benoît, are involved in Troilus' learning of Cressida's change of heart. Instead a dream hints at what has happened, and then the truth is confirmed when a brooch – previously a gift from Troilus to Cressida – is found on Diomedes' looted clothing. In the meantime, Cressida has kept up the pretence in their correspondence that she still loves Troilus. After Cressida's betrayal is confirmed, Troilus becomes ever fiercer in battle.
Chaucer and his successors
Geoffrey Chaucer's Troilus and Criseyde reflects a more humorous world-view than Boccaccio's poem. Chaucer does not have his own wounded love to display and therefore allows himself an ironic detachment from events and Criseyde is more sympathetically portrayed. In contrast to Boccaccio's final canto, which returns to the poet's own situation, Chaucer's palinode has Troilus looking down laughing from heaven, finally aware of the meaninglessness of earthly emotions. About a third of the lines of the Troilus are adapted from the much shorter Il Filostrato, leaving room for a more detailed and characterised narrative.
Chaucer's Criseyde is swayed by Diomedes playing on her fear. Pandarus is now her uncle, more worldly-wise and more active in what happens and so Troilus is more passive. This passivity is given comic treatment when Troilus passes out in Criseyde's bedroom and is lifted into her bed by Pandarus. Troilus' repeated emotional paralysis is comparable to that of Hamlet who may have been based on him. It can be seen as driven by loyalty both to Criseyde and to his homeland, but has also been interpreted less kindly.
Another difference in Troilus' characterisation from the Filostrato is that he is no longer misogynistic in the beginning. Instead of mocking lovers because of their putting trust in women, he mocks them because of how love affects them. Troilus' vision of love is stark: total commitment offers total fulfilment; any form of failure means total rejection. He is unable to comprehend the subtleties and complexities that underlie Criseyde's vacillations and Pandarus' manoeuvrings.
In his storytelling Chaucer links the fates of Troy and Troilus, the mutual downturn in fortune following the exchange of Criseyde for the treacherous Antenor being the most significant parallel.
Little has changed in the general sweep of the plot from Boccaccio. Things are just more detailed, with Pandarus, for example, involving Priam's middle son Deiphobus during his attempts to unite Troilus and Cressida. Another scene that Chaucer adds was to be reworked by Shakespeare. In it, Pandarus seeks to persuade Cressida of Troilus' virtues over those of Hector, before uncle and niece witness Troilus returning from battle to public acclaim with much damage to his helmet. Chaucer also includes details from the earlier narratives. So, reference is made not just to Boccaccio's brooch, but to the glove, the captured horse and the battles of the two lovers in Benoît and Guido.
Because of the great success of the Troilus, the love story was popular as a free standing tale to be retold by English-language writers throughout the 15th and 16th centuries and into the 17th century. The theme was treated either seriously or in burlesque. For many authors, true Troilus, false Cresseid and pandering Pandarus became ideal types eventually to be referred to together as such in Shakespeare.
During the same period, English retellings of the broader theme of the Trojan War tended to avoid Boccaccio's and Chaucer's additions to the story, though their authors, including Caxton, commonly acknowledged Chaucer as a respected predecessor. John Lydgate's Troy Book is an exception. Pandarus is one of the elements from Chaucer's poem that Lydgate incorporates, but Guido provides his overall narrative framework. As with other authors, Lydgate's treatment contrasts Troilus' steadfastness in all things with Cressida's fickleness. The events of the war and the love story are interwoven. Troilus' prowess in battle markedly increases once he becomes aware that Diomedes is beginning to win Cressida's heart, but it is not long after Diomedes' final victory in love that Achilles and his Myrmidons treacherously attack and kill Troilus and maltreat his corpse, concluding Lydgate's treatment of the character as an epic hero, the purest of all those who appear in the Troy Book.
Of all the treatments of the story of Troilus and, especially, Cressida in the period between Chaucer and Shakespeare, it is Robert Henryson's that receives the most attention from modern critics. His poem The Testament of Cresseid is described by the Middle English expert C. David Benson as the "only fifteenth century poem written in Great Britain that begins to rival the moral and artistic complexity of Chaucer's Troilus". In the Testament, the title character is abandoned by Diomedes and then afflicted with leprosy so that she becomes unrecognizable to Troilus. He pities the lepers she is with and is generous to her because she reminds him of the idol of her in his mind, but he remains the virtuous pagan knight and does not achieve the redemption that she does. Even so, following Henryson, Troilus was seen as a representation of generosity.
Shakespeare and Dryden
Another approach to Troilus' love story in the centuries following Chaucer is to treat Troilus as a fool, something Shakespeare does in allusions to him in plays leading up to Troilus and Cressida. In Shakespeare's "problem play" there are elements of Troilus the fool. However, this can be excused by his age. He is an almost beardless youth, unable to fully understand the workings of his own emotions, in the middle of an adolescent infatuation, more in love with love and his image of Cressida than the real woman herself. He displays a mixture of idealism about eternally faithful lovers and of realism, condemning Hector's "vice of mercy". His concept of love involves both a desire for immediate sexual gratification and a belief in eternal faithfulness. He also displays a mixture of constancy (in love and in supporting the continuation of the war) and inconsistency (changing his mind twice in the first scene on whether to go to battle or not). More a Hamlet than a Romeo, by the end of the play, his illusions of love shattered and Hector dead, Troilus might show signs of maturing, recognising the nature of the world, rejecting Pandarus and focusing on revenge for his brother's death rather than for a broken heart or a stolen horse. The novelist and academic Joyce Carol Oates, on the other hand, sees Troilus as beginning and ending the play in frenzies – of love and then hatred. For her, Troilus is unable to achieve the equilibrium of a tragic hero despite his learning experiences, because he remains a human being who belongs to a banal world where love is compared to food and cooking and sublimity cannot be achieved.
Troilus and Cressida's sources include Chaucer, Lydgate, Caxton and Homer, but there are creations of Shakespeare's own too, and his tone is very different. Shakespeare wrote at a time when the traditions of courtly love were dead and when England was undergoing political and social change. Shakespeare's treatment of the theme of Troilus' love is much more cynical than Chaucer's, and the character of Pandarus is now grotesque. Indeed, all the heroes of the Trojan War are degraded and mocked. Troilus' actions are subject to the gaze and commentary of both the venal Pandarus and the cynical Thersites, who tells us:
...That dissembling abominable varlet Diomed has got that same scurvy, doting, foolish knave's sleeve of Troy there in his helm. I would fain see them meet, that that same young Trojan ass, that loves the whore there, might send that Greekish whoremasterly villain with the sleeve back to the dissembling luxurious drab of a sleeveless errand...
The action is compressed and truncated, beginning in medias res with Pandarus already working for Troilus and praising his virtues to Cressida over those of the other knights they see returning from battle, but comically mistaking him for Deiphobus. The Trojan lovers are together only one night before the hostage exchange takes place. They exchange a glove and a sleeve as love tokens, but the next night Ulysses takes Troilus to Calchas' tent, significantly near Menelaus' tent. There they witness Diomedes successfully seducing Cressida after taking Troilus' sleeve from her. The young Trojan struggles with what his eyes and ears tell him, wishing not to believe it. Having previously considered abandoning the senselessness of war in favour of his role of lover and having then sought to reconcile love and knightly conduct, he is now left with war as his only role.
Both the fights between Troilus and Diomedes from the traditional narrative of Benoît and Guido take place the next day in Shakespeare's retelling. Diomedes captures Troilus' horse in the first fight and sends it to Cressida. Then the Trojan triumphs in the second, though Diomedes escapes. But in a deviation from this narrative it is Hector, not Troilus, whom the Myrmidons surround in the climactic battle of the play and whose body is dragged behind Achilles' horse. Troilus himself is left alive, vowing revenge for Hector's death and rejecting Pandarus. Troilus' story ends, as it began, in medias res, with him and the other characters in his love triangle still alive.
Some seventy years after Shakespeare's Troilus was first presented, John Dryden re-worked it as a tragedy, in his view strengthening Troilus' character and indeed the whole play by removing many of the unresolved threads in the plot and the ambiguities in Shakespeare's portrayal of the protagonist as a believable youth rather than a clear-cut and thoroughly sympathetic hero. Dryden described this as "that heap of Rubbish, under which many excellent thoughts lay bury'd." His Troilus is less passive on stage about the hostage exchange, arguing with Hector over the handing over of Cressida, who remains faithful. Her scene with Diomedes that Troilus witnesses is her attempt "to deceive deceivers". She throws herself at her warring lovers' feet to protect Troilus and commits suicide to prove her loyalty. Unable to leave a still living Troilus on the stage, as Shakespeare did, Dryden restores his death at the hands of Achilles and the Myrmidons, but only after Troilus has killed Diomedes. According to P. Boitani, Dryden goes to "the opposite extreme of Shakespeare's... all problems and therefore the tragedy".
Modern versions
After Dryden's reworking of Shakespeare, Troilus is almost invisible in literature until the 20th century. Keats does refer to Troilus and Cressida in the context of the "sovereign power of love" and Wordsworth translated some of Chaucer, but, as a rule, love was portrayed in ways far different from how it is in the Troilus and Cressida story. Boitani sees the two World Wars and the 20th century's engagement "in the recovery of all sorts of past myths" as contributing to a rekindling of interest in Troilus as a human being destroyed by events beyond his control. Similarly, Foakes sees the aftermath of one World War and the threat of a second as key elements in the successful revival of Shakespeare's Troilus in two productions in the first half of the 20th century, and one of the authors discussed below names Barbara Tuchman's The March of Folly: From Troy to Vietnam as the trigger for his wish to retell the Trojan War.
Boitani discusses the modern use of the character of Troilus in a chapter entitled Eros and Thanatos. Love and death, the latter either as a tragedy in itself or as an epic symbol of Troy's own destruction, are therefore the two core elements of the Troilus myth for the editor of the first book-length survey of it from ancient to modern times. He sees the character as incapable of transformation on a heroic scale in the manner of Ulysses and also blocked from the possibility of development as an archetypal figure of troubled youth by Hamlet. Troilus' appeal for the 20th and 21st centuries is his very humanity.
Belief in the medieval tradition of the Trojan War that followed Dictys and Dares survived the Revival of Learning in the Renaissance and the advent of the first English translation of the Iliad in the form of Chapman's Homer. (Shakespeare used both Homer and Lefevre as sources for his Troilus.) However, the two supposedly eye-witness accounts were finally discredited by Jacob Perizonius in the early years of the 18th century. With the chief source for his portrayal as one of the most active warriors of the Trojan War undermined, Troilus has become an optional character in modern Trojan fiction, except in works that retell the love story itself. Lindsay Clarke and Phillip Parotti, for example, omit Troilus altogether. Hilary Bailey includes a character of that name in Cassandra: Princess of Troy, but little remains of the classical or medieval versions except that he fights Diomedes. However, some of the over sixty re-tellings of the Trojan War since 1916 do feature the character.
Once more a man-boy
One consequence of the reassessment of sources is the reappearance of Troilus in his ancient form of andropais. Troilus takes this form in Giraudoux's The Trojan War Will Not Take Place, his first successful reappearance in the 20th century. Troilus is a fifteen-year-old boy whom Helen has noticed following her around. After turning down the opportunity to kiss her when she offers and when confronted by Paris, he eventually accepts the kiss at the end of the play just as Troy has committed to war. He is thus a symbol of the whole city's fatal fascination with Helen.
Troilus, in one of his ancient manifestations as a boy-soldier overwhelmed, reappears both in works Boitani discusses and those he does not. Christa Wolf in her Kassandra features a seventeen-year-old Troilus, first to die of all the sons of Priam. The novel's treatment of the character's death has features of both medieval and ancient versions. Troilus has just gained his first love, once more called Briseis. It is only after his death that she is to betray him. On the first day of the war, Achilles seeks Troilus out and forces him into battle with the help of the Myrmidons. Troilus tries to fight in the way he has been taught princes should do, but Achilles strikes the boy down and leaps on top of him, before attempting to throttle him. Troilus escapes and runs to the sanctuary of the temple of Apollo where he is helped to take his armour off. Then, in "some of the most powerful and hair-raising" words ever written on Troilus' death, Wolf describes how Achilles enters the temple, caresses then half-throttles the terrified boy, who lies on the altar, before finally beheading him like a sacrificial victim. After his death, the Trojan council propose that Troilus be officially declared to have been twenty in the hope of avoiding the prophecy about him but Priam, in his grief, refuses as this would insult his dead son further. In "exploring the violent underside of sexuality and the sexual underside of violence", Wolf revives a theme suggested by the ancient vases where an "erotic aura seems to pervade representations of a fully armed Achilles pursuing or butchering a naked, boyish Troilus".
Colleen McCullough is another author who incorporates both the medieval Achilles' seeking Troilus out in battle and the ancient butchery at the altar. Her The Song of Troy includes two characters, Troilos and Ilios, who are Priam's youngest children – both with prophecies attached and both specifically named for the city's founders. They are eight and seven respectively when Paris leaves for Greece and somewhere in their late teens when killed. Troilos is made Priam's heir after Hector's death, against the boy's will. Odysseus's spies learn of the prophecy that Troy will not fall if Troilos comes of age. Achilles therefore seeks him out in the next battle and kills him with a spear-cast to his throat. In a reference to the medieval concept of Troilus as the second Hector, Automedon observes that "with a few more years added, he might have made another Hektor." Ilios is the last son of Priam to die, killed at the altar in front of his parents by Neoptolemos.
Marion Zimmer Bradley's The Firebrand features an even younger Troilus, just twelve when he becomes Hector's charioteer. (His brother wants to keep a protective eye on him now he is ready for war.) Troilus helps kill Patroclus. Although he manages to escape the immediate aftermath of Hector's death, he is wounded. After the Trojans witness Achilles' treatment of Hector's body, Troilus insists on rejoining the battle despite his wounds and Hecuba's attempts to stop him. Achilles kills him with an arrow. The mourning Hecuba comments that he did not want to live because he blamed himself for Hector's death.
Reinventing the love story
A feature already present in the treatments of the love story by Chaucer, Henryson, Shakespeare and Dryden is the repeated reinvention of its conclusion. Boitani sees this as a continuing struggle by authors to find a satisfying resolution to the love triangle. The major difficulty is the emotional dissatisfaction resulting from how the tale, as originally invented by Benoît, is embedded into the pre-existing narrative of the Trojan War with its demands for the characters to meet their traditional fates. This narrative has Troilus, the sympathetic protagonist of the love story, killed by Achilles, a character totally disconnected from the love triangle, Diomedes survive to return to Greece victorious, and Cressida disappear from consideration as soon as it is known that she has fallen for the Greek. Modern authors continue to invent their own resolutions.
William Walton's Troilus and Cressida is the best known and most successful of a clutch of 20th-century operas on the subject after the composers of previous eras had ignored the possibility of setting the story. Christopher Hassall's libretto blends elements of Chaucer and Shakespeare with inventions of its own arising from a wish to tighten and compress the plot, the desire to portray Cressida more sympathetically and the search for a satisfactory ending. Antenor is, as usual, exchanged for Cressida but, in this version of the tale, his capture has taken place while he was on a mission for Troilus. Cressida agrees to marry Diomedes after she has not heard from Troilus. His apparent silence, however, is because his letters to her have been intercepted. Troilus arrives at the Greek camp just before the planned wedding. When faced with her two lovers, Cressida chooses Troilus. He is then killed by Calchas with a knife in the back. Diomedes sends his body back to Priam with Calchas in chains. It is now the Greeks who condemn "false Cressida" and seek to keep her but she commits suicide.
Before Cressida kills herself she sings to Troilus to
...turn on that cold river's brim
beyond the sun's far setting.
Look back from the silent stream
of sleep and long forgetting.
Turn and consider me
and all that was ours;
you shall no desert see
but pale unwithering flowers.
This is one of three references in 20th century literature to Troilus on the banks of the River Styx that Boitani has identified. Louis MacNeice's long poem The Stygian Banks explicitly takes its name from Shakespeare who has Troilus compare himself to "a strange soul upon the Stygian banks" and call upon Pandarus to transport him "to those fields where I may wallow in the lily beds". In MacNeice's poem the flowers have become children, a paradoxical use of the traditionally sterile Troilus who
Patrols the Stygian banks, eager to cross,
But the value is not on the further side of the river,
The value lies in his eagerness. No communion
In sex or elsewhere can be reached and kept
Perfectly for ever. The closed window,
The river of Styx, the wall of limitation
Beyond which the word beyond loses its meaning,
Are the fertilising paradox, the grille
That, severing, joins, the end to make us begin
Again and again, the infinite dark that sanctions
Our growing flowers in the light, our having children...
The third reference to the Styx is in Christopher Morley's The Trojan Horse. A return to the romantic comedy of Chaucer is the solution that Boitani sees to the problem of how the love story can survive Shakespeare's handling of it. Morley gives us such a treatment in a book that revels in its anachronism. Young Lieutenant (soon to be Captain) Troilus lives his life in 1185 BC where he has carefully timetabled everything from praying, to fighting, to examining his own mistakes. He falls for Cressida after seeing her, as ever, in the Temple of Athena where she wears black, as if mourning the defection of her father, the economist Dr Calchas. The flow of the plot follows the traditional story, but the ending is changed once again. Troilus' discovery of Cressida's change of heart happens just before Troy falls. (Morley uses Boccaccio's version of the story of a brooch, or in this case a pin, attached to a piece of Diomedes' armour as the evidence that convinces the Trojan.) Troilus kills Diomedes as he exits the Trojan Horse, stabbing him in the throat where the captured piece of armour should have been. Then Achilles kills Troilus. The book ends with an epilogue. The Trojan and Greek officers exercise together by the River Styx, all enmities forgotten. A new arrival (Cressida) sees Troilus and Diomedes and wonders why they seem familiar to her. What Boitani calls "a rather dull, if pleasant, ataraxic eternity" replaces Chaucer's Christian version of the afterlife.
In Eric Shanower's graphic novel Age of Bronze, currently still being serialised, Troilus is youthful but not the youngest son of Priam and Hecuba. In the first two collected volumes of this version of the Trojan War, Shanower provides a total of six pages of sources covering the story elements of his work alone. These include most of the fictional works discussed above, from Guido and Boccaccio down to Morley and Walton. Shanower begins Troilus' love story with the youth making fun of Polyxena's love for Hector and in the process accidentally knocking aside Cressida's veil. He follows her into the temple of Athena to gawp at her. Pandarus, the uncle of the widowed Cressida, encourages him. Cressida rejects Troilus' initial advances not because of wanting to act in a seemly manner, as in Chaucer or Shakespeare, but because she thinks of him as just a boy. However, her uncle persuades her to encourage his affection, in the hope that being close to a son of Priam will protect the family of the traitor Calchas against the hostility of the Trojans. Troilus' unrequited love is used as comic relief in an otherwise serious retelling of the Trojan War cycle. The character is portrayed as often indecisive and ineffectual, as on the second page of the episode sample at the official site. It remains to be seen how Shanower will further develop the story.
Troilus is granted a rare happy ending in the early Doctor Who story The Myth Makers. The script was written by Donald Cotton, who had previously adapted Greek tales for the BBC Third Programme. The general tone is one of high comedy combined with a "genuine atmosphere of doom, danger and chaos", with the BBC website listing A Funny Thing Happened on the Way to the Forum as an inspiration together with Chaucer, Shakespeare, Homer and Virgil. Troilus is again an andropais, "seventeen next birthday" and described as "looking too young for the military garb". Both "Cressida" and "Diomede" are the assumed names of the Doctor's companions. Thus Troilus' jealousy of Diomede, whom he believes also loves Cressida, stems from confusion about the real situation. In the end "Cressida" decides to leave the Doctor for Troilus and saves the latter from the fall of Troy by finding an excuse to get him away from the city. In a reversal of the usual story, he is able to avenge Hector by killing Achilles when they meet outside Troy. (The story was originally intended to end more conventionally, with "Cressida", despite her love for him, apparently abandoning him for "Diomede", but the producers declined to renew co-star Maureen O'Brien's contract, requiring that her character Vicki be written out.)
See also
List of children of Priam
Notes and references
Annotated bibliography
Andrew, M. (1989) "The Fall of Troy in Sir Gawain and the Green Knight and Troilus and Criseyde ", in: Boitani (1989: pp. 75–93). Focuses on a comparison between how the Gawain poet and Chaucer handle their themes.
Antonelli, R. (1989) "The Birth of Criseyde: an exemplary triangle; 'Classical' Troilus and the question of love at the Anglo-Norman court", in: Boitani (1989: pp. 21–48). Examination of Benoît's and Guido's treatment of the love triangle.
Benson, C. D. (1980) The History of Troy in Middle English Literature, Woodbridge: D. S. Brewer. A study examining Guido's influence on writers on Troy up to Lydgate and Henryson. Troilus is discussed throughout.
Benson, C. D. (1989) "True Troilus and False Cresseid: the descent from tragedy" in Boitani (1989: pp. 153–170). Examination of the Troilus and Cressida story in the minor authors between Chaucer and Shakespeare.
Boitani, P. (ed.) (1989) The European Tragedy of Troilus, Oxford: Clarendon Press. This was the first full book to examine the development of Troilus through the ages. The outer chapters are by Boitani reviewing the history of Troilus as a character from ancient to modern times. The middle chapters, looking at the tale through the medieval and renaissance periods, are by other authors with several examining Chaucer and Shakespeare.
Burgess, J. S. (2001) The Tradition of the Trojan War in Homer and the Epic Cycle, Baltimore: Johns Hopkins University Press. Examination of the Trojan War in archaic literary and artifact sources. Troilus mentioned in passing.
Carpenter, T. H. (1991) Art and Myth in Ancient Greece, London: Thames and Hudson. Contains roughly four pages (17–21) of text and, separately, fourteen illustrations (figs. 20–22, 25–35) on Troilos in ancient art.
Coghill, N. (ed.) (1971: pp. xi–xxvi) "Introduction" in: Geoffrey Chaucer, Troilus and Criseyde, London: Penguin. Discusses Chaucer, his sources and key themes in the Troilus. The main body of the book is a translation into modern English by Coghill.
Foakes, R. A. (ed.) (1987) Troilus and Cressida (The New Penguin Shakespeare.) London: Penguin. Annotated edition with introduction.
Frazer, R. M. (trans.) (1966) The Trojan War: the Chronicles of Dictys of Crete and Dares the Phrygian. Bloomington: Indiana University Press. English translation of Dictys' Ephemeridos belli Trojani (pp. 17–130) and Dares' De excidio Trojae historia (pp. 131–68) with Introduction (pp. 3–15) covering the theme of Troy in medieval literature and endnotes.
Gantz, T. (1993) Early Greek Myth. Baltimore: Johns Hopkins U. P. A standard sourcebook on Greek myths. Multiple versions available. There are approximately six pages (597–603) plus notes discussing Troilos in Volume 2 of the two volume edition. Page references are to the two volume 1996 Johns Hopkins Paperbacks edition.
Gordon, R. K. (1934) The Story of Troilus. London: J. M. Dent. (Dutton Paperback ed. New York: E. P. Dutton, 1964.) This book has been reprinted by various publishers. It contains a translated selection from Le Roman de Troie, a full translation of Il filostrato and the unmodernised texts of Troilus and Criseyde and The Testament of Cresseid. Page references are to the 1995 printing by University of Toronto Press and the Medieval Academy of America.
Graves, R. (1955) The Greek Myths. Another standard sourcebook available in many editions. Troilus is discussed in Volume 2 of the two volume version. Page references are to the 1990 Penguin printing of the 1960 revision.
Lewis, C. S. (1936) The Allegory of Love. Oxford: Clarendon Press. Influential work on the literature of courtly love, including Chaucer's Troilus.
Lombardo, A. (1989) "Fragments and Scraps: Shakespeare's Troilus and Cressida" in Boitani (1989: pp. 199–217). Sets the cynical tone of Troilus in the context of changes both in the world and the theatre.
Lyder, T. D. (2010) "Chaucer's second Hector: the triumphs of Diomede and the possibility of epic in Troilus and Criseyde. (Critical essay)", Medium Aevum, March 22, 2010, Accessed through Highbeam, August 30, 2012 (subscription required).
March, J. (1998) Dictionary of Classical Mythology. London: Cassell. Illustrated dictionary with Troilus covered in one page. Page references are to 1998 hardback edition.
Natali, G. (1989) "A Lyrical Version: Boccaccio's Filostrato", in: Boitani (1989: pp. 49–73). An examination of the Filostrato in context.
Novak, M. E. (ed.) (1984) The Works of John Dryden: Volume XIII Plays: All for Love; Oedipus; Troilus and Cressida. Berkeley: University of California Press. Volume in complete edition with annotated texts and commentaries.
Oates, J. O. (1966/7) "The Tragedy of Existence: Shakespeare's Troilus and Cressida" by Joyce Carol Oates. Originally published as two separate essays, in Philological Quarterly, Spring 1967, and Shakespeare Quarterly, Spring 1966. Available online (checked 17 August 2007).
Palmer, K. (ed.) (1982) Troilus and Cressida. (The Arden Shakespeare.) London: Methuen. Edition of the play as part of a respected series, with extensive notes, appendices and a 93-page introduction. References are to the 1997 printing by Thomas Nelson & Sons, London.
Rufini, S. (1989) "'To Make that Maxim Good': Dryden's Shakespeare", in: Boitani (1989: pp. 243–80). Discussion of Dryden's remodeling of Troilus.
Sommer, H. O. (ed.) (1894) The Recuyell of the Historyes of Troye: written in French by Raoul Lefèvre; translated and printed by William Caxton (about A.D. 1474); the first English printed book, now faithfully reproduced, with a critical introduction, index and glossary and eight pages in photographic facsimile. London: David Nutt. Edition of Caxton's translation of Lefevre with an introduction of 157 pages. Page references are to the AMS Press 1973 reprinting.
Sommerstein, A. H., Fitzpatrick, D. & Talby, T. (2007) Sophocles: Selected Fragmentary Plays. Oxford: Aris and Phillips. This is a product of the University of Nottingham's project on Sophocles' fragmentary plays. The book contains a 52-page chapter (pp. 196–247) on the Troilos, including the Greek text with translation and commentary of the few words and phrases known to come from the play. The introduction to this chapter includes approximately seven pages on the literary and artistic background on Troilus plus discussion and a putative reconstruction of the plot of the play itself. This, the chapter on the Polyxene, where Troilus is also discussed, and the general introduction to the book are all solely by Sommerstein and therefore he alone is referenced above.
Torti, A. (1989) "From 'History' to 'Tragedy': The Story of Troilus and Criseyde in Lydgate's Troy Book and Henryson's Testament of Cresseid", in: Boitani (1989: pp. 171–97). Examination of the two most important authors considering the love story between Chaucer and Shakespeare.
Windeatt, B. (1989) "Classical and Medieval Elements in Chaucer's Troilus", in: Boitani (1989: pp. 111–131).
Woodford, S. (1993) The Trojan War in Ancient Art. Ithaca: Cornell University Press. Contains approximately four illustrated pages (55–59) on Troilos in ancient art.
External links
List of pictures of Troilus at Perseus Project: Includes sections from the François Vase. The site holds an extensive classical collection including the texts of both primary and secondary sources on classical topics. Several of the texts mentioned here are available there in the original language and with English translation. A smaller Renaissance collection contains the text of the Shakespeare Troilus and Cressida.
Publicly accessible images of ambush and pursuit in the Beazley Archive: Many other images of Troilus on the site are accessible for academic or research purposes.
The Development of Attic Black-Figure by J. D. Beazley discusses several pictures of Troilos. Heavily illustrated in black and white.
Princes in Greek mythology
Children of Apollo
Demigods in classical mythology
Children of Priam
Trojans
Characters in Greek mythology
LGBT themes in Greek mythology
Troilus and Cressida
Characters in poems
Male Shakespearean characters
Medieval literature |
380927 | https://en.wikipedia.org/wiki/John%20P.%20Kennedy | John P. Kennedy | John Pendleton Kennedy (October 25, 1795 – August 18, 1870) was an American novelist, lawyer and Whig politician who served as United States Secretary of the Navy from July 26, 1852, to March 4, 1853, during the administration of President Millard Fillmore, and as a U.S. Representative from Maryland's 4th congressional district, during which he encouraged the United States government's study, adoption and implementation of the telegraph. A lawyer who became a lobbyist for and director of the Baltimore and Ohio Railroad, Kennedy also served several terms in the Maryland General Assembly, and became its Speaker in 1847.
Kennedy later helped lead the effort to end slavery in Maryland, which, as a non-Confederate state, was not affected by the Emancipation Proclamation and required a state law to free slaves within its borders and to outlaw the furtherance of the practice.
Kennedy also advocated religious tolerance, and furthered studies of Maryland history. He helped preserve or found Historic St. Mary's City (site of the colonial founding of Maryland and the birthplace of religious freedom in America), St. Mary's College of Maryland (then St. Mary's Female Seminary), the Peabody Library (now a part of Johns Hopkins University) and the Peabody Conservatory of Music (also now a part of Johns Hopkins).
Early life and education
John Pendleton Kennedy was born in Baltimore, Maryland, on October 25, 1795, the son of an Irish immigrant and merchant, John Kennedy. His mother, the former Nancy Pendleton, was descended from one of the First Families of Virginia. Poor investments resulted in his father declaring bankruptcy in 1809. John Pendleton Kennedy attended private schools while growing up and was relatively well-educated for the time. He graduated from Baltimore College in 1812. His brother Anthony Kennedy would become a U.S. Senator.
Kennedy's college studies were interrupted by the War of 1812. He joined the army and in 1814 marched with the United Company of the 5th Baltimore Light Dragoons, known as the "Baltimore 5th," a unit that included rich merchants, lawyers, and other professionals. Kennedy wrote humorous accounts of his military escapades, such as when he lost his boots and marched onward in dancing pumps. The war was, however, serious, and Kennedy participated in the disastrous Battle of Bladensburg as the British threatened the new national capital, Washington, D.C. Secretary of State James Monroe ordered the Baltimore 5th to move back from the left of the forward line to an exposed position a quarter-mile away. After the British forces crossed a bridge, the 5th moved forward. The fighting was intense: nearly every British officer among the advancing troops was hit, but then the British fired Congreve rockets. At first, the 5th stood firm, but when the two regiments to the right ran away, the 5th also broke. Kennedy threw away his musket and carried a wounded fellow soldier (James W. McCulloh) to safety. Kennedy later fought in the Battle of North Point, which saved Baltimore from a burning similar to that of the capital. Another wartime contact who proved crucial in Kennedy's later political and business career was George Peabody, who later helped finance the B&O Railroad and founded the House of Morgan, as well as the Peabody Institute.
Kennedy spent his summers in Martinsburg, Virginia, where he read law under the tutelage of his relative Judge Edmund Pendleton (descendant of the patriot Edmund Pendleton, who sat on the Virginia Court of Appeals). Kennedy would later often allude to genteel life on Southern plantations based on his youthful summers in Martinsburg. Later, Kennedy inherited some money from a rich Philadelphia uncle, and in 1829 married Elizabeth Gray, whose father Edward Gray was a wealthy mill-owner with a country house on the Patapsco River below Ellicott's Mills, and whose monetary generosity would allow Kennedy to effectively withdraw from his law practice for a decade to write.
Literary life
Although admitted to the bar in 1816, he was much more interested in literature and politics than law. He associated with the focal point of Baltimore's literary community, the Delphian Club. Kennedy's first literary attempt was a fortnightly periodical called the Red Book, published anonymously with his roommate Peter Hoffman Cruse from 1819 to 1820. In 1832 Kennedy published Swallow Barn, or A Sojourn in the Old Dominion, which would become his best-known work. Horse-Shoe Robinson, published in 1835, won a permanent place of respect in the history of American fiction.
Kennedy's friends and personal associates included George Henry Calvert, James Fenimore Cooper, Charles Dickens, Washington Irving, Edgar Allan Poe, William Gilmore Simms, and William Makepeace Thackeray.
Kennedy's journal entries dated September 1858 state that Thackeray asked him for assistance with a chapter in The Virginians; Kennedy then assisted him by contributing scenic written depictions to that chapter.
In October 1833, while sitting around a back-parlor table at the home of the noted Baltimore literary figure, civic leader and friend John H. B. Latrobe at 11 West Mulberry Street, across from the old Baltimore Cathedral in the Mount Vernon neighborhood, and sharing some spirits and genial conversation with another friend, James H. Miller, Kennedy joined the two men in judging the draft of "MS. Found in a Bottle", by the then-unknown aspiring writer Edgar Allan Poe, to be worthy of publication in the Baltimore Saturday Visiter because of its dark and macabre atmosphere. Later, in 1835, he helped introduce Poe to Thomas Willis White, editor of the Southern Literary Messenger.
While abroad, Kennedy became a friend of William Makepeace Thackeray and wrote or outlined the fourth chapter of the second volume of The Virginians, which accounts for the great accuracy of its scenic descriptions. Of his works, Horse-Shoe Robinson is the best and ranks high in antebellum fiction. Washington Irving read an advance copy of it and reported he was "so tickled with some parts of it" that he read it aloud to his friends. Kennedy sometimes wrote under the pen name 'Mark Littleton', especially in his political satires.
Lawyer and politician
Kennedy enjoyed politics more than law (although the Union Bank was a prime client), and left the Democratic Party when he realized that under President Andrew Jackson it came to oppose internal improvements. He thus became an active Whig like his father-in-law and favored Baltimore's commercial interests. He was appointed Secretary of the Legation in Chile on January 27, 1823, but did not proceed to his post, instead resigning on June 23 of the same year. He had been elected to the Maryland House of Delegates in 1820 and chaired its committee on internal improvements, championing the Chesapeake and Ohio Canal so vigorously (despite its failure to pay dividends) that he failed to win re-election after his 1823 vote for state support.
In 1838, Kennedy succeeded Isaac McKim in the U.S. House of Representatives, but was defeated in his bid for reelection in November of that year. Meanwhile, in 1835, Kennedy was among the 10 Baltimoreans who attended a railroad meeting in Brownsville, Pennsylvania, where he delivered a very well-received address urging completion of the B&O Railroad to the Ohio River valley (rather than to Pennsylvania canals, which fed Philadelphia rather than Baltimore). Kennedy was also on the 25-man committee that lobbied the Maryland legislature on the B&O's behalf and ultimately secured passage of the "Eight Million Dollar Bill" in 1836, which led to his becoming a B&O board member the following year (and remained such for many years). When the B&O chose a route westward through Virginia rather than the mountains near Hagerstown, Maryland in 1838, Kennedy was in the B&O's delegation to lobby Virginia's legislature (together with B&O President Louis McLane and well-connected Maryland delegate John Spear Nicholas, son of Judge Philip Norborne Nicholas, a leader of the Richmond Junto) that secured passage of a law authorizing a $1,058,000 subscription (40% of the estimated cost for building the B&O through the state). However, the B&O's shareholders would reject the necessary Wheeling subscription because of its onerous terms, and Kennedy would again take up his pen in the B&O's defense against criticism by Maryland Governor William Grason.
Kennedy won re-election to Congress in 1840 and 1842 but, because of his strong opposition to the annexation of Texas, was defeated in 1844. His influence in Congress was largely responsible for the appropriation of $30,000 to test Samuel Morse's telegraph. In 1847, Kennedy became speaker of the Maryland House of Delegates, and used his influence to help the B&O, although by the late 1840s it was caught in a three-way controversy with the states of Pennsylvania and Virginia as to whether the B&O's terminus should be Wheeling, Parkersburg or Pittsburgh. After an acrimonious shareholders meeting on August 25, 1847, the B&O affirmed Wheeling as its terminus, and finally completed track to the city in 1853.
Meanwhile, President Millard Fillmore appointed Kennedy as Secretary of the Navy in July 1852. During Kennedy's tenure in office, the Navy organized four important naval expeditions, including those that sent Commodore Matthew C. Perry to Japan and Lieutenants William Lewis Herndon and Lardner Gibbon to explore the Amazon.
Kennedy was proposed as a vice-presidential running mate to Abraham Lincoln when Lincoln first sought the Presidency of the United States, although Kennedy was ultimately not selected. Kennedy became a forceful supporter of the Union during the Civil War, and he supported the Emancipation Proclamation. Later, since the proclamation did not free Maryland slaves because the state was not in rebellion, he also used his influence to push for legislation in Maryland that ultimately ended slavery there in 1864.
In 1853, he was elected as a member to the American Philosophical Society.
Position on religious tolerance
Kennedy called for erecting a monument to the founding of the state of Maryland and to the birth of religious freedom in its original colonial settlement of St. Mary's City, Maryland. Three local citizens then expanded on his idea and sought to start a school that would become a "Living Monument" to religious freedom. The school was founded as such a monument in 1840 by order of the state legislature. Its original name was St. Mary's Female Seminary, but it would later be known as St. Mary's College of Maryland.
Earlier, when he was in the Maryland state legislature, Kennedy was instrumental in repealing a law that discriminated against Jewish people in court and trial procedures in Maryland. Jewish people were a tiny population in the state at the time and Kennedy was not Jewish, so there was no political or personal advantage to his position. Kennedy, an Episcopalian, also helped to lead private charitable efforts to aid Irish Catholic immigrants, who were experiencing a great deal of discrimination in the state at the time. However, beginning in the 1850s he did advocate setting limits on overall foreign immigration into Maryland, stating that he felt the sheer number of new immigrants might overwhelm the economy.
Opposition to slavery
Kennedy's opposition to slavery was first publicly expressed in his writings, and later in his life as a politician, through his speeches and political initiatives. That opposition can be traced back through many decades of his life, but its depth evolved from milder and more understated in the beginning to stronger, more vocal and more morally based by the time of the Emancipation Proclamation and the subsequent state-level effort to end slavery in Maryland, which was necessary because the state, not being in the Confederacy, was not included in the Proclamation.
Kennedy once wrote that witnessing a speech by Frederick Douglass had opened his eyes more fully to the "curse" of slavery, as Kennedy called it by 1863.
Kennedy's 1832 novel Swallow Barn is critical of slavery but also idealizes plantation life. The original manuscript shows that some of Kennedy's initial descriptions of plantation life were much more critical of slavery; he crossed those passages out before the book went to the printer, possibly because he was afraid of being too openly critical of slavery while living in Maryland, a slave state.
Historians are not in consensus as to whether his earlier softer opposition to slavery was a way of preventing violent attacks against himself, since he lived in a border state where slavery was still practiced and still widely supported. Outright abolitionism at that time would have been an unpopular and potentially dangerous position in pro-slavery Maryland. Other historians maintain that his views on slavery simply evolved from weaker opposition to stronger opposition.
The novel, although more muted in its criticism of slavery by the time of its publication and also expressing some idyllic stereotypes about plantation life, leads to the prediction that slavery would bring the Southern states to ruin. Swallow Barn was published in 1832, 29 years before the start of the Civil War and long before anyone else was known to predict that the Southern and Northern states were headed for armed conflict.
Civil War
Just prior to the Civil War, Kennedy wrote that abolishing slavery immediately was not worth full-scale civil war and that slavery should instead be ended in stages to avoid war. He noted that civil wars were historically the most bloody and devastating kinds of warfare and suggested a negotiated, phased approach to ending slavery to prevent war between the sections.
But after the war broke out, he returned to a position of outright opposition to slavery and began to call for "immediate emancipation" of slaves. His demands for the end of slavery grew stronger as the war progressed. By the height of the Civil War, when Kennedy's opposition to slavery had become much stronger, he signed his name to a key political pamphlet in Maryland opposing slavery and calling for its immediate end.
There is consensus among historians that Kennedy was critical of slavery to some degree for decades, strongly opposed to slavery by the height of the Civil War, and strongly opposed the Confederacy. In Maryland state politics and charity leadership, Kennedy was also known to help other minority groups, notably Jews and Irish Catholics. When the Emancipation Proclamation did not end slavery in Maryland, Kennedy played a key leadership role in campaigning for the end of slavery in the state.
Because Maryland was not in the Confederacy, the Emancipation Proclamation did not apply to the state and slavery continued there. Since there was no active insurrection in Maryland, President Lincoln did not feel constitutionally authorized to extend the Emancipation Proclamation to Maryland. Only the state itself could end slavery at this point, and this was not a certain outcome as Maryland was a slave state with strong Confederate sympathies. John Pendleton Kennedy and other antislavery leaders, therefore, organized a political gathering. On December 16, 1863, a special meeting of the Central Committee of the Union Party of Maryland was called on the issue of slavery in the state (the Union Party was a powerful political party in the state at the time).
At the meeting, Thomas Swann, a state politician, put forward a motion calling for the party to work for "Immediate emancipation (of all slaves) in Maryland". John Pendleton Kennedy spoke next and seconded the motion. Since Kennedy was the former speaker of the Maryland General Assembly, as well as a respected author, his support carried enormous weight in the party. A vote was taken and the motion passed. However, the people of Maryland as a whole were by then divided on the issue, and so twelve months of campaigning and lobbying on the matter of slavery continued throughout the state. During this effort, Kennedy signed his name to a widely circulated party pamphlet calling for "immediate emancipation" of all slaves. On November 1, 1864, after a year-long debate, a state referendum was put forth on the slavery question. The citizens of Maryland voted to abolish slavery, though only by a 1,000-vote margin, as the southern part of the state remained heavily dependent on the slave economy.
Work with cultural and educational institutions
Kennedy, through his close association with George Peabody, was instrumental in the establishment of the Peabody Institute, which later evolved and split into the Peabody Library and the Peabody Conservatory of Music, both now part of Johns Hopkins University. He served on the institute's first board of trustees, wrote the first outline of its mission, and recorded the minutes of the board's earliest meetings. Kennedy is known to have worked for years to help lay the groundwork for these institutions.
Kennedy also played a key role in the establishment of St. Mary's Female Seminary, now known as St. Mary's College of Maryland, the state's public honors college. With Kennedy's political support and his reputation as a respected Maryland author behind it, the school was established as the state's "Living Monument to religious freedom", memorializing its location on the site of Maryland's first colony, considered the birthplace of religious freedom in America. The school continues to hold this designation to this day. The original concept of commemorating religious freedom was Kennedy's idea.
Kennedy provided the primary initial impetus, and was pivotal, in gaining early state recognition of Maryland's responsibility for protecting, studying and memorializing St. Mary's City, Maryland (the then-abandoned site of Maryland's first colony and capital, and the birthplace of religious freedom in America) as a key state historic area, placing historical research and preservation mandates under the original auspices of the new state-sponsored St. Mary's Female Seminary, located on the same site. This planted the early seeds of what would eventually become Historic St. Mary's City, a state-run archeological research and historic interpretation area that exists today on the site of Maryland's original colonial settlement.
Historic St. Mary's City also co-runs (jointly with St. Mary's College of Maryland) the now internationally recognized Historical Archaeology Field School, a descendant of Kennedy's idea that a school should be involved in researching and preserving the remains of colonial St. Mary's City.
During his term as U.S. Secretary of the Navy, Kennedy made the request for the establishment of the United States Naval Academy Band in Annapolis in 1852. The band continues to be active to this day.
Roles in science and technology
Federal study and acceptance of the telegraph
While serving in the United States Congress, John Pendleton Kennedy was the primary and decisive force in securing $30,000 (an enormous sum at the time) for testing Samuel Morse's telegraph, the first electrical means of long-distance communication in human history. The government tests validated Morse's invention and led to federal adoption of the technology and the subsequent establishment of the American telegraph network, which revolutionized communications and the economic development of the United States. Federal acceptance of the telegraph also had a major impact on Abraham Lincoln's management of the Civil War.
Commissioner of the 1867 Paris Exposition
Kennedy was a commissioner of the 1867 Paris Exposition, an international science, technology and arts fair held in Paris, France. The fair drew participation from 42 nations and more than 50,000 exhibits, and was the second World's Fair.
Retirement from public office
Kennedy retired from elected and appointed offices in March 1853 when President Fillmore left office, but he remained very active in both Federal and state of Maryland politics, supporting Fillmore in 1856, when Fillmore won Maryland's electoral votes and Kennedy's brother Anthony won a U.S. Senate seat. His name was mentioned as one of the vice-presidential prospects on the Republican ticket alongside Abraham Lincoln in 1860 (which would have meant that Abraham Lincoln would be on the same ticket as a man named "John Kennedy"). Instead, Kennedy was the Maryland chairman of the Constitutional Union Party, which nominated John Bell and Edward Everett for the Presidency. Kennedy played an instrumental leadership role in the Union Party's successful effort to end slavery in Maryland in 1864. This had to be done at the state level because the Emancipation Proclamation did not apply to the state. At the end of the American Civil War – during which he forcefully supported the Union – he advocated amnesty for former rebels.
During this time, he had a summer home overlooking the south branch of the Patapsco River, upstream near Orange Grove, Avalon and Ilchester, off the main western line of the Baltimore and Ohio Railroad, in what is now Patapsco Valley State Park; the area was devastated by a disastrous flood in 1868.
Kennedy died in Newport, Rhode Island, on August 18, 1870, and is buried in Green Mount Cemetery in Baltimore, Maryland.
Legacy
In his will, Kennedy wrote the following:
It is my wish that the manuscript volumes containing my journals, my note or common-place books, and the several volumes of my own letters in press copy, as also all my other letters, such as may possess any interest or value (which I desire to be bound in volumes) that are now in loose sheets, shall be returned to my executors, who are requested to have the same packed away in a strong walnut box, closed and locked, and then delivered to the Peabody Institute, to be preserved by them unopened until the year 1900, when the same shall become the property of the Institute, to be kept among its books and records.
Today there are two large special collections of his papers, manuscripts and correspondence; one remains at the Peabody Institute in Baltimore and the other is at the Enoch Pratt Free Library in Baltimore. There are also a number of libraries from Virginia to Boston that have smaller collections of his correspondence (both private and official letters).
The naval ships USS John P. Kennedy and USS Kennedy (DD-306) were named for him.
Books and essays
The Red Book (1818–19, two volumes).
Swallow Barn: Or, A Sojourn in the Old Dominion (1832) [under the pen-name Mark Littleton].
Horse-Shoe Robinson: A Tale of the Tory Ascendency in South Carolina, in 1780 (1835).
Rob of the Bowl: A Legend of St. Inigoe's (1838) [under the pen-name Mark Littleton].
Annals of Quodlibet [under the pen-name Solomon Secondthoughts] (1840).
Defence of the Whigs [under the pen-name A Member of the Twenty-seventh Congress] (1844).
Memoirs of the Life of William Wirt (1849, two volumes).
The Great Drama: An Appeal to Maryland, Baltimore, reprinted from the Washington National Intelligencer of May 9, 1861.
The Border States: Their Power and Duty in the Present Disordered Condition of the Country (1861).
Autograph Leaves of Our Country's Authors [anthology, co-edited by John P. Kennedy and Alexander Bliss] (1864)
Mr. Paul Ambrose's Letters on the Rebellion [under the pen-name Paul Ambrose] (1865).
Collected Works of John Pendleton Kennedy (1870–72, ten volumes).
At Home and Abroad: A Series of Essays: With a Journal in Europe in 1867–68 (1872, essays).
Further reading
Berton, Pierre (1981), Flames across the Border: The Canadian-American Tragedy, 1813-1814, Boston: Atlantic-Little, Brown.
Bohner, Charles H. (1961), John Pendleton Kennedy, Gentleman from Baltimore, Baltimore: Johns Hopkins.
Friedel, Frank (1967), Union Pamphlets of the Civil War, [includes Kennedy's Great Drama], A John Harvard Library Book, Cambridge, MA: Harvard.
Gwathmey, Edward Moseley (1931), John Pendleton Kennedy, New York: Thomas Nelson.
Hare, John L. (2002), Will the Circle be Unbroken?: Family and Sectionalism in the Virginia Novels of Kennedy, Caruthers, and Tucker, 1830—1845, New York: Routledge.
Marine, William Matthew (1913), The British Invasion of Maryland, 1812-1815, Baltimore: Society of the War of 1812 in Maryland.
Ridgely, Joseph Vincent (1966), John Pendleton Kennedy, New York: Twayne.
Tuckerman, Henry Theodore (1871), The Life of John Pendleton Kennedy, Collected Works of Henry Theodore Tuckerman, Volume 10, New York: Putnam.
Black, Andrew R. (2016), John Pendleton Kennedy: Early American Novelist, Whig Statesman and Ardent Nationalist, Louisiana State University Press.
See also
History of slavery in Maryland
International Exposition (1867), the second World's Fair, held in Paris, of which Kennedy was a commissioner
References
External links
Biography at the Naval Historical Center
John Kennedy at Find A Grave
Swallow Barn, vol. 1
Swallow Barn, vol. 2
Horse-Shoe Robinson
1795 births
1870 deaths
19th-century American novelists
African-American history of Maryland
American male novelists
19th-century American memoirists
American military personnel of the War of 1812
American people of Irish descent
Burials at Green Mount Cemetery
Fillmore administration cabinet members
History of Baltimore
History of slavery in Maryland
Johns Hopkins University
Maryland lawyers
Maryland Whigs
Members of the United States House of Representatives from Maryland
Politicians from Baltimore
Speakers of the Maryland House of Delegates
St. Mary's City, Maryland
St. Mary's College of Maryland
St. Mary's County, Maryland
United States Secretaries of the Navy
Whig Party members of the United States House of Representatives
19th-century American politicians
Writers from Baltimore
American abolitionists
Novelists from Maryland
American male non-fiction writers |
1601276 | https://en.wikipedia.org/wiki/Computer%20emergency%20response%20team | Computer emergency response team | A computer emergency response team (CERT) is an expert group that handles computer security incidents. Alternative names for such groups include computer emergency readiness team and computer security incident response team (CSIRT). A more modern representation of the CSIRT acronym is Cyber Security Incident Response Team.
History
The name "Computer Emergency Response Team" was first used in 1988 by the CERT Coordination Center (CERT-CC) at Carnegie Mellon University (CMU). The term CERT is registered as a trade and service mark by CMU in multiple countries worldwide. CMU encourages the use of Computer Security Incident Response Team (CSIRT) as a generic term for the handling of computer security incidents. CMU licenses the CERT mark to various organizations that are performing the activities of a CSIRT.
The history of CERTs, and of CSIRTs, is linked to the existence of malware, especially computer worms and viruses. Whenever a new technology arrives, its misuse is not long in following. The first worm in the IBM VNET was covered up. Shortly afterwards, on 3 November 1988, the so-called Morris Worm paralysed a good percentage of the Internet. This led to the formation of the first computer emergency response team at Carnegie Mellon University under U.S. Government contract. With the massive growth in the use of information and communications technologies over the subsequent years, the generic term 'CSIRT' now refers to an essential part of most large organisations' structures. In many organisations the CSIRT evolves into an information security operations center.
Global associations and teams
National or economic region teams
See also
Computer security
Digital humanitarianism
Emergency prevention
Proactive cyber defence
White hat (computer security)
Critical infrastructure protection
Incident management
Information security
Responsible disclosure
Vulnerability (computing)
References
External links
CERT-CC website
FIRST website
Carnegie Mellon University
Emergency services |
84404 | https://en.wikipedia.org/wiki/Xenix | Xenix | Xenix is a discontinued version of the Unix operating system for various microcomputer platforms, licensed by Microsoft from AT&T Corporation in the late 1970s. The Santa Cruz Operation (SCO) later acquired exclusive rights to the software, and eventually replaced it with SCO UNIX (now known as SCO OpenServer).
In the mid-to-late 1980s, Xenix was the most common Unix variant, measured according to the number of machines on which it was installed.
Microsoft chairman Bill Gates said at Unix Expo in 1996 that, for a long time, Microsoft had the highest-volume AT&T Unix license.
History
Bell Labs, the developer of UNIX, was part of the regulated Bell System and could not sell UNIX directly to most end users (academic and research institutions excepted); it could, however, license it to software vendors who would then resell it to end users (or their own resellers), combined with their own added features. Microsoft, which expected that UNIX would be its operating system of the future when personal computers became powerful enough, purchased a license for Version 7 UNIX from AT&T in 1978, and announced on August 25, 1980, that it would make it available for the 16-bit microcomputer market. Because Microsoft was not able to license the "UNIX" name itself, the company gave it an original name.
Microsoft called XENIX "a universal operating environment". It did not sell XENIX directly to end users, but licensed the software to OEMs such as IBM, Intel, Management Systems Development, Tandy, Altos, SCO, and Siemens (SINIX) which then ported it to their own proprietary computer architectures.
In 1981, Microsoft said the first version of XENIX was "very close to the original UNIX version 7 source" on the PDP-11, and later versions were to incorporate its own fixes and improvements. The company stated that it intended to port the operating system to the Zilog Z8000 series, Digital LSI-11, Intel 8086 and 80286, Motorola 68000, and possibly "numerous other processors", and provide Microsoft's "full line of system software products", including BASIC and other languages. The first port was for the Z8001 16-bit processor: the first customer ship was January 1981 for Central Data Corporation of Illinois, followed in March 1981 by Paradyne Corporation's Z8001 product.
The first 8086 port was for the Altos Computer Systems' non-PC-compatible 8600-series computers (first customer ship date Q1 1982).
Intel sold complete computers with XENIX under their Intel System 86 brand (with specific models such as 86/330 or 86/380X); they also offered the individual boards that made these computers under their iSBC brand. This included processor boards like iSBC 86/12 and also MMU boards such as the iSBC 309. The first Intel XENIX systems shipped in July 1982. Tandy more than doubled the XENIX installed base when it made TRS-XENIX the default operating system for its TRS-80 Model 16 68000-based computer in early 1983, and was the largest UNIX vendor in 1984. Seattle Computer Products also made (PC-incompatible) 8086 computers bundled with XENIX, like their Gazelle II, which used the S-100 bus and was available in late 1983 or early 1984. There was also a port for IBM System 9000.
SCO had initially worked on its own PDP-11 port of V7, called Dynix, but then struck an agreement with Microsoft for joint development and technology exchange on XENIX in 1982. Microsoft and SCO then further engaged Human Computing Resources Corporation (HCR) in Canada, and a software products group within Logica plc in the United Kingdom, as part of making further improvements to XENIX and porting XENIX to other platforms. In doing so, Microsoft gave HCR and Logica the rights to do XENIX ports and to license XENIX binary distributions in those territories.
In 1984, a port to the 68000-based Apple Lisa 2 was jointly developed by SCO and Microsoft and it was the first shrink-wrapped binary product sold by SCO. The Multiplan spreadsheet was released for it.
In its 1983 OEM directory, Microsoft said the difficulty in porting to the various 8086 and Z8000-based machines had been the lack of a standardized memory management unit and protection facilities. Hardware manufacturers compensated by designing their own hardware, but the ensuing complexity made it "extremely difficult if not impossible for the very small manufacturer to develop a computer capable of supporting a system such as XENIX from scratch," and "the XENIX kernel must be custom-tailored to each new hardware environment."
A generally available port to the unmapped Intel 8086/8088 architecture was done by The Santa Cruz Operation around 1983. SCO XENIX for the PC XT shipped sometime in 1984 and contained some enhancements from 4.2BSD; it also supported Micnet local area networking.
The later 286 version of XENIX leveraged the integrated MMU present on this chip, by running in 286 protected mode. The 286 XENIX was accompanied by new hardware from XENIX OEMs. For example, the Sperry PC/IT, an IBM PC AT clone, was advertised as capable of supporting eight simultaneous dumb terminal users under this version.
While XENIX 2.0 was still based on Version 7 UNIX, version 3.0 was upgraded to a UNIX System III code base; a 1984 Intel manual for XENIX 286 noted that the XENIX kernel comprised about 10,000 lines at this time. It was followed by a System V R2 code base in XENIX 5.0 (a.k.a. XENIX System V).
"Microsoft hopes that XENIX will become the preferred choice for software production and exchange", the company stated in 1981. Microsoft referred to its own MS-DOS as its "single-user, single-tasking operating system", and advised customers that wanted multiuser or multitasking support to buy XENIX. It planned to over time improve MS-DOS so it would be almost indistinguishable from single-user XENIX, or XEDOS, which would also run on the 68000, Z8000, and LSI-11; they would be upwardly compatible with XENIX, which BYTE in 1983 described as "the multi-user MS-DOS of the future". Microsoft's Chris Larson described MS-DOS 2.0's XENIX compatibility as "the second most important feature". His company advertised DOS and XENIX together, listing the shared features of its "single-user OS" and "the multi-user, multi-tasking, UNIX-derived operating system", and promising easy porting between them.
After the breakup of the Bell System, however, AT&T started selling System V itself. Microsoft, believing that it could not compete with UNIX's own developer, decided to abandon XENIX. The decision was not immediately made public, which led to the coining of the term vaporware. Microsoft agreed with IBM to develop OS/2, and the XENIX team (together with the best MS-DOS developers) was assigned to that project. In 1987, Microsoft transferred ownership of XENIX to SCO in an agreement that left Microsoft owning slightly less than 20% of SCO (an amount small enough that neither company had to disclose it in the event of an SCO IPO). SCO went on to acquire both of the other companies that held XENIX rights: Logica's software products group in 1986 and HCR in 1990. When Microsoft eventually lost interest in OS/2 as well, the company based its further high-end strategy on Windows NT.
In 1987, SCO ported XENIX to the 386 processor, a 32-bit chip, after securing knowledge from Microsoft insiders that Microsoft was no longer developing XENIX. XENIX System V Release 2.3.1 introduced support for i386, SCSI and TCP/IP. SCO's XENIX System V/386 was the first 32-bit operating system available on the market for the x86 CPU architecture.
Microsoft continued to use XENIX internally, submitting a patch to support functionality in UNIX to AT&T in 1987, which trickled down to the code base of both XENIX and SCO UNIX. Microsoft is said to have used XENIX on Sun workstations and VAX minicomputers extensively within their company as late as 1988. All internal Microsoft email transport was done on XENIX-based 68000 systems until 1995–1996, when the company moved to its own Exchange Server product.
SCO released its SCO UNIX as a higher-end product, based on System V R3 and offering a number of technical advances over XENIX; XENIX remained in the product line. In the meantime, AT&T and Sun Microsystems completed the merger of XENIX, BSD, SunOS and System V R3 into System V R4. The last version of SCO XENIX/386 itself was System V R2.3.4, released in 1991.
Features
Aside from its AT&T UNIX base, XENIX incorporated elements from BSD, notably the vi text editor and its supporting libraries (termcap and curses). Its kernel featured some original extensions by Microsoft, notably file locking and semaphores, while to the userland Microsoft added a "visual shell" for menu-driven operation instead of the traditional UNIX shell. A limited form of local networking over serial lines (RS-232 ports) was possible through the "micnet" software, which supported file transfer and electronic mail, although UUCP was still used for networking via modems.
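The kernel extensions Microsoft added were exposed to C programs as ordinary system calls. The minimal sketch below illustrates, in rough outline, how XENIX's record locking (locking()) and its native semaphore calls (creatsem(), waitsem(), sigsem()) were typically used; header names, constants and error handling varied between XENIX releases and OEM ports, so the details shown here are illustrative assumptions rather than a verbatim example from any XENIX manual.

    /* Minimal sketch only: XENIX-style record locking and semaphores.
       Header names and constants are assumptions; real programs also
       check every return value for errors. */
    #include <fcntl.h>        /* open() */
    #include <sys/locking.h>  /* locking(), LK_LOCK, LK_UNLCK (assumed header) */

    int main(void)
    {
        /* Lock, update and unlock the first 512 bytes of a data file. */
        int fd = open("ledger.dat", O_RDWR);
        locking(fd, LK_LOCK, 512L);
        /* ... update the record ... */
        locking(fd, LK_UNLCK, 512L);

        /* Serialize access to a shared resource with a named semaphore. */
        int sem = creatsem("/tmp/printer.sem", 0644);
        waitsem(sem);      /* enter critical section */
        /* ... exclusive use of the resource ... */
        sigsem(sem);       /* leave critical section */

        return 0;
    }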
OEMs often added further modifications to the XENIX system.
Trusted XENIX
Trusted XENIX was a variant initially developed by IBM, under the name Secure XENIX; later versions, under the Trusted XENIX name, were developed by Trusted Information Systems. It incorporated the Bell-LaPadula model of multilevel security, and had a multilevel secure interface for the STU-III secure communications device (that is, an STU-III connection would be made available only to those applications running at the same privilege level as the key loaded in the STU-III). It was evaluated by formal methods and achieved a B2 security rating under the DoD's Trusted Computer System Evaluation Criteria—the second highest rating ever achieved by an evaluated operating system. Version 2.0 was released in January 1991, version 3.0 in April 1992, and version 4.0 in September 1993. It was still in use as late as 1995.
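The Bell-LaPadula model that Trusted XENIX incorporated can be summarized by two rules: a subject may not read data classified above its own level ("no read up") and may not write data to a level below its own ("no write down"). The sketch below illustrates the model itself in simplified form, ignoring compartments and categories; it is not code from Trusted XENIX, and the level names are assumptions chosen only for illustration.

    /* Simplified Bell-LaPadula access checks: an illustration of the model,
       not Trusted XENIX source code. Compartments/categories are omitted. */
    enum level { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

    /* Simple security property: no read up. */
    int may_read(enum level subject, enum level object)
    {
        return subject >= object;
    }

    /* Star (*) property: no write down. */
    int may_write(enum level subject, enum level object)
    {
        return subject <= object;
    }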
See also
AT&T 6300 Plus
PC/IX
Venix
Concurrent DOS
Notes
References
Further reading
Review of the beta SCO XENIX on an XT.
Covers and compares PC/IX, XENIX and VENIX.
External links
XENIX timeline
XENIX documentation and books for Download
XENIX man pages
Intel Multibus System 320 for XENIX (or iRMX86)
Welcome to comp.unix.xenix.sco (v1.64)
https://groups.google.com/d/msg/comp.sys.tandy/UbeLIMssHsE/9isYZrRW-LgJ
1980 software
Discontinued Microsoft operating systems
Lightweight Unix-like systems
Microsoft operating systems
UNIX System V
Unix variants
Discontinued operating systems |
255295 | https://en.wikipedia.org/wiki/Finale%20%28scorewriter%29 | Finale (scorewriter) | Finale is a proprietary music notation program developed and released by MakeMusic for Microsoft Windows and macOS. Finale has been regarded as one of the industry standards for music notation software.
Finale is used by composers, songwriters, transcribers, and arrangers for creating sheet music, including the score for an entire ensemble (e.g., orchestra, concert band, big band, educational ensembles, etc.) and parts for the individual musicians.
MakeMusic also offers several less expensive versions of Finale, which do not contain all of the main program's features. These include PrintMusic and a freeware program, Finale Notepad, which allows only rudimentary editing and playback. Discontinued versions include Finale Guitar, Notepad Plus, Allegro, SongWriter, and the free Finale Reader.
Appearance
The default Untitled document is a 31-measure piece for a single treble clef instrument. A Setup Wizard, an alternative method of starting a project, consists of a sequence of dialogs allowing the user to specify the instrumentation, time signature, key signature, pick-up measure, title, composer, and some aspects of score and page layout. Finale's current default music notation font is Maestro.
Functionality
Finale's tools are organized into multiple hierarchical palettes, and the corresponding tool must be selected to add or edit any particular class of score element (e.g., the Smart Shape tool to generate and edit trill lines and dynamics "hairpins", so named because the symbols resemble hair pins; the Staff tool to add and edit the parameters of individual staves). Alongside these tools, additional controls are available to view or hide up to four superimposed layers of music that can be entered onto any particular staff, for purposes of organizing multiple contrapuntal voices on the same staff. Several of Finale's tools provide an associated menu just to the left of the Help menu, available only when that particular tool is selected. In this respect, the operation of Finale bears at least some surface similarities to Adobe Photoshop.
On the screen, Finale provides the ability to color code several elements of the score as a visual aid; on the print-out all score elements are black (unless color print-out is explicitly chosen). With the corresponding tool selected, fine adjustment of each set of objects in a score is possible either by clicking and dragging or by entering measurements in a dialog box. A more generalized selection tool is also available to select large measure regions for editing key and time signatures, or transposing, among other uses. This tool also provides the ability to reposition several classes of score object directly, and more recent versions of the software have implemented extensive contextual menuing via this tool.
Finale automatically manages many of the basic rules of harmony and music notation, such as correct beaming, stem direction, vertical alignment of multiple rhythmic values, and established rules for positioning noteheads on chords. In other situations, without careful advance user customization, the program makes what can be described as good guesses, especially in the area of enharmonic spelling of newly entered data generated from a MIDI keyboard, while respecting the current key signature. It is "smart" enough to spell an enharmonic pitch when secondary dominants are used in a piece.
For the majority of Western tonal music, Finale chooses the correct spelling for chords of the tonic and dominant keys, but when the music wanders to tonal regions further away from the tonic, Finale tends to make enharmonic "spelling" mistakes by treating chords as if they belonged to the tonic key in some way. When using a nonstandard key, experts have recommended that the user "assign a spelling for each pitch in the chromatic scale" using a dialog box available from the Preferences menu.
Version history
The lead programmer for Finale version 1.0 in 1988 was Phil Farrand, known in some circles as an author of Nitpicker's Guides for Star Trek and The X-Files. He wrote the original version of the software for Coda Music Software, which was later sold to Net4Music and then became MakeMusic. After Finale version 3.7, Finale's marketers made the switch to years as identifiers for each new release, starting with Finale 97. Those early versions of Finale used a file format called Enigma Transportable File with extension ETF.
Finale 2004, released in early 2004, was the first release to run natively on Macintosh computers running Mac OS X Panther (v10.3). This was considered a late release by MakeMusic, and full support for the features of Mac OS X was limited at first. More comprehensive support was brought "on-line" through maintenance releases going forward into 2004. Finale 2004 also continued to support PowerPC Macs running Mac OS 9. This release shortened the development cycle for Finale 2005, which was released the following August. While the number of new features in Finale '05 was necessarily limited, this was the first release to have both Windows and Mac versions on the same distribution CD.
The most advertised new feature of Finale 2006 (released in the summer of 2005) was the Garritan Personal Orchestra, an integrated sound library with upgradeable selections, offering more lifelike playback than the SmartMusic SoftSynth (which is still included in the program). A limited-functionality music-scanning module, SmartScore Lite, was also included. Along with Page View and Scroll View, the 2006 release added StudioView, a display mode which is similar to Scroll View with the addition of a sequencer interface. This feature offers a multi-track setting for creating, evaluating, and experimenting with different musical ideas. In StudioView, an additional staff appears above the notation, called TempoTap, allowing for complete control over tempo changes like rubati, accelerandi, and ritardandi.
A key new feature of the Finale 2007 release was an integrated "linked" score and part management system. A properly-set-up "full score for extraction" could now contain all the data and formatting necessary to generate a full set of linked ensemble parts, ensconced within a single Finale master document. Limitations on the scope of format and layout control between parts and conductor score (including measure numbers and staff system breaks) suggested that this new feature was targeted to media production work, where quick turnaround and accuracy is a crucial factor, rather than publishing, though publishers still may use some aspects of linked parts to improve the part creation process. The 2007 release was a Universal binary, and runs natively on both PowerPC and Intel-based Macs.
Finale 2008 was the first version to come out with full Vista (32-bit only) support. It also changed the way several editing modes are accessed, by introducing the multi-purpose “selection tool” described above. The 2008 release offers the importation and/or recording of synchronized real-time audio as an additional single track in a document.
Finale 2009 was identified as the 20th Anniversary edition. It offers many fundamental workflow changes not seen since the program's inception, such as the organization of expressions by category. Also notable is the re-designed Page View, which enables the viewing and editing of multiple pages within the same document window: these pages may either be arranged in a horizontal line or tiled vertically within a window. Finale 2009 includes Garritan's new Aria Player Engine, and has new samples for this. The older Kontakt 2 Player is still supported, and the samples load under this also.
Finale 2010 was released in June 2009 with improvements to percussion notation and chord symbols. This version also introduced measure number enhancements, auto-ordered rehearsal marks, support for additional graphic formats, and a new Broadway Copyist font option resembling the look of handwritten scores.
Finale 2011 was released in June 2010 with additional Garritan Sounds, Alpha Notes (notation with note names inside), a new lyric entry window and other lyric enhancements, and, most notably, a reworking of staff, system, and page layout handling. In Finale versions before 2011, systems could be optimized to remove empty staves from them and also permit staves in a system to be positioned independently from other systems. Eliminating empty staves from systems with many staves (sometimes called French scoring) is a common notation practice used to economize (or optimize) the use of the page. Users needed to take caution while optimizing, because if measures with notes were moved into an optimized system, or notes were added to staves while viewing the score in Scroll View that had been optimized out, they could be omitted in the printed score. The recommended solution was to always optimize as the last step in the score editing process, immediately before printing. Finale resolved this with a number of solutions in Finale 2011, including the new "Hide Empty Staves" command under the Staff menu, which hides all empty staves in systems. If notes are added to the system, the staff reappears automatically. (The capability of intentionally hiding staves containing notes is still available using a Staff Style). Also, any staff or staves can be positioned in systems independently (based on the selection). These improvements resolved some of the longstanding frustrations novice and advanced users could encounter when working with multi-staff scores. Other improvements to this Finale version include easier capo chords and a new Aria Player.
Finale 2012 was released in October 2011 with new functions such as Finale's ScoreManager, Unicode text support, creation of PDF files, an updated Setup Wizard, improved sound management, and more built-in Garritan sounds.
Finale 2014 was released in November 2013. As with all previous releases, a new file format was introduced, which is incompatible with older versions of Finale; however, this time easier file exchange with future versions of Finale was promised. Finale 2014's new features include a rewritten file format for forward and backward file compatibility, improved Apple OS X support, a new audio engine, additional Garritan sounds, and a new user interface. Finale 2014d was the last update in the 2014 series.
An updated version, Finale 2014.5 fixes several problems.
Finale Version 25 was released on August 16, 2016. The Finale Blog says that "highlights of the new release include 64-bit support, additional Garritan sounds [100+], transposed instrument entry, ReWire compatibility, significant streamlining, and more." New features include ReWire support, so that Finale can be used simultaneously and in sync with digital audio software such as Logic, Pro Tools, and Digital Performer, and the ability to use 64-bit sound libraries directly in Finale without third-party software. The new "Aria Player" speeds up and simplifies the choosing of Garritan instruments, and the overall workflow was simplified. Beginning with this version, the user manual is found entirely online. Band-in-a-Box and a couple of other plug-ins were removed, as was the ability to import scanned documents. Several other features were added.
Finale Version 26 was released on October 10, 2018. This release includes new features such as automatically stacking articulations, automated slur collision avoidance, expedited processes for entering chord symbols and expressions, and additional templates.
Finale Version 27 was released on June 15, 2021. This release includes new features like interactive music sharing functionality, Standard Music Font Layout (SMuFL) support, an improved instrument list, MusicXML 4.0, and numerous bug fixes (such as unexpected installer behavior, unusable display scaling in Windows, crashes from macOS quirks, and printing problems).
Abilities
Finale 2007 introduced linked parts, which allow ensemble parts to remain linked to the master score, so that changes to the master score will be instantly reflected in the parts.
Finale can notate anything from a textbook chorale to a cut-out score including new symbols invented by the composer. It is also capable of working with guitar tablature and includes a jazz font similar to that used in the Real Book. Nearly all score elements can be positioned or adjusted, either by dragging (with the appropriate tool selected) or by using dialog boxes with measurements in inches, centimeters or picas.
Music can be entered in a variety of ways: using the computer keyboard alone in real time or via a command line window; using user-determined combinations of mouse clicks, computer keyboard, and MIDI piano keyboard; or by MIDI keyboard alone. It also includes a function for optically recognising printed music from a scan, similar to optical character recognition (OCR) of text. From Finale 2001 onward, the program included MicNotator, a module able to notate melodic pitches played on a single-pitch acoustic instrument via a microphone connected to the computer.
Finale can import and export MIDI files, and it can play back music using a large range of audio samples, notably from the Garritan library. As of Finale 2009, it can use VST and AU plug-ins. A feature called 'Human playback' aims to create a less mechanical feel, by incorporating playing styles into the playback, including ornaments, ritardandos and accelerandos. Finale can export audio files as .aif, .wav or .mp3.
Finale 2004 also introduced FinaleScript, a scripting language for the automation of tasks such as transcribing music for other instruments to use.
Prominent users
It is used by prominent composers such as Brian Ferneyhough, large publishers such as Alfred Music and the Hal Leonard Corporation, and smaller, specialist publishers such as G. Henle Verlag, Edition HH, Promethean Editions, and Acoustic Guitar magazine. It is also used by institutions such as the New England Conservatory, the Juilliard School, Millikin University, the Berklee College of Music, the Lemmensinstituut, and George Mason University.
Academy Award-nominated films such as Million Dollar Baby, The Aviator, Spider-Man 2, Sideways, Polar Express, The Village, Harry Potter and the Prisoner of Azkaban, The Passion of the Christ, Finding Neverland, Ratatouille, Michael Clayton, and The Golden Compass were all scored with Finale.
Alfred Music involvement
In 2013, MakeMusic signed an exclusive distribution agreement with Alfred Music, a music publishing company with a focus on materials for music education. Under this agreement, Alfred Music is now the sole distributor of Finale and Garritan products in several markets, including North America. The move builds upon the existing relationship between the two companies, in which Alfred Music had previously licensed content for use in MakeMusic’s SmartMusic software.
Awards
Best Book/Video/Software at the 2015 Music & Sound Awards
See also
List of scorewriters
Colored music notation
List of music software
References
General references
Matthew Nicholl & Richard Grudzinski, Music Notation: Preparing Scores and Parts, ed. Jonathan Feist. Boston: Berklee Press (2007)
Inline citations
External links
Scorewriters
Music software |
9120109 | https://en.wikipedia.org/wiki/Goodwater%2C%20Saskatchewan | Goodwater, Saskatchewan | Goodwater (2016 population: ) is a village in the Canadian province of Saskatchewan within the Rural Municipality of Lomond No. 37 and Census Division No. 2. The village is located approximately south of the City of Weyburn. Goodwater is located on Treaty 4 land, negotiated between the Cree, Saulteaux, and Assiniboine first peoples, and Alexander Morris, second Lieutenant Governor of Manitoba (1872–1877). Goodwater is currently part of the Souris - Moose Mountain federal riding.
Demographics
In the 2016 Census of Population conducted by Statistics Canada, the Village of Goodwater recorded a population of living in of its total private dwellings, a change from its 2011 population of . With a land area of , it had a population density of in 2016.
In the 2011 Census of Population, the Village of Goodwater recorded a population of , a change from its 2006 population of . With a land area of , it had a population density of in 2011.
Goodwater reached its peak population, to-date, of 123 in 1921.
According to the 1926 Census of Prairie Provinces, the population of Goodwater was 104.
By 1955 Goodwater had a population of 82.
History
Goodwater incorporated as a village on May 8, 1911. Goodwater's first village council was held on August 7, 1911. In 2011, Goodwater celebrated its 100-year anniversary from July 22–24 with a three-day event that included singing, two pancake breakfasts, an antique machinery show, and a performance by the BAD Boys.
Name
According to several sources, Goodwater was once called "Juell," prior to the arrival of the Canadian Northern Railway Company, circa 1909–1911. Families named Juell were among the first homesteaders in the area circa 1902, immigrating from Norway by way of the United States, and the creek south of town is known as Juell Creek. Citing research undertaken using the database of Canadian federal ridings since 1867, the genealogical website project Saskatchewan GenWeb states: "There were a few homesteaders living near here under the name "Juell": George L Juell, NE 16-5-13-W2; John Juell, Jr., NE 20-5-1-W2; Chris Ceverian Juell, NW 20-5-1-W2; Sigurd John K Juell, SE 20-5-1-W2; and, John Peter Ludwig Juell, SW 20-5-13-W2."
The Saskatchewan GenWeb project highlights a 1914 reproduction of a Canada Department of Mines map of Alberta, Saskatchewan, and Manitoba, which clearly shows a town "Juell" in the same general area as current-day Goodwater.
The Albert and Edith Lyons entry by "family members" in the 1980 community history, Prairie Gold, recounts the family's 1904 relocation from Boissevain, Manitoba: "The Lyons family sought greener pastures and migrated further west to Jewelltown, North West Territories, later known as Goodwater, Sask."
Like many Saskatchewan place names, Goodwater's current-day name has a straightforward explanation that originates with Canadian Northern Railway surveyors. According to a collectively-researched 1968 publication on Saskatchewan place name origins, CNoR surveyors encountered difficulty in finding water while approaching Juell, but when they eventually did, "they struck it at 12 feet--good water and in abundance."
Early Businesses
The village was first surveyed in 1910; however, several businesses already existed, including: the Kelly and Hobbs general store (a tent); Ralph Graville's cafe; Mr. Pepper's blacksmith shop; and the Stirton and McIntyre hardware store. As early as 1914, a branch of the Standard Bank of Canada existed in Goodwater; the bank closed by 1936.
General Store
Arthur Kelly (b. 1850, Devonshire, England) and William "Billie" Hobbs first established their general store in a tent in 1910, selling "everything from needles to threshing machines." In 1925, Arthur Kelly sold his interest in the general store to Billie Hobbs who, in 1933, sold the general store to Kelly's son, Arthur Kelly, Jr. Third-generation Clair Arthur Kelly took over the general store (and served as Postmaster), later selling it in 1953 to Norman Lucas who ran the store and served as Postmaster until 1960.
Stirton and McIntyre Hardware Store
The Stirton and McIntyre Hardware Store was begun in 1910 by US immigrant Edward McIntyre, Percy Speers, and Boissevain tinsmith Arthur Stirton. By 1912 Stirton and McIntyre handled farm insurance and loans, and dealt in farm implements for John Deere and the International Harvester Company. The hardware store closed in 1938, when Edward McIntyre left Goodwater with his family for British Columbia, during an economically difficult time in the Goodwater community.
Railway (ca. 1910-1979)
Established in 1899, the Canadian Northern Railway was formed out of the bankruptcy of the regional Lake Manitoba Railway and Canal Company—a local 27 kilometre "branch line" between Winnipegosis and Lake Manitoba (and, later, Portage La Prairie) in Manitoba. Donald Mann and William Mackenzie, both former employees of the Canadian Pacific Railway (CPR), purchased the defunct LMR&CC and rebranded it as the Canadian Northern Railway (CNoR) with the vision to compete with the CPR by consolidating and constructing alternative "branch lines" serving communities outside the CPR's transcontinental lines.
By 1911, the CNoR was reported to be constructing 300 miles of new rail lines in Saskatchewan, employing 500 teams and 2,500 men. Construction for a new branch line from Luxton to Ceylon, serving Colgate and Goodwater along the way, was authorized in 1908. This branch line was initially begun in 1909 from the main CNoR line at Maryfield, Saskatchewan, just west of the Manitoba border, and is sometimes referred to as the "Maryfield Extension." According to train historian Adam Peltenburg, the CNoR rail line branch through Goodwater was part of "major developments in the prairies" that began around 1910. In 1911, the trade publication Daily Consular and Trade Reports wrote that "one of the most important of the new lines now under construction in that province is the Maryfield extension, to be carried through the coal fields to Lethbridge, Alberta." Several community accounts report that surveyors of the CNoR were responsible for renaming the town from "Juell" to "Goodwater," circa 1910–1911.
The 89-mile branch line from Luxton to Ceylon was officially completed and opened for traffic on July 11, 1911.
The Luxton to Ceylon branch line through Goodwater was reportedly a "busy line" with numerous trains daily, including passenger trains in both directions running six days a week (except Sunday) from 1914 to 1921. In one published community history anecdote, CNoR train engineer Dalrymple made the Carlyle-to-Radville segment in "a record time of a little over two hours...[making all the stops]," during which his "trainmen on the back of the caboose nervously held on to the "air" and in chorus, uttered a prayer on the Goodwater hill."
According to a 1913 CNoR train schedule, westbound train #27 left Brandon, Manitoba at 9:40 am and passed through Goodwater at 6:02 pm; eastbound train #28 left Radville, Saskatchewan at 8:00 am and passed through Goodwater at 9:08 am. According to a 1917 CNoR train schedule, westbound train #51 left Brandon, Manitoba on Mondays, Wednesdays, and Fridays, and passed through Goodwater at 3:18 pm; eastbound train #52 left Moose Jaw on Tuesdays, Thursdays, and Saturdays at 9:00 am, passing through Goodwater at 2:56 pm. Poor profits for the passenger service eventually ended two-way daily train service, and led to "mixed trains" carrying passengers and commodities.
The Canadian Northern Railway was absorbed into other railway interests of the Canadian federal government on September 6, 1918, when mounting debt and the realities of profit-lean World War I caused Donald Mann and William Mackenzie to resign as CNoR directors.
Severe winter blizzard weather and snow accumulation during the winter of 1946-47 caused over sixteen days of isolation with no train service or supplies to Goodwater, as well as many other southern Saskatchewan towns. In January, 1947, the Canadian Press reported that "five feet of hard-packed snow covered tracks and some drifts were estimated to be 28 feet high" in Goodwater. On January 22, then-general store merchant, and future Goodwater Postmaster, Clair Archibald Kelly stated that the shortage of coal would be "serious" if Goodwater were forced to wait another day for supplies. The only road open in southern Saskatchewan was the road between Regina and Yorkton, and no trains passed through Goodwater from January 11 until January 24.
During the spring of 1948, flood waters damaged the rail lines between Goodwater and Blewett. According to company records, the Canadian National Railway wrote off a 22.39 mile abandonment during 1948-1952 for the flood-damaged track between Goodwater and Blewett. With the closure of the Goodwater to Blewett section, trains ran only from Radville and Goodwater, then turned back to Radville. Into the 1950s, passenger service declined further and by 1959 regular train service ceased, with train service occurring only for grain cars as needed.
In 1976, local communities including Goodwater filed petition briefs to the Hall Commission on Grain Handling and Transportation, demanding "retention and protection of the rail lines and the rural elevator system." Canadian National Railway ultimately decided to abandon the Radville to Goodwater line, and on December 13, 1979, the final train left Goodwater.
Post Office
George William Thackeray operated the Thacker Post Office, located at Sec. 35, Twp. 5, R. 14, W2, from as early as December 1907 until it closed on November 27, 1911. Thackeray hauled mail from Halbrite, Saskatchewan.
The Goodwater Post Office opened in 1911 and closed in 1985.
The following table of Postmasters is taken from Library and Archives Canada's Records of the Post Office.
"Hot and Dirty Thirties"
The period of the Great Depression significantly impacted the Goodwater community. According to community historian Thelma Ror, in 1936 the bank closed--"quite a blow to the area at the time," and the "hot and dirty thirties...were years of struggle for the town council; taxes were not paid, money had to be borrowed to keep the school operating, and many that were in dire need were given relief vouchers." Severe heat and drought, along with grasshoppers, badly affected the agricultural community. Verna Berg, niece of early area businessman Arthur Kelly (of Kelly & Hobbs General Store), writes of the 1930s: "As the soil dried up from lack of rain and the wind blew, we had dust storms so bad you couldn't see across the street. [...] Many people gave up trying to farm or just exist, so, loading up what belongings they could on a wagon, and tying a cow or two behind, they headed for greener pastures, usually Northern Sask. or east to Manitoba. Those that stayed behind and had cattle, took them to the hay fields in Southern Manitoba. The story goes that the cattle had been so used to eating Russian Thistle that when they got good hay, they wouldn't eat it." By early 1938, it was reported that 30% of horses in the Goodwater area were "either sick, dying or dead of starvation," and an examination of horse corpses revealed that "dirt, sand and sharp Russian Thistle had been consumed by the animals, and internal organs were as delicate as 'tissue paper'." A petition signed by Goodwater farmers was submitted to the United Farmers of Canada, appealing to the provincial government to supply feed, oats, and hay to affected communities.
Agricultural Industry
From its origins, Goodwater has long been a community organized around agricultural grain and livestock production.
Crop reports in 1921 recorded fall rye yielding 44 bushels per acre, with spring rye yielding between 20 and 30 bushels.
Saskatchewan Co-operative Elevator Company, Local No. 6
By 1913, Goodwater had two grain elevators: the Johnson & Co. Ltd. elevator with an estimated capacity of 25,000 bushels, and the Saskatchewan Co-Operative Elevator Company elevator with an estimated capacity of 30,000 bushels. Goodwater was Local No. 6 of the Saskatchewan Co-operative Elevator Company, Limited, and its 1919 representative delegate was W. J. Pepper. By 1975, both grain elevators in Goodwater were owned by the Saskatchewan Wheat Pool; Elevator A had a capacity of 91,000 bushels and Elevator B had a capacity of 26,000 bushels.
Lomond 4-H Club
The Lomond Calf Club was organized in the fall of 1939 by Scotch-born Alexander J. (Sandy) McKenzie, and held its first "achievement day" at the outdoor ice rink in the summer of 1940. Writing in a 1923 issue of The Grain Grower's Guide for an article on raising fowl, Alexander J. (Sandy) McKenzie lamented, "Much has been done for the cow and her products in the way of markets. We have a market for dairy products in Saskatchewan as good as any in the Dominion, but what have we got for the hen? Twenty thousand pounds of beeves costs us $64 to market, while the same weight of hens costs us nearly $900."
Geography
Located along the Souris River, the Goodwater community is less than 10 km from Mainprize Regional Park and its Rafferty Dam Reservoir.
North-West Mounted Police 'March West'
Goodwater is situated along the route taken by George Arthur French, Commissioner of the North-West Mounted Police, during their ill-fated March West in 1874. After 22 days of travel from Fort Dufferin (present-day Emerson, Manitoba), Major General French split his force of 300 mounted police on July 29, 1874, sending part of the force north to Fort Ellice, while carrying on westward himself and camping on July 30, 1874, at Long Creek (near present-day Estevan, Saskatchewan). Travelling at roughly 15 miles per day, along the Souris River through damp terrain heavy with mosquitoes and black flies, French's force passed the Goodwater area in the first days of August before reaching Moose Jaw on August 8, 1874. In this area on August 3, 1874, Mountie Sub-Inspector John Henry McIllree and Commissioner French spotted and hunted prairie antelope, which are common to the Goodwater area.
Goodwater Hockey
According to local Thelma Ror, writing in 1980, "Residents of Goodwater and surrounding districts have always been sports-minded. A number of hockey teams and ball teams have provided recreation and entertainment through the years." Ice hockey games of shinny were played on Juell Creek as early as the 1910s. In 1952, the "Souris Valley League" was formed.
The Weyburn Farmers' Hockey League (1928-1937) and the Goodwater Eskimos
According to local historian Thelma Ror, the "Farmers League" for hockey was formed in 1928, and included teams from: Goodwater, Colgate, Talmage, Ralph, South Weyburn, and North Weyburn.
The "Maroons" from Ralph won the 1930-31 season championship, defeating a team from East Weyburn 2-0 in Game 3 of a three-game series. An all-star game in the Farmers' League was held in Weyburn on March 6, 1931.
The team from Ralph also won the 1933-34 championship, and a trophy donated by the Weyburn Rotary Club.
The 1934-35 season included teams from: Goodwater, Griffin, North Weyburn, South Weyburn, West Weyburn, and Ralph. In the 1934-35 season final, the Ralph "Indians" defeated the Goodwater "Eskimos" 5-0 to win the community of Ralph its fourth championship in as many years.
In 1936, the Regina Leader-Post documented the "Farmers' Hockey League" as having existed "several seasons as a six-team loop," including teams from: Goodwater, Colgate, Talmage, Ralph, South Weyburn, and McTaggart. Goodwater and Colgate did not field teams for the 1936-37 season.
No teams were fielded for the 1937-38 season of the Farmers' League due to "economic difficulties imposed by another year of drouth (sic)" in the region.
Long-serving Weyburn city clerk, John J. Norman, played in the Weyburn Farmers' League.
Merlin "Dutch" Evers (1915-1950)
Born April 11, 1915, Goodwater native Merlin Evers was a hockey talent in the 1930s and 1940s era, starting play in 1932 with the Goodwater team in the Farmers' League. Evers was a 5' 8" tall Winger, whose playing style (in his final season) was described as, "the best baldheaded back-checker in the loop...never been known to steer clear of bodily contact" who, "stays in the rough company with the big boys."
After several seasons with Goodwater in the early 1930s, Evers made the senior league Weyburn Beavers team in the 1936-37 season at the age of 21. Nicknamed "Dutch" like his father, Evers was reported as playing hockey in San Diego in the Pacific Coast Hockey League for the 1946-47 season. Evers played for the Seattle Ironmen in the 1948-49 season. By 1949, Evers was reported as still "sparkling" after three seasons with the New Westminster Royals in the Pacific Coast Hockey League and at the age of 34. On March 8, 1950, during intermission of a game against the Tacoma Rockets, the hometown New Westminster Royals honored Evers who was "leading the popular player poll in New Westminster." The Royals ultimately defeated the Los Angeles Monarchs in a closely fought seven-game series to win the 1949–50 Phil Henderson Cup (later known as the President's Cup, and the Lester Patrick Cup).
On October 16, 1950, while driving from Portland to Tacoma with three teammates from the Royals, Evers was involved in a car crash and sustained serious injuries to his head and internal organs. Evers died as a result of injuries sustained in the crash.
Goodwater Oil Kings
Team photos of a Goodwater team named the "Oil Kings" date from as early as 1957. Gerald Alexander was captain of the Oil Kings for the 1957-58 season.
Beginning in the 1957-58 season, an Oil Kings team coached by Gord Cooke and managed by Walter Thackeray played in a league with teams from: Colgate, Bromhead, Midale, Torquay, Tribune, and Weyburn.
The Oil Kings coached by Gordon Cooke won the league title in the 1962-63 season. Max White was captain of the championship team.
Goodwater Machine Shop proprietor Lionel Wanner was goalie for the Goodwater Oil Kings in the late-1970s, playing for then-team manager (and his brother) Meryl Wanner.
Since at least 2008, the Goodwater Oil Kings are a team playing in the Weyburn Adult Recreation Hockey League.
Goodwater Memorial Rink
In 1959, a new hockey rink was opened in Goodwater, facilitated by many of the Goodwater Oil Kings. On Saturday, February 7, 1959, Saskatchewan Premier Tommy C. Douglas “formally cut the ribbon to officially declare the rink open, and extend sincere congratulations to the people of Goodwater and district.” Premier Douglas “told a banquet audience [of 400] in the community hall that people working in a group could do things they could not possibly do as individuals.” Construction of the new rink took four days, and was built completely by a group of 65 volunteers with construction materiel costs estimated at more than $15,000.
The new ice surface of 64 by 166 feet was to be the new home of the Goodwater Oil Kings, but not before an official opening performance of figure skating and an exhibition hockey game featuring all-stars from the Souris Valley Hockey League.
On Saturday, January 14, 1961, Premier Tommy C. Douglas returned to the Goodwater Memorial rink, and “took great pleasure in putting a match to the Memorial rink promissory note indicating the rink built only two years ago, was now free from debt.” Congratulating the building fund committee, Douglas stated that, “there are certain things, such as the building of rinks, schools, churches and roads that could not be done by individuals, but by communities as a whole. Over the years the Goodwater community has been a leader in this regard.”
Notable people
James Auburn Pepper - Farmer and progressive NDP politician
See also
List of communities in Saskatchewan
Villages of Saskatchewan
References
Villages in Saskatchewan
Lomond No. 37, Saskatchewan
Division No. 2, Saskatchewan |
30865900 | https://en.wikipedia.org/wiki/Konstanz%20University%20of%20Applied%20Sciences | Konstanz University of Applied Sciences | The Hochschule Konstanz (Hochschule Konstanz für Technik, Wirtschaft und Gestaltung, HTWG) is a German university of applied sciences located in Konstanz, Baden-Württemberg, in southern Germany close to the border with Switzerland. The university is a member of the Internationale Bodensee-Hochschule (International Lake Constance University).
The Hochschule Konstanz is one of the more prominent German universities of applied sciences. In 2006 it was named one of the best German universities of applied sciences in higher education, and its undergraduate courses in business are consistently ranked in the top 15 in Germany.
The university was established in 1906 by Alfred Wachtel and named the "Technicum Konstanz", meaning Technical School of Constance. Initially there were only three departments: engineering, technical studies, and a school offering post-graduate training for 'Werkmeister' (master craftsmen).
Faculties
Architecture and Design
Civil Engineering
Electronic and Information Technology
Computer Sciences
Mechanical Engineering
Business and social sciences
College for Foreign Students (ASK)
Bachelor majors
Architecture
Applied Computer Science (replaced technical computer science and software engineering)
Civil Engineering
Business Administration
Electronic and Information Technology
Communications Design
Mechanical Engineering/Construction and Development
Mechanical Engineering/Production
Transport and Environmental Technology
Business Information Systems
Healthcare Computer Science
Engineering Economics in Construction
Engineering Economics in Electrical Sciences/ Informatics
Engineering Economics in Mechanical Engineering
Economic Language Asia and Management in Chinese
Economic Language Asia and Management in Malayan
Master majors
Architecture
Asian-European Relations and Management
Automotive Systems Engineering
Civil Engineering
Business Information Technology
Computer Science
Communications Design
Mechanical Engineering and International Sales Management
Mechatronics
Environmental/ Production Sciences
Engineering Economics
References
External links
University homepage
Konstanz
Universities and colleges in Baden-Württemberg
Konstanz (district)
1906 establishments in Germany
Educational institutions established in 1906 |
10790339 | https://en.wikipedia.org/wiki/Analog%20%28program%29 | Analog (program) | Analog is a free web log analysis computer program that runs under Windows, macOS, Linux, and most Unix-like operating systems. It was first released on June 21, 1995, by Stephen Turner as generic freeware; the license was changed to the GNU General Public License in November 2004. The software can be downloaded for several computing platforms, or the source code can be downloaded and compiled if desired.
Analog has support for 35 languages, and provides the ability to do reverse DNS lookups on log files, to indicate where web site hits originate. It can analyze several different types of web server logs, including Apache, IIS, and iPlanet. It has over 200 configuration options and can generate 32 reports. It also supports log files for multiple virtual hosts.
The program is comparable to Webalizer or AWStats, though it does not use as many images, preferring to stick with simple bar charts and lists to communicate similar information.
Analog can export reports in a number of formats including HTML, XHTML, XML, LaTeX, and a delimited output mode (for example CSV) for importing into other programs. Delimited or "computer" output from Analog is often used to generate more structured and graphically rich reports using the third-party Report Magic program.
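As a rough sketch of how such a report might be configured, the fragment below shows the general shape of an analog.cfg file; the directive names (LOGFILE, OUTFILE, HOSTNAME, OUTPUT) and the '#' comment syntax are assumptions drawn from memory of the Analog documentation and should be checked against the manual for the version in use.

# Hypothetical minimal configuration: read one Apache log and write an
# HTML report; OUTPUT COMPUTER would instead produce the delimited form
# used by tools such as Report Magic.
LOGFILE /var/log/apache2/access.log
HOSTNAME "Example Site"
OUTFILE /var/www/html/report.html
OUTPUT HTML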
The popularity of Analog is largely unknown as no download count information has been released on its historic dissemination. In a 1998 survey by the Graphic, Visualization, & Usability Center (GVU), Analog was reportedly used by 24.9% (up from 19.9% the year before), with its nearest rival, Web Trends holding some 20.3% of the market.
It is not clear how Analog's usage has changed in the decade leading up to 2010, nor how its usage profile has been impacted by on-line analysis services such as Google Analytics. Analog can operate on an individual or web-farm basis from a single process, requiring no modification of web page or web script code in order to use it. It is a stand-alone utility, and because it works from server-side log files rather than page-embedded scripts, visiting clients cannot block the logging of their traffic.
Analog has not been officially updated since the version 6.0 release in December 2004. The original author moved on to commercial traffic analysis. Updates to Analog continued informally by its user community up until the end of 2009 on the official mailing list. Currently the only formally compiled updated redistributable of Analog is that of Analog CE, which has focused on fixing issues in Analog's XML DTD and on adding new operating system and web browser detection to the original code branch.
History
Analog was first released in June 1995 as a research project by its creator, Dr. Stephen Turner, then working as a research fellow at Sidney Sussex College in the University of Cambridge. Some of the larger release milestones include:
14 June 1995 Analog 0.8b, the initial full testing build.
29 June 1995 Analog 0.9b was the first public release of Analog.
12 September 1995 Analog 1.0 was the first stable release.
10 February 1997 Analog 2.0 was the initial release of a native Win32 version of Analog.
15 June 1998 Analog 3.0 included support for HTTP/1.1 status codes and included a more refined log parsing engine in addition to the ability to parse non-standard log file formats.
16 November 1999 Analog 4.0 supported new reports including the Organisation Report, Operating System Report, Search Word Report, Search Query Report and Processing Time Report.
1 May 2001 Analog 5.0 is released with support for 24 languages, a range of new configuration commands and a new LaTeX output format.
19 December 2004 Analog 6.0 is released, including support for Palm OS and Symbian OS detection and all other changes from its 21-month beta period. Analog 6.0 was the first stable release made available under GPL license terms.
2 October 2007 Analog 6.0.1 C:Amie Edition was the first release of the C:Amie maintenance branch. It included support for Windows Vista, improved Windows 3.11 and Windows NT 3.5 detection, and allowed for the detection of the NetFront browser.
4 April 2009 Analog 6.0.4 C:Amie Edition was a bug fix release to Analog 6.0, containing bug fixes to Analog's XML output rendering and new configuration options.
18 July 2011 Analog 6.0.8 C:Amie Edition, with support for Windows Phone 7.5 (Mango), Apple iOS 5.0 and all current Android releases.
17 August 2012 Analog 6.0.9 C:Amie Edition, with new operating system identification profiles for Android 4.1 Jelly Bean, Windows Phone 6.5, Windows Phone 8.0, Windows 8 and Windows Server 2012. For the first time, the release expands out the previously grouped operating system detection for Mac OS X so that version number breakdowns are provided where information is available via user agent entries in log-file. The release also includes a number of bug fixes.
7 October 2013 Analog 6.0.11 C:Amie Edition, with improved accuracy in MSIE compatibility mode detection and new detection profiles for iOS, Android, Windows Phone and Windows 8/Server 2012 R2.
28 June 2015 Analog 6.0.12 C:Amie Edition, with new detection profiles for iOS, OS X, Android, Edge, Windows Phone and Windows 10.
27 July 2019 Analog CE 6.0.16 added a new ANONYMIZERURL setting to allow the use of a URL forwarding service on reports, a new LINKNOFOLLOW setting to enable/disable hyperlink rel="nofollow" on reports (set to on by default), replaced Mac OS X branding with macOS, and made other improvements to the Operating System report.
A full list of the changes in each release is recorded in the Analog What's New Changelog.
A full list of changes in the maintenance release is recorded on the Analog C:Amie Edition page.
References
External links
Analog CE
Analog Mailing List
Internet Protocol based network software
Free web analytics software
Web log analysis software |
31217952 | https://en.wikipedia.org/wiki/Desura | Desura | Desura was a digital distribution platform for the Microsoft Windows, Linux and OS X platforms. The service distributed games and related media online, with a primary focus on small independent game developers rather than larger companies. Desura contained automated game updates, community features, and developer resources. The client allowed users to create and distribute game mods as well.
Many independent developers (for example Scott Cawthon) and small companies published their content on Desura including Frozenbyte, Frictional Games, Introversion Software, Basilisk Games, S2 Games, Linux Game Publishing, RuneSoft, Running with Scissors, Interplay Entertainment, and Double Fine Productions. Desura sold many games that were previously included in Humble Bundle initiatives, as well as numerous other commercial titles. Desura also provided several freeware and free software games.
Originally, the platform was developed by DesuraNET; it was later sold to Linden Lab, and then to Bad Juju Games, which filed for bankruptcy in June 2015. In October 2016, Desura was acquired by the Danish company OnePlay, a subscription-based online game rental service, which intended to relaunch Desura but failed after going bankrupt itself. Desura's website was down for about four years, and since 2020 the domain has been unrelated to the original Desura service.
Features
The Desura client was tied to its website through the use of the Chromium Embedded Framework. Most of its services were provided through its online interface, with the exception of the game launcher, installer, and update features. This means that the Desura interface remained consistent across multiple platforms.
The interface itself offered various selections based on what feature a user may want to access, with installed games being offered through the "Play" tab, games available for download or purchase being offered through the "Games" tab, user interaction and social networking features from the "Community" tab, information and features for game developers through the "Development" tab, and technical support and client settings through the "Support" tab.
Desura did not implement digital rights management, and Desura employees commented against its use in the past, recommending that content creators ship without DRM or use a CD key system instead. However, Desura itself was DRM-neutral, and publishers and developers could sell games that required such technologies. Desura made sure users purchasing these titles were aware of the DRM they shipped with and how it worked.
Competitors
Desura competed in the same market as Valve's Steam platform. However, Scott Reismanis, the founder of DesuraNET, did not consider it a competitor, but rather an attempt to address a different segment of the market.
Desura primarily hosted indie games, which are games by smaller developers who do not have enough popularity or power to negotiate deals with Steam. Desura believed that its tighter links to a dedicated community would foster better relationships between players and developers.
Desura used to be tied to the Mod DB community, as both were run by the same company. Desura therefore highlighted content distribution for mods as one of its features.
History
Desura was initially developed in secret by DesuraNET for many years. The project was first publicly announced on December 16, 2009. Near its launch, it publicized itself by offering free keys for games to augment the purchases of the same games made through Humble Indie Bundle #2. The Desura Windows client was released to the public on December 18, 2010. On July 10, 2013, Desura was bought by Linden Lab.
Linux support
Development on a Linux client was announced during the Summer of 2011, utilizing wxWidgets and GTK+ as the toolkit, and was introduced as a limited beta program in the Fall. The client was publicly available for download and execution, but users could not log into the online service unless they were a selected beta tester. On November 16, 2011 the Desura Linux client was publicly released with an initial offering of over 65 games.
Although Desura was not the only game distribution platform available for Linux, predated by several traditional online sellers such as Tux Games, Gameolith and Wupra, as well as many Linux distributions distributing games through their package management systems, Desura was the first and most prominent purely digital Linux game distributor with a dedicated client delivery application. The Ubuntu Software Center began selling commercial software packages just prior to the Linux Desura client release, but was not specialized for games, offering a substantially smaller catalog.
Source release
On November 9, 2011 it was announced that Desura would be made partially free software in order to facilitate its further development. The client itself would be released under the GNU General Public License, while the server-side portion of the distribution platform would remain proprietary. The media assets and trademarks would also remain property of DesuraNET. The free software release and development was handled in a manner similar to Google's Chromium project. The free project, named "Desurium", was publicly made available on January 21, 2012.
Ownership changes
On July 10, 2013, Linden Lab announced that they had acquired Desura. The service would continue uninterrupted for current customers and the team and technology become a part of Linden Lab. After acquiring Desura, Linden Lab changed their Terms of Service to include the wording that they have future rights to use and adapt content from their virtual citizens.
It was announced on November 5, 2014 that Linden Lab had sold the Desura service to Bad Juju Games. Bad Juju faced a backlash from indie developers for not paying out sales revenue and for not keeping developers informed about the situation. It later filed for bankruptcy in June 2015. The Desura service went offline on March 19, 2016, but came back on March 29. Desura went offline again in September 2016, and has remained offline since then.
On October 28, 2016, the desura.com home page showed the following message: "OnePlay has recently bought the Desura and Royale assets from Bad Juju. We are working hard behind the scenes to relaunch your favorite indie gaming platform." The change in ownership news is dated October 21, 2016.
In the summer of 2020, Desura changed owners again: the site was bought at auction by the Finnish company Behemouse, whose activities include the development of Flash games and the promotion of related websites.
References
External links
Linux software
Online-only retailers of video games
MacOS software
Software that uses wxWidgets
Windows software
Defunct online companies |
481813 | https://en.wikipedia.org/wiki/Timing%20attack | Timing attack | In cryptography, a timing attack is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and the time can differ based on the input; with precise measurements of the time for each operation, an attacker can work backwards to the input. Finding secrets through timing information may be significantly easier than using cryptanalysis of known plaintext, ciphertext pairs. Sometimes timing information is combined with cryptanalysis to increase the rate of information leakage.
Information can leak from a system through measurement of the time it takes to respond to certain queries. How much this information can help an attacker depends on many variables: cryptographic system design, the CPU running the system, the algorithms used, assorted implementation details, timing attack countermeasures, the accuracy of the timing measurements, etc. Timing attacks can be applied to any algorithm that has data-dependent timing variation. Removing timing-dependencies is difficult in some algorithms that use low-level operations that frequently exhibit varied execution time.
Timing attacks are often overlooked in the design phase because they are so dependent on the implementation and can be introduced unintentionally with compiler optimizations. Avoidance of timing attacks involves design of constant-time functions and careful testing of the final executable code.
Avoidance
Many cryptographic algorithms can be implemented (or masked by a proxy) in a way that reduces or eliminates data-dependent timing information, a constant-time algorithm. Consider an implementation in which every call to a subroutine always returns in exactly x seconds, where x is the maximum time it ever takes to execute that routine on every possible authorized input. In such an implementation, the timing of the algorithm is less likely to leak information about the data supplied to that invocation. The downside of this approach is that the time used for all executions becomes that of the worst-case performance of the function.
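A minimal sketch of this padding idea is shown below, assuming a POSIX system (clock_gettime and nanosleep); the 10 ms budget and the do_private_work helper are hypothetical placeholders rather than values from any real system.

#include <time.h>

/* Hypothetical secret-dependent operation; stands in for the real work. */
static int do_private_work(int input) { return input * 2; }

/* Run the operation, then sleep until a fixed worst-case budget has
   elapsed, so callers observe roughly the same latency for any input. */
int padded_call(int input) {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    int result = do_private_work(input);

    clock_gettime(CLOCK_MONOTONIC, &now);
    long elapsed_ns = (now.tv_sec - start.tv_sec) * 1000000000L
                    + (now.tv_nsec - start.tv_nsec);
    long budget_ns = 10L * 1000 * 1000;   /* fixed worst-case budget: 10 ms */
    if (elapsed_ns < budget_ns) {
        struct timespec pad = { 0, budget_ns - elapsed_ns };
        nanosleep(&pad, NULL);            /* pad out to the budget */
    }
    return result;
}

In practice, sleeping to a deadline only approximates constant time because of timer and scheduler granularity, which is one reason genuinely constant-time code paths (such as the comparison functions shown later in this article) are generally preferred.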
The data-dependency of timing may stem from one of the following:
Non-local memory access, as the CPU may cache the data. Software run on a CPU with a data cache will exhibit data-dependent timing variations as a result of memory lookups that hit or miss the cache.
Conditional jumps. Modern CPUs try to speculatively execute past jumps by guessing the branch direction. Guessing wrong (not uncommon with essentially random secret data) entails a measurably large delay as the CPU backtracks. Avoiding this requires writing branch-free code; a minimal branch-free selection idiom is sketched after this list.
Some "complicated" mathematical operations, depending on the actual CPU hardware:
Integer division is almost always non-constant time. The CPU typically uses a microcode loop that takes a different code path when either the divisor or the dividend is small.
CPUs without a barrel shifter run shifts and rotations in a loop, one position at a time. As a result, the amount to shift must not be secret.
Older CPUs run multiplications in a way similar to division.
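As referenced in the conditional-jumps item above, a common branch-free idiom replaces a secret-dependent if/else with arithmetic on a mask; the sketch below is illustrative, and whether the compiled output is truly branch-free still depends on the compiler and target.

#include <stdint.h>

/* Constant-time select: returns a when flag is 1 and b when flag is 0,
   using a mask instead of a secret-dependent branch. */
uint32_t ct_select(uint32_t flag, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - (flag & 1);  /* 0x00000000 or 0xFFFFFFFF */
    return (a & mask) | (b & ~mask);
}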
Examples
The execution time for the square-and-multiply algorithm used in modular exponentiation depends linearly on the number of '1' bits in the key. While the number of '1' bits alone is not nearly enough information to make finding the key easy, repeated executions with the same key and different inputs can be used to perform statistical correlation analysis of timing information to recover the key completely, even by a passive attacker. Observed timing measurements often include noise (from such sources as network latency, or disk drive access differences from access to access, and the error correction techniques used to recover from transmission errors). Nevertheless, timing attacks are practical against a number of encryption algorithms, including RSA, ElGamal, and the Digital Signature Algorithm.
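A minimal sketch of why the running time depends on the key is shown below; it uses small fixed-width integers for readability (base and modulus assumed below 2^32 to avoid overflow), not a real big-number implementation such as the one inside RSA libraries.

#include <stdint.h>

/* Left-to-right square-and-multiply: computes (base^exp) mod m.
   The extra multiplication inside the if-branch runs only for '1' bits
   of the exponent, so execution time tracks the key's Hamming weight. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    for (int i = 63; i >= 0; i--) {
        result = (result * result) % m;   /* always performed: square */
        if ((exp >> i) & 1)               /* key-dependent branch */
            result = (result * base) % m; /* extra multiply for '1' bits */
    }
    return result;
}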
In 2003, Boneh and Brumley demonstrated a practical network-based timing attack on SSL-enabled web servers, based on a different vulnerability having to do with the use of RSA with Chinese remainder theorem optimizations. The actual network distance was small in their experiments, but the attack successfully recovered a server private key in a matter of hours. This demonstration led to the widespread deployment and use of blinding techniques in SSL implementations. In this context, blinding is intended to remove correlations between key and encryption time.
Some versions of Unix use a relatively expensive implementation of the crypt library function for hashing an 8-character password into an 11-character string. On older hardware, this computation took a deliberately and measurably long time: as much as two or three seconds in some cases. The login program in early versions of Unix executed the crypt function only when the login name was recognized by the system. This leaked information through timing about the validity of the login name, even when the password was incorrect. An attacker could exploit such leaks by first applying brute-force to produce a list of login names known to be valid, then attempt to gain access by combining only these names with a large set of passwords known to be frequently used. Without any information on the validity of login names the time needed to execute such an approach would increase by orders of magnitude, effectively rendering it useless. Later versions of Unix have fixed this leak by always executing the crypt function, regardless of login name validity.
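The difference between the two login behaviours can be sketched as follows; the function and variable names are hypothetical, and slow_hash stands in for the expensive crypt() call rather than reproducing the historical Unix source.

#include <stdbool.h>
#include <string.h>

static char hash_buf[64];

/* Stand-in for the expensive crypt() call; it merely copies the password
   so that the sketch is self-contained and compiles. */
static const char *slow_hash(const char *password) {
    strncpy(hash_buf, password, sizeof hash_buf - 1);
    hash_buf[sizeof hash_buf - 1] = '\0';
    return hash_buf;
}

/* Vulnerable pattern: the expensive hash runs only when the login name is
   known, so response time reveals whether the name is valid. */
bool login_vulnerable(const char *stored_hash, const char *password) {
    if (stored_hash == NULL)        /* unknown user: fast rejection */
        return false;
    return strcmp(slow_hash(password), stored_hash) == 0;
}

/* Fixed pattern: always run the hash, comparing against a dummy entry for
   unknown users, and only then reject them. */
bool login_fixed(const char *stored_hash, const char *password) {
    const char *reference = stored_hash ? stored_hash : "dummy-hash-entry";
    bool match = strcmp(slow_hash(password), reference) == 0;
    return stored_hash != NULL && match;
}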
Two otherwise securely isolated processes running on a single system with either cache memory or virtual memory can communicate by deliberately causing page faults and/or cache misses in one process, then monitoring the resulting changes in access times from the other. Likewise, if an application is trusted, but its paging/caching is affected by branching logic, it may be possible for a second application to determine the values of the data compared to the branch condition by monitoring access time changes; in extreme examples, this can allow recovery of cryptographic key bits.
The 2017 Meltdown and Spectre attacks, which forced CPU manufacturers (including Intel, AMD, ARM, and IBM) to redesign their CPUs, both rely on timing attacks. As of early 2018, almost every computer system in the world was affected by Spectre, making it the most powerful example of a timing attack in history.
Algorithm
The following C code demonstrates a typical insecure string comparison which stops testing as soon as a character doesn't match. For example, when comparing "ABCDE" with "ABxDE" it will return after 3 loop iterations:
#include <stdbool.h>
#include <stddef.h>

bool insecureStringCompare(const void *a, const void *b, size_t length) {
    const char *ca = a, *cb = b;
    for (size_t i = 0; i < length; i++)
        if (ca[i] != cb[i])
            return false;
    return true;
}
By comparison, the following version runs in constant-time by testing all characters and using a bitwise operation to accumulate the result:
bool constantTimeStringCompare(const void *a, const void *b, size_t length) {
    const char *ca = a, *cb = b;
    bool result = true;
    for (size_t i = 0; i < length; i++)
        result &= ca[i] == cb[i];
    return result;
}
In the world of C library functions, the first function is analogous to the standard memcmp, while the latter is analogous to NetBSD's consttime_memequal or OpenBSD's timingsafe_bcmp and timingsafe_memcmp. On other systems, the comparison function from cryptographic libraries like OpenSSL and libsodium can be used.
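As a usage sketch (the 32-byte tag length and the function name tag_is_valid are assumptions for illustration), such a constant-time comparison is typically reserved for secret-dependent data such as message authentication tags, where the early-exit behaviour of an ordinary comparison would leak how many leading bytes of a forgery are correct:

#include <stdbool.h>
#include <stddef.h>

bool constantTimeStringCompare(const void *a, const void *b, size_t length);

/* Compare a tag received from the network with the locally computed tag.
   The comparison touches all 32 bytes regardless of where a mismatch
   occurs, so its running time reveals nothing about the correct tag. */
bool tag_is_valid(const unsigned char received[32],
                  const unsigned char computed[32])
{
    return constantTimeStringCompare(received, computed, 32);
}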
Notes
Timing attacks are easier to mount if the adversary knows the internals of the hardware implementation, and even more so, the cryptographic system in use. Since cryptographic security should never depend on the obscurity of either (see security through obscurity, specifically both Shannon's Maxim and Kerckhoffs's principle), resistance to timing attacks should not either. If nothing else, an exemplar can be purchased and reverse engineered. Timing attacks and other side-channel attacks may also be useful in identifying, or possibly reverse-engineering, a cryptographic algorithm used by some device.
References
Further reading
Paul C. Kocher. Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems. CRYPTO 1996: 104–113
Describes dudect, a simple program that times a piece of code on different data.
Side-channel attacks |
3708321 | https://en.wikipedia.org/wiki/3-D%20Secure | 3-D Secure | 3-D Secure is a protocol designed to be an additional security layer for online credit and debit card transactions. The name refers to the "three domains" which interact using the protocol: the merchant/acquirer domain, the issuer domain, and the interoperability domain.
The protocol was originally developed in the autumn of 1999 by Celo Communications AB (later Gemplus, then Gemalto and now Thales Group) for Visa Inc. in a project named "p42" ("p" from pole vault, as the project was a big challenge, and "42" as the answer from the book The Hitchhiker's Guide to the Galaxy).
A new, updated version was developed by Gemplus between 2000 and 2001.
In 2001 the protocol was developed further by Arcot Systems (now CA Technologies) and Visa Inc. with the intention of improving the security of Internet payments, and was offered to customers under the Verified by Visa brand (later rebranded as Visa Secure). Services based on the protocol have also been adopted by Mastercard as SecureCode, by Discover as ProtectBuy, by JCB International as J/Secure, and by American Express as American Express SafeKey. Later revisions of the protocol have been produced by EMVCo under the name EMV 3-D Secure. Version 2 of the protocol was published in 2016 with the aim of complying with new EU authentication requirements and resolving some of the shortcomings of the original protocol.
Analysis of the first version of the protocol by academia has shown it to have many security issues that affect the consumer, including a greater surface area for phishing and a shift of liability in the case of fraudulent payments.
Description and basic aspects
The basic concept of the protocol is to tie the financial authorization process with online authentication. This additional security authentication is based on a three-domain model (hence the 3-D in the name itself). The three domains are:
Acquirer domain (the bank and the merchant to which the money is being paid).
Issuer domain (the card issuer of the card being used).
Interoperability domain (the infrastructure provided by the card scheme, credit, debit, prepaid or other types of a payment card, to support the 3-D Secure protocol). It includes the Internet, merchant plug-in, access control server, and other software providers.
The protocol uses XML messages sent over SSL connections with client authentication (this ensures the authenticity of both peers, the server and the client, using digital certificates).
A transaction using Verified-by-Visa or SecureCode will initiate a redirection to the website of the card issuer to authorize the transaction. Each issuer could use any kind of authentication method (the protocol does not cover this), but typically a password tied to the card is entered when making online purchases. The Verified-by-Visa protocol recommends that the card issuer's verification page be loaded in an inline frame session. In this way, the card issuer's systems can be held responsible for most security breaches. Today it is easy to send a one-time password as part of an SMS text message to users' mobile phones and emails for authentication, at least during enrollment and for forgotten passwords.
The main difference between Visa and Mastercard implementations lies in the method to generate the UCAF (Universal Cardholder Authentication Field): Mastercard uses AAV (Accountholder Authentication Value) and Visa uses CAVV (Cardholder Authentication Verification Value).
ACS providers
In the 3-D Secure protocol, the ACS (access control server) is on the card issuer side. Currently, most card issuers outsource ACS to a third party. Commonly, the buyer's web browser shows the domain name of the ACS provider, rather than the card issuer's domain name; however, this is not required by the protocol. Dependent on the ACS provider, it is possible to specify a card issuer-owned domain name for use by the ACS.
MPI providers
Each 3-D Secure version 1 transaction involves two Internet request/response pairs: VEReq/VERes and PAReq/PARes. Visa and Mastercard do not permit merchants to send requests directly to their servers. Merchants must instead use MPI (merchant plug-in) providers.
Merchants
The advantage for merchants is the reduction of "unauthorized transaction" chargebacks. One disadvantage for merchants is that they have to purchase a merchant plug-in (MPI) to connect to the Visa or Mastercard directory server. This is expensive (setup fee, monthly fee, and per-transaction fee); at the same time, it represents additional revenue for MPI providers. Supporting 3-D Secure is complicated and, at times, creates transaction failures. Perhaps the biggest disadvantage for merchants is that many users view the additional authentication step as a nuisance or obstacle, which results in a substantial increase in transaction abandonment and lost revenue.
Buyers and credit card holders
In most current implementations of 3-D Secure, the card issuer or its ACS provider prompts the buyer for a password that is known only to the card issuer or ACS provider and the buyer. Since the merchant does not know this password and is not responsible for capturing it, it can be used by the card issuer as evidence that the purchaser is indeed their cardholder. This is intended to help decrease risk in two ways:
Copying card details, either by writing down the numbers on the card itself or by way of modified terminals or ATMs, does not result in the ability to purchase over the Internet because of the additional password, which is not stored on or written on the card.
Since the merchant does not capture the password, there is a reduced risk from security incidents at online merchants; while an incident may still result in hackers obtaining other card details, there is no way for them to get the associated password.
3-D Secure does not strictly require the use of password authentication. It is said to be possible to use it in conjunction with smart card readers, security tokens and the like. These types of devices might provide a better user experience for customers as they free the purchaser from having to use a secure password. Some issuers are now using such devices as part of the Chip Authentication Program or Dynamic Passcode Authentication schemes.
One significant disadvantage is that cardholders are likely to see their browser connect to unfamiliar domain names as a result of vendors' MPI implementations and the use of outsourced ACS implementations by card issuers, which might make it easier to perform phishing attacks on cardholders.
General criticism
Verifiability of site identity
The system involves a pop-up window or inline frame appearing during the online transaction process, requiring the cardholder to enter a password which, if the transaction is legitimate, their card issuer will be able to authenticate. The problem for the cardholder is determining whether the pop-up window or frame really comes from their card issuer, when it could be from a fraudulent website attempting to harvest the cardholder's details. Such pop-up windows or script-based frames provide no access to a security certificate, leaving no way to confirm the credentials of the 3-DS implementation.
The Verified-by-Visa system has drawn some criticism, since it is hard for users to differentiate between the legitimate Verified-by-Visa pop-up window or inline frame, and a fraudulent phishing site. This is because the pop-up window is served from a domain which is:
Not the site where the user is shopping
Not the card issuer
Not visa.com or mastercard.com
In some cases, the Verified-by-Visa system has been mistaken by users for a phishing scam and has itself become the target of some phishing scams. The newer recommendation to use an inline frame (iframe) instead of a pop-up has reduced user confusion, at the cost of making it harder, if not impossible, for the user to verify that the page is genuine in the first place. Most web browsers do not provide a way to check the security certificate for the contents of an iframe. Some of these concerns in site validity for Verified-by-Visa are mitigated, however, as its current implementation of the enrollment process requires entering a personal message which is displayed in later Verified-by-Visa pop-ups to provide some assurance to the user that the pop-ups are genuine.
Some card issuers also use activation-during-shopping (ADS), in which cardholders who are not registered with the scheme are offered the opportunity of signing up (or forced into signing up) during the purchase process. This will typically take them to a form in which they are expected to confirm their identity by answering security questions which should be known to their card issuer. Again, this is done within the iframe where they cannot easily verify the site they are providing this information to—a cracked site or illegitimate merchant could in this way gather all the details they need to pose as the customer.
Implementations of 3-D Secure sign-up will often not allow a user to proceed with a purchase until they have agreed to sign up to 3-D Secure and its terms and conditions, offering no way of navigating away from the page other than closing it, thus suspending the transaction.
Cardholders who are unwilling to take the risk of registering their card during a purchase, with the commerce site controlling the browser to some extent, can in some cases go to their card issuer's web site in a separate browser window and register from there. When they return to the commerce site and start over they should see that their card is registered. The presence on the password page of the personal assurance message (PAM) that they chose when registering is their confirmation that the page is coming from the card issuer. This still leaves some possibility of a man-in-the-middle attack if the cardholder cannot verify the SSL server certificate for the password page. Some commerce sites will devote the full browser page to the authentication rather than using a frame (not necessarily an iFrame), which is a less secure object. In this case, the lock icon in the browser should show the identity of either the card issuer or the operator of the verification site. The cardholder can confirm that this is in the same domain that they visited when registering their card if it is not the domain of their card issuer.
Mobile browsers present particular problems for 3-D Secure, due to the common lack of certain features such as frames and pop-ups. Even if the merchant has a mobile web site, unless the issuer is also mobile-aware, the authentication pages may fail to render properly, or even at all. In the end, many analysts have concluded that the activation-during-shopping (ADS) protocols invite more risk than they remove and furthermore transfer this increased risk to the consumer.
In some cases, 3-D Secure ends up providing little security to the cardholder, and can act as a device to pass liability for fraudulent transactions from the card issuer or retailer to the cardholder. Legal conditions applied to the 3-D Secure service are sometimes worded in a way that makes it difficult for the cardholder to escape liability from fraudulent "cardholder not present" transactions.
Geographic discrimination
Card issuers and merchants may use 3-D Secure systems unevenly with regard to cards issued in different geographic locations, creating differentiation, for example, between domestic US- and non-US-issued cards. For example, since Visa and Mastercard treat the unincorporated US territory of Puerto Rico as a non-US international location rather than a domestic US one, cardholders there may confront a greater incidence of 3-D Secure queries than cardholders in the fifty states. Complaints to that effect have been received by the Puerto Rico Department of Consumer Affairs' "equal treatment" economic discrimination site.
3-D Secure as strong customer authentication
Version 2 of 3-D Secure, which incorporates one-time passwords, is a form of software-based strong customer authentication as defined by the EU's Revised Directive on Payment Services (PSD2); earlier variants used static passwords, which are not sufficient to meet the directive's requirements.
3-D Secure relies upon the issuer actively being involved and ensuring that any card issued becomes enrolled by the cardholder; as such, acquirers must either accept unenrolled cards without performing strong customer authentication, or reject such transactions, including those from smaller card schemes which do not have 3-D Secure implementations.
Alternative approaches perform authentication on the acquiring side, without requiring prior enrolment with the issuer. For instance, PayPal's patented 'verification' directs one or more dummy transactions towards a credit card, and the cardholder must confirm the value of these transactions, although the resulting authentication cannot be directly related to a specific transaction between merchant and cardholder. A patented system called iSignthis splits the agreed transaction amount into two (or more) random amounts, with the cardholder then proving that they are the owner of the account by confirming the amounts on their statement.
ACCC blocks 3-D Secure proposal
A proposal to make 3-D Secure mandatory in Australia was blocked by the Australian Competition & Consumer Commission (ACCC) after numerous objections and flaw-related submissions were received.
India
Some countries, such as India, have made not only CVV2 but also 3-D Secure mandatory: an SMS code is sent by the card issuer, and when the shopper clicks "purchase" they are redirected to the payment system's or card issuer's site, where the code must be typed in before the operation is accepted. Nevertheless, Amazon can still process transactions from other countries even when 3-D Secure is turned on.
3-D Secure 2.0
In October 2016, EMVCo published the specification for 3-D Secure 2.0; it is designed to be less intrusive than the first version of the specification, allowing more contextual data to be sent to the customer's card issuer (including mailing addresses and transaction history) to verify and assess the risk of the transaction. The customer would only be required to pass an authentication challenge if their transaction is determined to be of a high risk. In addition, the workflow for authentication is designed so that it no longer requires redirects to a separate page, and can also activate out-of-band authentication via an institution's mobile app (which, in turn, can also be used with biometric authentication). 3-D Secure 2.0 is compliant with EU "strong customer authentication" mandates.
See also
Secure electronic transaction (SET)
Merchant plug-in (MPI)
References
External links
American Express SafeKey (consumer site)
American Express SafeKey (global partner site)
Verified by Visa
Activating Verified by Visa
Verified by Visa Partner Network
Mastercard SecureCode home page
usa.visa.com
Discover Global Network ProtectBuy
Cryptographic protocols
Financial industry XML-based standards |
7274485 | https://en.wikipedia.org/wiki/Retrospect%20%28software%29 | Retrospect (software) | Retrospect is a family of software applications that back up computers running the macOS, Microsoft Windows, and Linux (and until 2019 classic Mac OS) operating systems. It uses the client–server backup model.
The product is focused on the small and medium enterprise (SME) market. It performs three types of backup: "A Recycle backup deletes a backup set and adds all files, and a New Media backup creates a new backup set, copying all the files not included. Again this represents all files. Once installed, scripts can also be introduced to enable Scheduled backup using predetermined information supplied by the administrator. This information contains source, destination and other criteria, which enables a backup session to scan and back up one volume at a time, requiring less memory than an immediate backup."
The product is used for GUI-scripted backup.
History
The software was first developed by Dantz Development Corporation in 1989, initially for the Macintosh platform and continuing later for Windows. With sales split evenly between the two variants and the Macintosh variant claiming 90% of its market, Dantz Development Corporation was acquired by EMC Corporation in 2004. In 2006 version 7.5, the refined first release of the Windows variant under EMC, added performance features needed by SMEs.
Acquisition by EMC, under its Insignia brand, led to the product being briefly mothballed when Insignia was shut down in 2007. It was revived in 2008 and transferred to EMC's new acquisition Iomega. A "premature" release of Retrospect 8 in 2009 undermined its market after Apple introduced its competing Time Machine in late 2007. In 2010, Retrospect was sold to Roxio, owned by Sonic Solutions, which was then in turn acquired by Rovi. Rovi decided that it was not a core business, but a team who had worked on the product approached Rovi with the idea of spinning out as a separate company. Retrospect, Inc. was formed by a core team most of whom had worked on the product for ten years or more. Retrospect 9 was introduced in 2012, to positive reviews.
In June 2019 the holding company StorCentric (which also owns Drobo) announced that it had acquired Retrospect Inc., which it will operate as a wholly owned independent subsidiary.
See also
Apple Tape Backup 40SC
Notes
References
External links
Backup software
Dell EMC
Classic Mac OS software
Roxio software
Backup software for macOS
Backup software for Windows |
68042 | https://en.wikipedia.org/wiki/Fischer%20random%20chess | Fischer random chess | Fischer random chess, also known as Chess960, is a variation of the game of chess invented by the former World Chess Champion Bobby Fischer. Fischer announced this variation on June 19, 1996, in Buenos Aires, Argentina. Fischer random chess employs the same board and pieces as classical chess, but the starting position of the pieces on the players' home ranks is randomized, following certain rules. The random setup makes gaining an advantage through the memorization of openings impracticable; players instead must rely more on their skill and creativity over the board.
Randomizing the main pieces had long been known as shuffle chess, but Fischer random chess introduces new rules for the initial random setup, "preserving the dynamic nature of the game by retaining bishops of opposite colours for each player and the right to castle for both sides". The result is 960 unique possible starting positions.
In 2008, FIDE added Chess960 to an appendix of the Laws of Chess. The first world championship officially sanctioned by FIDE, the FIDE World Fischer Random Chess Championship 2019, brought additional prominence to the variant.
Setup
Before the game, a starting position is randomly determined and set up, subject to certain requirements. White's pieces (not pawns) are placed randomly on the first rank, following two rules:
The bishops must be placed on opposite-color squares.
The king must be placed on a square between the rooks.
Black's pieces are placed equal-and-opposite to White's pieces. (For example, if the white king is randomly determined to start on f1, then the black king is placed on f8.) Pawns are placed on the players' second ranks as in classical chess.
After setup, the game is played the same as classical chess in all respects, with the exception of castling from the different possible starting positions for king and rooks.
Creating starting positions
Usually, the players accept the conditions of the organizer to generate the starting position with software, as it was used in the 2019 World Fischer Random Championship.
If the software is not available or the players don't accept it, there are several procedures for generating random starting positions with equal probability.
There are 4 × 4 × 6 × 10 × 1 = 4 × 4 × 15 × 4 × 1 = 960 legal starting positions:
4 possible squares for the light-squared bishop
4 possible squares for the dark-squared bishop
6 remaining squares for the queen and 5! / (3! × 2!) = 5 × 4 / 2 = 10 ways to place the two (identical) knights on the remaining 5 squares
or
6! / (4! × 2!) = 5 × 6 / 2 = 15 ways to place the two (identical) knights on the remaining 6 squares and 4 remaining squares for the queen
1 way to place the two rooks and king on the remaining 3 squares, since the king must be between the rooks.
In 1998, Ingo Althöfer proposed a method that requires only a single standard die. (Re-roll if needed to get values in the range 1–4 or 1–5).
If a full set of polyhedral dice is available (a tetrahedron (d4), cube (d6), octahedron (d8), dodecahedron (d12), and an icosahedron (d20)), one never needs to reroll any dice.
Tossing a coin to create binary numbers and converting them to decimal.
Shuffling objects (cards, pieces, pawns, domino tiles, Scrabble letters) and using the permutations.
Since all 960 starting positions have been assigned a number (per the Fischer Random Chess numbering scheme) any method that generates a random number between 0 and 959 can be used to generate a starting position.
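For illustration, the placement rules above translate directly into a short program. The sketch below is an assumption-laden example rather than the official numbering scheme: it uses the C library's rand() only for brevity (not a suitable randomness source for serious use) and prints one back rank as a string of piece letters.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Generate one random Chess960 back rank: bishops on opposite colours,
   king between the rooks.  Illustrative only. */
int main(void) {
    char rank[8] = {0};
    srand((unsigned)time(NULL));

    /* Light-squared bishop: files b, d, f, h (indices 1, 3, 5, 7). */
    rank[2 * (rand() % 4) + 1] = 'B';
    /* Dark-squared bishop: files a, c, e, g (indices 0, 2, 4, 6). */
    rank[2 * (rand() % 4)] = 'B';

    /* Queen on one of the six remaining squares. */
    int q = rand() % 6;
    for (int i = 0; i < 8; i++)
        if (!rank[i] && q-- == 0) { rank[i] = 'Q'; break; }

    /* Knights on two of the five remaining squares. */
    for (int n = 0; n < 2; n++) {
        int k = rand() % (5 - n);
        for (int i = 0; i < 8; i++)
            if (!rank[i] && k-- == 0) { rank[i] = 'N'; break; }
    }

    /* The three squares left get rook, king, rook, so the king is
       automatically between the rooks. */
    const char rkr[3] = {'R', 'K', 'R'};
    for (int i = 0, j = 0; i < 8; i++)
        if (!rank[i]) rank[i] = rkr[j++];

    printf("%.8s\n", rank);
    return 0;
}

Each of the 960 positions is produced with equal probability: 4 × 4 choices for the bishops, 6 for the queen, 10 unordered placements for the two knights, and a forced rook–king–rook order on the last three squares.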
Castling rules
As in classical chess, each player may castle once per game, moving both the king and a rook in a single move; however, the castling rules were reinterpreted in Fischer random chess to support the different possible initial positions of king and rook. After castling, the final positions of king and rook are exactly the same as in classical chess, namely:
After a-side castling (queenside/long castling in classical chess), the king finishes on the c-file and the a-side rook finishes on the d-file. The move is notated 0-0-0 as in classical chess.
After h-side castling (kingside/short castling in classical chess), the king finishes on the g-file and the h-side rook finishes on the f-file. The move is notated 0-0 as in classical chess.
Castling prerequisites are the same as in classical chess, namely:
The king and the castling rook must not have previously moved.
No square from the king's initial square to its final square may be under attack by an enemy piece.
All the squares between the king's initial and final squares (including the final square), and all the squares between the castling rook's initial and final squares (including the final square), must be vacant except for the king and castling rook.
FIDE's recommended procedure for castling unambiguously is first to move the king outside the playing area next to its final square, then to move the rook to its final square, then to move the king to its final square. Another recommendation is to verbally announce the intent to castle before doing so.
Observations
In some starting positions, squares can remain occupied during castling that would be required to be vacant under standard rules. Castling a-side (0-0-0) could still be possible despite the home rank a-, b-, or e-file squares being occupied, and similarly for the e- and h-files for h-side castling (0-0). In other positions, it can happen that the king or rook does not move during the castling maneuver since it already occupies its destination square – e.g., an h-side rook that starts on the f-file; in this case, only the king moves. No initial position allows a castling where neither piece moves, as the king must start between the rooks.
Another unusual possibility is for castling to be available as the first move of the game, as happened in the 11th game of the tournament match between Hikaru Nakamura and Magnus Carlsen, Fischer Random Blitz 2018. The starting position had kings at f1/f8 and h-side rooks at g1/g8. Both players took the opportunity to castle on the first move (1.0-0 0-0).
Unlike in standard chess, there are exactly 90 starting positions where players have to give up castling rights on one side in order to castle on the other side. In just these 90 positions, a rook has to be moved (or captured) on one side in order to castle on the other side. This is seen by calculating that this happens 18 times in each of five possible groups of starting positions namely RKRXXXXX, RKXRXXXX, XRKRXXXX, XXXXXRKR and RXKRXXXX. For example, in the starting position RKRBBNNQ, which is in the first group RKRXXXXX, a player intending to castle a-side must first move the c-file rook (or let it be captured).
The Sesse evaluations show that white has about a 7% increased advantage in these 90 positions (Evaluation is 0.1913) compared to the remaining 870 positions (Evaluation is 0.1790).
Theory
The study of openings in Fischer random chess is in its infancy, but fundamental opening principles still apply, including: protect the king, control the central squares (directly or indirectly), and develop rapidly, starting with the less valuable pieces.
Unprotected pawns may also need to be dealt with quickly. Many starting positions have unprotected pawns, and some starting positions have up to two that can be attacked on the first move. For example, in some Fischer random chess starting positions (see diagram), White can attack an unprotected black pawn on the first move, whereas in classical chess it takes two moves for White to attack, and there are no unprotected pawns.
White's advantage
It has been argued that two games should be played from each starting position, with players alternating colors, since the advantage offered to White in some initial positions may be greater than in classical chess.
Indeed, the Sesse evaluations (which used Stockfish 9) evaluate the starting positions at between 0.00 and 0.57 pawns advantage for White. However, they give an average of around 0.18 and a standard deviation of around 0.0955, but evaluate the standard chess starting position (SP 518) at 0.22. This is a 22.22% difference (between 0.18 and 0.22), a reduction of over one-fifth in White's average advantage.
Incidentally, around 68% of starting positions have an evaluation within 1 standard deviation of the mean, i.e. 68% of the evaluations are within the interval (0.0847, 0.2755).
History
Fischer random chess is a variant of shuffle chess, which had been suggested as early as 1792 with games played as early as 1842.
Fischer's modification "imposes certain restrictions, arguably an improvement on the anarchy of the fully randomized game in which one player is almost certain to start at an advantage". Fischer started to develop his new version of chess after the 1992 return match with Boris Spassky. The result was the formulation of the rules of Fischer random chess in September 1993, introduced formally to the public on June 19, 1996, in Buenos Aires, Argentina.
Fischer's goal was to eliminate what he considered the complete dominance of opening preparation in classical chess, replacing it with creativity and talent. In a situation where the starting position was random, it would be impossible to fix every move of the game. Since the "opening book" for 960 possible opening systems would be too difficult to commit to memory, the players must create every move originally. From the first move, both players must devise original strategies and cannot use well-established patterns. Fischer believed that eliminating memorized book moves would level the playing field.
During the summer of 1993, Bobby Fischer visited László Polgár and his family in Hungary. All of the Polgar sisters (Judit Polgár, Susan Polgar, and Sofia Polgar) played many games of Fischer random chess with Fischer. At one point Sofia beat Fischer three games in a row. Fischer was not pleased when the father, László, showed Fischer an old chess book that described what appeared to be a forerunner of Fischer random chess. The book was written by Izidor Gross and published in 1910. Fischer then changed the rules of his variation in order to make it different.
Tournaments
1996 – The first Fischer random chess tournament was held in Vojvodina, Yugoslavia, in the spring of 1996, and was won by International Grandmaster (GM) Péter Lékó with 9½/11, ahead of GM Stanimir Nikolić with 9 points.
2006–present – The first Fischer Random Championships of the Netherlands was held by Fischer Z chess club and has since been held annually. GM Dimitri Reinderman has won this title for three years, champion in 2010, 2014, and 2015. Two grandmasters have won the title twice, GM Yasser Seirawan and Dutch GM Dennis de Vreugt.
2010 – In 2010 the US Chess Federation sponsored its first Chess960 tournament, at the Jerry Hanken Memorial US Open tournament in Irvine, California. This one-day event, directed by Damian Nash, saw a first-place tie between GM Larry Kaufman and FM Mark Duckworth.
2012 – The British Chess960 Championship was held at the Mind Sports Olympiad, won by Ankush Khandelwal.
2018 – The first edition of the European Fischer Random Cup was held in Reykjavik on March 9, 2018, on Fischer's 75th birthday. It was won by Aleksandr Lenderman.
2019 – The Icelandic Chess Federation organized the European Fischer Random Championship on the rest day of 34th edition of The GAMMA Reykjavik Open on April 12, 2019. The tournament was won by the then 15-year-old Iranian prodigy Alireza Firouzja, a full point ahead of US's Andrew Tang, who was second on tiebreaks.
2019 – The FIDE World Fischer Random Chess Championship 2019 started on April 28, 2019, with the first qualifying tournaments, which took place online and were open to all interested participants. After several rounds, finalists Wesley So and Magnus Carlsen played for the crown. The inaugural official FRC Champion was Wesley So.
Mainz Championships
Note: None of the Mainz championships were recognized by FIDE. Furthermore, they were all played with rapid time controls.
2001 – In 2001, Lékó became the first Fischer random chess world champion, defeating GM Michael Adams in an eight-game match played as part of the Mainz Chess Classic. There were no qualifying matches (also true of the first classical chess world chess champion titleholders), but both players were in the top five in the January 2001 world rankings for classical chess. Lékó was chosen because of the many novelties he has introduced to known chess theories, as well as his previous tournament win; in addition, Lékó has supposedly played Fischer random chess games with Fischer himself. Adams was chosen because he was the world number one in blitz (rapid) chess and is regarded as an extremely strong player in unfamiliar positions. The match was won by a narrow margin, 4½ to 3½.
2002 – In 2002 at Mainz, an open tournament was held which was attended by 131 players, with Peter Svidler taking first place. Fischer random chess was selected as the April 2002 "Recognized Variant of the Month" by The Chess Variant Pages (ChessVariants.org). The book Shall We Play Fischerandom Chess? was published in 2002, authored by Yugoslavian GM Svetozar Gligorić.
2003 – At the 2003 Mainz Chess Classic, Svidler beat Lékó in an eight-game match for the World Championship title by a score of 4½–3½. The Chess960 open tournament drew 179 players, including 50 GMs. It was won by Levon Aronian, the 2002 World Junior Champion. Svidler was the first official World New Chess Association (WNCA) world champion; the association was inaugurated on August 14, 2003, with Jens Beutel, Mayor of Mainz, as President and Hans-Walter Schmitt, Chess Classic organiser, as Secretary. The WNCA maintains its own dedicated Chess960 rating list.
2004 – Aronian played Svidler for the title at the 2004 Mainz Chess Classic, losing 4½–3½. At the same tournament in 2004, Aronian played two Chess960 games against the Dutch computer chess program The Baron, developed by Richard Pijl. Both games ended in a draw. It was the first ever man against machine match in Chess960. Zoltán Almási won the Chess960 open tournament in 2004.
2005 – Almási and Svidler played an eight-game match at the 2005 Mainz Chess Classic. Once again, Svidler defended his title, winning 5–3. Levon Aronian won the Chess960 open tournament in 2005. During the Chess Classic 2005 in Mainz, initiated by Mark Vogelgesang and Eric van Reem, the first-ever Chess960 computer chess world championship was played. Nineteen programs, including the powerful Shredder, played in this tournament. As a result of this tournament, Spike became the first Chess960 computer world champion.
2006 – The 2006 Mainz Chess Classic saw Svidler defending his championship in a rematch against Levon Aronian. This time, Aronian won the match 5–3 to become the third ever Fischer random chess world champion. Étienne Bacrot won the Chess960 open tournament, earning him a title match against Aronian in 2007. Three new Chess960 world championship matches were held, in the women, junior and senior categories. In the women category, Alexandra Kosteniuk became the first Chess960 Women World Champion by beating Elisabeth Pähtz 5½ to 2½. The 2006 Senior Chess960 World Champion was Vlastimil Hort, and the 2006 Junior Chess960 World Champion was Pentala Harikrishna. Shredder won the computer championship, making it Chess960 computer world champion 2006.
2007 – In 2007 Mainz Chess Classic Aronian successfully defended his title of Chess960 World Champion over Viswanathan Anand, while Victor Bologan won the Chess960 open tournament. Rybka won the 2007 computer championship.
2008 – Hikaru Nakamura won the 2008 Finet Chess960 Open (Mainz).
2009 – The last Mainz tournament was held in 2009. Hikaru Nakamura won the Chess960 World Championship against Aronian, while Alexander Grischuk won the Chess960 open tournament.
Computers
In 2005, chess program The Baron played two Fischer random chess games against Chess960 World Champion Peter Svidler, who won 1½–½. The chess program Shredder, developed by Stefan Meyer-Kahlen of Germany, played two games against Zoltán Almási from Hungary, where Shredder won 2–0.
Matches
From February 9 to 13, 2018, a Fischer random chess match between reigning classical World Chess Champion Magnus Carlsen and the unofficial Fischer random chess world champion Hikaru Nakamura was held in Høvikodden, Norway. The match consisted of 8 rapid and 8 blitz games, with the rapid games counting double. Each position was used in two games, with colors reversed. Carlsen prevailed with a score of 14–10.
From September 11 to 14, 2018, the Saint Louis Chess Club held a Fischer random chess event. The playing format included five individual matches, each pair of players facing the same five different starting positions, with 6 rapid games (counting 2 points each) and 14 blitz games (counting 1 point each). The players and scores: Veselin Topalov (14½–11½) defeated Garry Kasparov; Hikaru Nakamura (14–12) defeated Peter Svidler; Wesley So (14½–11½) defeated Anish Giri; Maxime Vachier-Lagrave (17½–8½) defeated Sam Shankland; Levon Aronian (17½–8½) defeated Leinier Dominguez.
World Championship
The first world championship in Fischer random chess officially recognized by FIDE was announced on April 20, 2019, and ended on November 2, 2019. Wesley So defeated Magnus Carlsen 13½–2½ in the final, and became the first world champion in Fischer random chess.
During the announcement FIDE president Arkady Dvorkovich commented: It is an unprecedented move that the International Chess Federation recognizes a new variety of chess, so this was a decision that required to be carefully thought out. But we believe that Fischer Random is a positive innovation: It injects new energies and enthusiasm into our game, but at the same time it doesn't mean a rupture with our classical chess and its tradition. It is probably for this reason that Fischer Random chess has won the favor of the chess community, including the top players and the world champion himself. FIDE couldn't be oblivious to that: It was time to embrace and incorporate this modality of chess.
Naming
The variant has held a number of different names. It was originally known as "Fischerandom", the name given by Fischer. Fischer random chess is the official term, used by FIDE.
Hans-Walter Schmitt, chairman of the Frankfurt Chess Tigers e.V. and an advocate of the variant, started a brainstorming process for creating a new name, which had to meet the requirements of leading grandmasters; specifically, the new name and its parts:
should not contain part of the name of any grandmaster;
should not include negatively biased or "spongy" elements (such as "random" or "freestyle"); and
should be universally understood.
The effort culminated in the name choice "Chess960" – derived from the number of different possible starting positions. Fischer never publicly expressed an opinion on the name "Chess960".
Reinhard Scharnagl, another proponent of the variant, advocated the term "FullChess". Today he uses FullChess, however, to refer to variants which consistently embed classical chess (e.g. Chess960 and similar variants). He recommends the name Chess960 for the variant in preference to Fischer random chess.
It is known as Chess9LX, by the Saint Louis Chess Club.
Coding games and positions
Recorded games must convey the Fischer random chess starting position. Games recorded using the Portable Game Notation (PGN) can record the initial position using Forsyth–Edwards Notation (FEN), as the value of the "FEN" tag. Castling is notated the same as in classical chess (except PGN requires letter O not number 0). Note that not all chess programs can handle castling correctly in Fischer random chess games. To correctly record a Fischer random chess game in PGN, an additional "Variant" tag (not "Variation" tag, which has a different meaning) must be used to identify the rules; the rule named "Fischerandom" is accepted by many chess programs as identifying Fischer random chess, though "Chess960" should be accepted as well. This means that in a PGN-recorded game, one of the PGN tags (after the initial seven tags) would look like this: [Variant "Fischerandom"].
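A minimal sketch of such a PGN header might look as follows (the tag values other than Variant and FEN are placeholders, the starting position shown is arbitrary, and many programs additionally expect a SetUp tag of "1" whenever a FEN tag is present):

[Event "?"]
[Site "?"]
[Date "????.??.??"]
[Round "?"]
[White "?"]
[Black "?"]
[Result "*"]
[SetUp "1"]
[FEN "nrbqkbrn/pppppppp/8/8/8/8/PPPPPPPP/NRBQKBRN w KQkq - 0 1"]
[Variant "Fischerandom"]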
FEN is capable of expressing all possible starting positions of Fischer random chess; however, unmodified FEN cannot express all possible positions of a Chess960 game. In a game, a rook may move into the back row on the same side of the king as the other rook, or pawn(s) may be underpromoted into rook(s) and moved into the back row. If a rook is unmoved and can still castle, yet there is more than one rook on that side, FEN notation as traditionally interpreted is ambiguous. This is because FEN records that castling is possible on that side, but not which rook is still allowed to castle.
A modification of FEN, X-FEN, has been devised by Reinhard Scharnagl to remove this ambiguity. In X-FEN, the castling markings "KQkq" have their expected meanings: "Q" and "q" mean a-side castling is still legal (for White and Black respectively), and "K" and "k" mean h-side castling is still legal (for White and Black respectively). However, if there is more than one rook on the baseline on the same side of the king, and the rook that can castle is not the outermost rook on that side, then the file letter (uppercase for White) of the rook that can castle is used instead of "K", "k", "Q", or "q"; in X-FEN notation, castling potentials belong to the outermost rooks by default. The maximum length of the castling value is still four characters. X-FEN is upwardly compatible with FEN, that is, a program supporting X-FEN will automatically use the normal FEN codes for a traditional chess starting position without requiring any special programming. As a benefit all 18 pseudo FRC positions (positions with traditional placements of rooks and king) still remain uniquely encoded.
The solution implemented by chess engines like Shredder and Fritz is to use the letters of the columns on which the rooks began the game. This scheme is sometimes called Shredder-FEN. For the traditional setup, Shredder-FEN would use HAha instead of KQkq.
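For example, assuming the traditional setup, the classical FEN record rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 would end in "w HAha - 0 1" under Shredder-FEN, naming the h- and a-file rooks of each side as the pieces retaining castling rights.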
Views of grandmasters
Fischer's proposed "new chess" has elicited various comments from grandmasters.
"I think in general the future of classical chess as it is now is a little bit dubious. I would love to see more Fischer [Random] Chess being played over-the-board in a classical format. That would be very interesting to me, because I feel that that particular format is pretty well suited to classical chess as basically you need a lot of time in order to be able to play the game even remotely decently. And you can see that in the way that Fischer [Random] Chess is being played now when it is played in a rapid format. The quality of the games isn't very high because we make such fundamental mistakes in the opening. We don't understand it nearly enough and I think that would increase a lot if we were given a classical time control there. So I would definitely hope for that." —Magnus Carlsen, November 2020
"Of course, if people do not want to do any work then it is better to start the game from a random position." —Garry Kasparov, December 2001
"Random chess lets me enjoy myself and get publicity for chess without having to disrupt my life for months of preparation." —Garry Kasparov, August 2018
"I think we're making theory or even making history because we're opening not even a new chapter but basically a new book on the game of chess. That's why I think all players are excited." —Garry Kasparov, September 2018
"To me, mainly chess is art — that's why I like Fischer Random a lot; there is a lot of creativity." —Wesley So, November 2019
"My favorite form of chess is actually chess960. Because there's not much theory, not much preparation, it's very original. With the traditional format, the engines are just getting super strong, and it feels like you have to memorize the first 20-25 moves just to get a game. Bobby Fischer once said that the problem with chess is that you get the same exact starting position over and over. These days, there's 10 million games in the database already, so it's very hard to create original play, while chess960 is really your brain against mine. After the first or second move, you're already thinking." —Wesley So, April 2019
"I have to say that I love Chess960! I like to be creative and I really enjoy the Chess960 events in Mainz." —Alexandra Kosteniuk, August 2010
"I don't see any drawbacks in Fischer Random chess. The only slight shortcoming is the start position, otherwise there are just advantages. That's why I support it in full. If all the chess professionals played Fischer Random, our game could have been much more popular." —Alexander Grischuk, March 2018 [translated from Russian]
"Chess is already complicated enough." —Vassily Ivanchuk
"It’s a game I really love and I see it as the future of chess." —Levon Aronian, July 2011
"I think chess960 is great as it is simply pure intuition and understanding without theory or computers." —Hikaru Nakamura, February 2014
"Personally, It is refreshing to watch the Chess960 match between Carlsen and Nakamura. As a chess player and a fan, this is an exciting change. Could this be the future?" —Vidit Gujrathi, February 2018
"No more theory means more creativity." —Artur Yusupov
"[...] the play is much improved over traditional chess because you don't need to analyze or memorize any book openings. Therefore, your play becomes truly creative and real." —Svetozar Gligorić
"Finally, one is no longer obliged to spend the whole night long troubling oneself with the next opponent's opening moves. The best preparation consists just of sleeping well!" —Péter Lékó
"I tried many different starting positions and all these were somehow very unharmonious. And this is not surprising as in many of these positions there is immediate forced play: the pieces are placed so badly at the start that there is a need to improve their positions in one way only, which decreases the number of choices." —Vladimir Kramnik [translated from Russian]
"Fischer Random is an interesting format, but it has its drawbacks. In particular, the nontraditional starting positions make it difficult for many amateurs to enjoy the game until more familiar positions are achieved. The same is true for world-class players, as many have confessed to me privately. Finally, it also seems to lack an aesthetic quality found in traditional chess, which makes it less appealing for both players and viewers, even if it does occasionally result in an exciting game." —Vladimir Kramnik
"Both players have bad positions." —Helmut Pfleger, commentating on Lékó–Adams, Mainz 2001, game 4
"The changes in chess concern the perfection of computers and the breakthrough of high technology. Under this influence the game is losing its charm and reducing more and more the number of creative players. [...] I am a great advocate of Fischer's idea of completely changing the rules of chess, of creating a practically new game. It is the only way out, because then there would be no previous experience on which a machine could be programmed, at least until this new chess itself becomes exhausted. Fischer is a genius and I believe that his project would save the game." —Ljubomir Ljubojević
"In my opinion, we should start moving towards Chess960, just like we started to generate energy with renewable energy sources a while ago. If we start now, then by the time it reaches a crisis point, we will have a viable alternative ready." —Srinath Narayanan, August 2017
Comments by Fischer
"Teach people to play new chess, right away. Why do you offer them a black and white television set, when there is a set in color?" —Fischer, in the only meeting with FIDE President Kirsan Ilyumzhinov, responding to the latter advocating "step by step" changes mindful of the heritage of chess
"I don't know when, but I think we are approaching that [the end of chess] very rapidly. I think we need a change in the rules of chess. For example, I think it would be a good idea to shuffle the first row of the pieces by computer ... and this way you will get rid of all the theory. One reason that computers are strong in chess is that they have access to enormous theory [...] I think if you can turn off the computer's book, which I've done when I've played the computer, they are still rather weak, at least at the opening part of the game, so I think this would be a good improvement, and also just for humans. It is much better, I think, because chess is becoming more and more simply memorization, because the power of memorization is so tremendous in chess now. Theory is so advanced, it used to be theory to maybe 10 or 15 moves, 18 moves; now, theory is going to 30 moves, 40 moves. I think I saw one game in Informator, the Yugoslav chess publication, where they give an N [theoretical novelty] to a new move, and I recall this new move was around move 50. [...] I think it is true, we are coming to the end of the history of chess with the present rules, but I don't say we have to do away with the present rules. I mean, people can still play, but I think it's time for those who want to start playing on new rules that I think are better." —Fischer (September 1, 1992)
Similar variants
There are several variants based on randomization of the initial setup. "Randomized Chess, in one or other of its many reincarnations, continues to attract support even, or perhaps especially, that of top players."
Shuffle chess
Castling is either not possible at all or is possible only when the king and rook are on their traditional starting squares.
Chess2880
Castling is possible as follows:
After castling with the rook nearest to column:
"h", the king will be in column "g" and the rook will be in column "f".
"a", the king will be in column "c" and the rook will be in column "d".
Chess480
In "Castling in Chess960: An appeal for simplicity", John Kipling Lewis proposes alternative castling rules which Lewis has named "Orthodoxed Castling".
The preconditions for castling are the same as in Chess960, but when castling, [...] the king is transferred from its original square two squares towards (or over) the rook, then that rook is transferred to the square the king has just crossed (if it is not already there). If the king and rook are adjacent in a corner and the king cannot move two spaces over the rook, then the king and rook exchange squares.
Unlike Fischer random chess, the final position after castling in Chess480 will usually not be the same as the final position of a castling move in traditional chess. Lewis argues that this alternative better conforms to how the castling move was historically developed.
Lewis has named this chess variation "Chess480"; it follows the rules of Chess960 with the exception of the castling rules. Although a Chess480 game can start with any of 960 starting positions, the castling rules are symmetrical (whereas the Chess960 castling rules are not), so that mirror-image positions have identical strategies; thus there are only 480 effectively different positions. The number of starting positions could be reduced to 480 without losing any possibilities, for example by requiring the white king to start on a light (or dark) square.
There are other claims to the nomenclature "Chess480"; Reinhard Scharnagl defines it as the white queen is always to the left of the white king.
David O'Shaughnessy argues in "Castling in Chess480: An appeal for sanity" that the Chess480 rules are often not useful from a gameplay perspective. In about 66% of starting positions, players have the options of castling deeper into the wing the king started on, or castling into the center of the board (when the king starts on the b-, c-, f-, or g-files). From the Wikipedia article Castling: "Castling is an important goal in the early part of a game, because it serves two valuable purposes: it moves the king into a safer position away from the center of the board, and it moves the rook to a more active position in the center of the board." An example of poor castling options is a position where the kings start on g1 and g8 respectively. There will be no possibility of "opposite-side castling" where each player's pawns are free to be used in pawn storms, as the king's scope for movement is very restricted (it can only move to the h- or e-file). These "problem positions" play well with Chess960 castling rules.
Non-random setups
The initial setup need not necessarily be random.
The players, or a tournament's organizers, may decide on a specific position in advance, for example. Tournament directors prefer that all boards in a single round play the same random position, so as to maintain order and shorten the setup time for each round.
Edward Northam suggests the following approach for allowing players to jointly create a position without randomizing tools: First, the back ranks are cleared of pieces, and the white bishops, knights, and queen are gathered together. Starting with Black, the players, in turn, place one of these pieces on White's back rank, where it must stay. The only restriction is that the bishops must go on opposite-color squares. There will be a vacant square of the required color for the second bishop, no matter where the previous pieces have been placed. Some variety could be introduced into this process by allowing each player to exercise a one-time option of moving a piece already on the board instead of putting a new piece on the board. After all five pieces have been put on the board, the king must be placed on the middle of the three vacant back rank squares that remain. Rooks go on the other two.
This approach to the opening setup has much in common with Pre-Chess, the variant in which White and Black, alternately and independently, fill in their respective back ranks. Pre-Chess could be played with the additional requirement of ending up with a legal Fischer random chess opening position. A chess clock could even be used during this phase as well as during normal play.
Without some limitation on which pieces go on the board first, it is possible to reach impasse positions, which cannot be completed to legal Fischer random chess starting positions (for example: Q.RB.N.N). If the players want to work with all eight pieces, they must have a prior agreement about how to correct illegal opening positions that may arise. If the bishops end up on same-color squares, a simple action, such as moving the a-side bishop one square toward the h-file, might be agreeable, since there is no question of preserving randomness. Once the bishops are on opposite-color squares, if the king is not between the rooks, it should trade places with the nearest rook.
References
Bibliography
Further reading
External links
The birth of Fischer Random Chess by Eric van Reem, The Chess Variant Pages
CCRL 404FRC Computer Chess Rating List for FRC 40/4 time control
Chess Book from Castle Long publisher information on book by Gene Milener
Chess960.net Chess960 information: Why, how, what, where
Fischer Describes his Fischer Random Chess Rules audio clip of Bobby Fischer
Fischer Random Chess various authors, The Chess Variant Pages
Lichess free online Chess960 play against an engine or human
Chess 960 playable online at Green Chess
Fischer Hated Chess
Chess960 generator
chessgames.com's Fischerandom chess generator
Chess960 Start Positions
Bobby Fischer
Chess variants
1996 in chess
Board games introduced in 1996 |
64499620 | https://en.wikipedia.org/wiki/Electronic%20voting%20in%20the%20United%20States | Electronic voting in the United States | Electronic voting in the United States involves several types of machines: touch screens for voters to mark choices, scanners to read paper ballots, scanners to verify signatures on envelopes of absentee ballots, and web servers to display tallies to the public. Aside from voting, there are also computer systems to maintain voter registrations and display these electoral rolls to polling place staff.
Most election offices handle thousands of ballots, with an average of 17 contests per ballot, so machine-counting can be faster and less expensive than hand-counting.
Voluntary guidelines
The Election Assistance Commission (EAC) is an independent agency of the United States government which developed the 2005 Voluntary Voting System Guidelines (VVSG). These guidelines address some of the security and accessibility needs of elections. The EAC also accredits three test laboratories which manufacturers hire to review their equipment. Based on reports from these laboratories the EAC certifies when voting equipment complies with the voluntary guidelines.
Twelve states require EAC certification for machines used in their states. Seventeen states require testing by an EAC-accredited lab, but not certification. Nine states and DC require testing to federal standards, by any lab. Four other states refer to federal standards but make their own decisions. The remaining eight states do not refer to federal standards.
Certification takes two years, costs a million dollars, and is needed again for any equipment update, so election machines are a difficult market.
A revision to the guidelines, known as the VVSG 1.1, was prepared in 2009 and approved in 2015. Voting machine manufacturers can choose which guidelines they follow. A new version has been written known as the VVSG 2.0 or the VVSG Next Iteration, which is being reviewed.
Optical scan counting
In an optical scan voting system, each voter's choices are marked on one or more pieces of paper, which then go through a scanner. The scanner creates an electronic image of each ballot, interprets it, creates a tally for each candidate, and usually stores the image for later review.
The voter may mark the paper directly, usually in a specific location for each candidate, then mail it or put it in a ballot box.
Or the voter may select choices on an electronic screen, which then prints the chosen names, usually with a bar code or QR code summarizing all choices, on a sheet of paper to put in the scanner. This screen and printer is called an electronic ballot marker (EBM) or ballot marking device (BMD), and voters with disabilities can communicate with it by headphones, large buttons, sip and puff, or paddles, if they cannot interact with the screen or paper directly. Typically the ballot marking device does not store or tally votes. The paper it prints is the official ballot, put into a scanning system which counts the barcodes, or the printed names can be hand-counted, as a check on the machines.
Most voters do not look at the machine-printed paper to ensure it reflects their choices. When there is a mistake, an experiment found that 81% of registered voters do not report errors to poll workers. No state requires central reporting of errors reported by voters, so the occasional report cannot lead to software correction. Hand-marked paper ballots, by contrast, have clearly been reviewed by the voters who marked them, but some places allow correction fluid and tape, so ballots can be changed later.
Two companies, Hart and Clear Ballot, have scanners which count the printed names, which voters had a chance to check, rather than bar codes and QR codes, which voters are unable to check. When scanners use the bar code or QR code, the candidates are represented in the bar code or QR code as numbers, and the scanner counts those codes, not the names. If a bug or hack makes the numbering system in the ballot marking device different from the numbering system in the scanner, votes will be tallied for the wrong candidates. This numbering mismatch has appeared with direct recording electronic machines (below).
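As a toy illustration of such a mismatch (the candidate names and both code tables below are invented; real devices use vendor-specific encodings), consider a ballot marking device and a scanner that disagree about which number stands for which candidate:

```python
# Hypothetical code tables -- invented for illustration only.
bmd_codes = {"Alice": 1, "Bob": 2, "Carol": 3}       # numbering the marking device prints into the QR code
scanner_names = {1: "Carol", 2: "Alice", 3: "Bob"}   # numbering the scanner was programmed with

def tally(ballots):
    """Each ballot is cast by candidate name, encoded to a number by the
    marking device, and decoded back to a name by the scanner."""
    counts = {}
    for choice in ballots:
        code = bmd_codes[choice]              # what the QR code actually carries
        credited_to = scanner_names[code]     # whom the scanner credits the vote to
        counts[credited_to] = counts.get(credited_to, 0) + 1
    return counts

print(tally(["Alice", "Alice", "Bob"]))   # {'Carol': 2, 'Alice': 1}: every vote credited to the wrong candidate
```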
Errors in optical scans
Scanners have a row of photo-sensors which the paper passes by, and they record light and dark pixels from the ballot. A black streak results when a scratch or paper dust causes a sensor to record black continuously. A white streak can result when a sensor fails.
In the right place, such lines can indicate a vote for every candidate or no votes for anyone. Some offices blow compressed air over the scanners after every 200 ballots to remove dust.
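As a toy illustration of this effect (the pixel grid, mark threshold, and stuck-sensor index are invented and far simpler than real scanner firmware), a sensor stuck on dark adds a black streak that can read as a mark in every contest:

```python
# Each row is the oval region for one contest; 1 = dark pixel, 0 = light pixel.
clean_scan = [
    [0, 0, 0],   # contest 1: oval left empty by the voter
    [1, 1, 1],   # contest 2: oval filled in by the voter
    [0, 0, 0],   # contest 3: oval left empty by the voter
]

def is_marked(row, threshold=1):
    """Count the oval as voted if enough pixels in it are dark."""
    return sum(row) >= threshold

# Simulate a sensor (column 1) stuck on dark: a black streak down the ballot.
streaked_scan = [row[:] for row in clean_scan]
for row in streaked_scan:
    row[1] = 1

print([is_marked(r) for r in clean_scan])     # [False, True, False]  -- only the real vote
print([is_marked(r) for r in streaked_scan])  # [True, True, True]    -- the streak reads as a vote everywhere
```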
Software can miscount; if it miscounts drastically enough, people notice and check. Staff rarely can say who caused an error, so they do not know whether it was accidental or a hack.
In 2020 elections in Collier and Volusia Counties, FL, optical scanners misinterpreted voters' marks on 0.1% and 0.2% of ballot sheets respectively. These were not enough to change any outcomes, and involved voters' marks which barely touched the ovals intended to record votes. They were discovered by independent software re-examining all the ballot images.
In a 2020 election in Antrim County, MI, last-minute updates to some ballots were not applied to all scanners, so the scanners had inconsistent numeric codes for different candidates and ballot styles, causing errors of thousands of votes. Corrections happened in stages, leading to less and less confidence in the results.
In a 2020 election in Windham, New Hampshire, folds in the wrong places and dust on scanner sensors caused many fold lines to be counted as votes.
In a 2020 election in Baltimore, Maryland, the private company which printed ballots shifted the location of some candidates on some ballots up one line, so the scanner looked in the wrong places on the paper and reported the wrong numbers. It was caught because a popular incumbent got implausibly few votes.
In a 2019 election in Northampton county, Pennsylvania, the software under-counted one candidate by 99%, reporting 164 votes, compared to 26,142 found in a subsequent hand-count, which changed the candidate's loss to a win.
In a 2018 New York City election when the air was humid, ballots jammed in the scanner, or multiple ballots went through a scanner at once, hiding all but one.
In a 2016 Maryland election, a comparison of two scanning systems on the same ballots revealed that (a) 1,972 ballot images were incorrectly left out of one system, (b) one system incorrectly ignored many votes for write-in candidates, (c) shadows from paper folds were sometimes interpreted as names written in on the ballot, (d) the scanner sometimes pulled two ballots at once, scanning only the top one, (e) the ballot printers sometimes left off certain candidates, (f) voters often put a check or X instead of filling in an oval, which software has to adapt to, and (g) a scratch or dirt on a scanner sensor put a black line on many ballot images, causing the appearance of voting for more than the allowed number of candidates, so those votes were incorrectly ignored.
In 2016 Wisconsin elections statewide, some voting machines did not detect some of the inks used by voters.
In a 2016 Rhode Island election, machines were misprogrammed with only one ballot style, though there were two. Results were surprising enough so officials investigated and found the error.
In a 2014 Stoughton, Wisconsin, election, all voters' choices on a referendum were ignored, because the scanner was programmed to look in the wrong spot on the ballot.
In a 2010 New York election, 20,000 votes for governor and 30,000-40,000 votes for other offices were ignored, because the scanners overheated and disqualified the ballots by reading multiple votes in races where voters had properly only voted once.
Errors from 2002 to 2008 were listed and analyzed by the Brennan Center in 2010.
In a 2004 Yakima, Washington, election 24 voters' choices on 4 races were ignored by a faulty scanner which created a white streak down the ballot.
In a 2004 Medford, Wisconsin, election, all 600 voters who voted a straight party ticket had all their votes ignored, because the manufacturer forgot to program the machines for a partisan election. Election officials did not notice any problem. The consultant who found the lost 600 voters also reported a Michigan precinct with zero votes, since staff put ballots in the scanner upside down.
In a 2000 Bernalillo County (Albuquerque area), New Mexico, election, a programming error meant that straight-party votes on paper ballots were not counted for the individual candidates. The number of ballots was thus much larger than the number of votes in each contest. The software was fixed, and the ballots were re-scanned to get correct counts.
In the 2000 Florida presidential race the most common optical scanning error was to treat as an overvote a ballot where the voter marked a candidate and wrote in the same candidate.
Researchers find security flaws in all election computers, which let voters, staff members or outsiders disrupt or change results, often without detection.
Security reviews and audits are discussed below.
Recreated ballots
Recreated ballots are paper or electronic ballots created by election staff when originals cannot be counted for some reason. Reasons include tears, water damage, folds which prevent feeding through scanners and voters selecting candidates by circling them or other abnormal marks. Reasons also include citizens abroad who use the Federal Write-In Absentee Ballot because of not receiving their regular ballot in time. As many as 8% of ballots in an election may be recreated.
When auditing an election, audits are done with the original ballots, not the recreated ones, to catch mistakes in recreating them.
Cost of scanning systems
If most voters mark their own paper ballots and one marking device is available at each polling place for voters with disabilities, Georgia's total cost of machines and maintenance for 10 years, starting 2020, has been estimated at $12 per voter ($84 million total). Pre-printed ballots for voters to mark would cost $4 to $20 per voter ($113 million to $224 million total machines, maintenance and printing). The low estimate includes $0.40 to print each ballot, and more than enough ballots for historic turnout levels. The high estimate includes $0.55 to print each ballot, and enough ballots for every registered voter, including three ballots (of different parties) for each registered voter in primary elections with historically low turnout. The estimate is $29 per voter ($203 million total) if all voters use ballot marking devices, including $0.10 per ballot for paper.
The capital cost of machines in 2019 in Pennsylvania is $11 per voter if most voters mark their own paper ballots and a marking device is available at each polling place for voters with disabilities, compared to $23 per voter if all voters use ballot marking devices. This cost does not include printing ballots.
New York has an undated comparison of capital costs and a system where all voters use ballot marking devices costing over twice as much as a system where most do not. The authors say extra machine maintenance would exacerbate that difference, and printing cost would be comparable in both approaches. Their assumption of equal printing costs differs from the Georgia estimates of $0.40 or $0.50 to print a ballot in advance, and $0.10 to print it in a ballot marking device.
Direct-recording electronic counting
A touch screen displays choices to the voter, who selects choices, and can change her mind as often as needed, before casting the vote. Staff initialize each voter once on the machine, to avoid repeat voting. Voting data and ballot images are recorded in memory components, and can be copied out at the end of the election.
The system may also provide a means for communicating with a central location for reporting results and receiving updates, which is an access point for hacks and bugs to arrive.
Some of these machines also print names of chosen candidates on paper for the voter to verify. These names on paper can be used for election audits and recounts if needed. The tally of the voting data is stored in a removable memory component and in bar codes on the paper tape. The paper tape is called a Voter-verified paper audit trail (VVPAT). The VVPATs can be counted at 20–43 seconds of staff time per vote (not per ballot).
For machines without VVPAT, there is no record of individual votes to check.
Errors in direct-recording electronic voting
This approach can have software errors. It does not include scanners, so there are no scanner errors. When there is no paper record, it is hard to notice or research most errors.
The only forensic examination of direct-recording software files to date was done in Georgia in 2020; it found that one or more unauthorized intruders had entered the files and erased records of what they did to them. In 2014–2017 an intruder had control of the state computer in Georgia which programmed vote-counting machines for all counties. The same computer also held voter registration records. The intrusion exposed all election files in Georgia since then to compromise and malware. Public disclosure came in 2020 from a court case. Georgia did not have paper ballots to measure the amount of error in electronic tallies. The FBI studied that computer in 2017, and did not report the intrusion.
A 2018 study of direct-recording voting machines (iVotronic) without VVPAT in South Carolina found that every election from 2010 to 2018 had some memory cards fail. The investigator also found that lists of candidates were different in the central and precinct machines, so 420 votes which were properly cast in the precinct were erroneously added to a different contest in the central official tally, and unknown numbers were added to other contests in the central official tallies. The investigator found the same had happened in 2010. There were also votes lost by garbled transmissions, which the state election commission saw but did not report as an issue. Forty-nine machines reported that their three internal memory counts disagreed, an average of 240 errors per machine, yet the machines stayed in use and the state evaluation did not report the issue; there were also other error codes and time-stamp errors.
In a 2017 York County, Pennsylvania, election, a programming error in a county's machines without VVPAT let voters vote more than once for the same candidate. Some candidates had filed as both Democrat and Republican, so they were listed twice in races where voters could select up to three candidates, so voters could select both instances of the same name. They recounted the DRE machines' electronic records of votes and found 2,904 pairs of double votes.
In a 2015 Memphis, Tennessee, city election, the central processing system lost 1,001 votes which showed on poll tapes posted at precincts: "at Unity Christian... precinct’s poll tape... 546 people had cast ballots... Shelby County’s first breakdown of each precinct’s voting... for Unity Christian showed only 330 votes. Forty percent of the votes had disappeared... At first it looked like votes were missing from not just one precinct but 20. After more investigation, he appeared to narrow that number to four... In all, 1,001 votes had been dropped from the election night count."
In a 2011 Fairfield Township, New Jersey, election a programming error in a machine without a VVPAT gave two candidates low counts. They collected more affidavits by voters who voted for them than the computer tally gave them, so a judge ordered a new election which they won.
Errors from 2002 to 2008 were listed and analyzed by the Brennan Center in 2010.
In 2004, 4,812 voting machine problems were reported to a system managed by Verified Voting and Computer Professionals for Social Responsibility. Most of these problems were in states which were primarily using direct-recording electronic voting equipment as of 2006.
Security reviews and audits are discussed below.
Online, email and fax voting
Email, fax, phone apps, modems, and web portals transmit information through the internet, between computers at both ends, so they are subject to errors and hacks at the origin, destination and in between.
Election machines online
As of 2018–19, election machines are online, to transmit results between precinct scanners and central tabulators, in some counties in Florida, Illinois, Indiana, Iowa, Michigan, Minnesota, Rhode Island, Tennessee and Wisconsin.
Receiving ballots online
In many states, voters with a computer and printer can download a ballot to their computer, fill it out on the computer, print it and mail it back. This "remote access vote by mail" (RAVBM) avoids transmitting votes online, while letting distant voters avoid waiting for a mailed ballot, and letting voters with disabilities use assistive technologies to fill in the ballot privately and independently, such as screen readers, paddles or sip and puff if they already have them on their computer. The voter also receives a form with tracking numbers and a signature line, to mail back inside or outside the envelope with the ballot, so staff can review eligibility of the voter and prevent multiple votes from the same voter. Many states accept mailed ballots after election day, to allow time for mail from distant voters to arrive. The printed ballot may show just the choices and a bar code or QR code, not all the candidates and unvoted contests.
The voter's choices are not put online, which is an advantage for the voter's privacy. However the system does not work for people who have no printer or no computer. For people, such as soldiers, with a shared computer or printer, votes can be divulged by keystroke logging, by the print queue, or by people seeing ballots on the printer. Alternatives for distant voters are to get a paper ballot from the election office or the Federal Write-In Absentee Ballot. Alternatives for local voters with disabilities are to use a ballot marking device (BMD) at a polling place, if they can get there, or have election staff bring a BMD and ballot box to the voter. The voter's printer does not necessarily use the weight and size of paper expected by the election scanners, so, after separating ballot from identifiers, staff copy the voter's choices onto a standard ballot for scanning. This copying has scope for error.
In California people send a signed application by mail, email or fax and receive a code by email, so there are signature checks both on the application and when the ballot envelope arrives. In Washington, people access the ballot electronically with name and birth date, so the signature check when the ballot envelope arrives is the method used to authenticate ballots.
Individuals voting online
States which allow individual voters to submit completed ballots electronically in the United States are:
Hawaii allows email voting by any permanent absentee voter who has not received a ballot by five days before an election
Idaho allows email and fax voting in declared emergencies
Louisiana allows fax voting for voters with a disability
Utah allows email and fax voting for those with disabilities
Other states have tried or considered software, with problems discussed below.
The Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) lets overseas citizens and all military and merchant marine voters get ballots electronically (email, fax, or web site). They then submit ballots by mail to 20 states. Four states allow submission through secure web sites: AZ, CO (if needed), MO, and WV. These four and the remaining 27 states have a mix of rules allowing email or fax: AK, CA, DE, DC, FL, HI, IN, IA, KS, LA, ME, MA, MS, MT, NE, NV, NJ, NM, NC, MD, OK, OR, RI, SC, TX (for danger, combat zones or space), UT, and WA. The Federal Voting Assistance Program converts emails to fax at voter request, so states which require fax receive ballots which started as emails.
Problems in online voting
Security experts have found security problems in every attempt at online voting, including systems in Australia, Estonia, Switzerland, Russia, and the United States. In 2019–2020, researchers found insecurities in the online voting systems from Voatz and Democracy Live.
In 2010, graduate students from the University of Michigan hacked into the District of Columbia online voting systems during an online voting mock test run and changed all the cast ballots to cater to their preferred candidates. This voting system was being tested for military voters and overseas citizens, allowing them to vote on the Web, and was scheduled to run later that year. It only took the hackers, a team of computer scientists, thirty-six hours to find the list of the government's passwords and break into the system.
In March 2000 the Arizona Democratic presidential primary was conducted partly over the internet, using the private company votation.com. Each registered member of the party received a personal identification number in the mail. They could vote in person or over the internet, using their PIN and answering two questions such as date and place of birth. During the election older browsers failed, but no hacks were identified.
Electronic processing of postal and absentee ballots
Checking signatures on envelopes of absentee ballots is hard, and is often computerized in jurisdictions with many absentee ballots. The envelope is scanned, and the voter's signature on the outside of the envelope is instantly compared with one or more signatures on file. The machine sets aside non-matches in a separate bin. Temporary staff then double-check the rejections, and in some places check the accepted envelopes too.
Error rates of computerized signature reviews are not published. "A wide range of algorithms and standards, each particular to that machine's manufacturer, are used to verify signatures. In addition, counties have discretion in managing the settings and implementing manufacturers' guidelines… there are no statewide standards for automatic signature verification… most counties do not have a publicly available, written explanation of the signature verification criteria and processes they use"
Handwriting experts agree that "it is extremely difficult for anyone to be able to figure out if a signature or other very limited writing sample has been forged."
The National Vote at Home Institute reports that 17 states do not mandate a signature verification process.
The Election Assistance Commission says that machines should be set only to accept nearly perfect signature matches, and humans should doublecheck a sample, but EAC does not discuss acceptable error rates or sample sizes.
In the November 2016 general election, rejections ranged from none in Alabama and Puerto Rico, to 6% of ballots returned in Arkansas, Georgia, Kentucky and New York.
Where reasons for rejection were known, in 2018, 114,000 ballots arrived late, 67,000 failed signature verification, 55,000 lacked voter signatures, and 11,000 lacked witness signatures in states which require them. The intent of the signature verification step was to catch and reject forged signatures on ballot envelopes.
The highest error rates in signature verification are found among lay people, higher than for computers, which in turn make more errors than experts.
Researchers have published error rates for computerized signature verification. They compare different systems on a common database of true and false signatures. The best system falsely rejects 10% of true signatures, while it accepts 10% of forgeries. Another system has error rates on both of 14%, and the third-best has error rates of 17%.
It is possible to be less stringent and reject fewer true signatures, at the cost of also rejecting fewer forgeries, which means erroneously accepting more forgeries.
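As a toy numerical illustration of this trade-off (the similarity scores below are invented; real systems use proprietary algorithms and scales), raising the acceptance threshold rejects more forgeries but also more genuine signatures, and vice versa:

```python
# Invented similarity scores in [0, 1]; higher means closer to the signature on file.
genuine_scores = [0.95, 0.90, 0.72, 0.65, 0.55]
forgery_scores = [0.80, 0.60, 0.40, 0.35, 0.20]

def error_rates(threshold):
    """Return (false-rejection rate of genuine signatures, false-acceptance rate of forgeries)."""
    false_reject = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    false_accept = sum(s >= threshold for s in forgery_scores) / len(forgery_scores)
    return false_reject, false_accept

for t in (0.5, 0.7, 0.9):
    fr, fa = error_rates(t)
    print(f"threshold {t}: rejects {fr:.0%} of genuine signatures, accepts {fa:.0%} of forgeries")
```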
Vendors of automated signature verification claim accuracy, and do not publish their error rates.
Voters with short names are at a disadvantage, since even experts make more mistakes on signatures with fewer "turning points and intersections."
State and local websites for election results
Election offices display election results on the web by transferring USB drives between offline election computers, and online computers which display results to the public. USB drives can take infections from the online computers to the election computers. Local governments communicate electronically with their state governments so the state can display results, with the result that problems at the state level can affect all or many local offices.
Election-reporting websites run software to aggregate and display results. These have had programming errors which showed erroneous partial results during the evening, and the wrong winner.
Before the 2016 general election, Russians gained access to at least one employee's account at a vendor which manages election-reporting websites. During the 2018 general election, a hacker in India gained administrative access to the Alaska election-reporting website.
Studies by McAfee and ProPublica in 2020 found that most election websites have inadequate security. McAfee analyzed swing states. ProPublica analyzed Super Tuesday states. They found many offices using outdated, insecure, dangerous and inappropriate software, including unsupported operating systems, and using the same few web hosts, which they said is dangerous for critical infrastructure, since finding a flaw in one can lead to access to them all. They criticized offices for not using https encryption, and for public site names ending in .com or .org, since this leads voters to trust sites which are not .gov, and voters can easily be tricked by a similar name.
Election security
Decentralized system
In 2016 Homeland Security and the Director of National Intelligence said that United States elections are hard to hack, because they are decentralized, with many types of machines and thousands of separate election offices operating under 51 sets of state laws. Others have made similar statements.
An official at the Center for Strategic and International Studies said a nation state would target hacks in key counties. A McAfee expert said decentralization makes defense hard and for "a very determined group, trying to compromise this system, I think it ends up playing more into their favor than against them." Each city or county election is run by one office, and a few large offices affect state elections. County staff cannot in practice defend against foreign governments.
Security reviews
The Brennan Center summarized almost 200 errors in election machines from 2002 to 2008, many of which happened repeatedly in different jurisdictions, which had no clearinghouse to learn from each other.
More errors have happened since then. Cleveland State University listed formal studies of voting systems done by several groups through 2008.
Machines in use are not examined to determine if they have been hacked, so no hacks of machines in use have been documented. Researchers have hacked all machines they have tried, and have shown how they can be undetectably hacked by manufacturers, election office staff, poll workers, voters, and outsiders. The public can access unattended machines in polling places the night before elections. Some of the hacks can spread among machines on the removable memory cards which tell the machines which races to display, and carry results back to the central tally location.
The CEO of Free and Fair, an open source vendor, said the cheapest way to improve security is for each election office to hire a computer student as a white hat hacker to conduct penetration tests.
Audits
Five states check all contests by hand tallies in a small percent of locations, AK, CA, PA, UT, WV, though California excludes about half the ballots, the ones counted after election day, and Alaska excludes small precincts.
Two states check all contests by machines independent of the election machines, in a small percent of locations, NY, VT.
Seventeen states check one or a few contests by hand, usually federal races and the governor; most local contests are not checked.
Four states reuse the same machines or ballot images as the election, so errors can persist, CT, IL, MD, NV.
Sixteen states do not require audits, or only in special circumstances.
In seven states many voters still lack paper ballots, so audits are not possible. IN, KY, LA, MS, NJ, TN, TX.
Even where audits are done, no state has adequate security on the paper ballots, so they can be damaged to impede audits, or altered to match erroneous machine tallies. Even insiders have breached security.
Public attitudes
The Pew Research Center found in October 2018 wide mistrust of election security in both parties, especially among Democrats:
8% of voters were "very confident that election systems are secure from hacking and other technological threats."
37% were "somewhat confident", and the remaining 55% were not confident
13% of Republicans were very confident, and 41% were not confident
4% of Democrats were very confident, and 66% were not confident
An MIT professor's survey found that Republicans think domestic hackers are more likely than foreigners; Democrats think the opposite.
Stanford and Wisconsin researchers in 2019 found that only 89% of voters disapprove if a foreign country would "hack into voting machines and change the official vote count to give [a] candidate extra votes" and the candidate wins. This 89% disapproval is not much more than the 88% who disapprove of a foreign country making campaign contributions and 78-84% against them spreading lies. Only 73-79% disapprove if their party got help, while 94-95% disapprove of the other party getting help. If a foreign country thought about interfering, but did not, 21% distrust the results anyway. This rises only to 84% distrusting final results after a foreign country hacked and changed results.
For any of the foreign actions (hacks, contributions or lies), 72% of voters support economic sanctions, 59% support cutting diplomatic relations, 25% support a military threat, and 15% support a military strike. There was less support for action, by 4-20 percentage points, if the foreign country helped one's own party win, so the researchers point out that retaliation is unlikely, since there is little support for it in a winning party. Deep investigation creates more certainty about who is to blame, which they find raises support for retaliation very little. They randomly listed China, Pakistan or Turkey as the interfering country, and do not report any different reactions to them.
A Monmouth University poll in May 2019 found that 73% thought Russia interfered in the 2016 election (not necessarily by hacking), 49% thought it damaged American democracy a lot, 57% thought Russia interfered in the 2018 election, and 60% thought the US government is not doing enough to stop it. Margin of error is ±3.5%.
Election companies
Three vendors sell most of the machines used for voting and for counting votes. As of September 2016, the American company Election Systems & Software (ES&S) served 80 million registered voters, the Canadian company Dominion Voting Systems 70 million, the American company Hart InterCivic 20 million, and smaller companies fewer than 4 million each.
More companies sell signature verification machines: ES&S, Olympus, Vantage, Pitney Bowes, Runbeck, and Bell & Howell.
Amazon provides election websites in 40 states, including election-reporting sites in some of them. A Spanish company, Scytl, manages election-reporting websites statewide in 12 U.S. states, and in another 980 local jurisdictions in 28 states.
Another website management company is VR Systems, active in 8 states.
Maryland's election website is managed by a company owned by an associate of Russian President Putin.
Timeline of development
1964: The Norden-Coleman optical scan voting system, the first such system to see actual use, was adopted for use in Orange County, California.
1974: The Video Voter, the first DRE voting machine used in a government election, developed by the Frank Thornber Company in Chicago, Illinois, saw its first trial use in 1974 near Chicago.
Mar. 1975: The U.S. Government is given a report by Roy Saltman, a consultant on the development of election technology and policy, in which the certification of voting machines is analyzed for the first time.
August 28, 1986: The Uniformed and Overseas Citizen Absentee Voting Act of 1986 (UOCAVA) requires that US states allow certain groups of citizens to register and vote absentee in elections for federal offices.
1990: The FEC (Federal Election Commission) released a uniform standard for computerized voting.
1996: The Reform Party uses I-Voting (Internet Voting) to select their presidential candidate. This election is the first governmental election to use this method in the U.S.
May 2002: The FEC revised the standards for electronic voting established in 1990.
Nov 2004: 4,438 votes in the general election are lost by North Carolina's electronic voting machines. The machines continued to count electronic votes past their memory capacity, and the votes were irretrievably lost.
Dec 2005: Black Box Voting showed how easy it is to hack an electronic voting system. Computer experts in Leon County, FL led a simulation in which they changed the outcome of a mock election by tampering with the tabulator without leaving evidence of their actions.
September 13, 2006: It was demonstrated that a Diebold electronic voting machine can be hacked in less than a minute. Princeton professor of computer science Edward Felten installed malware which could steal votes and replace them with fraudulent numbers without physically coming in contact with the voting machine or its memory card. The malware could also act as a virus that spreads from machine to machine.
September 21, 2006: The governor of Maryland, Bob Ehrlich (R), advised voters to cast paper absentee ballots rather than vote electronically. This was a complete turnaround, since Maryland had become one of the first states to adopt electronic voting systems statewide during his term.
September 3, 2009: Diebold, responsible for much of the technology in the election-systems business, sells its election-systems unit to Election Systems & Software, Inc. for $5 million, less than one-fifth of its price seven years earlier.
October 28, 2009: The federal Military and Overseas Voters Empowerment Act (MOVE) requires US states to provide ballots to UOCAVA voters in at least one electronic format (email, fax, or an online delivery system).
January 3, 2013: Voter Empowerment Act of 2013 – this bill would require each US state to make public websites available for online voter registration.
Spring 2019: Department of Defense DARPA announces $10 million contract for secure, open-source election system prototypes based on the agency's SSITH secure hardware platform work: a touch screen ballot-marking device to demo at the annual DEF CON hacker conference in summer 2019 and an optical scan system to read hand-marked paper ballots targeted for DEF CON 2020.
Legislation
In the summer of 2004, the Legislative Affairs Committee of the Association of Information Technology Professionals issued a nine-point proposal for national standards for electronic voting. In an accompanying article, the committee's chair, Charles Oriez, described some of the problems that had arisen around the country.
Legislation has been introduced in the United States Congress regarding electronic voting, including the Nelson-Whitehouse bill. This bill would appropriate as much as 1 billion dollars to fund states' replacement of touch screen systems with optical scan voting systems. The legislation would also require audits of 3% of precincts in all federal elections, and would mandate some form of paper audit trail on any type of voting technology by the year 2012.
Another bill, HR.811 (The Voter Confidence and Increased Accessibility Act of 2003), proposed by Representative Rush D. Holt, Jr., a Democrat from New Jersey, would act as an amendment to the Help America Vote Act of 2002 and require electronic voting machines to produce a paper audit trail for every vote. The U.S. Senate companion bill, introduced by Senator Bill Nelson of Florida on November 1, 2007, requires the Director of the National Institute of Standards and Technology to continue researching and providing methods of paper ballot voting for voters with disabilities, those who do not primarily speak English, and those with low literacy. It also requires states to provide the federal office with audit reports from the hand counting of the voter-verified paper ballots. Currently, this bill has been turned over to the United States Senate Committee on Rules and Administration and a vote date has not been set.
During 2008, Congressman Holt, because of an increasing concern regarding the insecurities surrounding the use of electronic voting technology, submitted additional bills to Congress regarding the future of electronic voting. One, called the "Emergency Assistance for Secure Elections Act of 2008" (HR5036), states that the General Services Administration will reimburse states for the extra costs of providing paper ballots to citizens, and the costs needed to hire people to count them. This bill was introduced to the House on January 17, 2008. This bill estimates that $500 million will be given to cover costs of the reconversion to paper ballots; $100 million given to pay the voting auditors; and $30 million given to pay the hand counters. This bill provides the public with the choice to vote manually if they do not trust the electronic voting machines. A voting date has not yet been determined.
The Secure America's Future Elections Act or the SAFE Act (HR 1562) was among the relevant legislation introduced in the 115th Congress. The bill's provisions include designation of the infrastructure used to administer elections as critical infrastructure; funding for states to upgrade the security of the information technology and cybersecurity elements of election-related IT systems; and requirements for durable, readable paper ballots and manual audits of results of elections.
References
Elections in the United States
United States
Voting in the United States
Computer hardware standards
Articles containing video clips |
8803539 | https://en.wikipedia.org/wiki/SoftSide | SoftSide | SoftSide is a defunct computer magazine, begun in October 1978 by Roger Robitaille and published by SoftSide Publications of Milford, New Hampshire.
History
Dedicated to personal computer programming, SoftSide was a unique publication with articles and line-by-line program listings that users manually keyed in. The TRS-80 edition was first, launched in 1978. An Apple II specific version began in January 1980, followed by more individual versions supporting the Atari 400/800 and IBM-PC, as well as one for BASIC language programmers, Prog80. The platform-specific versions were combined into a single monthly edition in August 1980.
In the first few years of publication, users often had problems with the legibility of the dot-matrix program listings. By the time the printout was photographed and printed in the magazine, it had become difficult to read. One reader commented, "after a short while of typing, you felt like you needed some of the 'coke bottle bottom' eye glasses!" Subscriptions were offered that included the printed magazine and a cassette tape, and later 5¼-inch floppy disks, to be literally "played" into the input port to load the complete program into the subscriber's personal computer.
Like many computer publications of the time, SoftSide faced considerable financial pressure and competition in an industry-wide shakeout of personal computer publications in 1983. As a result, Robitaille reorganized the publication into two new magazines: SoftSide 2.0 (directed towards the computer user) and Code (for the programmer), each with its own disk-based featured software included. Neither magazine found sufficient market to become fully established, and SoftSide ended with its March 1984 issue.
Early on, in 1978 or 1979, SoftSide was joined by a sister company called TRS-80 Software Exchange (or TSE), a software publisher. Many titles sold by this company were magazine submissions that were either very high quality or written in languages that the magazine did not support (which was mainly various dialects of BASIC). Due to a copyright challenge by Tandy, owner of the TRS-80, the business name was changed to The Software Exchange or just TSE. By mid-1979, hardware systems and peripherals of the day could be ordered via mail order/phone order from the newest branch of the business, named HardSide.
This magazine launched the careers of many programmers, a number of whom are still active in the profession. It also provided experience and support for several entrepreneurs who went on to create companies including MicroMint, The Bottom Line, Campbell Communications, and The Gollan Letter.
Scott Adams took out the first ad for a commercial software game (Adventureland) in Softside Magazine in 1978.
Software
SoftSide published numerous computer games and utilities for the TRS-80, Apple II, Atari 8-bit, and Commodore PET over its six-year history. The following titles were collected in the Apple edition of The Best of SoftSide (1983) and released on accompanying 5¼-inch floppy disks.
Android Nim by Leo Christopherson (TRS-80 version) and Don Dennis (Commodore PET version)
Arena of Octos by Steve D. Kropinak (Apple version) and Al Johnston (TRS-80 version)
Battlefield by Joe Humphrey
Database by Mark Pelczarski
Escape from the Dungeons of the Gods by Ray Sato (Apple version by Alex Lee)
Flight of the Bumblebee by William Morris and John Cope
Galaxia by Michael Prescott
Gambler by Randy Hawkins (Apple version by Rich Bouchard)
Leyte by Victor A. Vernon, Jr.
Magical Shape Machine by Tom Keith
Melody Dice by Gary Cage
Microtext 1.2 by Jon R. Voskuil
Minigolf by Mitch Voth (Apple version by Steve Justus)
Operation Sabotage by Ray Sato (Apple version by Ron Shaker)
Quest 1 by Brian Reynolds (Apple version by Rich Bouchard)
Solitaire by Larry Williams
Space Rescue by Matt Rutter
SWAT by Jon R. Voskuil
Titan by William Morris and John Cope
Word Search Puzzle Generator by David W. Durkee
Adventure of the Month Club
Arabian Adventure (June 1981)
Alien Adventure (July 1981)
Treasure Island Adventure (August 1981)
Jack The Ripper Adventure (September 1981)
Crime Adventure (October 1981)
Around the World in Eighty Days (November 1981)
Black Hole Adventure (December 1981)
Windsloe Mansion Adventure (January 1982)
Klondike Adventure (February 1982)
James Brand Adventure (March 1982)
Witches Brew Adventure (April 1982)
Titanic Adventure (May 1982)
Arrow One (June 1982)
Robin Hood (July 1982)
The Mouse That Ate Chicago (August 1982)
Menagerie (September 1982)
The Deadly Game (October 1982)
The Dalton Gang (November 1982)
Alaskan Adventure (December 1982)
Danger is My Business (January 1983)
Reception
Bruce Campbell reviewed SoftSide in 1982 in The Space Gamer No. 61. Campbell commented that "SoftSide has evolved from a pulp tabloid to a slick, professional magazine. A wide variety of programs are featured: arcade games, adventures, economic situations, board games, educational programs, and more. In general, I have found these of higher quality than most listings in books and magazines."
References
https://web.archive.org/web/20010724053858/http://apple2history.org/history/ah20.html#05
(Nigel) Alan J Zett contributed the sections on TSE, The Bottom Line, Campbell Communications
External links
Photos of the first issues of Softside
Photos of nearly all of the TRS-80 Edition Softside Magazines on www.trs-80.com
Monthly magazines published in the United States
Defunct computer magazines published in the United States
Magazines established in 1978
Magazines disestablished in 1984
Magazines published in New Hampshire
Home computer magazines |
21297521 | https://en.wikipedia.org/wiki/MetaCASE%20tool | MetaCASE tool | A metaCASE tool is a type of application software that provides the possibility to create one or more modeling methods, languages or notations for use within the process of software development. Often the result is a modeling tool for that language. MetaCASE tools are thus a kind of language workbench, generally considered as being focused on graphical modeling languages.
Another definition: MetaCASE tools are software tools that support the design and generation of CASE tools.
In general, metaCASE tools should provide generic CASE tool components that can be customised and instantiated into particular CASE tools.
The intent of metaCASE tools is to capture the specification of the required CASE tool and then generate the tool from the specification.
Overview
Quick CASE tools overview
Building large-scale software applications is a complicated process that is not easy to manage. Software companies must have a good system of cooperation across development teams, and strong discipline is required.
Nevertheless, using CASE tools is a modern way to speed up software development and to ensure a higher quality of application design. However, other issues have to be kept in mind. First of all, using these tools does not guarantee good results, because they are usually large, complex and extremely costly to produce and adopt.
CASE tools can be classified as either front-end or back-end tools depending on the phase of software development they are intended to support: for example, "front-end" analysis and design tools versus "back-end" implementation tools. For software engineers working on a particular application project, the choice of CASE tool is typically determined by factors such as the size of the project, the methodology used, the availability of tools, the project budget, and the number of people involved. For some applications, a suitable tool may not be available, or the project may be too small to benefit from one.
CASE tools support a fixed number of methodologies but software development organizations dynamically change their adopted methodologies.
Quick metaCASE tools overview
MetaCASE products are usually highly specialised application development environments which produce a custom tool (or toolset) from a high-level description of the required tools.
In other words, metaCASE technology approaches methodology automation from a dynamic perspective.
MetaCASE tools allow definition and construction of CASE tools that support arbitrary methodologies. A CASE tool customizer first specifies the desired methodology and customizes the corresponding CASE tool. Then software developers use that CASE tool to develop software systems. An advantage of this approach is that the same tool is used with different methodologies, which in turn, reduces the learning curve and consequently the cost. Any desired methodology can be automated or modified by the developing organization which provides a dynamic capability in today's dynamic and competitive world. From another perspective this technology can be used as a practical teaching tool considering the shortened length of development and learning times that suits academic course periods.
Differences between metaCASE and CASE tools
Most CASE tools for object-oriented modeling are heavily based on the UML method. A method also dictates other CASE tool functions, such as how models can be made, checked and analyzed, and how code can be generated. For example, a tool can generate CORBA IDL definitions only if the modeling language can adequately specify and analyze CORBA-compliant interfaces. If the tool (and method) does not generate them, it offers very little, if any, support for work on interface design and implementation.
When using methods developers often face similar difficulties. They can not specify the domain and system under development adequately because the method does not provide concepts or notations for the task at hand. End-users may find the models difficult to read and understand because they are unfamiliar with the modeling concepts. Typically they also find it difficult to map the concepts and semantics used in the models to their application domain. After creating the models, which fail even to illustrate the application domain adequately, the tool does not provide the necessary reports nor does it generate the required code.
What is needed then is the ability to easily capture the specifications of any method and then to generate CASE tools automatically from these specifications. Later when the situation in the application domain evolves and the development environment changes you may incrementally update the method support in your CASE tool. This is exactly what metaCASE technology offers.
How metaCASE works
Traditional CASE tools are based on a two-level architecture: system designs are stored into a repository, whose schema is programmed and compiled into the CASE tool. This hard-coded part defines what kind of models can be made and how they can be analyzed. Most importantly, only the tool vendor can modify the method, because it is fixed in the code.
MetaCASE technology removes this limitation by providing flexible methods.
This is achieved by adding one level above the method level.
MetaCASE tools are based on a three-level architecture:
The lowest, the model level, is similar to that of CASE tools. It includes system designs as models.
The middle level contains a model of the method, i.e. a metamodel. A metamodel includes the concepts, rules and diagramming notations of a given method. For example, a metamodel may specify concepts like a class and an inheritance, how they are related, and how they are represented. However, instead of being embedded in code in the tool, as in a fixed CASE tool, the method is stored as data in the repository. The use of metamodels has recently become more popular. Many method books now include metamodels of their method, and several important innovations, such as XMI, are metamodel-based. Unlike a CASE tool, a metaCASE tool allows the user to modify the metamodel. Hence, metaCASE is based on the flexibility of the method specifications.
The highest level contains the metamodeling language for specifying methods. This level is the hard-coded part of the metaCASE software, and it is what makes the flexibility of the method specifications possible.
All the three levels are tightly related: a model is based on a metamodel, which in turn is based on a metamodeling language. Clearly, no modeling is possible without some sort of metamodel. This dependency structure is similar to that between objects, classes and metaclasses in some object-oriented programming languages.
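As a minimal sketch of this three-level idea (plain Python; the class names and the tiny UML-like fragment are invented and do not correspond to any particular metaCASE product's API), the metamodeling facility is the hard-coded part, the metamodel is stored as data, and models are then checked against the metamodel rather than against tool code:

```python
# Level 3 (hard-coded metamodeling facility): concepts and allowed relationships as data containers.
class Metamodel:
    def __init__(self, concepts, relationships):
        self.concepts = set(concepts)              # e.g. {"Class", "Attribute"}
        self.relationships = set(relationships)    # e.g. {("Class", "inherits", "Class")}

    def allows(self, source_type, relation, target_type):
        return (source_type, relation, target_type) in self.relationships

# Level 2 (a metamodel stored as data): a tiny fragment of a UML-like method.
uml_fragment = Metamodel(
    concepts={"Class", "Attribute"},
    relationships={("Class", "inherits", "Class"), ("Class", "has", "Attribute")},
)

# Level 1 (a model): instances of the metamodel's concepts, validated against it.
class ModelElement:
    def __init__(self, concept, name):
        self.concept, self.name = concept, name

animal = ModelElement("Class", "Animal")
dog = ModelElement("Class", "Dog")
legs = ModelElement("Attribute", "legs")

print(uml_fragment.allows(dog.concept, "inherits", animal.concept))   # True: allowed by the metamodel
print(uml_fragment.allows(legs.concept, "inherits", animal.concept))  # False: the metamodel forbids it
```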
metaCASE tools
This is a list of currently available metaCASE tools; many other modeling tools may also offer some measure of metamodeling functionality.
DOME
GME
MetaEdit+
MetaDONE
Obeo Designer
Whole Platform
ConceptBase
Real benefits of using metaCASE tools
Jackson recognises the vital difference between an application’s domain and its code: two different worlds, each with its own language, experts, ways of thinking etc. A finished application forms the intersection between these worlds. The difficult job of the software engineer is to build a bridge between these worlds, at the same time as solving problems in both worlds.
Empirical studies have consistently shown that only around half of all development projects use methods. Among those using methods, over 50% either modify the methods to better fit their needs or even develop their own methods.
In a standard CASE tool, the method supported by the tool is fixed: it cannot be changed. In a metaCASE tool, there is complete freedom to change the method, or even develop an entirely new method. Both models and metamodels (method descriptions) are stored as first-class elements in the repository. This allows an organisation to develop a method that suits their situation and needs, and to store and disseminate that knowledge to all developers. The tool and method then guide developers, provide a common framework for them to work in, and integrate the work of the whole team.
Research prototypes and even commercial metaCASE tools have existed for many years, but only recently have there been tools which are mature, user-friendly and stable for both the method developer and the method user. One of the most widely known and used metaCASE tools is MetaEdit+.
The following list shows several ways in which these tools can be used within software development:
can reduce the time and cost to develop a computer-aided environment
can support formal software development methods
can be used as an information systems modeling tool
can support the creation of a wide range of modeling languages
can support CASE and modeling language training
can support modeling language comparison and integration
These tools should also possess the following characteristics:
enabling users to create method support for their own software engineering methods with a low learning curve
to have easy to use graphical CASE tools to support simple and efficient user interactions
to have the capability to check the consistency of a model, even at run-time
to have standard report generation facility
to possess complexity management tool that provides restricted views and granular model representations
to possess sophisticated input dialogs for creation and modification of model data
to possess customizable multi-method support
See also
Domain-specific modeling
Method engineering
CASE tool
References
Computer-aided software engineering tools
Software for modeling software |